Unwrapping the QA Onion: An Approach to Quality in Software Modernization by Jennifer Parker

How can we de-risk software modernization effectively? What are some mechanisms that we can put in place to ensure a high quality refactor?

The key is to employ a multi-faceted approach to testing. With software of any significant size and complexity, it’s impractical to manually test every use case, edge case, and race condition that exists. Add organic growth over time to the mix and you have a recipe for a tangled codebase with such high cyclomatic complexity that it’s almost impossible to reason about.

As enterprises start to re-platform and modernize their solutions, how can we help to mitigate some of the risk that can amount to unreliable, buggy software?

One way is through a testing strategy that combines several methodologies into a holistic approach to system health. This is typically referred to as the test pyramid model, which states that you should aim for many unit tests, fewer integration tests, and even fewer functional tests.

The pyramid I included here is from Software Testing Help. It describes the ROI and relationship between the different automated test methods and can guide enterprises in choosing the right balance based on their needs and the complexity of their system.

https://www.softwaretestinghelp.com/the-difference-between-unit-integration-and-functional-testing/

Unit Testing

Unit tests are low level and test one method or function at a time. The intent is to isolate the method from any dependencies and only test the logic within the method.

On the back-end, if we invest a bit of time in creating unit tests that cover the boundary conditions, inputs, and expected outputs of our service-layer methods, the team will have high confidence that critical business logic is operating as expected. As new features are developed or code is refactored, re-running these unit tests can uncover bugs early. The earlier that bugs are uncovered, the cheaper they are to address and fix.
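As a minimal sketch of boundary-condition testing, consider a hypothetical service-layer business rule for tiered order discounts (the function and thresholds here are invented for illustration). The tests probe the values just below, at, and above each boundary, plus invalid input:

```python
import unittest

# Hypothetical service-layer business rule: tiered order discounts.
def calculate_discount(order_total: float) -> float:
    """Return the discount rate for a given order total."""
    if order_total < 0:
        raise ValueError("order total cannot be negative")
    if order_total >= 1000:
        return 0.10
    if order_total >= 500:
        return 0.05
    return 0.0

class CalculateDiscountTests(unittest.TestCase):
    # Boundary conditions: just below, at, and above each threshold.
    def test_below_first_threshold(self):
        self.assertEqual(calculate_discount(499.99), 0.0)

    def test_at_first_threshold(self):
        self.assertEqual(calculate_discount(500), 0.05)

    def test_at_second_threshold(self):
        self.assertEqual(calculate_discount(1000), 0.10)

    def test_negative_input_rejected(self):
        with self.assertRaises(ValueError):
            calculate_discount(-1)

# Run with: python -m unittest <module>
```

Tests like these are cheap to write while the rule is fresh in mind, and they document the intended thresholds for the next developer who refactors the method.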

Because we’re testing only one unit and isolating all dependencies during this testing phase, we rely on two techniques: dependency injection and mocking. It’s important that the code is set up with dependency injection and mocking capabilities from the outset. Otherwise, we won’t be able to test in isolation and will end up calling through the various dependencies. An example of this is testing a service-layer method that would traditionally call a DB repository method to perform operations against a database. In a unit test for the service layer, we don’t want to exercise the DB repository, so we pass a mocked instance of the DB repository to the service layer.
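The service-and-repository scenario above can be sketched as follows, using Python’s standard-library `unittest.mock` (the `OrderService` and its methods are hypothetical names, not from the original article):

```python
from unittest import mock

# Hypothetical service that receives its repository via dependency injection.
class OrderService:
    def __init__(self, repository):
        # The repository is injected, so a test can pass a mock
        # instead of a real database-backed implementation.
        self.repository = repository

    def place_order(self, customer_id: str, total: float) -> dict:
        if total <= 0:
            raise ValueError("order total must be positive")
        order = {"customer_id": customer_id, "total": total}
        self.repository.save(order)
        return order

# In the unit test, the DB repository is replaced with a mock:
repo = mock.Mock()
service = OrderService(repo)
order = service.place_order("c-42", 250.0)

# We verify the business logic and the interaction with the dependency,
# without ever touching a real database.
assert order["total"] == 250.0
repo.save.assert_called_once_with(order)
```

Because the dependency arrives through the constructor, the same service class runs unmodified in production with a real repository and in tests with a mock.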

Unit tests are supposed to be fast enough to execute during the CI/CD pipeline. They should run without touching the database or any other physical storage.

On the front-end, if we are creating reusable components, writing unit tests against the functions in each component will help achieve and maintain quality as components are enhanced and shared among many different solutions.

Integration Testing

Integration testing strives to test that parts of the system function appropriately together. An example of an integration test could be ensuring that something gets saved to the database from the service layer properly. It’s less about testing the business logic and more about testing the interactions in the system to make sure all dependencies are handled correctly.

Once we have a solid set of unit tests, the next thing for a team to consider is which integration tests to create. Typically, we’ll want tests that validate a complex method call that makes a few different database calls and possibly saves a larger object graph to the database.

Unlike unit tests, integration tests take longer to run. It’s a good idea to run an integration test suite periodically throughout the development cycle and before promoting software to a UAT environment.

Functional Testing - AKA e2e Testing

Functional testing verifies that an entire piece of functionality works as expected against a set of requirements. This is usually done by automating user actions in the browser, which in turn exercises the API calls in the system.

Functional testing is useful for creating a regression test suite that must pass in order to release software to production. It’s also useful for creating a smoke test suite that runs a preliminary set of tests to ensure base system health before QA does any manual testing.
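The smoke-test idea can be sketched as a small harness that checks a handful of critical endpoints and fails fast. Everything here is hypothetical: the endpoint list is invented, and the `fetch` callable is injected (a stub stands in for it below) so the sketch is runnable without a live system; in practice it would wrap a real HTTP client or a browser-automation tool:

```python
# Hypothetical list of endpoints that must be healthy before manual QA begins.
CRITICAL_ENDPOINTS = ["/health", "/api/login", "/api/orders"]

def run_smoke_suite(fetch) -> list:
    """Check each critical endpoint; return the ones that failed."""
    failures = []
    for endpoint in CRITICAL_ENDPOINTS:
        status = fetch(endpoint)  # fetch returns an HTTP status code
        if status != 200:
            failures.append((endpoint, status))
    return failures

# Stubbed transport standing in for real HTTP calls in this sketch:
def fake_fetch(endpoint):
    return 200 if endpoint != "/api/orders" else 503

# An empty failure list means base system health is confirmed.
assert run_smoke_suite(fake_fetch) == [("/api/orders", 503)]
```

Gating the pipeline on an empty failure list gives QA a quick signal that the environment is worth testing manually at all.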

Functional tests tend to be the most brittle and prone to breaking. They should be reserved for critical functionality with the expectation that they will likely break between releases as new elements get added to the DOM and components get shifted around.

Exploratory and Manual Testing

For functionality that cannot be automated or where the automation cost is too high, manual testing is still king. Some examples are things like notifications, bulk processing, file ingestions, and anything where there is a non-deterministic wait that makes automation brittle.

Insights | Takeaways

It is critical that organizations invest in the right level and types of testing for their particular needs. As an enterprise’s quality practices mature, it should strive to measure the effectiveness of its testing strategy with metrics around defect leakage, where in the SDLC bugs are found, and how quickly items move through the pipeline. With an effective testing strategy, organizations should see more bugs found early in the process and a decreasing trend of defect leakage to production.
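As a small illustration of the defect leakage metric mentioned above (the function name and example counts are invented for this sketch), leakage can be computed as the share of all defects that were found only after release:

```python
# Defect leakage: the fraction of total defects that escaped to production.
# A maturing testing strategy should push this number down over time.

def defect_leakage(found_before_release: int, found_in_production: int) -> float:
    total = found_before_release + found_in_production
    if total == 0:
        return 0.0  # no defects recorded at all
    return found_in_production / total

# Example: 45 bugs caught in dev/QA, 5 leaked to production -> 10% leakage.
assert defect_leakage(45, 5) == 0.1
```

Tracking this ratio per release turns the vague goal of “fewer production bugs” into a trend line the whole team can watch.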
