Every tester's been there: the code freeze is in, release day is looming, and there are more tests to execute than there are hours in the day. And that's assuming the test cases needed for rigorous testing have been identified and created in the first place, which opens up a whole other can of worms. So, how can you rigorously test without spending your entire weekend doing it?
Today's Systems Are Too Complex to Test Manually, and Testing Windows Are Too Short
There's no way to execute every possible test to completely cover today's systems. For example, a system with 32 nodes (logic points) and 62 edges (decisions) will have a whopping 1,073,741,824 possible routes through it. Based on estimated test execution time, this would require 34 years of testing! No amount of extra-long nights and weekend work can ever close that type of gap.
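To make that explosion concrete, here is a minimal sketch in Python. The flowchart, its node names, and the counting approach are entirely hypothetical and invented for illustration; the point is simply that every extra decision edge multiplies the number of distinct routes:

```python
from functools import lru_cache

# Hypothetical flowchart as an adjacency list (kept acyclic so paths are finite).
GRAPH = {
    "start":  ["login", "guest"],
    "login":  ["search", "browse"],
    "guest":  ["search", "browse"],
    "search": ["cart", "exit"],
    "browse": ["cart", "exit"],
    "cart":   ["pay", "exit"],
    "pay":    ["exit"],
    "exit":   [],
}

@lru_cache(maxsize=None)
def count_paths(node: str) -> int:
    """Count the distinct paths from `node` to the terminal node."""
    successors = GRAPH[node]
    if not successors:          # terminal node reached: exactly one path ends here
        return 1
    return sum(count_paths(nxt) for nxt in successors)

print(count_paths("start"))     # each additional decision edge multiplies this number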
Testers often attempt to reduce the number of tests by using equivalence partitioning. However, reliably partitioning a system requires an understanding of all the logic that needs to be tested, and ideally a fully functional description of the system. What's more, tests still need to be created for each partition, bringing you back to the challenge of identifying and creating an optimal set of tests when faced with huge complexity.
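As a quick illustration of the partitioning idea and its limits, here is a small Python sketch; the "age" field, its boundaries, and the partition names are assumptions made up for the example:

```python
# One representative value is tested per partition, on the assumption that every
# value in a partition behaves the same. Drawing the boundaries still demands full
# knowledge of the logic, and a test still has to be written for every partition.
AGE_PARTITIONS = {
    "below_minimum": range(0, 18),    # assumed boundary at 18
    "valid_adult":   range(18, 66),
    "senior":        range(66, 120),
}

representative_tests = {name: min(values) for name, values in AGE_PARTITIONS.items()}
print(representative_tests)   # {'below_minimum': 0, 'valid_adult': 18, 'senior': 66}
```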
The complexity of modern applications means that even if the number of tests is reduced, there will still be more than can feasibly be executed manually. For fun, we recently created an approximate possible route model of Pokémon Go and estimated that over 107 million paths are needed just to cover a high-level flow with the subprocesses optimized down. No wonder kids (and a fair number of grownups, too) are finding so many glitches!
Why Don't We Just Automate Test Execution?
Automating test execution is a good place to start when moving to rigorous testing within the confines of a sprint. It drastically shortens one of the slowest aspects of testing. However, it is not a complete solution.
Even the best automation frameworks tend to bring you back to manual test creation, in the form of either script creation or keyword selection. The time spent converting test cases to automated tests often outweighs the time saved executing them, and maintenance can create an additional bottleneck.
When the system or requirements change, brittle automated tests must be updated. Otherwise, you risk automated test failures and wasteful over-testing. The time spent identifying the impact of a change on tests and then updating them can have a huge impact on the speed and quality of your application.
How to Ensure Quality When You Have Too Many Tests and Not Enough Time
The development team I work with releases code every 4-6 weeks. They use a method we call "active" flowchart modeling for rigorous functional testing. This approach ties automatically generated tests and data directly to an easy-to-maintain model of the system.
First, all the known logic of a system is modeled, typically drawing on subject matter expertise, existing test cases, and requirements. The flowchart model then serves as a mathematically precise directed graph, meaning that all possible paths through the modeled logic can be identified and generated automatically.
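The sketch below illustrates the idea in Python; the flow, the node names, and the depth-first walk are a simplified stand-in for illustration, not the tool's actual path generation:

```python
# Once the flowchart is a directed graph, every start-to-end path through the
# modeled logic can be enumerated, and each path is a candidate test.
FLOW = {
    "start":         ["valid_login", "invalid_login"],
    "valid_login":   ["place_order", "log_out"],
    "invalid_login": ["end"],
    "place_order":   ["end"],
    "log_out":       ["end"],
    "end":           [],
}

def all_paths(graph, node, path=()):
    path = path + (node,)
    if not graph[node]:                      # terminal node: one complete test path
        yield list(path)
        return
    for nxt in graph[node]:
        yield from all_paths(graph, nxt, path)

for i, p in enumerate(all_paths(FLOW, "start"), start=1):
    print(f"Test {i}: " + " -> ".join(p))
```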
Each path is equivalent to a test, and the set of paths can be optimized automatically to reduce the total number while still covering every logically distinct combination. Numerous established algorithms exist for this, and CA Agile Requirements Designer offers All Pairs, All In/Out Edges, All Edges, and All Nodes optimization, as well as Risk-Based approaches.
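As a rough illustration of what an "All Edges" style of optimization does, here is a greedy sketch in Python; it is an illustrative stand-in under my own assumptions, not CA Agile Requirements Designer's actual algorithm:

```python
# Repeatedly pick the candidate path that covers the most not-yet-covered edges,
# until every edge of the model appears in at least one selected test.
def edges_of(path):
    return set(zip(path, path[1:]))

def optimise_all_edges(candidate_paths):
    uncovered = set().union(*(edges_of(p) for p in candidate_paths))
    selected = []
    while uncovered:
        best = max(candidate_paths, key=lambda p: len(edges_of(p) & uncovered))
        selected.append(best)
        uncovered -= edges_of(best)
    return selected

# Hypothetical candidate paths, like those enumerated in the earlier sketch.
candidates = [
    ["start", "valid_login", "place_order", "end"],
    ["start", "valid_login", "log_out", "end"],
    ["start", "invalid_login", "end"],
]
for test in optimise_all_edges(candidates):
    print(" -> ".join(test))
```

When many candidate paths share edges, a selection like this typically shrinks the test pack substantially while every decision in the model is still exercised at least once.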
Partitioning is possible with subflows, but the fundamental goal of this approach is different: we are trying to cover all the logically distinct combinations in the entire system, rather than testing just a subset of that logic. Quality can thereby be assured while still reducing the total number of tests.
Test execution and test data allocation can also be automated without requiring time-consuming, manual scripting. A reusable automation configuration file is assigned to a flow, mapping automated code snippets or keywords to actions and objects. Dynamic or static data can further be attached to the nodes of the flowchart, so that a fully executable, automated test pack is compiled when the optimized tests are created.
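A toy sketch of the mapping idea follows; the configuration structure, keyword names, and data values are hypothetical and not the tool's real file format:

```python
# Each flowchart node is bound to an automation keyword plus optional test data,
# so an optimised path compiles straight into an executable, data-fed sequence of steps.
AUTOMATION_CONFIG = {
    "start":         {"keyword": "open_application"},
    "valid_login":   {"keyword": "enter_credentials", "data": {"user": "qa01", "pwd": "secret"}},
    "invalid_login": {"keyword": "enter_credentials", "data": {"user": "qa01", "pwd": "wrong"}},
    "place_order":   {"keyword": "submit_order", "data": {"sku": "ABC-123", "qty": 1}},
    "log_out":       {"keyword": "log_out"},
    "end":           {"keyword": "close_application"},
}

def compile_test(path):
    """Turn one optimised path into executable (keyword, data) steps."""
    return [(AUTOMATION_CONFIG[node]["keyword"], AUTOMATION_CONFIG[node].get("data", {}))
            for node in path]

print(compile_test(["start", "valid_login", "place_order", "end"]))
```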
In this approach, testing is not only automated, but can also react to change. When the flowchart is updated, all the test assets that are traceable to it are likewise updated automatically, so that the effort of testing a change is equivalent to updating the model.
Let the Machine Share the Stress of End-of-Sprint Testing
This approach has been proven to end hero culture and the 4 AM scrambles to execute all testing before the release date. The time spent modeling the initial flowcharts is quickly outweighed by the time saved on slow, manual test creation and execution, as well as manual script generation, test maintenance, and test data allocation.
The number one time-saver comes when the system changes, because you can update the regression test pack in minutes by updating the flow. As the test lead of the development team I mentioned earlier described to me, "We are now performing more regression testing in less time, while the automation suite has replaced the stress and errors that used to come at the end of the sprint."
What are your thoughts on model-based testing?
Want to learn more about Continuous Testing solutions? Join us at #CAWORLD. Find out more at http://cainc.to/zbZ3w3.