We all know that exhaustive testing is impossible in practice, even for a small software application. The reason is that schedule and budget are limited while the number of possible tests is almost unlimited. The art of software testing is to find the right balance that maximizes the outcome of testing within the given schedule, budget, and other constraints.
In this post, we discuss a method that helps find such a balance.
But first, let’s consider a few common scenarios that you, as a tester, may encounter. In one scenario, you have 1,000 test cases to write and execute within a one-month iteration or sprint. Your project produces many builds during the iteration, and your testing team of 3 people is barely enough to run every test case once. Because your manpower is limited, you must decide which test cases to run on which build and which to run repeatedly across multiple builds. In another scenario, late in the project, your customer requests testing of a release build, and you have only 5 days to run 2,000 test cases. What would you do if you don’t have enough manpower to run them all in 5 days?
Vilfredo Pareto, an Italian economist, developed the popular principle bearing his name, the Pareto principle, after observing that roughly 80% of the land in Italy was owned by 20% of the population. This principle has been shown to hold for many phenomena and has been applied widely in business and the social sciences.
Applying the Pareto principle to testing, we can say that roughly 80% of software defects come from 20% of the modules, features, or test cases; in other words, 20% of the test cases account for 80% of the value of testing. The implication is that some test cases are more valuable than others, and a small number of them account for the majority of the value of testing.
Given the constrained schedule and budget, we maximize the outcome of testing by focusing on the test cases that bring the most value to customers. Testing thus becomes a problem of identifying these most valuable test cases (see figure below).
The value of testing lies largely in satisfying its two major goals: detecting defects and establishing confidence in the product.
We should note that these objectives are not always equally important; their importance depends on when testing is performed and for what purpose. Detecting defects matters most in the early stages, while establishing confidence is crucial in the later stages of development or when teams deliver the product to customers. When customers want to demonstrate or release the product, having confidence in it is more desirable than finding defects: you don’t want low confidence in the quality and stability of the product when shipping it to customers and end users.
Prioritizing Test Cases Using Risks
Given a set of test cases, we prioritize them by the value we expect to achieve from testing them. Value here is a general term covering the number of defects detected, time reduced, effort saved, potential problems avoided, or confidence gained. Prioritizing does not mean skipping the low-priority test cases; it means focusing more on the high-priority ones, especially when you are under pressure to complete testing within a short time.
Suppose that we have a set of test cases linked to requirements, and that the dependencies among the test cases are also specified.
Step 1: Determine Risk Exposure (RE) for each test case.
RE = Chance of Defect x Effect
Risk Exposure measures the level of risk of a test case. Chance of Defect is the probability of finding defects when executing the test case, ranging from 0% to 100%. Effect is the loss incurred if defects go undetected; it can be quantified as the effort spent handling the problem when the defects are found later. Both Chance and Effect are subjective measures, specified using our experience and judgment.
The Chance measure likely depends on the developer and the feature. If defects were found in the feature in previous builds, the chance of finding a defect again is high. Likewise, if a developer’s code has had many defects, then the test cases associated with that code have a high Chance of defects.
The Effect measure is driven by the severity of defects and the importance of the associated functionality. Effect is high when a serious defect occurs in a core feature, and low for trivial defects.
When determining Chance, we should ask, “What is the probability of finding a defect if the test case is executed thoroughly?”; when specifying Effect, we ask, “How serious would that defect be?”
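The RE formula in Step 1 can be sketched in a few lines. A minimal illustration, assuming Chance is expressed as a probability between 0 and 1 and Effect as estimated rework hours (all numbers below are hypothetical):

```python
def risk_exposure(chance, effect):
    """RE = Chance of Defect x Effect.

    chance: probability of finding a defect when the test case runs (0.0-1.0).
    effect: estimated loss (e.g., rework hours) if the defect escapes.
    """
    return chance * effect

# A defect-prone core feature vs. a stable, low-impact one (made-up values):
re_checkout = risk_exposure(chance=0.6, effect=40)  # high risk
re_about = risk_exposure(chance=0.1, effect=2)      # low risk
```

Comparing `re_checkout` and `re_about` already suggests where testing effort should go first.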
Step 2: Calculate Risk Reduction Leverage (RRL) for each test case.
RRL = (RE before Testing – RE after Testing)/Effort
RE before Testing and RE after Testing are the risk exposures before and after the test case is exercised, respectively. RE after Testing is usually smaller than RE before Testing because the chance of finding a defect drops once the test has been run.
Effort is the amount of time spent running the test and on related activities such as preparing the test environment and reporting defects.
Step 3: Prioritize the test cases by their RRL value. The higher the RRL value, the more value the test case has.
Applying the Method
You may rightly have several concerns about this method. One is that it requires making several estimates based on experience, which is hard to do accurately. It would also take considerable time to estimate each of thousands of fine-grained test cases, even in a small project. However, we can simplify the method to make it more practical. Here are several ways:
- Ignore the risk after testing, and prioritize the test cases by the risk before testing alone.
- Classify test cases into groups based on the associated functionality or module, then determine the RRL of one representative test case per group. Every test case in a group gets the same priority, though you can always adjust the priority of individual test cases if needed.
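Combining the two simplifications gives a very lightweight sketch: estimate risk once per feature group, using only the RE before testing, and let every test case inherit its group's priority. The group names, IDs, and numbers below are hypothetical:

```python
# Risk (Chance x Effect, before testing) estimated once per module/feature.
group_risk = {
    "payments": 0.6 * 40,  # defect-prone core feature
    "settings": 0.2 * 10,  # stable, low-impact feature
}

test_cases = [
    ("TC-101", "payments"),
    ("TC-102", "payments"),
    ("TC-201", "settings"),
]

# Each test case inherits its group's risk; rank highest risk first.
ranked = sorted(test_cases,
                key=lambda tc: group_risk[tc[1]],
                reverse=True)
```

This keeps the estimation work proportional to the number of features rather than the number of test cases, at the cost of coarser priorities within each group.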
There are many approaches to help find the right balance, depending on your testing skills, experience, methods, and supporting tools. This blog post provides a useful method, giving you not only a specific technique but also an idea of how to maximize your testing outcome. The takeaway is that not every test is equally important, so treat them differently to generate the best possible value for your team and customers. To do so, you can rely on your judgment and experience, or use a constructive method like the one discussed in this post. Enjoy testing.