The purpose of business risk analysis during software testing is to identify high-risk applications that must be tested more thoroughly, and to identify error-prone components within specific applications that must be tested more rigorously. The results of the analysis can be used during test planning to set the testing objectives for the application under test.
The key to performing a successful risk analysis is to formalize the process. An informal approach to risk analysis leads to an ineffective analysis process. Four approaches are described below.
The first is Judgment And Instinct. This is the most common approach to performing risk assessment during testing. It is an informal approach in which the tester uses knowledge of, and experience with, past projects to estimate the amount of testing required for the current project. This can be an effective method, but its primary fault is that the tester's experience is not readily transferable to other testers. The approach is repeatable, but it is not formally written down for others to use.
The second method is Dollar Estimation. This approach quantifies business risk using dollars as its unit of measure. It requires a great deal of precision and is difficult to use because the results are based on estimates of the frequency of occurrence and the loss per occurrence. Thus, the precision is not always there.
The third approach is Identifying And Weighting Risk Attributes. This approach identifies the attributes that cause a risk to occur. The relationship of the attributes to the risk is determined by weighting. The tester uses weighted numerical scores to determine which areas are at the most risk. The scores can be used to determine which application components should be tested more thoroughly than others, and total weighted scores can be used to compare one application to another.
The fourth approach is The Software Risk Assessment Package. This is an automated approach that involves purchasing an assessment package from a vendor. The advantages are ease of use and the ability to do what-if analysis with risk characteristics. Automated risk assessment packages exist that use the second and third approaches above; however, testers can create their own risk assessment questionnaires with MS Word and do the what-if analysis with MS Excel. This is the approach used here.
There are three identified risk dimensions. They are Project Structure, Project Size, and Experience with Technology.
With respect to project structure, the more structured a project, the lower the risk. Thus, software development projects that employ some type of project management/development life cycle approach should be at less risk.
Project size is directly proportional to risk. The larger the project in terms of cost, staff, time, number of functional areas involved, etc., the greater the risk.
Technology experience is inversely proportional to project risk. The more experience the development team has with the hardware, the operating system, the database, the network, and the development language, the lower the risk.
The suggested risk assessment approach is a five-step methodology. The steps are:
1. Ascertain the risk score
2. Create the risk profile
3. Modify the contributing risk characteristics
4. Allocate test resources
5. Create a risk database
Ascertaining the risk score.
This involves conducting a risk assessment review with the development team. During the session a risk assessment questionnaire is used to structure the process. Each question is asked of the group, and a consensus is reached as to the perceived level of risk. The questions are closed-ended, with possible responses of Low (1 point), Medium (2 points), High (3 points), and Not Applicable (0 points). The weighted scores can be used to identify error-prone areas of the application and to compare the application with other applications.
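The scoring scheme can be sketched as follows (the questions and consensus responses here are hypothetical, not from a real questionnaire):

```python
# Map each consensus response to its risk points:
# Low=1, Medium=2, High=3, Not Applicable=0.
POINTS = {"Low": 1, "Medium": 2, "High": 3, "Not Applicable": 0}

def score_responses(responses):
    """Convert a list of consensus responses into raw risk scores."""
    return [POINTS[r] for r in responses]

# Hypothetical consensus reached during the review session.
responses = ["High", "Medium", "Low", "Not Applicable", "High"]
print(score_responses(responses))  # [3, 2, 1, 0, 3]
```

The raw scores then feed into the weighting step described next.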
Creating the risk profile.
A weighting system is used to compute a score that reflects the importance of each area. Areas that are twice as important can be weighted with a value of two (e.g., an area with medium risk (2 points) that is considered three times as important as the other areas would have a final weighted score of 6 points (the weight times the risk points)). A total score is computed for the project, but the individual scores are used to develop the risk profile. Perry is not specific in his description of how to construct the profile, so I suggest using the following approach.
Once the risk data have been collected, create a spreadsheet that computes the weighted scores. Sort the tabulated scores from the highest to the lowest (a pseudo-frequency distribution of sorts) and perform a Pareto Analysis (the 80/20 rule) to determine which project areas fall into the upper 20% of the distribution. These are the areas most at risk, and they are the areas that will need to be tested the most. The results are rather obvious when charted using Kiviat charts (radar charts): the high-risk areas stand out visually. The charting should be done on data sorted in ascending order by question number, not on data sorted in descending order of score.
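A minimal sketch of the weighting and Pareto steps, using hypothetical question numbers, weights, and risk points (illustrative values only):

```python
# Each question: (question number, weight, raw risk points from the session).
questions = [
    (1, 1, 2), (2, 3, 3), (3, 1, 1), (4, 2, 3), (5, 1, 1),
    (6, 1, 2), (7, 2, 2), (8, 1, 1), (9, 1, 3), (10, 1, 1),
]

# Weighted score = weight times risk points.
weighted = [(q, w * p) for q, w, p in questions]

# Sort from highest to lowest to form the pseudo-frequency distribution.
ranked = sorted(weighted, key=lambda t: t[1], reverse=True)

# Pareto analysis: the upper 20% of questions mark the highest-risk areas.
cutoff = max(1, len(ranked) // 5)
high_risk = ranked[:cutoff]
print(high_risk)  # [(2, 9), (4, 6)] -- the areas to test most thoroughly
```

For the Kiviat chart, plot `weighted` (ascending question order), not `ranked`, so the chart's axes stay in questionnaire order.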
Modify the risk characteristics.
Perry argues that once the areas at risk have been identified, a proactive approach can reduce risk. He suggests that steps be taken to change the development approach or the project structure in order to reduce risk. When these alternatives are not feasible, the process of using the risk information to decide which areas to test becomes even more critical.
Allocate test resources.
Allocating the most test resources to high-risk areas, fewer testing resources to medium-risk areas, and minimal testing resources to low-risk areas is suggested. A sound strategy is to ensure that all, or as much as possible, of the medium- to high-risk areas are tested within the scope of the allotted testing resources. A possible split is to commit 80% of the testing resources to medium- and high-risk areas, and the remaining 20% to low-risk areas. This is again applying the 80/20 rule.
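The 80/20 split can be sketched as a simple allocation function (the area names and risk levels below are hypothetical, and the even division within each group is one assumption among many possible policies):

```python
def allocate_hours(total_hours, areas):
    """Split the testing budget 80/20: 80% to medium/high-risk areas,
    20% to low-risk areas, divided evenly within each group.
    `areas` maps area name -> risk level ("low", "medium", "high")."""
    med_high = [a for a, r in areas.items() if r in ("medium", "high")]
    low = [a for a, r in areas.items() if r == "low"]
    plan = {}
    if med_high:
        share = 0.8 * total_hours / len(med_high)
        plan.update({a: share for a in med_high})
    if low:
        share = 0.2 * total_hours / len(low)
        plan.update({a: share for a in low})
    return plan

plan = allocate_hours(100, {"billing": "high", "reports": "medium",
                            "help text": "low", "settings": "low"})
print(plan)  # {'billing': 40.0, 'reports': 40.0, 'help text': 10.0, 'settings': 10.0}
```

In practice the shares within each group would themselves be scaled by the weighted risk scores rather than divided evenly.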
Compile a risk assessment database.
A risk assessment database has two important functions. First, it can be used to improve the risk assessment process itself. Second, the data can be used to help management plan and structure development projects.
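A minimal sketch of such a database as a flat CSV file (the field names and the sample record are hypothetical; a spreadsheet or a real database would serve equally well):

```python
import csv
import io

# Hypothetical schema: one row per question per assessed project.
FIELDS = ["project", "date", "question", "weight", "risk_points", "weighted_score"]

records = [
    {"project": "XYZ", "date": "1999-06-01", "question": 2,
     "weight": 3, "risk_points": 3, "weighted_score": 9},
]

# Persist as CSV so later projects can be compared against past assessments.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(records)
print(buf.getvalue())
```

Accumulating rows across projects supports both functions above: refining the questionnaire (which questions predicted real defects) and giving management a baseline for structuring new projects.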
THE RISK ASSESSMENT REVIEW SESSION
Perry says that the entire test team along with end user representatives should be included in the session. He recommends conducting the session early in the test process. It should last no more than two hours. It should be run formally by the test team manager who facilitates the session. The session should have two objectives. The first objective is to answer each question on the risk assessment questionnaire. The second is to brainstorm and let the participants voice their concerns about the system under development.
A risk assessment was completed for the Ask Jeeves International projects, conducted following the guidelines above: a one-hour assessment of the project's risk factors, with one QA person and five XYZ project development team members present.
The raw data were placed in an Excel workbook, and weighted scores were calculated for each question in each of the Test Documents. The data were analyzed using Pareto Analysis to determine the number of questions to consider (i.e., the top 20% in terms of risk) and displayed through Kiviat charts. The results revealed three major areas of risk. The first is the lack of user documentation about the process being automated. The second is the number of interfaces to other systems. The third is the use of new technology to "pioneer" parts of the system.
Use risk analysis to determine where testing should be focused. Since it's rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate to most software development projects. This requires judgment skills, common sense, and experience. Considerations can include:
- Which functionality is most important to the project's intended purpose?
- Which functionality is most visible to the user?
- Which functionality has the largest safety impact?
- Which functionality has the largest financial impact on users?
- Which aspects of the application are most important to the customer?
- Which aspects of the application can be tested early in the development cycle?
- Which parts of the code are most complex, and thus most subject to errors?
- Which parts of the application were developed in rush or panic mode?
- Which aspects of similar/related previous projects caused problems?
- Which aspects of similar/related previous projects had large maintenance expenses?
- Which parts of the requirements and design are unclear or poorly thought out?
- What do the developers think are the highest-risk aspects of the application?
- What kinds of problems would cause the worst publicity?
- What kinds of problems would cause the most customer service complaints?
- What kinds of tests could easily cover multiple functionalities?
- Which tests will have the best high-risk-coverage to time-required ratio?
GENERAL RISKS LIST
- Complex - anything disproportionately large, intricate or convoluted.
- New - anything that has no history in the product.
- Changed - anything that has been tampered with or "improved".
- Upstream Dependency - anything whose failure will cause cascading failure in the rest of the system.
- Downstream Dependency - anything that is especially sensitive to failures in the rest of the system.
- Critical - anything whose failure could cause substantial damage.
- Precise - anything that must meet its requirements exactly.
- Popular - anything that will be used a lot.
- Strategic - anything that has special importance to your business, such as a feature that sets you apart from the competition.
- Third-party - anything used in the product, but developed outside the project.
- Distributed - anything spread out in time or space, yet whose elements must work together.
- Buggy - anything known to have a lot of problems.
- Recent Failure - anything with a recent history of failure.
Perry, William E. A Standard for Testing Application Software. Auerbach Publishers, Boston, 1992.