A question that is asked regularly in testing circles is: “When should you stop testing?” Proponents of code coverage tools may suggest that you can stop once some percentage of coverage is achieved. But what do you do when the budget is severely constrained? An even more difficult situation is a project that starts out with a given budget which is then significantly reduced during its lifetime (e.g. due to an economic downturn). Such constraints force us to provide the highest level of quality for the least amount of money. In this post I will explain how risk based testing can help to answer this question.
A mistake that is often made is to distribute the testing effort equally across the system: both critical and non-critical parts are tested to the same degree. The result is that critical parts of the system are not tested sufficiently, while non-critical parts are tested past the point of diminishing returns.
A further mistaken mindset among developers is that there is no such thing as a useless test. This may entice developers to add tests for the sake of adding tests. In actual fact every test (unit, integration or system test) has to earn its place in the codebase. If there is no good motivation for a test, the test must be deleted. Why? Because every test adds to the volume of code that developers have to master and maintain to be productive members of the team. As such, tests add to the overall cost of maintaining a system. The most cost effective code to maintain is the code that was never written.
Risk based testing is a testing approach that helps developers and testers to prioritize the testing effort by assigning a relative risk to each software component.
Calculating the Relative Risk of a Software Component
The relative risk of a software component is the product of the relative risks assigned to the criteria deemed relevant for the project. Criteria for determining risk are discussed in the next section. For each criterion a value between 1 and X is assigned, where 1 indicates that the relative risk for that criterion is insignificant and X indicates that it is critical for the given software component.
Assuming we have software components S_1..S_m and criteria C_1..C_n, and we assign a relative risk A_i,j to each criterion C_j of software component S_i, the relative risk R_i of software component S_i is given by:
R_i = A_i,1 * … * A_i,n
so that R_1 = A_1,1 * … * A_1,n and R_m = A_m,1 * … * A_m,n.
Ordering R_1 … R_m in descending order puts the software components with the highest relative risk at the top of the list. Note that the calculated risk of a specific software component is meaningless by itself. It is only meaningful when compared to the risk of another component calculated in the same way (that is, using the same criteria and the same value for X). That is why we speak of the relative risk of a software component.
What value should you choose for X? In general, choose X equal to or greater than the number of criteria you will use in calculating the risk. If many of the calculated relative risks R_i end up with the same value, you should increase the value of X.
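As a minimal sketch, the calculation and ordering can be expressed in a few lines of Python. The component names, criteria and values below are illustrative assumptions, not part of the method itself:

```python
from math import prod

# Each component gets a value in 1..X per criterion; the relative risk
# is the product of those values. Here X = 5 with three criteria.
components = {
    # component: (frequency of use, cost of failure, complexity)
    "checkout": (5, 5, 4),
    "reporting": (3, 2, 2),
    "admin_ui": (1, 2, 1),
}

risks = {name: prod(scores) for name, scores in components.items()}

# Highest relative risk first: this is the testing priority order.
for name in sorted(risks, key=risks.get, reverse=True):
    print(name, risks[name])
```

Only the ordering matters; the absolute numbers carry no meaning on their own, as noted above.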
What Criteria should be Used?
In this section we give examples of criteria that can be used in calculating the risk of each software component. These criteria merely serve as examples; you are free to choose whatever criteria are deemed important for your project.
Some general criteria that you may want to consider using are:
- Frequency of use: How frequently is this software component used? The more often a component is used, the greater the potential adverse effect when it fails.
- Cost: What will be the cost of this software component failing? The cost can be monetary, but it does not have to be. It can be cost in terms of reputation, loss of clients, or any other factor that can hurt the business. Here it may be a good idea to get the input of the project sponsor or project owner.
- Complexity: What is the complexity of this software component? If the component forms part of a greenfield project, this criterion can express its complexity in comparison with other newly developed components. Where enhancements are made to an existing system, code analysis tools can be used to determine the complexity of the software components that need to be changed. For example, when an enhancement requires that two classes be changed substantially, the class with the higher cyclomatic complexity (the number of linearly independent paths through the code) represents the higher risk.
- Frequency of change: How often will the software component be changed? If a software component realizes the core value proposition of a business, it is likely that this component will constantly need to be enhanced as the business is trying to stay competitive. If there is little chance that a software component is going to be changed after it has been completed, the tests for that software component may be of limited value in the long term.
- Number of bugs: For a system that is already in production, the number of bugs that are logged (that can be related back to specific software components) can help give an indication as to where testing effort should be focused.
As an example, let us assume we are working on a legacy online share trading system (naturally with zero automated tests). As with most legacy systems, a large number of users depend on it. A steady stream of enhancement requests and logged bugs requires the system to be updated frequently, and these changes often introduce new bugs.
We further assume the system consists of the following modules:
- A user management, authentication and authorization module through which users and their permissions are managed. It is very seldom that any changes are required to this module.
- A share price notification engine whose main purpose is to notify users in realtime of changes in share prices. The correct functioning of this engine is critical to the business since it enables users to buy/sell shares as and when needed. Any failure of the engine could mean that users cannot buy/sell shares as needed, which could result in massive losses. However, since the initial problems with the engine were resolved, bugs are seldom logged for this module.
- The account management module keeps track of the funds of users as they trade, calculates daily interest earned or charged (a positive balance earns interest; a negative balance is charged interest) and generates monthly statements of transactions.
- The mark-to-market process runs each day after the stock exchange has closed to calculate the gains/losses for each account. There are frequent changes to this process as the quants are forever fine-tuning the way commission is calculated on trades. Most of the bugs logged relate to this module.
To improve the situation, management wants the development team to add automated tests. How should we approach testing the system?
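To make the prioritization concrete, here is a sketch applying the five criteria above (frequency of use, cost, complexity, frequency of change, number of bugs) to the four modules. The criterion values are purely hypothetical assumptions chosen for illustration, not a real assessment:

```python
from math import prod

# Hypothetical criterion values in 1..5 for each module, in the order:
# (frequency of use, cost, complexity, frequency of change, number of bugs).
modules = {
    "user management":    (2, 3, 2, 1, 1),
    "price notification": (5, 5, 4, 2, 1),
    "account management": (4, 4, 3, 2, 2),
    "mark-to-market":     (4, 5, 4, 5, 5),
}

risks = {m: prod(v) for m, v in modules.items()}
priority = sorted(risks, key=risks.get, reverse=True)
print(priority[0])  # the module to test first
```

With these (assumed) values the frequent changes and bug count of the mark-to-market process dominate the product, so it comes out on top of the priority list.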
If we apply risk based testing as seen in Table 1, it is clear that we need to focus our testing efforts on the mark-to-market process. Let us further assume the mark-to-market process consists of the following steps for each account:
- determine prices at which shares have been bought,
- determine current prices of shares held,
- calculate the profit/loss, and
- update the account balance.
A detailed plan for testing the mark-to-market process can now be drawn up (see Table 2). Instead of listing modules, it lists the software components that make up the mark-to-market process: these can be, for example, classes, methods, functions or scripts. Table 2 indicates that we need to focus our attention particularly on the calculateProfitLoss software component.
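A focused unit test for the highest-risk component might look like the sketch below. Only the component name is given above, so the calculate_profit_loss signature and the test values are assumptions for illustration:

```python
def calculate_profit_loss(buy_prices, current_prices, quantities):
    """Profit/loss across holdings: sum of (current - buy) * quantity."""
    return sum((cur - buy) * qty
               for buy, cur, qty in zip(buy_prices, current_prices, quantities))

def test_profit_on_rising_prices():
    # Bought at 10 and 20, now at 12 and 25, holding 100 shares of each.
    assert calculate_profit_loss([10, 20], [12, 25], [100, 100]) == 700

def test_loss_on_falling_prices():
    # Bought 10 shares at 50, now trading at 45.
    assert calculate_profit_loss([50], [45], [10]) == -50

test_profit_on_rising_prices()
test_loss_on_falling_prices()
```

The point is not the arithmetic but the focus: the riskiest component gets the densest set of tests, while lower-risk components get less.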
Advantages/Disadvantages of Risk Based Testing
The advantages of risk based testing are:
- The highest risk items can be developed and tested first which reduces the overall risk on the project.
- If testing has to be watered down, a guideline exists for deciding what to test and what not to test.
- At any given time an indication can be given of the project's risk by considering the highest risk use cases that have not yet been tested.
- It is a valuable tool for communicating risk to management, project sponsors and project owners.
The potential disadvantages of risk based testing are:
- With a risk based testing approach many parts of the system will go untested. However, this will be an intentional decision rather than an accidental one.
- The relative risk values that are assigned to the criteria of a software component are likely to be subjective: another person may assign a different relative risk value to the same criterion. However, this is no more subjective than, for example, story points in agile methodologies. As with story points, it makes sense to assign relative risk values as part of a team discussion.
- Some teams feel risk based testing adds to their documentation load: they now have to draw up a complete risk profile before they can start testing. Do I always draw up risk profiles for my projects? Honestly? No. What I will do is have a discussion with the team about which areas of the system we need to test to death and which areas we can skimp on. If there is disagreement, I may write it out. Writing it out is particularly useful when the testing approach needs to be discussed with management, the project sponsor or the project owner.
This post explained how risk based testing can help to ensure that the testing effort is focused where it will bring the most business value.
Some developers who are passionate about testing are sometimes also fanatical advocates of Test Driven Development (TDD), to the point that they believe TDD is the only correct way to test applications. Strict adherence to TDD requires that a test be created before any code is written. As such, proponents of TDD tend to see TDD as a design mechanism rather than a test mechanism. Irrespective of whether TDD is used as a design and/or test mechanism, it has a number of pitfalls.
In this post I will:
- give a brief overview of the TDD cycle,
- explain why TDD as a software design mechanism is inadequate,
- explain why TDD as a software testing mechanism is inadequate,
- explain the conditions under which TDD can be applied successfully to projects.
In TDD the steps to add new code are as follows:
- Add a (failing) test that serves as a specification for the new functionality to be added.
- Run all the tests and confirm that the newly added test fails.
- Implement the new functionality.
- Run all tests to confirm that they all succeed.
- Refactor the code to remove any duplication.
The aim of TDD is to provide a simple way to grow the design of complex software one decision at a time.
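The cycle can be illustrated in miniature. The fee calculation and all names below are hypothetical, chosen only to show the red-green-refactor rhythm:

```python
# Step 1: write the failing test first; it specifies the new behaviour.
def test_fee_is_fraction_of_trade_value():
    assert trade_fee(trade_value=1000, fee_rate=0.25) == 250.0

# Step 2: running the test now fails, since trade_fee does not exist yet.

# Step 3: implement just enough code to make the test pass.
def trade_fee(trade_value, fee_rate):
    return trade_value * fee_rate

# Steps 4-5: run all tests again to confirm they pass, then refactor
# any duplication with the tests acting as a safety net.
test_fee_is_fraction_of_trade_value()
```

In practice the test would live in a test module and be run by a test runner such as pytest; the inline call is just to keep the sketch self-contained.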
TDD as a Design Mechanism
From a design perspective TDD emphasizes micro-level design rather than macro-level design. The weaknesses of this approach are:
- TDD forces a developer to focus on a single interface at a time. This often neglects the interaction between interfaces and leads to poor abstractions. Bertrand Meyer gives a balanced review of the challenges regarding TDD and Agile in this regard.
- Quality attributes (like performance, scalability, integrability, security, etc.) are easily overlooked with TDD. In the context of Agile, where an upfront architecture effort is typically frowned upon, TDD is particularly dangerous because quality attributes receive little consideration.
TDD as a Testing Mechanism
From a testing perspective TDD has the following drawbacks:
- Interfaces/classes may be polluted in order to make them testable. A typical example is a private method that is made public purely so that it can be tested. This obfuscates the intended use of the class, making it easier for developers to digress from that intended use. In the long term this creates an unmaintainable system.
- Often when TDD is used on projects, unit testing is used to the exclusion of everything else, with limited or no regard for the broader spectrum of testing, which should at least include integration testing and system testing.
- TDD has no appreciation for the prioritization of the testing effort: equal amounts of effort are expended to test all code irrespective of the associated risk profile. Typically TDD expects all further development to be blocked until all tests pass. This ignores the reality that some functionality has greater business value than other functionality.
Guidelines for using TDD Effectively on Projects
TDD can be used without these side effects if the following process is adhered to:
- The architecture has to be complete, including details of how testability of the system at all levels (unit, integration and system testing) will be achieved.
- The risk profile of the sub system (module/use case) has been established, which informs the testing effort that must be expended on the sub system.
- The design for the sub system (module/use case) should be complete and in adherence to the architecture and risk profile. Since testability is considered as part of the architecture, and the design is informed by the architecture, the design should by definition be testable.
- Only at this point is the developer now free to follow a TDD approach in implementing the code.
The very positive thing about TDD is that it emphasizes the need for testing. However, to embrace TDD naively is to do so at the peril of your project.
- Kent Beck, Test-Driven Development by Example, The Addison-Wesley Signature Series, Addison-Wesley, 2003.
- Cédric Beust and Hani Suleiman, Next Generation Java Testing: TestNG and Advanced Concepts, Addison-Wesley, Upper Saddle River, NJ, 2008.
- Bertrand Meyer, Agile!: The Good, the Hype and the Ugly, Springer, 2014.