Test Notes

I don't make software; I make it better

Investing in Software Testing

What Does Quality Cost?

The title of Phil Crosby's book says it all: Quality Is Free. Why is quality free? Like Crosby and J.M. Juran, Jack Campanella also illustrates a technique for analyzing the costs of quality in Principles of Quality Costs. Campanella breaks down those costs as follows:

Cost of Quality = Cost of conformance + Cost of nonconformance

Conformance Costs include Prevention Costs and Appraisal Costs.
Prevention costs include money spent on quality assurance tasks like training, requirements and code reviews, and other activities that promote good software. Appraisal costs include money spent on planning test activities, developing test cases and data, and executing those test cases the first time.

Nonconformance costs come in two flavors: Internal Failures and External Failures. The costs of internal failure include all expenses that arise when test cases fail the first time they are run, as they often do. A programmer incurs a cost of internal failure while debugging problems found during her own unit and component testing.

Once we get into formal testing in an independent test team, the costs of internal failure increase. Think through the process: The tester researches and reports the failure, the programmer finds and fixes the fault, the release engineer produces a new release, the system administration team installs that release in the test environment, and the tester retests the new release to confirm the fix and to check for regression.

The costs of external failure are those incurred when, rather than a tester finding a bug, the customer does. These costs will be even higher than those associated with either kind of internal failure, programmer-found or tester-found. In these cases, not only does the same process described for tester-found bugs occur, but you also incur the technical support overhead and the more expensive process of releasing a fix to the field rather than to the test lab. In addition, consider the intangible costs: angry customers, damage to the company image, lost business, and maybe even lawsuits.

Two observations lay the foundation for the enlightened view of testing as an investment. First, like any cost equation in business, we will want to minimize the cost of quality. Second, while it is often cheaper to prevent problems than to repair them, if we must repair problems, internal failures cost less than external failures.
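
To make these observations concrete, here is a minimal sketch of cost-of-quality accounting in Python, in the spirit of Campanella's breakdown. Every figure, including the per-bug costs, is hypothetical and serves only to illustrate the arithmetic.

```python
# A minimal sketch of cost-of-quality accounting.
# All figures are hypothetical, for illustration only.

# Conformance costs
prevention = 20_000   # training, reviews, other defect-prevention work
appraisal = 30_000    # test planning, test development, first test runs

# Nonconformance costs: an internal failure (programmer- or tester-found)
# triggers the report/fix/re-release/retest cycle; an external failure
# (customer-found) adds support overhead and a field release on top.
cost_per_internal_failure = 500
cost_per_external_failure = 5_000

internal_bugs = 100   # bugs caught before release
external_bugs = 10    # bugs that escaped to customers

cost_of_conformance = prevention + appraisal
cost_of_nonconformance = (internal_bugs * cost_per_internal_failure
                          + external_bugs * cost_per_external_failure)

cost_of_quality = cost_of_conformance + cost_of_nonconformance
print(f"Cost of quality: ${cost_of_quality:,}")   # -> Cost of quality: $150,000
```

At these assumed rates, every bug caught in test rather than in the field avoids $4,500 of external-failure cost; that is the arithmetic behind treating testing as an investment.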

The Risks to System Quality

Myriad risks (i.e., factors possibly leading to loss or injury) menace software development. When these risks become realities, some projects fail. Wise project managers plan for and manage risks. In any software development project, we can group risks into four categories.
Financial risks: How might the project overrun the budget?
Schedule risks: How might the project exceed the allotted time?
Feature risks: How might we build the wrong product?
Quality risks: How might the product lack customer-satisfying behaviors or possess customer-dissatisfying behaviors?

Testing allows us to assess the system against the various risks to system quality, which allows the project team to manage and balance quality risks against the other three areas.

Classes of Quality Risks
It's important for test professionals to remember that many kinds of quality risks exist. The most obvious is functionality: Does the software provide all the intended capabilities? For example, a word processing program that does not support adding new text in an existing document is worthless.
While functionality is important, remember my self-deprecating anecdote in the last article. In that example, my test team and I focused entirely on functionality to the exclusion of important items like installation. In general, it's easy to over-emphasize a single quality risk and misalign the testing effort with customer usage. Consider the following examples of other classes of quality risks.

  • Use cases: working features fail when used in realistic sequences.
  • Robustness: common errors are handled improperly.
  • Performance: the system functions properly, but too slowly.
  • Localization: problems with supported languages, time zones, currencies, etc.
  • Data quality: a database becomes corrupted or accepts improper data.
  • Usability: the software's interface is cumbersome or inexplicable.
  • Volume/capacity: at peak or sustained loads, the system fails.
  • Reliability: too often -- especially at peak loads -- the system crashes, hangs, kills sessions, and so forth.

Tailoring Testing to Quality Risk Priority

To provide maximum return on the testing investment, we have to adjust the amount of time, resources, and attention we pay to each risk based on its priority. The priority of a risk to system quality arises from the extent to which that risk can and might affect the customers’ and users’ experiences of quality. In other words, the more likely a problem or the more serious its impact, the more testing that problem area deserves.

You can prioritize in a number of ways. One approach I like is to use a descending scale from one (most risky) to five (least risky) along three dimensions.

Severity: How dangerous is a failure of the system in this area?
Priority: How much does a failure of the system in this area compromise the value of the product to customers and users?
Likelihood: What are the odds that a user will encounter a failure in this area, either due to usage profiles or the technical risk of the problem?

Many such scales exist and can be used to quantify levels of quality risk.
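
As one illustration, the Python sketch below scores a few of the quality-risk classes listed earlier on that descending one-to-five scale and multiplies the three scores into a single risk priority number, a common aggregation convention (the article itself does not prescribe one). The risk areas and scores are hypothetical.

```python
# A minimal sketch of quality-risk prioritization on a descending
# scale: 1 = most risky, 5 = least risky per dimension. Multiplying
# the scores into one number is an assumed convention; the risk
# areas and scores below are hypothetical.

quality_risks = [
    # (risk area, severity, priority, likelihood)
    ("Data quality: database corruption",       1, 1, 3),
    ("Reliability: crashes at peak load",       2, 2, 2),
    ("Localization: wrong currency formatting", 3, 2, 4),
    ("Usability: cumbersome dialog flow",       4, 3, 2),
]

def rpn(severity: int, priority: int, likelihood: int) -> int:
    """Aggregate the three 1-5 scores; lower means test it harder."""
    return severity * priority * likelihood

# Rank the risk areas, most deserving of test effort first.
for area, sev, pri, like in sorted(quality_risks, key=lambda r: rpn(*r[1:])):
    print(f"RPN {rpn(sev, pri, like):3d}  {area}")
```

Ranked this way, the test effort spends the most time, resources, and attention on the areas at the top of the list.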

Analyzing Quality Risks

A slightly more formal approach is the one described in ISO 9126, a standard from the International Organization for Standardization. This standard proposes that the quality of a software system can be measured along six major characteristics:

Functionality: Does the system provide the required capabilities?
Reliability: Does the system work as needed when needed?
Usability: Is the system intuitive, comprehensible, and handy to the users?
Efficiency: Is the system sparing in its use of resources?
Maintainability: Can operators, programmers, and customers upgrade the system as needed?
Portability: Can the system be moved to new environments as needed?
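
One practical use of these characteristics is as a coverage checklist for a test plan. The sketch below flags the characteristics a hypothetical plan has no tests for; the plan's coverage set is invented for illustration.

```python
# A minimal sketch: using the six ISO 9126 characteristics as a
# checklist when reviewing a test plan. The plan's coverage set
# is hypothetical.

ISO_9126 = {
    "functionality":   "Does the system provide the required capabilities?",
    "reliability":     "Does the system work as needed when needed?",
    "usability":       "Is the system intuitive, comprehensible, and handy?",
    "efficiency":      "Is the system sparing in its use of resources?",
    "maintainability": "Can the system be upgraded as needed?",
    "portability":     "Can the system be moved to new environments?",
}

# Characteristics our hypothetical test plan already covers.
planned_coverage = {"functionality", "reliability", "efficiency"}

for characteristic, question in ISO_9126.items():
    status = "covered" if characteristic in planned_coverage else "GAP"
    print(f"{status:>7}  {characteristic}: {question}")
```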

Not every quality risk can be a high priority. When discussing risks to system quality, I don't ask people, "Do you want us to make sure this area works?" In the absence of tradeoffs, everyone wants better quality. Setting the standard for quality higher requires more money spent on testing, pushes out the release date, and can distract from more important priorities, like focusing the team on the next release. To determine the real priority of a potential problem, ask people, "How much money, time, and attention would you be willing to give to problems in this area? Would you pay for an extra tester to look for bugs in this area, and would you delay shipping the product if that tester succeeded in finding bugs?"

While achieving better quality generates a positive return on investment in the long run, as with the stock market, you get a better return on investment where the risk is higher. Happily, unlike the stock market, the risk of your test effort failing does not increase when you take on the most important risks to system quality; rather, your chances of test success increase.

posted on Monday, October 27, 2003 8:33 PM | Filed Under [ Quality, Software Testing, Risk Management ]

    Feedback


    # re: Investing in Software Testing

    Good Article
    10/28/2003 4:40 AM | Manish Khandat

    # re: Investing in Software Testing

Thank you very much, Manish.
    10/28/2003 10:05 PM | Siva

    # re: Investing in Software Testing

    Cool one.
    2/23/2005 10:58 AM | Ashok kumar raja

    # re: Investing in Software Testing

Do you have any idea how much of the project cost gets attributed to software testing? Any broad guidelines?
    10/4/2006 6:56 PM | Ananth

    # re: Investing in Software Testing

    Very good article
    9/6/2007 2:51 AM | Leen

    # re: Investing in Software Testing

Thank you for making some useful points!

I would like to introduce a good blog, Software Testing Space, which has a number of useful posts on software testing. Have a look at http://inderpsingh.blogspot.com/
    1/7/2010 9:54 AM | Inder P Singh