Test Notes

I don't make software; I make it better

Tuesday, November 4, 2003

How Many Bugs Do Regression Tests Find?

What percentage of bugs are found by rerunning tests? That is, what's the value of this equation:

100 × (number of bugs in a release found by re-executing tests) / (number of bugs found by running all tests, for the first or the Nth time) ?
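To make the ratio concrete, here is a minimal sketch in Python; the two counts are hypothetical placeholders, not figures from the article.

# Hypothetical counts for one release; substitute numbers from your own defect tracker.
bugs_found_by_rerunning_tests = 12   # bugs found by re-executing existing tests
bugs_found_by_all_tests = 150        # bugs found by running all tests (1st or Nth time)

regression_find_rate = 100 * bugs_found_by_rerunning_tests / bugs_found_by_all_tests
print(f"Regression tests found {regression_find_rate:.1f}% of the bugs found by testing")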

An excellent article; click the title for more.

Posted On Tuesday, November 4, 2003 4:34 AM | Comments (0) | Filed Under [ Software Testing ]

Metrics for evaluating application system testing

Metric = Formula

Test Coverage = number of units (KLOC/FP) tested / total size of the system
Number of tests per unit size = number of test cases per KLOC/FP
Acceptance criteria tested = acceptance criteria tested / total acceptance criteria
Defects per size = defects detected / system size
Test cost (in %) = cost of testing / total cost × 100
Cost to locate defect = cost of testing / number of defects located
Achieving Budget = actual cost of testing / budgeted cost of testing
Defects detected in testing = defects detected in testing / total system defects
Defects detected in production = defects detected in production / system size
Quality of Testing = number of defects found during testing / (number of defects found during testing + number of acceptance defects found after delivery) × 100
Effectiveness of testing to business = loss due to problems / total resources processed by the system
System complaints = number of third-party complaints / number of transactions processed
Scale of Ten = assessment of testing by giving a rating on a scale of 1 to 10
Source Code Analysis = number of source code statements changed / total number of tests
Effort Productivity, measured two ways:
Test Planning Productivity = number of test cases designed / actual effort for design and documentation
Test Execution Productivity = number of test cycles executed / actual effort for testing
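A minimal sketch in Python of a few of these formulas; the input figures are hypothetical and stand in for values you would pull from your own test and defect records.

# Hypothetical project figures, for illustration only.
kloc_tested, total_kloc = 42.0, 60.0        # units tested vs. total system size (KLOC)
defects_found = 57                          # defects detected by testing
testing_cost, total_cost = 25000, 180000    # in your currency of choice
acceptance_defects_after_delivery = 9

test_coverage = kloc_tested / total_kloc
defects_per_size = defects_found / total_kloc
test_cost_pct = testing_cost / total_cost * 100
cost_to_locate_defect = testing_cost / defects_found
quality_of_testing = defects_found / (defects_found + acceptance_defects_after_delivery) * 100

print(f"Test coverage: {test_coverage:.0%}")
print(f"Defects per KLOC: {defects_per_size:.2f}")
print(f"Test cost: {test_cost_pct:.1f}% of total cost")
print(f"Cost to locate a defect: {cost_to_locate_defect:.0f}")
print(f"Quality of testing: {quality_of_testing:.1f}%")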

Posted On Tuesday, November 4, 2003 4:31 AM | Filed Under [ Software Testing Quality Metrics ]

Common definitions for testing - A Set of Testing Myths:

“Testing is the process of demonstrating that defects are not present in the application that was developed.”

“Testing is the activity or process which shows or demonstrates that a program or system performs all intended functions correctly.”

“Testing is the activity of establishing the necessary “confidence” that a program or system does what it is supposed to do, based on the set of requirements that the user has specified.”

These myths are still entrenched in much of how we collectively view testing, and this mind-set sets us up for failure before we even start really testing! So what is the real definition of testing?

“Testing is the process of executing a program/system with the intent of finding errors.”

The primary axiom for the testing equation within software development is this:

“A test that, when executed, reveals a problem in the software is a success.”

Posted On Tuesday, November 4, 2003 4:29 AM | Comments (5) | Filed Under [ Software Testing ]

Risk Management

Risk avoidance: risk is avoided by obviating the possibility that the undesirable event will happen. You refuse to commit to delivering feature F by milestone M - don't sign the contract until the software is done. This avoids the risk. Once you enter into a contract to deliver a specific scope by a specific date, the risk that it won't come about exists.

Risk reduction: this consists of minimizing the likelihood of the undesirable event. XP reduces the likelihood that you will lack some features at each milestone by reducing the amount of "extra" work to be done, such as paperwork or documentation, and improving overall quality so as to make development faster.

Risk mitigation: this consists of minimizing the impact of the undesirable event. XP has active mitigation for the "schedule risk", by insisting that the most valuable features be done first; this reduces the likelihood that important features will be left out of milestone M.

Risk acceptance: just grit your teeth and take your beating. So we're missing feature F by milestone M - we'll ship with what we have by that date. After reduction and mitigation, XP manages any residual risk this way.

Risk transfer: this consists of getting someone else to take the risk in your place. Insurance is a risk transfer tactic. You pay a definite, known-with-certainty amount of money; the insurer will reimburse you if the risk of not completing feature F by milestone M materializes. XP makes no provision for this. Has anyone ever insured a software project against schedule/budget overrun?

Contingency planning: substituting one risk for another, so that if the undesirable event occurs you have a "Plan B" which can compensate for the ill consequences. If we miss critical milestone M1 with feature set F1, we'll shelve the project and reassign all resources to our back-burner project which is currently being worked on by interns.

Key point from all the above: risk management starts with identifying specific risks. Also, I think you can perform conscious risk management using any process, method, technique or approach. It's important to recognize that any process, etc. simply changes the risk landscape; your project will always have one single biggest risk, then a second biggest risk, and so on.

Also: risks, like requirements, don't have the courtesy to stay put over the life of a project. They will change - old ones will bow out as risk tactics take effect, new ones will take their place.

Risk management is like feedback. If you're not going to pay attention to it, you're wasting your time. More than once I've tried to adopt a risk-oriented approach to projects, only to have management react something like, "Oh, you think that's a risk. Well, thank you for telling us. We're happy to have had that risk reduced. Now proceed as before."

One risk I often raise in projects is skills risk. Developers who have only ever written Visual Basic are supposed to crank out Java code, that sort of thing. Not once have I seen a response of risk avoidance (substituting other, trained team members for the unskilled ones), reduction (training the worker in Java), or mitigation (making provision for closer review of the person's code). It's always been acceptance - "We know it's less than ideal to have this guy working on that project, but he's what we've got at the moment. Can't hire anyone on short order, no time for training, no time for more reviews."

If you only ever have one tactic for dealing with risk, your risk "management" is a no-brainer.

---- From the Laurent Bossavit  weblog
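To make the "one single biggest risk, then a second biggest risk" idea concrete, here is a minimal risk-register sketch in Python (my own illustration, not part of the quoted weblog). The risk entries, numbers, and field names are made up; exposure is taken as likelihood × impact and used to rank the list.

from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    likelihood: float   # 0.0 - 1.0, subjective estimate that the event occurs
    impact: int         # e.g. days of schedule slip if it materializes
    tactic: str         # avoid / reduce / mitigate / accept / transfer / contingency

    @property
    def exposure(self) -> float:
        # Simple exposure score used to order the register.
        return self.likelihood * self.impact

# Made-up register entries for illustration.
register = [
    Risk("Feature F misses milestone M", 0.4, 20, "mitigate: most valuable features first"),
    Risk("Team knows Visual Basic, project needs Java", 0.7, 15, "reduce: training, closer reviews"),
    Risk("Key tester leaves mid-project", 0.2, 10, "accept"),
]

# The single biggest risk comes first, then the second biggest, and so on.
for risk in sorted(register, key=lambda r: r.exposure, reverse=True):
    print(f"{risk.exposure:5.1f}  {risk.description}  [{risk.tactic}]")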

Posted On Tuesday, November 4, 2003 1:56 AM | Comments (13) | Filed Under [ Risk Management ]

Three Questions About Each Bug You Find

1. Is this mistake somewhere else also?

2. What next bug is hidden behind this one?

3. What should I do to prevent bugs like this?

Posted On Tuesday, November 4, 2003 1:54 AM | Filed Under [ Software Testing Testing/QA-FAQ ]

TEST AUTOMATION FRAMEWORKS

An excellent ebook by Carl Nagle

Posted On Tuesday, November 4, 2003 1:53 AM | Comments (0) | Filed Under [ Automated Testing ]

The Product Quality Measures

1. Customer satisfaction index
(Quality ultimately is measured in terms of customer satisfaction.)
Surveyed before product delivery and after product delivery
(and ongoing on a periodic basis, using standard questionnaires)
Number of system enhancement requests per year
Number of maintenance fix requests per year
User friendliness: call volume to customer service hotline
User friendliness: training time per new user
Number of product recalls or fix releases (software vendors)
Number of production re-runs (in-house information systems groups)

2. Delivered defect quantities
Normalized per function point (or per LOC)
At product delivery (first 3 months or first year of operation)
Ongoing (per year of operation)
By level of severity
By category or cause, e.g.: requirements defect, design defect, code defect,
documentation/on-line help defect, defect introduced by fixes, etc.

3. Responsiveness (turnaround time) to users
Turnaround time for defect fixes, by level of severity
Time for minor vs. major enhancements; actual vs. planned elapsed time

4. Product volatility
Ratio of maintenance fixes (to repair the system & bring it into
compliance with specifications), vs. enhancement requests
(requests by users to enhance or change functionality)

5. Defect ratios
Defects found after product delivery per function point
Defects found after product delivery per LOC
Ratio of pre-delivery defects to annual post-delivery defects
Defects per function point of the system modifications

6. Defect removal efficiency (see the sketch after this list)
Number of post-release defects (found by clients in field operation),
categorized by level of severity
Defects found internally prior to release (via inspections and testing),
as a percentage of all defects
("all defects" means defects found internally plus those found externally
by customers in the first year after product delivery)

7. Complexity of delivered product
McCabe's cyclomatic complexity counts across the system
Halstead’s measure
Card's design complexity measures
Predicted defects and maintenance costs, based on complexity measures

8. Test coverage
Breadth of functional coverage
Percentage of paths, branches or conditions that were actually tested
Percentage by criticality level: perceived level of risk of paths
The ratio of the number of detected faults to the number of predicted faults.

9. Cost of defects
Business losses per defect that occurs during operation
Business interruption costs; costs of work-arounds
Lost sales and lost goodwill
Litigation costs resulting from defects
Annual maintenance cost (per function point)
Annual operating cost (per function point)
Measurable damage to your boss's career

10. Costs of quality activities
Costs of reviews, inspections and preventive measures
Costs of test planning and preparation
Costs of test execution, defect tracking, version and change control
Costs of diagnostics, debugging and fixing
Costs of tools and tool support
Costs of test case library maintenance
Costs of testing & QA education associated with the product
Costs of monitoring and oversight by the QA organization
(if separate from the development and test organizations)

11. Re-work
Re-work effort (hours, as a percentage of the original coding hours)
Re-worked LOC (source lines of code, as a percentage of the total delivered LOC)
Re-worked software components (as a percentage of the total delivered components)

12. Reliability
Availability (percentage of time a system is available, versus the time
the system is needed to be available)
Mean time between failure (MTBF)
Mean time to repair (MTTR)
Reliability ratio (MTBF / MTTR)
Number of product recalls or fix releases
Number of production re-runs as a ratio of production runs
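
To make item 6 concrete, here is a minimal sketch in Python of the defect removal efficiency calculation; the defect counts are hypothetical placeholders, not data from any real product.

# Hypothetical defect counts for one release, for illustration only.
defects_found_internally = 180   # found via inspections and testing before release
defects_found_by_customers = 20  # found in field operation in the first year

all_defects = defects_found_internally + defects_found_by_customers
defect_removal_efficiency = defects_found_internally / all_defects * 100

print(f"Defect removal efficiency: {defect_removal_efficiency:.1f}%")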

Posted On Tuesday, November 4, 2003 1:51 AM | Filed Under [ Quality Metrics ]
