Test Notes

I don't make software; I make it better

Monday, October 27, 2003

What is Quality?

The definition of the term quality is an issue. Judging from an interesting discussion of the meaning of quality, a surprising number of people still think software quality is simply the absence of errors. Dictionary definitions are too vague to be of much help. The only relevant definition offered by the Oxford English Dictionary (Oxford, 1993), for instance, is peculiar excellence or superiority. Noteworthy here is that quality cannot be discussed for something in isolation: comparison is intrinsic.

Many software engineering references define software quality as correct implementation of the specification. Such a definition can be used during product development, but it is inadequate for facilitating comparisons between products. Standards organizations have tended to refer to meeting needs or expectations, e.g. the ISO defines quality as the totality of features and characteristics of a product or service that bears on its ability to satisfy stated or implied needs.

The IEEE defines quality as (1) the degree to which a system, component, or process meets specified requirements, and (2) the degree to which a system, component, or process meets customer or user needs or expectations. An older IEEE standard defines software quality as the degree to which software possesses a desired combination of attributes.

Quality has been variously defined as:

  • Excellence (Socrates, Plato, Aristotle)
  • Value (Feigenbaum 1951, Abbot 1955)
  • Conformance to specification (Levitt 1972, Gilmore 1974)
  • Fit for purpose (Juran 1974)
  • Meeting or exceeding customers’ expectations (Gronroos 1983, Parasuraman, Zeithaml & Berry 1985)
  • Loss avoidance (Taguchi 1989)

    In short, these six definitions show different aspects of quality, and all can be applied to software development. We often find our products marketed for their excellence. We want to delight our customers with our products to build long-term business relationships. Many countries' trade laws oblige us to sell a product only when it is fit for the purpose to which our customer tells us they will put it. When purchasing managers look at our software, they may judge comparable products on value, knowing that this may stop them buying the excellent product. In managing software development, efficient and effective development processes help avoid losses from rework and reduce later support and maintenance budgets. In testing, we work to see that the product conforms to specification.

    Thanks to Carol Long

  • Posted On Monday, October 27, 2003 9:53 PM | Comments (17) | Filed Under [ Quality ]

    The Software Inspection Process

    A great resource on the Software Inspection Process.  The site gives a detailed description of all stages in the Software Inspection Process and lists the different reports produced after an inspection.


    Posted On Monday, October 27, 2003 9:49 PM | Comments (0) | Filed Under [ Quality ]

    Reviews, Inspections, and Walkthroughs

    In a review , a work product is examined for defects by individuals other than the person who produced it.  A Work Product is any important deliverable created during the requirements, design, coding, or testing phase of software development. 

    Research shows that reviews are one of the best ways to ensure quality requirements, giving you as high as a 10-to-1 return on investment.  Reviews help you to discover defects and to ensure product compliance with specifications, standards, or regulations.

    Software Inspections are a disciplined engineering practice for detecting and correcting defects in software artifacts, and preventing their leakage into field operations. 

    Software Inspections are a reasoning activity performed by practitioners playing the defined roles of Moderator, Recorder, Reviewer, Reader, and Producer. 

    Moderator: Responsible for ensuring that the inspection procedures are performed throughout the entire inspection process.  Responsibilities include:

  • Verifying the work product's readiness for inspection
  • Verifying that the entry criteria are met
  • Assembling an effective inspection team
  • Keeping the inspection meeting on track
  • Verifying that the exit criteria are met

    Recorder: The Recorder documents all defects that arise from the inspection meeting, including where each defect was found.  Additionally, every defect is assigned a defect category and type. 

    Reviewer: All members of the Inspection Team also play the Reviewer role, independent of any other roles assigned.  The Reviewer is responsible for analyzing the work product and detecting defects within it. 

    Reader: The Reader is responsible for leading the Inspection Team through the inspection meeting by reading aloud small logical units, paraphrasing where appropriate.

    Producer: The person who originally constructed the work product.  The individual that assumes the role of Producer will be ultimately responsible for updating the work product after the inspection. 

    In a Walkthrough, the producer describes the product and asks for comments from the participants.  These gatherings generally serve to inform participants about the product rather than correct it. 

  • Posted On Monday, October 27, 2003 9:39 PM | Comments (14) | Filed Under [ Quality Software Testing ]

    XP Testing Without XP: Taking Advantage of Agile Testing Practices

    An excellent article by Lisa Crispin: "XP Testing Without XP: Taking Advantage of Agile Testing Practices".

    Posted On Monday, October 27, 2003 9:27 PM | Comments (2) | Filed Under [ Agile Testing Practices ]

    Investing in Software Testing

    What Does Quality Cost?

    The title of Phil Crosby's book says it all: Quality Is Free. Why is quality free? Like Crosby and J.M. Juran, Jim Campenella illustrates a technique for analyzing the costs of quality in Principles of Quality Costs. Campenella breaks down those costs as follows:

    Cost of Quality = Cost of conformance + Cost of nonconformance

    Conformance Costs include Prevention Costs and Appraisal Costs.
    Prevention costs include money spent on quality assurance tasks like training, requirements and code reviews, and other activities that promote good software. Appraisal costs include money spent on planning test activities, developing test cases and data, and executing those test cases once.

    Nonconformance costs come in two flavors: Internal Failures and External Failures. The costs of internal failure include all expenses that arise when test cases fail the first time they are run, as they often do. A programmer incurs a cost of internal failure while debugging problems found during her own unit and component testing.

    Once we get into formal testing in an independent test team, the costs of internal failure increase. Think through the process: The tester researches and reports the failure, the programmer finds and fixes the fault, the release engineer produces a new release, the system administration team installs that release in the test environment, and the tester retests the new release to confirm the fix and to check for regression.

    The costs of external failure are those incurred when, rather than a tester finding a bug, the customer does. These costs will be even higher than those associated with either kind of internal failure, programmer-found or tester-found. In these cases, not only does the same process described for tester-found bugs occur, but you also incur the technical support overhead and the more expensive process of releasing a fix to the field rather than to the test lab. In addition, consider the intangible costs: angry customers, damage to the company image, lost business, and maybe even lawsuits.
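    The breakdown above can be sketched in a few lines of Python. The figures below are invented placeholders purely for illustration, not real project data:

```python
# Cost-of-quality breakdown in the style described above.
# All figures are illustrative placeholders, not real project data.

def cost_of_quality(prevention, appraisal, internal_failure, external_failure):
    """Cost of quality = cost of conformance + cost of nonconformance."""
    conformance = prevention + appraisal          # training, reviews, test planning...
    nonconformance = internal_failure + external_failure  # rework, support, returns...
    return conformance + nonconformance

# Spending more on prevention and appraisal typically shrinks the
# (much larger) failure costs, lowering the total.
before = cost_of_quality(prevention=10_000, appraisal=20_000,
                         internal_failure=60_000, external_failure=90_000)
after = cost_of_quality(prevention=30_000, appraisal=40_000,
                        internal_failure=25_000, external_failure=15_000)
print(before, after)  # 180000 110000
```

    The point of the sketch is only that the total is what matters: shifting spend between the four categories, not eliminating any of them, is how the equation is minimized.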

    Two observations lay the foundation for the enlightened view of testing as an investment. First, like any cost equation in business, we will want to minimize the cost of quality. Second, while it is often cheaper to prevent problems than to repair them, if we must repair problems, internal failures cost less than external failures.

    The Risks to System Quality

    Myriad risks - i.e., factors possibly leading to loss or injury - menace software development. When these risks become realities, some projects fail. Wise project managers plan for and manage risks. In any software development project, we can group risks into four categories.
    Financial risks: How might the project overrun the budget?
    Schedule risks: How might the project exceed the allotted time?
    Feature risks: How might we build the wrong product?
    Quality risks: How might the product lack customer-satisfying behaviors or possess customer-dissatisfying behaviors?

    Testing allows us to assess the system against the various risks to system quality, which allows the project team to manage and balance quality risks against the other three areas.

    Classes of Quality Risks
    It's important for test professionals to remember that many kinds of quality risks exist. The most obvious is functionality: Does the software provide all the intended capabilities? For example, a word processing program that does not support adding new text in an existing document is worthless.
    While functionality is important, remember my self-deprecating anecdote in the last article. In that example, my test team and I focused entirely on functionality to the exclusion of important items like installation. In general, it's easy to over-emphasize a single quality risk and misalign the testing effort with customer usage. Consider the following examples of other classes of quality risks.

  • Use cases: working features fail when used in realistic sequences.
  • Robustness: common errors are handled improperly.
  • Performance: the system functions properly, but too slowly.
  • Localization: problems with supported languages, time zones, currencies, etc.
  • Data quality: a database becomes corrupted or accepts improper data.
  • Usability: the software's interface is cumbersome or inexplicable.
  • Volume/capacity: at peak or sustained loads, the system fails.
  • Reliability: too often -- especially at peak loads -- the system crashes, hangs, kills sessions, and so forth.

    Tailoring Testing to Quality Risk Priority

    To provide maximum return on the testing investment, we have to adjust the amount of time, resources, and attention we pay to each risk based on its priority. The priority of a risk to system quality arises from the extent to which that risk can and might affect the customers’ and users’ experiences of quality. In other words, the more likely a problem or the more serious the impact of a problem, the more testing that problem area deserves.

    You can prioritize in a number of ways. One approach I like is to use a descending scale from one (most risky) to five (least risky) along three dimensions.

    Severity: How dangerous is a failure of the system in this area?
    Priority: How much does a failure of the system in this area compromise the value of the product to customers and users?
    Likelihood: What are the odds that a user will encounter a failure in this area, either due to usage profiles or the technical risk of the problem?

    Many such scales exist and can be used to quantify levels of quality risk.
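    One minimal way to implement such a scale is to multiply the three ratings, so that a lower product means a higher-priority risk. This is a sketch, not a standard formula, and the risk areas listed are hypothetical examples:

```python
# Sketch of quality-risk prioritization along the three dimensions above.
# Ratings run from 1 (most risky) to 5 (least risky), so a LOWER
# product means a HIGHER-priority risk. The risk areas are hypothetical.

def risk_priority(severity, priority, likelihood):
    for rating in (severity, priority, likelihood):
        assert 1 <= rating <= 5, "ratings must be on the 1-5 scale"
    return severity * priority * likelihood

risks = {
    "data corruption": risk_priority(severity=1, priority=1, likelihood=2),
    "slow report generation": risk_priority(severity=3, priority=2, likelihood=2),
    "cosmetic UI glitches": risk_priority(severity=5, priority=4, likelihood=3),
}

# Spend testing time on the riskiest areas first.
for name, product in sorted(risks.items(), key=lambda item: item[1]):
    print(f"{product:3d}  {name}")
```

    Sorting by the product gives a defensible first cut at where to spend test effort, which stakeholders can then adjust by hand.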

    Analyzing Quality Risks

    A slightly more formal approach is the one described in ISO 9126, a standard from the International Organization for Standardization. It proposes that the quality of a software system can be measured along six major characteristics:

    Functionality: Does the system provide the required capabilities?
    Reliability: Does the system work as needed when needed?
    Usability: Is the system intuitive, comprehensible, and handy to the users?
    Efficiency: Is the system sparing in its use of resources?
    Maintainability: Can operators, programmers, and customers upgrade the system as needed?
    Portability: Can the system be adapted to new environments and platforms?

    Not every quality risk can be a high priority. When discussing risks to system quality, I don’t ask people, "Do you want us to make sure this area works?" In the absence of tradeoffs, everyone wants better quality. Setting the standard for quality higher requires more money spent on testing, pushes out the release date, and can distract from more important priorities—like focusing the team on the next release. To determine the real priority of a potential problem, ask people, "How much money, time, and attention would you be willing to give to problems in this area? Would you pay for an extra tester to look for bugs in this area, and would you delay shipping the product if that tester succeeded in finding bugs?" While achieving better quality generates a positive return on investment in the long run, as with the stock market, you get a better return on investment where the risk is higher. Happily, unlike the stock market, the risk of your test effort failing does not increase when you take on the most important risks to system quality, but rather your chances of test success increase.

  • Posted On Monday, October 27, 2003 8:33 PM | Comments (6) | Filed Under [ Quality Software Testing Risk Management ]

    Manual or Automated?

    Summary: Automated test tools are powerful aids to improving the return on the testing investment when used wisely. Some tests inherently require an automated approach to be effective, but others must be manual. In addition, automated testing projects that fail are expensive and politically dangerous. How can we recognize whether to automate a test or run it manually, and how much money should we spend on a test?

    When Test Automation Makes Sense

    Let’s start with the tests that ideally are automated. These include:

  • Regression and confirmation. Rerunning a test against a new release to ensure that behavior remains unbroken—or to confirm that a bug fix did indeed fix the underlying problem—is a perfect fit for automated testing. The business case for test automation outlined in Software Test Automation by Mark Fewster and Dorothy Graham is built around this kind of testing.

  • Monkey (or random). Tests that fire large amounts or long sequences of data, transactions, or other inputs at a system in a random search for errors are easily and profitably automated.

  • Load, volume, and capacity. Sometimes, systems must support tremendous loads. On one project, we had to test how the system would respond to 50,000 simultaneous users, which ruled out manual testing! Two Linux systems running custom load-generating programs filled the bill.

  • Performance and reliability. With the rise of Web-based systems, more and more automated testing is aimed at looking for slow or flaky behavior on Web systems.

  • Structural, especially API-based unit, component, and integration. Most structural testing involves harnesses of some sort, which brings you most of the way into automation. Again, the article I wrote with Greg Kubaczkowski, "Mission Made Possible" (STQE magazine, July/Aug. 2002), provides an example.

    Other tests that are well-suited for automation exist, such as the static testing of complexity and code standards compliance that I mentioned in the previous article. In general, automated tests have higher upfront costs—tools, test development, environments, and so forth—and lower costs to repeat the test.
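    As a toy illustration of the regression and monkey categories above, an automated suite might look like the following. The function under test, `parse_price`, is invented for the example:

```python
import random

# Hypothetical function under test -- stands in for any unit of the system.
def parse_price(text):
    """Parse a price string like '$1,234.50' into an integer number of cents."""
    cleaned = text.strip().lstrip("$").replace(",", "")
    return round(float(cleaned) * 100)

# Regression/confirmation tests: fixed cases, cheap to rerun on every release.
def test_regression():
    assert parse_price("$1,234.50") == 123450
    assert parse_price("0.99") == 99

# Monkey test: fire many random inputs, checking an invariant
# rather than hand-picked expected values.
def test_monkey(iterations=1_000):
    rng = random.Random(42)  # seeded so any failure is reproducible
    for _ in range(iterations):
        dollars = rng.randint(0, 10**6)
        cents = rng.randint(0, 99)
        text = f"${dollars:,}.{cents:02d}"
        assert parse_price(text) == dollars * 100 + cents

test_regression()
test_monkey()
```

    The regression tests pay for themselves on the second release; the monkey test trades precision of the oracle for sheer volume of inputs, which is exactly the tradeoff that makes it worth automating.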

    When to Focus on Manual Testing


    High per-test or maintenance costs are one indicator that a test should be done manually. Another is the need for human judgment to assess the correctness of the result or extensive, ongoing human intervention to keep the test running. For these reasons, the following tests are a good fit for manual testing:

  • Installation, setup, operations, and maintenance. In many cases, these tests involve loading CD-ROMs and tapes, changing hardware, and other ongoing hand-holding by the tester.

  • Configuration and compatibility. Like operations and maintenance testing, these tests require reconfiguring systems and networks, installing software and hardware, and so forth, all requiring human intervention.

  • Error handling and recovery. Again, the need to force errors—by powering off a server, for example—means that people must stay engaged during test execution.

  • Localization. Only a human tester with appropriate skills can decide whether a translation makes no sense, is culturally offensive, or is otherwise inappropriate. (Currency, date, and time testing can be automated, but the need to rerun these tests for regression is limited.)

  • Usability. As with localization, human judgment is needed to check for problems with the facility, simplicity, and elegance of the user interface and workflows.

  • Documentation and help. Like usability and localization, checking documentation requires human judgment.

    Wildcards

    In some cases, tests can be done manually, be automated, or both.


  • Functional. Functionality testing can often be automated, and automated functional testing is often part of an effort to create a regression test suite or smoke test. However, it makes sense to get the testing process under control manually before trying to automate functional testing. In addition, you’ll want to keep some of the testing manual.

  • Use cases (user scenarios). By stringing together functional tests into workflows, you can create realistic user scenarios, whether manual or automated. The trick here is to avoid automation if many workflows involve human intervention.

  • User interface. Basic testing of the user interface can be automated, but beware of frequent or extensive changes to the user interface that can incur high maintenance costs for your automated suite.

  • Date and time handling. If the test system can reset the computer’s clocks automatically, then you can automate these tests.

    Higher per-test costs and needs for human skills, judgment, and interaction push towards manual testing. A need to repeat tests many times or reduce the cycle time for test execution pushes towards automated testing.
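    For the date and time handling case in particular, one common way to avoid resetting real clocks is to pass the current date into the code under test. This is a sketch; the license-expiry rule is invented for the example:

```python
from datetime import date

# Hypothetical code under test: a license that expires on a fixed date.
def license_valid(expiry: date, today: date) -> bool:
    """The current date is a parameter, so tests never touch the real clock."""
    return today <= expiry

# Automated date-handling tests: no machine clock manipulation required.
expiry = date(2003, 12, 31)
assert license_valid(expiry, today=date(2003, 12, 31))    # last valid day
assert not license_valid(expiry, today=date(2004, 1, 1))  # expired next day
```

    Injecting the clock this way turns a test that once needed manual setup into an ordinary automated check, at the cost of designing the code to accept the date as input.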

    Reasons to Be Careful with Automation

    Automated testing is a huge investment, one of the biggest that organizations make in testing. Tool licenses can easily hit six or seven figures. Neophytes can’t use most of these tools—regardless of what any glossy test tool brochure says—so training, consulting, and expert contractors can cost more than the tools themselves. Then there’s maintenance of the test scripts, which generally is more difficult and time-consuming than maintaining manual test cases.

  • Posted On Monday, October 27, 2003 8:27 PM | Filed Under [ Automated Testing Software Testing ]

    Quality Gurus

    The early Americans

    W Edwards Deming introduced the concepts of variation to the Japanese, along with a systematic approach to problem solving that later became known as the Deming or PDCA cycle. Later, in the West, he concentrated on management issues and produced his famous 14 Points. He attempted a summary of his 60 years' experience in his System of Profound Knowledge.

    Deming encouraged the Japanese to adopt a systematic approach to problem solving, which later became known as the Deming or PDCA (Plan, Do, Check, Action) cycle. He also pushed senior managers to become actively involved in their company's quality improvement programmes.

    Deming produced his 14 Points for Management, in order to help people understand and implement the necessary transformation. Deming said that adoption of, and action on, the 14 points are a signal that management intend to stay in business. They apply to small or large organisations, and to service industries as well as to manufacturing. However the 14 points should not be seen as the whole of his philosophy, or as a recipe for improvement. They need careful discussion in the context of one's own organisation.

    Before his death Deming appears to have attempted a summary of his 60 years' experience. This he called the System of Profound Knowledge. It describes four interrelated parts:

    Appreciation for a system
    This emphasises the need for managers to understand the relationships between functions and activities. Everyone should understand that the long-term aim is for everybody to gain - employees, shareholders, customers, suppliers, and the environment. Failure to accomplish the aim causes loss to everybody in the system.

    Knowledge of statistical theory
    This includes knowledge about variation, process capability, control charts, interactions and loss function. All these need to be understood to accomplish effective leadership, teamwork etc.

    Theory of knowledge
    All plans require prediction based on past experience. An example of success cannot be successfully copied unless the theory is understood.

    Knowledge of psychology
    It is necessary to understand human interactions. Differences between people must be used for optimisation by leaders. People have intrinsic motivation to succeed in many areas. Extrinsic motivators in employment may smother intrinsic motivation. These include pay rises and performance grading, although these are sometimes viewed as a way out for managers.

    Joseph M Juran focused on Quality Control as an integral part of management control in his lectures to the Japanese in the early 1950s. He believes that Quality does not happen by accident, it must be planned, and that Quality Planning is part of the trilogy of planning, control and improvement. He warns that there are no shortcuts to quality.

    There are many aspects to Juran's message on quality. Intrinsic is the belief that quality does not happen by accident, it must be planned.  His recent book Juran on Planning for Quality is perhaps the definitive guide to Juran's current thoughts and his structured approach to company-wide quality planning. His earlier Quality Control Handbook was much more technical in nature.

    Juran sees quality planning as part of the quality trilogy of quality planning, quality control and quality improvement. The key elements in implementing company-wide strategic quality planning are in turn seen as identifying customers and their needs; establishing optimal quality goals; creating measurements of quality; planning processes capable of meeting quality goals under operating conditions; and producing continuing results in improved market share, premium prices, and a reduction of error rates in the office and factory.

    Juran's Quality Planning Road Map consists of the following steps:

  • Identify who are the customers.
  • Determine the needs of those customers.
  • Translate those needs into our language.
  • Develop a product that can respond to those needs.
  • Optimise the product features so as to meet our needs as well as customer needs.
  • Develop a process which is able to produce the product.
  • Optimise the process.
  • Prove that the process can produce the product under operating conditions.
  • Transfer the process to Operations.

    Illustration of Quality Trilogy via a Control Chart

    Juran concentrates not just on the end customer, but identifies other external and internal customers. This affects his concept of quality, since one must also consider the 'fitness for use' of the interim product for the following internal customers. He illustrates this idea via the Quality Spiral.

    His formula for results is:

  • Establish specific goals to be reached.
  • Establish plans for reaching the goals.
  • Assign clear responsibility for meeting the goals.
  • Base the rewards on results achieved.

    Dr Juran warns that there are no shortcuts to quality and is sceptical of companies that rush into applying Quality Circles, since he doubts their effectiveness in the West. He believes that the majority of quality problems are the fault of poor management, rather than poor workmanship on the shop-floor. In general, he believes that management controllable defects account for over 80% of the total quality problems. Thus he claims that Philip Crosby's Zero Defects approach does not help, since it is mistakenly based on the idea that the bulk of quality problems arise because workers are careless and not properly motivated.

    Armand V Feigenbaum is the originator of Total Quality Control. He sees quality control as a business method rather than a technical one, and believes that quality has become the single most important force leading to organisational success and growth.

    Dr Armand V Feigenbaum is the originator of Total Quality Control. The first edition of his book Total Quality Control was completed whilst he was still a doctoral student at MIT.

    In his book Quality Control: Principles, Practices and Administration, Feigenbaum strove to move away from the then primary concern with technical methods of quality control, to quality control as a business method. Thus he emphasised the administrative viewpoint and considered human relations as a basic issue in quality control activities. Individual methods, such as statistics or preventive maintenance, are seen as only segments of a comprehensive quality control programme.

    Quality control itself is defined as:
    'An effective system for co-ordinating the quality maintenance and quality improvement efforts of the various groups in an organisation so as to enable production at the most economical levels which allow for full customer satisfaction.'

    He stresses that quality does not mean best but best for the customer use and selling price. The word control in quality control represents a management tool with 4 steps:

  • Setting quality standards
  • Appraising conformance to these standards
  • Acting when standards are exceeded
  • Planning for improvements in the standards.

    Quality control is seen as entering into all phases of the industrial production process, from customer specification and sale through design, engineering and assembly, and ending with shipment of product to a customer who is happy with it. Effective control over the factors affecting product quality is regarded as requiring controls at all important stages of the production process. These controls or jobs of quality control can be classified as:

  • New-design control
  • Incoming material control
  • Product control
  • Special process studies.

    Quality is seen as having become the single most important force leading to organisational success and company growth in national and international markets. Further, it is argued that:

    Quality is in its essence a way of managing the organisation and that, like finance and marketing, quality has now become an essential element of modern management.

    Thus a Total Quality System is defined as:

    The agreed company-wide and plantwide operating work structure, documented in effective, integrated technical and managerial procedures, for guiding the co-ordinated actions of the people, the machines and the information of the company and plant in the best and most practical ways to assure customer quality satisfaction and economical costs of quality.

    Operating quality costs are divided into:

  • Prevention costs including quality planning.
  • Appraisal costs including inspection.
  • Internal failure costs including scrap and rework.
  • External failure costs including warranty costs, complaints etc.

    Reductions in operating quality costs result from setting up a total quality system for two reasons:

  • Lack of effective, customer-orientated standards may mean the current quality of products is not optimal for their use
  • Expenditure on prevention costs can lead to a severalfold reduction in internal and external failure costs.

    The new 40th Anniversary edition of Dr A V Feigenbaum's book, Total Quality Control, now further defines TQC for the 1990s in the form of ten crucial benchmarks for total quality success. These are that:

  • Quality is a company-wide process.
  • Quality is what the customer says it is.
  • Quality and cost are a sum, not a difference.
  • Quality requires both individual and team zealotry.
  • Quality is a way of managing.
  • Quality and innovation are mutually dependent.
  • Quality is an ethic.
  • Quality requires continuous improvement.
  • Quality is the most cost-effective, least capital-intensive route to productivity.
  • Quality is implemented with a total system connected with customers and suppliers.

    These are the ten benchmarks for total quality in the 1990s. They make quality a way of totally focusing the company on the customer - whether it be the end user or the man or woman at the next work station or next desk. Most importantly, they provide the company with foundation points for successful implementation of its international quality leadership.

  • Posted On Monday, October 27, 2003 8:12 PM | Filed Under [ Quality ]
