Test Notes

I don't make software; I make it better

Wednesday, November 4, 2009

HP Quality Center 10 from a Test Manager’s Perspective

A very informative article on HP Quality Center 10. Don't miss the comments at the end, which give very useful information.

http://www.beteoblog.com/2009/03/02/hp-quality-center-10-from-a-test-manager%e2%80%99s-perspective/

 

Posted On Wednesday, November 4, 2009 5:47 PM | Comments (0) | Filed Under [ Software Testing ]

HP Quality Center vs. IBM Rational Quality Manager

Great article. 

http://www.beteoblog.com/2009/04/20/hp-quality-center-vs-ibm-rational-quality-manager/comment-page-1/#comment-888

 

Posted On Wednesday, November 4, 2009 11:22 AM | Comments (0) | Filed Under [ Software Testing ]

Sunday, October 28, 2007

MSDN - Test Center

Microsoft MSDN has launched "Test Center": a community where software testers can share knowledge and learn from each other.

 

 

Posted On Sunday, October 28, 2007 6:00 PM | Comments (3) | Filed Under [ Software Testing ]

Thursday, March 17, 2005

New Software Testing Group

I have recently created a new Software Testing group. Let us share our knowledge through it. Join the group and post only questions related to software testing, quality, and automated testing.

Link: http://groups-beta.google.com/group/SoftwareTesting

Posted On Thursday, March 17, 2005 3:41 PM | Filed Under [ Personal ]

Monday, October 27, 2003

XP Testing Without XP: Taking Advantage of Agile Testing Practices

An excellent article by Lisa Crispin: "XP Testing Without XP: Taking Advantage of Agile Testing Practices".

Posted On Monday, October 27, 2003 9:27 PM | Comments (2) | Filed Under [ Agile Testing Practices ]

Tuesday, January 31, 2006

Test Metrics

An excellent resource for testing metrics. You will find different software testing metrics, their definitions, their purpose, and how to calculate them. A must-read. Click the following link:

http://www.stpmag.com/downloads/stp-0507_testmetrics.htm

Posted On Tuesday, January 31, 2006 11:44 PM | Comments (22) | Filed Under [ Quality Metrics ]

Thursday, March 31, 2005

Mercury Interactive plans training centres

Mercury Interactive Corporation, the US-based provider of automated software-testing solutions, plans to start authorised training centres for software testing in India, to be run by franchisees.

The company hopes to have one or two centres in Delhi, Mumbai, Chennai, Bangalore and Hyderabad initially, according to Mr T. Srinivasan, Managing Director, Mercury India.

For more, read this.

Posted On Thursday, March 31, 2005 3:19 PM | Filed Under [ Links/Resources ]

Friday, August 12, 2005

Best Practices in Software Test Automation

A very good article on best practices in test automation. A must-read for everyone who works in test automation.

http://www.testfocus.co.za/Feature%20articles/july2005.htm

Posted On Friday, August 12, 2005 9:05 PM | Comments (18) | Filed Under [ Automated Testing ]

Record and Play for Mozilla / FireFox Browser

Source: QAnews.com

AdventNet QEngine is the first tool to offer, in a single product, record and playback of web pages for the following environments:

Mozilla 1.5, 1.6 and 1.7.3 [Linux and Windows]
IE 6.0 [Windows]

QEngine release 5.1.0 (to be released by May 2005) will add support for Firefox [Windows and Linux].

Posted On Friday, August 12, 2005 8:49 PM | Comments (4) | Filed Under [ Automated Testing ]

Tuesday, March 22, 2005

Black Box Software Testing

The best testing course by Cem Kaner & James Bach

Posted On Tuesday, March 22, 2005 7:49 PM | Filed Under [ Software Testing ]

Monday, October 27, 2003

Quality Gurus

The early Americans

W Edwards Deming introduced the Japanese to concepts of variation and to a systematic approach to problem solving. Later, in the West, he concentrated on management issues and produced his famous 14 Points. Before his death he attempted to summarise his 60 years' experience in his System of Profound Knowledge.

Deming encouraged the Japanese to adopt a systematic approach to problem solving, which later became known as the Deming or PDCA (Plan, Do, Check, Action) cycle. He also pushed senior managers to become actively involved in their company's quality improvement programmes.

Deming produced his 14 Points for Management, in order to help people understand and implement the necessary transformation. Deming said that adoption of, and action on, the 14 points are a signal that management intend to stay in business. They apply to small or large organisations, and to service industries as well as to manufacturing. However the 14 points should not be seen as the whole of his philosophy, or as a recipe for improvement. They need careful discussion in the context of one's own organisation.

Before his death Deming appears to have attempted a summary of his 60 years' experience. This he called the System of Profound Knowledge. It describes four interrelated parts:

Appreciation for a system
This emphasises the need for managers to understand the relationships between functions and activities. Everyone should understand that the long-term aim is for everybody to gain: employees, shareholders, customers, suppliers, and the environment. Failure to accomplish the aim causes loss to everybody in the system.

Knowledge of statistical theory
This includes knowledge about variation, process capability, control charts, interactions and loss function. All these need to be understood to accomplish effective leadership, teamwork etc.

Theory of knowledge
All plans require prediction based on past experience. An example of success cannot be successfully copied unless the theory is understood.

Knowledge of psychology
It is necessary to understand human interactions. Differences between people must be used for optimisation by leaders. People have intrinsic motivation to succeed in many areas. Extrinsic motivators in employment may smother intrinsic motivation. These include pay rises and performance grading, although these are sometimes viewed as a way out for managers.
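
To make the statistical part concrete, here is a minimal sketch (my own illustration, not from Deming's text) of the control-chart idea: defect counts per build are compared against 3-sigma limits to separate common-cause from special-cause variation. The data and function names are hypothetical.

import math

# Minimal c-chart sketch: 3-sigma control limits for per-build defect
# counts (Poisson assumption). All figures here are made up.
def c_chart_limits(defect_counts):
    c_bar = sum(defect_counts) / len(defect_counts)  # centre line
    sigma = math.sqrt(c_bar)
    lower = max(0.0, c_bar - 3 * sigma)              # counts cannot be negative
    upper = c_bar + 3 * sigma
    return c_bar, lower, upper

counts = [7, 4, 9, 5, 6, 12, 8, 5]                   # defects found per build
centre, lower, upper = c_chart_limits(counts)
print(f"centre={centre:.1f} limits=({lower:.1f}, {upper:.1f})")
for build, count in enumerate(counts, start=1):
    if not lower <= count <= upper:
        print(f"build {build}: {count} defects suggests special-cause variation")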

Joseph M Juran focused on Quality Control as an integral part of management control in his lectures to the Japanese in the early 1950s. He believes that quality does not happen by accident; it must be planned, and that quality planning is part of the trilogy of planning, control and improvement. He warns that there are no shortcuts to quality.

There are many aspects to Juran's message on quality. Intrinsic to it is the belief that quality does not happen by accident; it must be planned. His recent book Juran on Planning for Quality is perhaps the definitive guide to Juran's current thoughts and his structured approach to company-wide quality planning. His earlier Quality Control Handbook was much more technical in nature.

Juran sees quality planning as part of the quality trilogy of quality planning, quality control and quality improvement. The key elements in implementing company-wide strategic quality planning are in turn seen as identifying customers and their needs; establishing optimal quality goals; creating measurements of quality; planning processes capable of meeting quality goals under operating conditions; and producing continuing results in improved market share, premium prices, and a reduction of error rates in the office and factory.

Juran's Quality Planning Road Map consists of the following steps:

  • Identify who the customers are.
  • Determine the needs of those customers.
  • Translate those needs into our language.
  • Develop a product that can respond to those needs.
  • Optimise the product features so as to meet our needs as well as customer needs.
  • Develop a process which is able to produce the product.
  • Optimise the process.
  • Prove that the process can produce the product under operating conditions.
  • Transfer the process to Operations.

    [Figure: Illustration of the Quality Trilogy via a control chart]

    Juran concentrates not just on the end customer, but identifies other external and internal customers. This affects his concept of quality, since one must also consider the 'fitness for use' of the interim product for the following internal customers. He illustrates this idea via the Quality Spiral.

    His formula for results is:

  • Establish specific goals to be reached.
  • Establish plans for reaching the goals.
  • Assign clear responsibility for meeting the goals.
  • Base the rewards on results achieved.

    Dr Juran warns that there are no shortcuts to quality and is sceptical of companies that rush into applying Quality Circles, since he doubts their effectiveness in the West. He believes that the majority of quality problems are the fault of poor management, rather than poor workmanship on the shop-floor. In general, he believes that management controllable defects account for over 80% of the total quality problems. Thus he claims that Philip Crosby's Zero Defects approach does not help, since it is mistakenly based on the idea that the bulk of quality problems arise because workers are careless and not properly motivated.

    Armand V Feigenbaum is the originator of Total Quality Control. He sees quality control as a business method rather than a narrowly technical discipline, and believes that quality has become the single most important force leading to organisational success and growth.

    Dr Armand V Feigenbaum is the originator of Total Quality Control. The first edition of his book Total Quality Control was completed whilst he was still a doctoral student at MIT.

    In his book Quality Control: Principles, Practices and Administration, Feigenbaum strove to move away from the then primary concern with technical methods of quality control, to quality control as a business method. Thus he emphasised the administrative viewpoint and considered human relations as a basic issue in quality control activities. Individual methods, such as statistics or preventive maintenance, are seen as only segments of a comprehensive quality control programme.

    Quality control itself is defined as:
    'An effective system for co-ordinating the quality maintenance and quality improvement efforts of the various groups in an organisation so as to enable production at the most economical levels which allow for full customer satisfaction.'

    He stresses that quality does not mean 'best' but 'best for the customer use and selling price'. The word 'control' in quality control represents a management tool with four steps:

  • Setting quality standards
  • Appraising conformance to these standards
  • Acting when standards are exceeded
  • Planning for improvements in the standards.

    Quality control is seen as entering into all phases of the industrial production process, from customer specification and sale through design, engineering and assembly, and ending with shipment of product to a customer who is happy with it. Effective control over the factors affecting product quality is regarded as requiring controls at all important stages of the production process. These controls or jobs of quality control can be classified as:

  • New-design control
  • Incoming material control
  • Product control
  • Special process studies.

    Quality is seen as having become the single most important force leading to organisational success and company growth in national and international markets. Further, it is argued that:

    Quality is in its essence a way of managing the organisation, and, like finance and marketing, it has now become an essential element of modern management.

    Thus a Total Quality System is defined as:

    The agreed company-wide and plantwide operating work structure, documented in effective, integrated technical and managerial procedures, for guiding the co-ordinated actions of the people, the machines and the information of the company and plant in the best and most practical ways to assure customer quality satisfaction and economical costs of quality.

    Operating quality costs are divided into:

  • Prevention costs including quality planning.
  • Appraisal costs including inspection.
  • Internal failure costs including scrap and rework.
  • External failure costs including warranty costs, complaints etc.

    Reductions in operating quality costs result from setting up a total quality system for two reasons:

  • A lack of effective customer-orientated standards may mean the current quality of products is not optimal for their use
  • Expenditure on prevention costs can lead to a severalfold reduction in internal and external failure costs.
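
    As a rough worked example of these operating quality costs (my own illustrative figures, not Feigenbaum's), the sketch below totals the four categories and shows each one's share:

# Hypothetical cost-of-quality breakdown (illustrative figures only).
costs = {
    "prevention": 20_000,        # quality planning, training
    "appraisal": 35_000,         # inspection and testing
    "internal_failure": 60_000,  # scrap and rework
    "external_failure": 85_000,  # warranty costs, complaints
}
total = sum(costs.values())
for category, cost in costs.items():
    print(f"{category:17s} {cost:8,} ({cost / total:.0%})")
print(f"{'total':17s} {total:8,}")
# Feigenbaum's point: spending more on prevention tends to produce a
# severalfold reduction in the (usually larger) failure costs.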

    The new 40th Anniversary edition of Dr A V Feigenbaum's book, Total Quality Control, now further defines TQC for the 1990s in the form of ten crucial benchmarks for total quality success. These are that:

  • Quality is a company-wide process.
  • Quality is what the customer says it is.
  • Quality and cost are a sum, not a difference.
  • Quality requires both individual and team zealotry.
  • Quality is a way of managing.
  • Quality and innovation are mutually dependent.
  • Quality is an ethic.
  • Quality requires continuous improvement.
  • Quality is the most cost-effective, least capital-intensive route to productivity.
  • Quality is implemented with a total system connected with customers and suppliers.

    These are the ten benchmarks for total quality in the 1990s. They make quality a way of totally focusing the company on the customer - whether it be the end user or the man or woman at the next work station or next desk. Most importantly, they provide the company with foundation points for successful implementation of its international quality leadership.

    Posted On Monday, October 27, 2003 8:12 PM | Filed Under [ Quality ]

    Manual or Automated?

    Summary: Automated test tools are powerful aids to improving the return on the testing investment when used wisely. Some tests inherently require an automated approach to be effective, but others must be manual. In addition, automated testing projects that fail are expensive and politically dangerous. How can we recognize whether to automate a test or run it manually, and how much money should we spend on a test?

    When Test Automation Makes Sense

    Let’s start with the tests that ideally are automated. These include:

    • Regression and confirmation. Rerunning a test against a new release to ensure that behavior remains unbroken—or to confirm that a bug fix did indeed fix the underlying problem—is a perfect fit for automated testing. The business case for test automation outlined in Software Test Automation by Mark Fewster and Dorothy Graham is built around this kind of testing.

  • Monkey (or random). Tests that fire large amounts or long sequences of data, transactions, or other inputs at a system in a random search for errors are easily and profitably automated.

  • Load, volume, and capacity. Sometimes, systems must support tremendous loads. On one project, we had to test how the system would respond to 50,000 simultaneous users, which ruled out manual testing! Two Linux systems running custom load-generating programs filled the bill.

  • Performance and reliability. With the rise of Web-based systems, more and more automated testing is aimed at looking for slow or flaky behavior on Web systems.

  • Structural, especially API-based unit, component, and integration. Most structural testing involves harnesses of some sort, which brings you most of the way into automation. Again, the article I wrote with Greg Kubaczkowski, "Mission Made Possible" (STQE magazine, July/Aug. 2002), provides an example.

    Other tests that are well-suited for automation exist, such as the static testing of complexity and code standards compliance that I mentioned in the previous article. In general, automated tests have higher upfront costs—tools, test development, environments, and so forth—and lower costs to repeat the test.
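
    To make that cost trade-off concrete, here is a minimal break-even sketch (my own illustration with hypothetical figures, not from this article): automation pays for itself once the number of runs exceeds the upfront cost divided by the per-run saving.

# Break-even point for automating a single test (hypothetical figures).
upfront_cost = 400.0        # scripting effort plus tool-licence share
manual_cost_per_run = 25.0  # tester time for one manual execution
auto_cost_per_run = 5.0     # review and maintenance time per automated run

saving_per_run = manual_cost_per_run - auto_cost_per_run
break_even_runs = upfront_cost / saving_per_run
print(f"automation pays for itself after {break_even_runs:.0f} runs")
# 400 / 20 = 20 runs: a test rerun on every build recoups the
# investment quickly; a test run once a year never does.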

    When to Focus on Manual Testing


    High per-test or maintenance costs are one indicator that a test should be done manually. Another is the need for human judgment to assess the correctness of the result, or extensive, ongoing human intervention to keep the test running. For these reasons, the following tests are a good fit for manual testing:

  • Installation, setup, operations, and maintenance. In many cases, these tests involve loading CD-ROMs and tapes, changing hardware, and other ongoing hand-holding by the tester.

  • Configuration and compatibility. Like operations and maintenance testing, these tests require reconfiguring systems and networks, installing software and hardware, and so forth, all requiring human intervention.

  • Error handling and recovery. Again, the need to force errors—by powering off a server, for example—means that people must stay engaged during test execution.

  • Localization. Only a human tester with appropriate skills can decide whether a translation makes no sense, is culturally offensive, or is otherwise inappropriate. (Currency, date, and time testing can be automated, but the need to rerun these tests for regression is limited.)

  • Usability. As with localization, human judgment is needed to check for problems with the facility, simplicity, and elegance of the user interface and workflows.

  • Documentation and help. Like usability and localization, checking documentation requires human judgment.

    Wildcards

    In some cases, tests can be done manually, be automated, or both.


    • Functional. Functionality testing can often be automated, and automated functional testing is often part of an effort to create a regression test suite or smoke test. However, it makes sense to get the testing process under control manually before trying to automate functional testing. In addition, you’ll want to keep some of the testing manual.

  • Use cases (user scenarios). By stringing together functional tests into workflows, you can create realistic user scenarios, whether manual or automated. The trick here is to avoid automation if many workflows involve human intervention.

  • User interface. Basic testing of the user interface can be automated, but beware of frequent or extensive changes to the user interface that can incur high maintenance costs for your automated suite.

  • Date and time handling. If the test system can reset the computer’s clocks automatically, then you can automate these tests.

    Higher per-test costs and needs for human skills, judgment, and interaction push towards manual testing. A need to repeat tests many times or reduce the cycle time for test execution pushes towards automated testing.

    Reasons to Be Careful with Automation

    Automated testing is a huge investment, one of the biggest that organizations make in testing. Tool licenses can easily hit six or seven figures. Neophytes can’t use most of these tools—regardless of what any glossy test tool brochure says—so training, consulting, and expert contractors can cost more than the tools themselves. Then there’s maintenance of the test scripts, which generally is more difficult and time-consuming than maintaining manual test cases.

    Posted On Monday, October 27, 2003 8:27 PM | Filed Under [ Automated Testing, Software Testing ]

    Tuesday, November 4, 2003

    The Product Quality Measures

    1. Customer satisfaction index
    (Quality ultimately is measured in terms of customer satisfaction.)
    Surveyed before product delivery and after product delivery
    (and on-going on a periodic basis, using standard questionnaires)
    Number of system enhancement requests per year
    Number of maintenance fix requests per year
    User friendliness: call volume to customer service hotline
    User friendliness: training time per new user
    Number of product recalls or fix releases (software vendors)
    Number of production re-runs (in-house information systems groups)

    2. Delivered defect quantities
    Normalized per function point (or per LOC)
    At product delivery (first 3 months or first year of operation)
    Ongoing (per year of operation)
    By level of severity
    By category or cause, e.g.: requirements defect, design defect, code defect,
    documentation/on-line help defect, defect introduced by fixes, etc.

    3. Responsiveness (turnaround time) to users
    Turnaround time for defect fixes, by level of severity
    Time for minor vs. major enhancements; actual vs. planned elapsed time

    4. Product volatility
    Ratio of maintenance fixes (to repair the system & bring it into
    compliance with specifications), vs. enhancement requests
    (requests by users to enhance or change functionality)

    5. Defect ratios
    Defects found after product delivery per function point
    Defects found after product delivery per LOC
    Ratio of pre-delivery defects to annual post-delivery defects
    Defects per function point of the system modifications

    6. Defect removal efficiency
    Number of post-release defects (found by clients in field operation),
    categorized by level of severity
    Ratio of defects found internally prior to release (via inspections and testing),
    as a percentage of all defects
    All defects include defects found internally plus externally (by
    customers) in the first year after product delivery
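
    As a quick worked example of removal efficiency (my own illustrative counts): if 450 defects are found internally before release and customers find 50 in the first year, the efficiency is 450 / (450 + 50) = 90%.

# Defect removal efficiency with hypothetical counts.
found_internally = 450   # via inspections and testing, pre-release
found_in_field = 50      # found by customers in the first year
dre = found_internally / (found_internally + found_in_field)
print(f"defect removal efficiency = {dre:.0%}")  # 90%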

    7. Complexity of delivered product
    McCabe's cyclomatic complexity counts across the system
    Halstead’s measure
    Card's design complexity measures
    Predicted defects and maintenance costs, based on complexity measures
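
    As a small illustration of McCabe's measure (my own example, not from the text): cyclomatic complexity can be estimated as the number of decision points plus one. The crude keyword scan below is only a sketch; real tools work from the control-flow graph.

# Toy cyclomatic-complexity estimate: decision points + 1.
import re

def rough_cyclomatic_complexity(source: str) -> int:
    decisions = len(re.findall(r"\b(if|elif|for|while|and|or)\b", source))
    return decisions + 1

snippet = """
if balance < 0:
    charge_fee()
for item in cart:
    if item.fragile and item.heavy:
        pack_specially(item)
"""
print(rough_cyclomatic_complexity(snippet))  # 4 decision points + 1 = 5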

    8. Test coverage
    Breadth of functional coverage
    Percentage of paths, branches or conditions that were actually tested
    Percentage by criticality level: perceived level of risk of paths
    The ratio of the number of detected faults to the number of predicted faults.

    9. Cost of defects
    Business losses per defect that occurs during operation
    Business interruption costs; costs of work-arounds
    Lost sales and lost goodwill
    Litigation costs resulting from defects
    Annual maintenance cost (per function point)
    Annual operating cost (per function point)
    Measurable damage to your boss's career

    10. Costs of quality activities
    Costs of reviews, inspections and preventive measures
    Costs of test planning and preparation
    Costs of test execution, defect tracking, version and change control
    Costs of diagnostics, debugging and fixing
    Costs of tools and tool support
    Costs of test case library maintenance
    Costs of testing & QA education associated with the product
    Costs of monitoring and oversight by the QA organization
    (if separate from the development and test organizations)

    11. Re-work
    Re-work effort (hours, as a percentage of the original coding hours)
    Re-worked LOC (source lines of code, as a percentage of the total delivered LOC)
    Re-worked software components (as a percentage of the total delivered components)

    12. Reliability
    Availability (percentage of time a system is available, versus the time
    the system is needed to be available)
    Mean time between failure (MTBF)
    Mean time to repair (MTTR)
    Reliability ratio (MTBF / MTTR)
    Number of product recalls or fix releases
    Number of production re-runs as a ratio of production runs
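
    A quick worked example with hypothetical figures: with MTBF = 400 hours and MTTR = 2 hours, availability = MTBF / (MTBF + MTTR) = 400 / 402 ≈ 99.5%, and the reliability ratio is 400 / 2 = 200.

# Reliability arithmetic with hypothetical figures.
mtbf = 400.0  # mean time between failures, hours
mttr = 2.0    # mean time to repair, hours

availability = mtbf / (mtbf + mttr)  # steady-state availability
reliability_ratio = mtbf / mttr
print(f"availability = {availability:.2%}")           # 99.50%
print(f"reliability ratio = {reliability_ratio:.0f}") # 200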

    Posted On Tuesday, November 4, 2003 1:51 AM | Filed Under [ Quality Metrics ]

    Three Questions About Each Bug You Find

    1. Is this mistake somewhere else also?

    2. What next bug is hidden behind this one?

    3. What should I do to prevent bugs like this?

    Posted On Tuesday, November 4, 2003 1:54 AM | Filed Under [ Software Testing, Testing/QA-FAQ ]

    Metrics for evaluating application system testing

    Metric = Formula

    Test Coverage = Number of units (KLOC/FP) tested / total size of the system
    Number of tests per unit size = Number of test cases per KLOC/FP
    Acceptance criteria tested = Acceptance criteria tested / total acceptance criteria
    Defects per size = Defects detected / system size
    Test cost (in %) = Cost of testing / total cost *100
    Cost to locate defect = Cost of testing / the number of defects located
    Achieving Budget = Actual cost of testing / Budgeted cost of testing
    Defects detected in testing = Defects detected in testing / total system defects
    Defects detected in production = Defects detected in production/system size
    Quality of Testing = No of defects found during Testing/(No of defects found during testing + No of acceptance defects found after delivery) *100
    Effectiveness of testing to business = Loss due to problems / total resources processed by the system.
    System complaints = Number of third party complaints / number of transactions processed
    Scale of Ten = Assessment of testing by giving a rating on a scale of 1 to 10
    Source Code Analysis = Number of source code statements changed / total number of tests.
    Effort Productivity, measured two ways:
    Test Planning Productivity = No of Test cases designed / Actual Effort for Design and Documentation
    Test Execution Productivity = No of Test cycles executed / Actual Effort for testing
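
    As a minimal sketch of how a few of these formulas combine on project data (all figures below are hypothetical):

# Computing a few of the metrics above from hypothetical project data.
kloc_tested, kloc_total = 45, 50
defects_in_testing = 180
defects_after_delivery = 20
testing_cost, total_cost = 120_000, 800_000

test_coverage = kloc_tested / kloc_total
defects_per_size = defects_in_testing / kloc_total  # per KLOC
test_cost_pct = testing_cost / total_cost * 100
cost_to_locate = testing_cost / defects_in_testing
quality_of_testing = (defects_in_testing
                      / (defects_in_testing + defects_after_delivery) * 100)

print(f"test coverage      = {test_coverage:.0%}")                 # 90%
print(f"defects per KLOC   = {defects_per_size:.1f}")              # 3.6
print(f"test cost          = {test_cost_pct:.0f}% of total cost")  # 15%
print(f"cost to locate     = {cost_to_locate:,.0f} per defect")    # 667
print(f"quality of testing = {quality_of_testing:.0f}%")           # 90%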

    Posted On Tuesday, November 4, 2003 4:31 AM | Filed Under [ Software Testing, Quality Metrics ]
