Wednesday, October 23, 2013

Software Test Metrics

When you can measure what you are speaking about and express it in numbers, you know something about it; but when you cannot measure, when you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind: it may be the beginning of knowledge, but you have scarcely, in your thoughts, advanced to the stage of science.

Why Do We Need Metrics?

1.     “You cannot improve what you cannot measure.”

2.     “You cannot control what you cannot measure”

Test Metrics Help To

·         Make decisions about the next phase of activities

·         Provide evidence for a claim or prediction

·         Understand the type of improvement required

·         Decide on process or technology changes

 

Type of Metrics

1.     Base Metrics (Direct Measure): Base metrics constitute the raw data gathered by a Test Analyst throughout the testing effort.  These metrics are used to provide project status reports to the Test Lead and Project Manager; they also feed into the formulas used to derive Calculated Metrics.

Ex: # of Test Cases, # of Test Cases Executed

2.     Calculated Metrics (Indirect Measure): Calculated Metrics convert the Base Metrics data into more useful information.  These types of metrics are generally the responsibility of the Test Lead and can be tracked at many different levels (by module, tester, or project).

Ex: % Complete, % Test Coverage
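As a minimal sketch, the relationship between base and calculated metrics can be shown in a few lines of Python. The counts below are hypothetical, purely for illustration:

```python
# Base metrics: raw counts gathered during the testing effort (hypothetical figures).
test_cases_total = 200      # base metric: # of Test Cases
test_cases_executed = 150   # base metric: # of Test Cases Executed

# Calculated metric: derived from the base metrics above.
percent_complete = (test_cases_executed / test_cases_total) * 100
print(f"% Complete: {percent_complete:.1f}%")  # % Complete: 75.0%
```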

 

 Base Metrics & Test Phases

          # of Test Cases (Test Development Phase)      

          # of Test Cases Executed (Test Execution Phase)           

          # of Test Cases Passed (Test Execution Phase)                              

          # of Test Cases Failed (Test Execution Phase)                                

          # of Test Cases Under Investigation (Test Development Phase)    

          # of Test Cases Blocked (Test Development/Execution Phase)      

          # of Test Cases Re-executed (Regression Phase)           

          # of First Run Failures (Test Execution Phase)

          Total Executions (Test Reporting Phase)

          Total Passes (Test Reporting Phase)

          Total Failures (Test Reporting Phase)

          Test Case Execution Time (Test Reporting Phase)

          Test Execution Time (Test Reporting Phase)

Calculated Metrics & Phases

The metrics below are created during the Test Reporting or Post-Test Analysis phase.

          % Complete          

          % Defects Corrected

          % Test Coverage  

          % Rework

          % Test Cases Passed          

          % Test Effectiveness

          % Test Cases Blocked         

          % Test Efficiency

          1st Run Fail Rate  

          Defect Discovery Rate

          Overall Fail Rate

 

 

Test Plan Coverage on Functionality

Number of requirements covered through test scripts v/s total number of requirements.

         (No of Requirements Covered /Total Number of Requirements) * 100

Define the requirements at the time of effort estimation.

Example: A total of 46 requirements were estimated; 39 were tested and 7 were blocked. So, Coverage = (39/46) * 100 => 84.8%

Note: Define requirements clearly at the project level.

 

Test Case Defect Density

Number of failed test scripts v/s total number of test scripts executed.

          (Defective Test Scripts  /Total Test Scripts Executed) * 100

Example: Total test script developed 1360, total test script executed 1280, total test script passed 1065, total test script failed 215

So, Test Case Defect Density = (215/1280) * 100 => 16.8%

This 16.8% value can also be called the test case efficiency %, since it depends on the number of test cases that uncovered defects.

 

Defect Slippage Ratio

Number of defects slipped (reported from production) v/s number of defects reported during execution.

          [Number of Defects Slipped / (Number of Defects Raised - Number of Defects Withdrawn)] * 100

Example: Customer-filed defects are 21, total defects found during testing are 267, and invalid (withdrawn) defects are 17.

So, Slippage Ratio=[21/(267-17) ] X 100 => 8.4%
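A short Python sketch of the slippage calculation, using the figures from the example (the function name is illustrative):

```python
def defect_slippage_ratio(slipped, raised, withdrawn):
    """Production-reported defects as a percentage of valid defects found in testing."""
    return slipped / (raised - withdrawn) * 100

# Example figures: 21 customer-filed, 267 raised in testing, 17 withdrawn as invalid.
print(round(defect_slippage_ratio(21, 267, 17), 1))  # 8.4
```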

Requirement Volatility

Number of requirements changed v/s number of requirements originally agreed.

          (Number of Requirements Added + Deleted + Modified) *100 / Number of Original Requirements

          Ensure that the requirements are normalized or defined properly while estimating

Example: The VSS 1.3 release initially had 67 requirements; later, 7 new requirements were added, 3 of the initial requirements were removed, and 11 were modified.

So, Requirement Volatility=(7 + 3 + 11) * 100/67 => 31.34%

This means almost one third of the requirements changed after initial identification.
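The volatility formula can be sketched in Python with the VSS 1.3 figures from the example (the function name is illustrative):

```python
def requirement_volatility(added, deleted, modified, original):
    """Changed requirements as a percentage of the original requirement count."""
    return (added + deleted + modified) * 100 / original

# VSS 1.3 example: 7 added, 3 deleted, 11 modified, 67 original requirements.
print(round(requirement_volatility(7, 3, 11, 67), 2))  # 31.34
```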

Review Efficiency

Review Efficiency is a metric that offers insight into the quality of reviews and testing.

Some organizations also refer to this as "static testing" efficiency, and aim to find a minimum of 30% of defects in static testing.

Review Efficiency = (Total Number of Defects Found by Reviews / Total Number of Project Defects) * 100

Example: A project found a total of 269 defects in various reviews, all of which were fixed; the test team then reported 476 valid defects.

So, Review Efficiency =[269/(269+476)] X 100 => 36.1%
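A Python sketch of the review efficiency calculation, using the figures from the example (the function name is illustrative):

```python
def review_efficiency(review_defects, test_defects):
    """Defects caught in reviews as a percentage of all project defects."""
    return review_defects / (review_defects + test_defects) * 100

# Example figures: 269 defects from reviews, 476 valid defects from testing.
print(round(review_efficiency(269, 476), 1))  # 36.1
```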

Efficiency and Effectiveness of Processes

          Effectiveness: Doing the right thing. It deals with meeting the desirable attributes that are expected by the customer.

          Efficiency: Doing the thing right. It concerns the resources used for the service to be rendered.

 

Metrics for Software Testing


Defect Removal Efficiency

The defect removal efficiency (DRE) gives a measure of the development team's ability to remove defects prior to release. It is calculated as the ratio of defects resolved to the total number of defects found. It is typically measured prior to and at the moment of release.

Calculation

To be able to calculate that metric, it is important that in your defect tracking system you track:

Affected Version: the version of the software in which the defect was found.

Release Date: the date when the version was released.

DRE = Number of defects resolved by the development team / total number of defects at the moment of measurement.

DRE is typically measured at the moment of version release; the simplest visualization is to show the current DRE value as a number.

Example:

For example, suppose that 100 defects were found during the QA/testing stage and 84 defects were resolved by the development team at the moment of measurement. The DRE would be calculated as 84 / 100 = 84%.
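The DRE calculation can be sketched in Python with the figures from the example (the function name is illustrative):

```python
def defect_removal_efficiency(resolved, total_found):
    """DRE as a percentage, measured at the moment of release."""
    return resolved * 100 / total_found

# Example figures: 84 defects resolved out of 100 found during QA/testing.
print(defect_removal_efficiency(84, 100))  # 84.0
```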

 

Efficiency of Testing Process (define size in KLoC, Function Points, or requirements)

Testing Efficiency = Size of Software Tested / Resources Used

Test Case Writing Efficiency = No. of Test Cases Written / Time
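These two ratios can be sketched in Python. All figures below are hypothetical, and the units (KLoC, person-days, hours) are assumptions for the sake of the example:

```python
# Testing Efficiency: size of software tested per unit of resource used.
size_tested_kloc = 120          # hypothetical: 120 KLoC tested
resources_person_days = 40      # hypothetical: 40 person-days of effort
testing_efficiency = size_tested_kloc / resources_person_days  # KLoC per person-day

# Test Case Writing Efficiency: test cases written per unit of time.
test_cases_written = 180        # hypothetical: 180 test cases written
hours_spent = 60                # hypothetical: 60 hours spent
writing_efficiency = test_cases_written / hours_spent  # test cases per hour

print(testing_efficiency, writing_efficiency)  # 3.0 3.0
```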

 

 



 
