Workbench
The figure below illustrates the workbench for evaluating a test's effectiveness. The objectives for the assessment should be clearly established; without defined objectives, the measurement process may not be properly directed.
[Figure: Workbench to evaluate the effectiveness of testing.]
Input
The input to this step should be the results of conducting software tests. The type of information required includes but is not limited to the following (a minimal record structure for these inputs is sketched after the list):
■■ Number of tests conducted
■■ Resources expended in testing
■■ Test tools used
■■ Defects uncovered
■■ Size of software tested
■■ Days to correct defects
■■ Defects not corrected
■■ Defects uncovered during operation that were not uncovered during testing
■■ Developmental phase in which defects were uncovered
■■ Names of defects uncovered
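To make these inputs concrete, here is a minimal sketch of a record that could hold one test cycle's results. It is an illustration only; every field name is an assumption rather than something prescribed by this step.

```python
# Sketch only: a record for one test cycle's results. All field names
# are assumptions chosen to mirror the input list above.
from dataclasses import dataclass, field

@dataclass
class TestCycleResults:
    tests_conducted: int                 # number of tests conducted
    resources_expended_hours: float      # people/computer resources expended
    test_tools_used: list[str] = field(default_factory=list)
    defects_uncovered: int = 0
    software_size_kloc: float = 0.0      # size of software tested (e.g., KLOC)
    days_to_correct_defects: float = 0.0
    defects_not_corrected: int = 0
    defects_found_in_operation: int = 0  # found in operation, missed by testing
    phase_defect_counts: dict[str, int] = field(default_factory=dict)  # by SDLC phase
    defect_names: list[str] = field(default_factory=list)

# Example usage with invented numbers:
cycle = TestCycleResults(
    tests_conducted=120,
    resources_expended_hours=300.0,
    test_tools_used=["capture/playback", "coverage analyzer"],
    defects_uncovered=45,
    software_size_kloc=25.0,
    phase_defect_counts={"requirements": 5, "design": 12, "code": 28},
)
print(cycle.defects_uncovered / cycle.software_size_kloc)  # defects per KLOC
```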
Do
Procedures
Once a decision has been made to formally assess the effectiveness of testing, an assessment process is needed. This assessment can be performed by the software testing function, quality assurance, or a team organized for this step. The assessment process involves the following seven tasks.
Task 1: Establish Assessment Objectives
Establish the objectives for performing the assessment. If objectives are not defined, the measurement process may not be properly directed and thus may not be effective. These objectives include:
■■ Identify test weaknesses. Identify problems within the test process where the methodology is not effective in identifying system defects.
■■ Identify the need for new test tools. Determine when the existing test tools are not effective or efficient, as a basis for acquiring new or improved testing tools.
■■ Assess project testing. Evaluate the effectiveness of the testing performed by a project team to reduce defects from the project at an economical cost.
■■ Identify good test practices. Determine which practices used in the test process are the most effective so that those practices can be used by all projects.
■■ Identify poor test practices. Determine which of the practices used by the project team are ineffective so that other projects can be advised not to use those practices.
■■ Identify economical test practices. Determine the characteristics that make testing most economical so that the cost-effectiveness of testing can be improved.
Task 2: Identify What to Measure
Identify the categories of information needed to accomplish the measurement objectives. The list that follows offers the five characteristics of application system testing that can be measured:
1. Involvement. Who is involved in testing, and to what extent?
2. Extent of testing. What areas are covered by testing, and what volume of testing will be performed on those areas?
3. Resources. How much information services resources, both people and computer, will be consumed in a test process?
4. Effectiveness. How much testing is achieved per unit of resource?
5. Assessment. What is the value of the results received from the test process?
Task 3: Assign Measurement Responsibility
Make one group responsible for collecting and assessing testing performance information. Without a specific accountable individual, there will be no catalyst to ensure that the data collection and assessment process occurs. The responsibility for the use of information services resources resides with IT management. However, they may desire to delegate the responsibility for assessing the effectiveness of the test process to a function within the department. If the information services department has a quality assurance function, that delegation should be made to the quality assurance group. Lacking that function, other candidates for the responsibility include the information services comptroller, manager of standards, manager of software support, or the planning manager.
Task 4: Select Evaluation Approach
Evaluate several approaches that can be used in performing the assessment process. The one that best matches the managerial style should be selected. The following are the most common approaches to evaluating the effectiveness of testing.
■■ Judgment. The individual responsible for the assessment evaluates the test. This is normally an arbitrary assessment and one that is difficult to justify. However, if the individual is well respected and the judgments correlate to actual results, the process may work effectively.
■■ Compliance with methodology. Testing can be considered a success when it complies with well-established guidelines and standards, and a process defect when it does not.
■■ Problems after test. The effectiveness of the test process can be measured by the number of problems that occur after testing. If few problems occur, testing can be considered good; if many problems occur, testing can be considered poor.
■■ User reaction. If the user is satisfied with the application system, it can be assumed testing is good; if the user is unhappy with the performance of the application system, testing can be judged poor.
■■ Testing metrics. Criteria are identified that show a high positive correlation to good or bad testing. This correlation or relationship between factors is called a metric. This process is a scientific mathematical approach to the measurement of testing.
The metrics approach is recommended because once established it is easy to use and can be shown to correlate highly with effective and ineffective practices. A major advantage of metrics is that the assessment process can be clearly defined, will be known to the people being assessed, and is specific enough that it is easy to determine which testing variables need to be adjusted to improve the effectiveness, efficiency, and/or economy of the test process.
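As an illustration of the correlation idea (not part of the source methodology), the sketch below uses Python's standard library to check how strongly a candidate criterion tracks the outcome it is meant to predict. The figures are invented for the example, and statistics.correlation requires Python 3.10 or later.

```python
# Sketch: does a candidate criterion (statement coverage per release)
# correlate strongly with the outcome of interest (post-release defects)?
# Sample numbers are illustrative assumptions only.
import statistics

coverage_pct = [62, 70, 75, 81, 88, 93]          # per release
post_release_defects = [41, 33, 30, 22, 15, 9]   # per release

r = statistics.correlation(coverage_pct, post_release_defects)
print(f"Pearson r = {r:.2f}")  # strongly negative: more coverage, fewer escapes

# A criterion with |r| near 1 against the outcome is a good candidate
# metric; one with |r| near 0 carries little assessment value.
```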
Task 5: Identify Needed Facts
Identify the facts necessary to support the approach selected. The metrics approach clearly identifies the type of data needed for the assessment process. Using the metrics described later in this chapter, the needed information includes the following (a record structure for these facts is sketched after the list):
■■ Change characteristics. The frequency, size, and type of change occurring in each system.
■■ Magnitude of system. A measure used to equate testing information from system to system, the size being a factor used to relate testing in one application system to another.
■■ Cost of process being tested. The cost to develop a system or install a change, whichever is being tested.
■■ Cost of test. The resources, both people and computer, used to test the new function.
■■ Defects uncovered by testing. The number of defects uncovered as a result of the test.
■■ Defects detected by phase. A breakdown of the previous category for each phase tested to show the effectiveness of the test by system development life cycle (SDLC) phase.
■■ Defects uncovered after test. The number of defects uncovered after the new function is placed into production status.
■■ Cost of testing by phase. The amount of resources consumed for testing by each developmental phase of the SDLC in which testing occurs.
■■ System complaints. Complaints of problems by a third party after the system goes operational.
■■ Quantification of defects. The potential dollar loss associated with each defect had it not been detected.
■■ Who conducted the test. The functional unit to which the individuals conducting the test report.
■■ Quantification of correctness of defect. The cost to correct the application system defect.
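A rough sketch of how these facts might be held per system follows; the field names and types are assumptions made for illustration, not a structure the source prescribes.

```python
# Sketch only: one record of the facts the metrics approach needs,
# mirroring the list above. Field names are assumptions.
from dataclasses import dataclass

@dataclass
class MetricsFacts:
    change_count: int          # change characteristics: frequency of change
    change_size: float         #   and average size/type of change
    system_size_kloc: float    # magnitude of system, to relate systems to each other
    cost_of_process: float     # cost to develop the system or install the change
    cost_of_test: float        # people and computer resources used to test
    defects_in_test: int       # defects uncovered by testing
    defects_by_phase: dict     # defects detected per SDLC phase
    defects_after_test: int    # defects uncovered in production
    test_cost_by_phase: dict   # test resources consumed per SDLC phase
    complaints: int            # third-party complaints after going operational
    loss_per_defect: float     # potential dollar loss had a defect gone undetected
    testing_unit: str          # functional unit that conducted the test
    cost_to_correct: float     # cost to correct defects
```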
Task 6: Collect Evaluation Data
Establish a system to collect and store the needed data in a form suitable for assessment. This may require a collection mechanism, a storage mechanism, and a method to select and summarize the information. Wherever possible, utility programs should be used for this purpose.
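One possible utility-program sketch for this task, using only Python's standard library; the schema and sample rows are assumptions, not something the source prescribes.

```python
# Sketch: collect, store, and summarize test-result data with sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")  # use a file path for a persistent store
conn.execute("""
    CREATE TABLE test_results (
        system    TEXT,
        phase     TEXT,
        test_cost REAL,
        defects   INTEGER
    )
""")
# Collection mechanism: insert results as they are reported (sample data).
conn.executemany(
    "INSERT INTO test_results VALUES (?, ?, ?, ?)",
    [
        ("payroll", "design", 4000.0, 12),
        ("payroll", "code",   6500.0, 28),
        ("billing", "code",   3000.0, 9),
    ],
)

# Selection/summarization mechanism: cost per defect located, by system and phase.
for row in conn.execute("""
        SELECT system, phase, SUM(test_cost) / SUM(defects) AS cost_per_defect
        FROM test_results GROUP BY system, phase"""):
    print(row)
```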
Task 7: Assess the Effectiveness of Testing
Analyze the raw information in order to draw conclusions about the effectiveness of systems testing. Using this analysis, the appropriate party can take action. The summarized results must be output in a form suitable for presenting an assessment of testing. The judgmental approach normally expresses the assessment as an opinion of the assessor. The user-reaction approach provides the same type of assessment and normally includes examples that illustrate good or poor testing performance. The problems and compliance-to-standards approaches normally express the assessment in terms of what has or has not happened; for example, there is a known number of problems, or X standards have been violated in a test process. Metrics assess testing by quantitatively showing the effectiveness of the test process.
Using Testing Metrics
Testing metrics are relationships that show a high positive correlation to that which is being measured. Metrics are used in almost all disciplines as a basis for assessing the effectiveness of some process. Some of the more common assessments familiar to most people in other disciplines include:
■■ Blood pressure (medicine). Identifies the effectiveness of the heart and can be used to assess the probability of heart attack and stroke.
■■ Student aptitude test (education). Measures a student's achievement in high school studies.
■■ Net profit (accounting). Measures the success of the organization in profiting within its field or industry.
■■ Accidents per day (safety). Measures the effectiveness of an organization's safety program.
A metric is a mathematical number that shows a relationship between two variables. For example, the SAT score used by many colleges to determine whether to accept a student shows the student's mastery of topics as compared to the total number of topics on the examination. Gross profit is a number showing the relationship between income and the costs associated with producing that income.
The metric must then be compared to some norm or standard. For example, someone's blood pressure is compared to the norm for that person's age and sex. The metric by itself is meaningless until it can be compared to some norm. The net profit metric is expressed as a percent, such as 10 percent net profit. This does not take on its true meaning until you know that other companies in that industry are making 20 percent, 10 percent, or 5 percent. Once the norm for the industry is known, the net profit metric takes on more meaning.
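The compare-to-norm idea can be expressed as a small function. This is a hedged sketch; the tolerance band and the sample figures are assumptions for illustration.

```python
# Sketch: a metric value alone is meaningless until placed against a norm.
def assess_against_norm(value: float, norm: float, tolerance: float = 0.10) -> str:
    """Classify a metric relative to a norm, within a +/- tolerance band."""
    if value >= norm * (1 + tolerance):
        return "above norm"
    if value <= norm * (1 - tolerance):
        return "below norm"
    return "near norm"

net_profit_pct = 10.0      # this organization's metric (illustrative)
industry_norm_pct = 20.0   # what comparable organizations achieve (illustrative)

print(assess_against_norm(net_profit_pct, industry_norm_pct))  # -> "below norm"
```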
The list that follows briefly explains 34 suggested metrics for evaluating application system testing (a short computation of a few of these is sketched after the list):
1. User participation (user participation test time divided by total test time). Metric identifies the user involvement in testing.
2. Instructions coverage (number of instructions exercised versus total number of instructions). Metric shows the number of instructions in the program that were executed during the test process.
3. Number of tests (number of tests versus size of system tested). Metric identifies the number of tests required to evaluate a unit of information services work.
4. Paths coverage (number of paths tested versus total number of paths). Metric indicates the number of logical paths that were executed during the test process.
5. Acceptance criteria tested (acceptance criteria verified versus total acceptance criteria). Metric identifies the number of user-identified criteria that were evaluated during the test process.
6. Test cost (test cost versus total system cost). Metric identifies the amount of resources used in the development or maintenance process allocated to testing.
7. Cost to locate defect (cost of testing versus the number of defects located in testing). Metric shows the cost to locate a defect.
8. Achieving budget (anticipated cost of testing versus the actual cost of testing). Metric determines the effectiveness of using test dollars.
9. Detected production errors (number of errors detected in production versus application system size). Metric determines the effectiveness of system testing in removing errors from the application before it is placed into production.
10. Defects uncovered in testing (defects located by testing versus total system defects). Metric shows the percent of defects that were identified as a result of testing.
11. Effectiveness of test to business (loss due to problems versus total resources processed by the system). Metric shows the effectiveness of testing in reducing system losses in relationship to the resources controlled by the system being tested.
12. Asset value of test (test cost versus assets controlled by system). Metric shows what is spent on testing as a percent of the assets controlled by the system being tested.
13. Rerun analysis (rerun hours versus production hours). Metric shows the effectiveness of testing as a relationship to rerun hours associated with undetected defects.
14. Abnormal termination analysis (installed changes versus number of application system abnormal terminations). Metric shows the effectiveness of testing in reducing abnormal terminations resulting from maintenance changes.
15. Source code analysis (number of source code statements changed versus the number of tests). Metric shows the efficiency of testing relative to the volume of work being tested.
16. Test efficiency (number of tests required versus the number of system errors). Metric shows the efficiency of tests in uncovering errors.
17. Startup failure (number of program changes versus the number of failures the first time the changed program is run in production). Metric shows the ability of the test process to eliminate major defects from the application being tested.
18. System complaints (system complaints versus number of transactions processed). Metric shows the effectiveness of testing in reducing third-party complaints.
19. Test automation (cost of manual test effort versus total test cost). Metric shows the percent of testing performed manually versus that performed automatically.
20. Requirements phase testing effectiveness (requirements test cost versus number of errors detected during requirements phase). Metric shows the value returned for testing during the requirements phase.
21. Design phase testing effectiveness (design test cost versus number of errors detected during design phase). Metric shows the value returned for testing during the design phase.
22. Program phase testing effectiveness (program test cost versus number of errors detected during program phase). Metric shows the value returned for testing during the program phase.
23. Test phase testing effectiveness (test cost versus number of errors detected during test phase). Metric shows the value returned for testing during the test phase.
24. Installation phase testing effectiveness (installation test cost versus number of errors detected during installation phase). Metric shows the value returned for testing during the installation phase.
25. Maintenance phase testing effectiveness (maintenance test cost versus number of errors detected during maintenance phase). Metric shows the value returned for testing during the maintenance phase.
26. Defects uncovered in test (defects uncovered versus size of system). Metric shows the number of defects uncovered through testing based on a unit of work.
27. Untested change problems (number of untested changes versus problems attributable to those changes). Metric shows the effect of installing changes without testing them.
28. Tested change problems (number of tested changes versus problems attributable to those changes). Metric shows the effect of testing system changes.
29. Loss value of test (loss due to problems versus total resources processed by system). Metric shows the result of testing in reducing losses as related to the resources processed by the system.
30. Scale of ten (assessment of testing rated on a scale of ten). Metric shows people's assessment of the effectiveness of testing on a scale on which 1 is poor and 10 is outstanding.
31. Defect removal efficiency (assessment of identifying defects in the phase in which they occurred). Metric shows the percentage of defects uncovered by the end of a development phase versus the total number of defects made in that phase of development.
32. Defects made by testers (assesses the ability of testers to perform test processes in a defect-free manner). Metric shows the number of defects made by testers in relationship to the size of the project they are testing.
33. Achieving schedule (anticipated completion date for testing versus actual completion date of testing). Metric defines the ability of testers to meet their completion schedule or checkpoints for the test process.
34. Requirements traceability (monitor requirements throughout the test process). Metric shows, at various points throughout the development process, the percent of requirements moved to the next phase that were correctly implemented, requirements missing in the next phase, requirements implemented incorrectly in the next phase, and requirements included in the next phase that were not included in the previous phase.
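To show how a few of these metrics reduce to simple arithmetic, here is a short sketch; all counts and costs are invented for illustration.

```python
# Sketch computing three of the 34 metrics above from hypothetical data.

# Metric 1. User participation: user test time / total test time
user_test_hours, total_test_hours = 80.0, 400.0
user_participation = user_test_hours / total_test_hours             # 0.20

# Metric 7. Cost to locate defect: cost of testing / defects located in testing
test_cost, defects_in_test = 50_000.0, 125
cost_to_locate_defect = test_cost / defects_in_test                 # $400/defect

# Metric 31. Defect removal efficiency: defects uncovered by the end of a
# phase / total defects made in that phase
defects_found_in_phase, defects_made_in_phase = 45, 60
removal_efficiency = defects_found_in_phase / defects_made_in_phase # 0.75

print(f"user participation:        {user_participation:.0%}")
print(f"cost to locate defect:     ${cost_to_locate_defect:,.2f}")
print(f"defect removal efficiency: {removal_efficiency:.0%}")
```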
Check Procedures
"Post-Implementation Analysis Quality Control Checklist" is a quality
control checklist for this step. It is designed so that Yes responses indicate
good test practices, and No responses warrant additional investigation. A Comments
column is provided to explain No responses and to record results of investigation.
The N/A column is used when the checklist item is not applicable to the test
situation.
Output
The bottom line of assessment is making application system testing more effective. This is done through careful analysis of the results of testing, followed by action to correct identified weaknesses. Facts precede action, and testing in many organizations has suffered from a lack of facts. Once those facts have been determined, action should be taken. The measurement-first, action-second concept is effective when the measurement process is specific. The measurement must be able to determine the effect of action. The metrics approach fulfills this requirement in that it shows very specific relationships. Using this concept, if a tester takes action by changing one of the metric variables, he or she can quickly measure the result of that action.
Changing the variable in one metric can normally be measured by the change in another metric. For example, if a tester detects a higher number of defects than desirable after the system goes operational, he or she should take action. The action taken might be to increase the number of instructions exercised during testing. Obviously, this increases test cost, with the objective of reducing undetected defects prior to operation. If it can be shown that increasing the number of instructions exercised does, in fact, reduce the number of defects in the operational system, that action can be considered desirable and should be extended. On the other hand, if increasing the number of instructions exercised does not reduce the number of undetected defects prior to production, then those resources have not been used effectively; that action should be eliminated and another action tried.
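The before/after comparison described here can be sketched as follows; the release data are hypothetical, and the metric pairing (instruction coverage against production defects per KLOC) is just one example of a variable and its paired outcome.

```python
# Sketch of the measurement-first, action-second loop: change one test
# variable (instruction coverage), then check whether the paired outcome
# metric (post-release defects per KLOC) moved. Data are illustrative.
before = {"coverage_pct": 68.0, "prod_defects": 30, "size_kloc": 50.0}
after  = {"coverage_pct": 85.0, "prod_defects": 14, "size_kloc": 52.0}

def defects_per_kloc(release: dict) -> float:
    return release["prod_defects"] / release["size_kloc"]

delta = defects_per_kloc(after) - defects_per_kloc(before)
if delta < 0:
    print(f"Coverage increase paid off: {delta:+.2f} defects/KLOC")  # extend action
else:
    print("No improvement; eliminate this action and try another variable")
```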
Using the measurement/action approach, the tester can manipulate the variables until the desired result is achieved. Without the measurement, management can never be sure that intuitive or judgmental actions are effective. The measurement/action approach works and should be followed to improve the test process.
Guidelines
For the process of evaluating test effectiveness to be valuable, testers must recognize that they themselves make defects in performing test processes. Testers need to understand the nature of test defects and be able to name them. For example, a test defect might be preparing incorrect test data.
Summary
This step concludes the recommended seven-step testing process. The results of this step will be recommendations to improve the full seven steps within the testing process. Not only must the seven testing steps be improved, but the steps taken to improve the effectiveness of testing also require improvement. The improvement process begins by first adopting the seven-step process, and continues by customizing the process to your IT organization's specific needs. The experience gained will identify opportunities for improvement. Part Four addresses special testing needs based on the use of specific technologies and approaches.
Post-Implementation Analysis Quality Control Checklist

| ITEM | YES | NO | N/A | COMMENTS |
| --- | --- | --- | --- | --- |
| 1. Does management support the concept of continuous improvement to test processes? | | | | |
| 2. Have resources been allocated to improving the test processes? | | | | |
| 3. Has a single individual been appointed responsible for overseeing the improvement of test processes? | | | | |
| 4. Have the results of testing been accumulated over time? | | | | |
| 5. Do the results of testing include the types of items identified in the input section of this chapter? | | | | |
| 6. Do testers have adequate tools to summarize, analyze, and report the results of previous testing? | | | | |
| 7. Do the results of that analysis appear reasonable? | | | | |
| 8. Is the analysis performed on a regular basis? | | | | |
| 9. Are the results of the analysis incorporated into improved test processes? | | | | |
| 10. Is data maintained so there can be a determination as to whether those installed improvements do in fact improve the test processes? | | | | |