Performance Testing
Performance Testing Methodology
According to the Microsoft Developer Network, the Performance Testing Methodology consists of the following activities:
1. Identify the Test Environment. Identify the physical test environment and the production environment as well as the tools and resources available to the test team. The physical environment includes hardware, software, and network configurations.
2. Identify Performance Acceptance Criteria. Identify the response time, throughput, and resource utilization goals and constraints. In general, response time is a user concern, throughput is a business concern, and resource utilization is a system concern.
3. Plan and Design Tests. Identify key scenarios, determine variability among representative users and how to simulate that variability, define test data, and establish metrics to be collected. Consolidate this information into one or more models of system usage to be implemented, executed, and analyzed.
4. Configure the Test Environment. Prepare the test environment, tools, and resources necessary to execute each strategy as features and components become available for test. Ensure that the test environment is instrumented for resource monitoring as necessary.
5. Implement the Test Design. Develop the performance tests in accordance with the test design.
6. Execute the Test. Run and monitor your tests. Validate the tests, test data, and results collection. Execute validated tests for analysis while monitoring the test and the test environment.
7. Analyze Results, Tune, and Retest. Analyze, consolidate, and share results data. Make a tuning change and retest, noting whether each change produces an improvement or a degradation. Each successive improvement tends to be smaller than the previous one. When do you stop? When you reach a CPU bottleneck, the remaining choices are either to improve the code or to add more CPU.
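The analyze-tune-retest loop in step 7 comes down to repeatedly checking measured response times against the acceptance criteria from step 2. A minimal sketch in Python; the goal values (90th percentile under 2 s, mean under 1 s) are purely hypothetical, not from the text:

```python
import math
import statistics

def percentile(samples, p):
    """Nearest-rank percentile of a list of response times (seconds)."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

def meets_criteria(samples, goal_p90_s=2.0, goal_mean_s=1.0):
    """Check measured response times against (hypothetical) goals:
    90th percentile within goal_p90_s and mean within goal_mean_s."""
    return (percentile(samples, 90) <= goal_p90_s
            and statistics.mean(samples) <= goal_mean_s)

# Ten simulated response-time samples, in seconds
samples = [0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.5, 1.9]
print(meets_criteria(samples))  # True: p90 = 1.5 s, mean = 0.94 s
```

After each tuning change, rerunning the same check on fresh samples shows whether the change was an improvement or a degradation.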
Performance Testing Process
Performance Testing Process Flow
1. Understand the Process
2. Understand the System and the Project Plan
3. Identify Performance Acceptance Criteria
• Performance Goals
• Performance-Testing Objectives
4. Plan Performance-Testing Activities
5. Design Tests
• Determine Individual User Data and Variances
• Determine the Relative Distribution of Scenarios
• Identify Target Load Levels
6. Configure the Test Environment
7. Implement the Test Design
8. Execute Work Items
9. Report Results and Archive Data
10. Modify the Plan and Gain Approval for Modifications
11. Return to Activity 5
12. Prepare the Final Report
Definition: Performance testing is, in general, testing performed to determine how a system performs in terms of responsiveness and stability under a particular workload. It is done to measure software characteristics such as response time, throughput, or MIPS (millions of instructions per second) at which the system or software operates.
Performance testing is done by generating activity on the system or software, using the performance test tools available. These tools create different user profiles and inject different kinds of activity into the server, replicating end-user environments.
The purpose of performance testing is to ensure that the software meets the specified performance criteria and to identify which parts of the software cause performance to degrade.
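A load-generation tool of the kind described above can be sketched in a few lines of Python. This is an illustrative skeleton, not any real tool's design: the `transaction` function is a stand-in for one unit of end-user activity (a real tool would issue an HTTP request there), and the worker threads play the role of simulated user profiles:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def transaction():
    """Stand-in for one unit of end-user activity; a real tool would
    send an HTTP request here. The 10 ms sleep is a placeholder."""
    time.sleep(0.01)

def run_load(users=5, requests_per_user=20):
    """Generate activity with concurrent simulated users; return the
    individual response times and the overall throughput (req/s)."""
    def user_session(_):
        times = []
        for _ in range(requests_per_user):
            start = time.perf_counter()
            transaction()
            times.append(time.perf_counter() - start)
        return times

    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=users) as pool:
        sessions = list(pool.map(user_session, range(users)))
    elapsed = time.perf_counter() - start
    response_times = [t for session in sessions for t in session]
    return response_times, len(response_times) / elapsed

times, throughput = run_load()
print(f"{len(times)} requests at {throughput:.0f} req/s")
```

The two returned values correspond directly to the characteristics discussed in this chapter: per-request response time and overall throughput.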
Types of Performance Testing
The following are the most common types of performance testing for Web applications.
Term: Performance test
Purpose: To determine or validate speed, scalability, and/or stability.
Notes:
· A performance test is a technical investigation done to determine or validate the responsiveness, speed, scalability, and/or stability characteristics of the product under test.

Term: Load test
Purpose: To verify application behavior under normal and peak load conditions.
Notes:
· Load testing is conducted to verify that your application can meet your desired performance objectives; these performance objectives are often specified in a service level agreement (SLA). A load test enables you to measure response times, throughput rates, and resource-utilization levels, and to identify your application’s breaking point, assuming that the breaking point occurs below the peak load condition.
· Endurance testing is a subset of load testing. An endurance test is a type of performance test focused on determining or validating the performance characteristics of the product under test when subjected to workload models and load volumes anticipated during production operations over an extended period of time.
· Endurance testing may be used to calculate Mean Time Between Failure (MTBF), Mean Time To Failure (MTTF), and similar metrics.

Term: Stress test
Purpose: To determine or validate an application’s behavior when it is pushed beyond normal or peak load conditions.
Notes:
· The goal of stress testing is to reveal application bugs that surface only under high load conditions. These bugs can include such things as synchronization issues, race conditions, and memory leaks. Stress testing enables you to identify your application’s weak points, and shows how the application behaves under extreme load conditions.
· Spike testing is a subset of stress testing. A spike test is a type of performance test focused on determining or validating the performance characteristics of the product under test when subjected to workload models and load volumes that repeatedly increase beyond anticipated production operations for short periods of time.

Term: Capacity test
Purpose: To determine how many users and/or transactions a given system will support and still meet performance goals.
Notes:
· Capacity testing is conducted in conjunction with capacity planning, which you use to plan for future growth, such as an increased user base or increased volume of data. For example, to accommodate future loads, you need to know how many additional resources (such as processor capacity, memory usage, disk capacity, or network bandwidth) are necessary to support future usage levels.
· Capacity testing helps you to identify a scaling strategy in order to determine whether you should scale up or scale out.
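The distinction between load, stress, and spike tests is largely a difference in workload shape. A minimal sketch of the three user-ramp profiles, with illustrative numbers that are not from the text:

```python
def load_profile(peak_users, steps):
    """Ramp up to the anticipated peak, then hold (load test)."""
    ramp = [round(peak_users * (i + 1) / steps) for i in range(steps)]
    return ramp + [peak_users] * steps

def stress_profile(peak_users, steps, overload=2.0):
    """Keep ramping past the anticipated peak to find the breaking
    point (stress test)."""
    top = round(peak_users * overload)
    return [round(top * (i + 1) / (2 * steps)) for i in range(2 * steps)]

def spike_profile(base_users, spike_users, cycles):
    """Alternate short bursts above the peak with a normal baseline
    (spike test, a subset of stress testing)."""
    return [n for _ in range(cycles) for n in (base_users, spike_users)]

print(load_profile(100, 4))       # [25, 50, 75, 100, 100, 100, 100, 100]
print(stress_profile(100, 4))     # [25, 50, 75, 100, 125, 150, 175, 200]
print(spike_profile(50, 200, 3))  # [50, 200, 50, 200, 50, 200]
```

Each list is a schedule of target concurrent-user counts over time; a load-generation tool would step through it, adding or removing simulated users at each interval.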
The most common performance concerns related to Web applications are “Will it be fast enough?”, “Will it support all of my clients?”, “What happens if something goes wrong?”, and “What do I need to plan for when I get more customers?”. In casual conversation, most people associate “fast enough” with performance testing, “accommodate the current/expected user base” with load testing, “something going wrong” with stress testing, and “planning for future growth” with capacity testing. Collectively, these risks form the basis for the four key types of performance tests for Web applications.
Summary Matrix of Benefits by Key Performance Test Types
Term: Performance test
Benefits:
· Determines the speed, scalability and stability characteristics of an application, thereby providing an input to making sound business decisions.
· Focuses on determining if the user of the system will be satisfied with the performance characteristics of the application.
· Identifies mismatches between performance-related expectations and reality.
· Supports tuning, capacity planning, and optimization efforts.
Challenges and Areas Not Addressed:
· May not detect some functional defects that only appear under load.
· If not carefully designed and validated, may only be indicative of performance characteristics in a very small number of production scenarios.
· Unless tests are conducted on the production hardware, from the same machines the users will be using, there will always be a degree of uncertainty in the results.

Term: Load test
Benefits:
· Determines the throughput required to support the anticipated peak production load.
· Determines the adequacy of a hardware environment.
· Evaluates the adequacy of a load balancer.
· Detects concurrency issues.
· Detects functionality errors under load.
· Collects data for scalability and capacity-planning purposes.
· Helps to determine how many users the application can handle before performance is compromised.
· Helps to determine how much load the hardware can handle before resource utilization limits are exceeded.
Challenges and Areas Not Addressed:
· Is not designed to primarily focus on speed of response.
· Results should only be used for comparison with other related load tests.

Term: Stress test
Benefits:
· Determines if data can be corrupted by overstressing the system.
· Provides an estimate of how far beyond the target load an application can go before causing failures and errors in addition to slowness.
· Allows you to establish application-monitoring triggers to warn of impending failures.
· Ensures that security vulnerabilities are not opened up by stressful conditions.
· Determines the side effects of common hardware or supporting application failures.
· Helps to determine what kinds of failures are most valuable to plan for.
Challenges and Areas Not Addressed:
· Because stress tests are unrealistic by design, some stakeholders may dismiss test results.
· It is often difficult to know how much stress is worth applying.
· It is possible to cause application and/or network failures that may result in significant disruption if not isolated to the test environment.

Term: Capacity test
Benefits:
· Provides information about how workload can be handled to meet business requirements.
· Provides actual data that capacity planners can use to validate or enhance their models and/or predictions.
· Enables you to conduct various tests to compare capacity-planning models and/or predictions.
· Determines the current usage and capacity of the existing system to aid in capacity planning.
· Provides the usage and capacity trends of the existing system to aid in capacity planning.
Challenges and Areas Not Addressed:
· Capacity model validation tests are complex to create.
· Not all aspects of a capacity-planning model can be validated through testing at a time when those aspects would provide the most value.
Although the potential benefits far outweigh
the challenges related to performance testing, uncertainty over the relevance
of the resulting data — based on the sheer impossibility of testing all of the
reasonable combinations of variables, scenarios and situations — makes some
organizations question the value of conducting performance testing at all. In
practice, however, the likelihood of catastrophic performance failures
occurring in a system that has been through reasonable (not even rigorous)
performance testing is dramatically reduced, particularly if the performance
tests are used to help determine what to monitor in production so that the team
will get early warning signs if the application starts drifting toward a
significant performance-related failure.
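The production-monitoring early-warning signs mentioned above can be as simple as a sliding-window threshold check. A sketch; the 2-second threshold and window size are purely hypothetical:

```python
from collections import deque

class ResponseTimeMonitor:
    """Warn when the average response time over the last `window`
    requests drifts past a (hypothetical) threshold."""
    def __init__(self, threshold_s=2.0, window=10):
        self.threshold_s = threshold_s
        self.samples = deque(maxlen=window)

    def record(self, response_time_s):
        """Record one sample; return True when an alert should fire."""
        self.samples.append(response_time_s)
        avg = sum(self.samples) / len(self.samples)
        return avg > self.threshold_s

monitor = ResponseTimeMonitor(threshold_s=2.0, window=3)
for t in (0.5, 0.6, 3.0, 3.5, 4.0):
    alert = monitor.record(t)
print(alert)  # True: the last three samples average 3.5 s
```

Performance-test results are what make the threshold meaningful: they tell the team what "normal" looks like, so drift away from it can be caught before it becomes a significant failure.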
Performance Testing Tools
1. IBM Rational Performance Tester
It is a performance testing tool from IBM that supports load testing for applications such as HTTP, SAP, and Siebel. It is supported on Windows and Linux.
2. LoadRunner
LoadRunner is HP's (formerly Mercury's) load/stress testing tool for web and other applications. It supports a wide variety of application environments, platforms, and databases, and includes a large suite of network, application, and server monitors to enable performance measurement of each tier, server, and component and tracing of bottlenecks.
3. Apache JMeter
JMeter is a Java desktop application from the Apache Software Foundation designed to load-test functional behavior and measure performance. It was originally designed for testing Web applications but has since expanded to other test functions; it may be used to test performance of both static and dynamic resources (files, servlets, Perl scripts, Java objects, databases and queries, FTP servers, and more). It can also be used to simulate a heavy load on a server, network, or object to test its strength or to analyze overall performance under different load types, and it can produce a graphical analysis of performance or of server/script/object behavior under heavy concurrent load.
4. DBUnit
An open-source JUnit extension (also usable with Ant) targeted at database-driven projects that, among other things, puts a database into a known state between test runs. This avoids problems that can occur when one test case corrupts the database and causes subsequent tests to fail or exacerbates the damage. It can export and import database data to and from XML datasets, can work with very large datasets when used in streaming mode, and can help verify that database data matches expected sets of values.
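As a small illustration of driving one of these tools from a script, JMeter's non-GUI mode is launched with its standard -n (no GUI), -t (test plan), and -l (results log) flags. The file names below are placeholders, and `jmeter` is assumed to be on the PATH:

```python
import subprocess

def jmeter_command(test_plan, results_file):
    """Assemble a JMeter non-GUI run: -n (no GUI), -t (test plan),
    -l (results log file). The file names are placeholders."""
    return ["jmeter", "-n", "-t", test_plan, "-l", results_file]

cmd = jmeter_command("webapp_load.jmx", "results.jtl")
print(" ".join(cmd))  # jmeter -n -t webapp_load.jmx -l results.jtl
# A real run (requires JMeter installed) would then be:
# subprocess.run(cmd, check=True)
```

Running load tests from the command line like this makes it easy to schedule them as part of a build or nightly test cycle.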
Characteristics of Performance Testing Tools
• It should generate load on the system under test.
• It should measure the server response time.
• It should measure the throughput.
Performance testing is defined as the technical investigation done to determine or validate the speed, scalability, and/or stability characteristics of the product under test. Performance-related activities, such as testing and tuning, are concerned with achieving response times, throughput, and resource-utilization levels that meet the performance objectives for the application under test. Because performance testing is a general term that covers all of its various subsets, every value and benefit listed under other performance test types in this chapter can also be considered a potential benefit of performance testing in general.
Iterative Performance Testing Activities
This approach can be represented by using the following nine activities:
Figure: Iterative Performance Testing Activities
• Activity 1. Understand the Project Vision and Context. The outcome of this activity is a shared understanding of the project vision and context.
• Activity 2. Identify Reasons for Testing Performance. Explicitly identify the reasons for performance testing.
• Activity 3. Identify the Value Performance Testing Adds to the Project. Translate the project- and business-level objectives into specific, identifiable, and manageable performance-testing activities.
• Activity 4. Configure the Test Environment. Set up the load-generation tools and the system under test, collectively known as the performance test environment.
• Activity 5. Identify and Coordinate Tasks. Prioritize and coordinate support, resources, and schedules to make the tasks efficient and successful.
• Activity 6. Execute Task(s). Execute the activities for the current iteration.
• Activity 7. Analyze Results and Report. Analyze and share results with the team.
• Activity 8. Revisit Activities 1-3 and Consider Performance Acceptance Criteria. Between iterations, ensure that the foundational information has not changed. Integrate new information such as customer feedback and update the strategy as necessary.
• Activity 9. Reprioritize Tasks. Based on the test results, new information, and the availability of features and components, reprioritize, add to, or delete tasks from the strategy, and then return to activity 5.
Relationship to Core Performance Testing Activities