Acceptance Testing
Sanity Testing
When there are minor issues with the software and a new build is obtained after fixing them, a sanity test is performed on that build instead of complete regression testing. You can say that sanity testing is a subset of regression testing.
A sanity test is a narrow regression test that focuses on one or a few areas of functionality. Sanity testing is usually narrow and deep.
Ad-hoc Testing
Agile Testing
Testing done for projects that use agile development methodologies such as Extreme Programming (XP) and follow a test-first design paradigm is known as agile testing. In agile testing, test-driven development is followed, in which development is treated as a customer of testing.
Back to Back Testing
In back to back testing, two or more variants of a software component or system are executed with the same set of inputs, and the outcomes of the variants are compared and checked for any discrepancies.
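As an illustration (the functions and inputs here are invented, not from the original text), a minimal back-to-back check might run a trusted reference implementation and a new variant over the same inputs and flag any discrepancy:

```python
# Back-to-back testing sketch: run two variants of the same routine over
# identical inputs and report any discrepancy. Both implementations are
# hypothetical stand-ins for a "reference" and a "new" variant.

def sort_reference(values):
    """Trusted baseline implementation."""
    return sorted(values)

def sort_new(values):
    """New variant under test (e.g. an optimised rewrite)."""
    result = list(values)
    result.sort()
    return result

test_inputs = [
    [],
    [1],
    [3, 1, 2],
    [5, 5, -1, 0],
]

for data in test_inputs:
    expected = sort_reference(data)
    actual = sort_new(data)
    if actual != expected:
        print(f"DISCREPANCY for input {data}: {actual} != {expected}")
    else:
        print(f"OK for input {data}")
```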
Big Bang Testing
Compliance Testing
Concurrency Testing
Data Driven Testing
In data driven testing, the test inputs and expected results are stored in a table or spreadsheet, and a single script executes all the test inputs in the spreadsheet.
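A minimal sketch of the idea, assuming a hypothetical discount() function and an inline table standing in for the spreadsheet, can use Python's standard unittest module so that a single test method drives every row:

```python
import unittest

def discount(order_total):
    """Hypothetical function under test: 10% off orders of 100 or more."""
    return order_total * 0.9 if order_total >= 100 else order_total

# Each row plays the role of one line in the data table/spreadsheet:
# (test input, expected result)
TEST_DATA = [
    (50, 50),
    (99, 99),
    (100, 90.0),
    (200, 180.0),
]

class DiscountDataDrivenTest(unittest.TestCase):
    def test_all_rows(self):
        # A single test method executes every row of the table.
        for order_total, expected in TEST_DATA:
            with self.subTest(order_total=order_total):
                self.assertEqual(discount(order_total), expected)

if __name__ == "__main__":
    unittest.main()
```

In practice the rows would typically be read from a CSV file or spreadsheet rather than hard-coded.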
Data Integrity Testing or Database Integrity Testing
Database integrity testing tests the processes and methods used to access the database, to make sure that they work as expected and that data is not corrupted, deleted, or unintentionally updated during access to the database.
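As a rough sketch, assuming Python's built-in sqlite3 module and an invented accounts table, a database integrity test can exercise an update cycle and then assert that no rows were lost and no values were corrupted:

```python
import sqlite3

# Database integrity testing sketch: exercise the access routines and verify
# that data is not corrupted, lost, or unintentionally modified.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, owner TEXT, balance REAL)")
conn.execute("INSERT INTO accounts VALUES (1, 'alice', 100.0)")
conn.execute("INSERT INTO accounts VALUES (2, 'bob', 250.0)")
conn.commit()

# The operation under test: transfer 50.0 from bob to alice.
conn.execute("UPDATE accounts SET balance = balance - 50.0 WHERE id = 2")
conn.execute("UPDATE accounts SET balance = balance + 50.0 WHERE id = 1")
conn.commit()

# Integrity checks: row count unchanged, unrelated data untouched,
# and the total balance preserved by the transfer.
rows = conn.execute("SELECT id, owner, balance FROM accounts ORDER BY id").fetchall()
assert len(rows) == 2, "a row was lost or duplicated"
assert rows[0] == (1, "alice", 150.0) and rows[1] == (2, "bob", 200.0)
total = conn.execute("SELECT SUM(balance) FROM accounts").fetchone()[0]
assert total == 350.0, "the transfer corrupted the total balance"
print("database integrity checks passed")
```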
Documentation Testing
Exhaustive Testing
Isolation Testing
Overview
Testing generally
involves running a suite of tests on the completed system. Each individual
test, known as a case, exercises a particular operating condition of the user's
environment or feature of the system, and results in a pass or fail (Boolean)
outcome. There is generally no degree of success or failure. The test environment
is usually designed to be identical, or as close as possible, to the
anticipated user's environment, including extremes of it. These test cases
must each be accompanied by test case input data or a formal description of the
operational activities to be performed (or both), intended to thoroughly
exercise the specific case, and a formal description of the expected results.
Acceptance Tests/Criteria (in agile
software development) are usually created by business customers and expressed
in a business domain language. These are high-level tests to verify the
completeness of a user story or stories 'played' during any sprint/iteration.
These tests are ideally created through collaboration between business
customers, business analysts, testers and developers; however, the business
customers (product owners) are the primary owners of these tests. As the user
stories pass their acceptance criteria, the business owners can be sure that the
developers are progressing in the direction in which the application was
envisaged to work, so it is essential that these tests include both business
logic tests and UI validation elements (if need be).
Acceptance test cards are ideally
created during sprint planning or iteration planning meeting, before
development begins so that the developers have a clear idea of what to develop.
Sometimes (due to bad planning!) Acceptance tests may span multiple stories
(that are not implemented in the same sprint) and there are different ways to
test them out during actual sprints. One popular technique is to mock external
interfaces or data to mimic other stories which might not be played out during
iteration (as those stories may have been relatively lower business priority).
A user story is not considered complete until the Acceptance tests have passed.
Process
The acceptance test suite is run
against the supplied input data or using an acceptance test script to direct
the testers. Then the results obtained are compared with the expected results.
If there is a correct match for every case, the test suite is said to pass. If
not, the system may either be rejected or accepted on conditions previously
agreed between the sponsor and the manufacturer.
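A small harness can illustrate this comparison step; the system_under_test function and the case data below are hypothetical stand-ins, not part of any real acceptance suite:

```python
# Acceptance-test harness sketch: each case supplies input data and an
# expected result; the suite passes only if every obtained result matches.

def system_under_test(amount, currency):
    """Hypothetical delivered function: format an invoice total."""
    return f"{amount:.2f} {currency}"

acceptance_cases = [
    # (case name, input data, expected result)
    ("whole amount", (100, "USD"), "100.00 USD"),
    ("fractional amount", (99.5, "EUR"), "99.50 EUR"),
    ("zero amount", (0, "GBP"), "0.00 GBP"),
]

failures = []
for name, args, expected in acceptance_cases:
    obtained = system_under_test(*args)
    if obtained != expected:
        failures.append((name, expected, obtained))

if failures:
    for name, expected, obtained in failures:
        print(f"FAIL {name}: expected {expected!r}, got {obtained!r}")
    print("Acceptance suite FAILED")
else:
    print("Acceptance suite PASSED: every case matched its expected result")
```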
The objective is to provide
confidence that the delivered system meets the business requirements of both
sponsors and users. The acceptance phase may also act as the final quality gateway,
where any quality defects not previously detected may be uncovered.
A principal purpose of acceptance
testing is that, once completed
successfully, and provided certain additional (contractually agreed) acceptance
criteria are met, the sponsors will then sign off on the system as satisfying
the contract (previously agreed between sponsor and manufacturer), and deliver
final payment.
User Acceptance Testing
User
Acceptance Testing (UAT) is a process to obtain confirmation that a system
meets mutually agreed-upon requirements. A Subject
Matter Expert (SME), preferably the owner or
client of the object under test, provides such confirmation after trial or
review. In software
development, UAT is one of the final stages of
a project and often occurs before a client or customer accepts the new system.
Users of the system perform these
tests, which developers derive from the client's contract or the user
requirements specification.
Test-designers draw up formal tests
and devise a range of severity levels. Ideally the designer of the user acceptance tests should not be the creator of
the formal integration and system test cases for the same system. The UAT acts as a final verification
of the required business function and proper functioning of the system,
emulating real-world usage conditions on behalf of the paying client or a
specific large customer. If the software works as intended and without issues
during normal use, one can reasonably extrapolate the same level of stability
in production.
User tests, which are usually
performed by clients or end-users, do not normally focus on identifying simple
problems such as spelling errors and cosmetic problems, nor showstopper defects, such as software crashes; testers and developers previously identify and fix these
issues during earlier unit testing, integration
testing, and system testing phases.
The results of these tests give
confidence to the clients as to how the system will perform in production.
There may also be legal or contractual requirements for acceptance of the system.
Q-UAT - Quantified User Acceptance Testing
Quantified
User Acceptance Testing (Q-UAT or, more simply, the "Quantified
Approach") is a revised Business Acceptance Testing process which aims to provide a smarter and faster
alternative to the traditional UAT phase. Depth-testing is carried out against business
requirements only at specific planned points in the application or service
under test. A reliance on better-quality code delivery from the
development/build phase is assumed, and a complete understanding of the
appropriate business process is a prerequisite. This methodology, if carried out
correctly, results in a quick turnaround against plan, a decreased number of
test scenarios which are more complex and wider in breadth than traditional UAT,
and ultimately the equivalent confidence level attained via a shorter
delivery window, allowing products/changes to come to market quicker.
The Q-UAT approach depends on a
"gated" three-dimensional model. The key concepts are:
- Linear Testing
(LT, the 1st dimension)
- Recursive Testing
(RT, the 2nd dimension)
- Adaptive Testing
(AT, the 3rd dimension).
The four "gates"
which conjoin and support the 3-dimensional model act as quality safeguards and
include contemporary testing concepts
such as:
- Internal Consistency Checks (ICS)
- Major Systems/Services Checks (MSC)
- Real-time/Reactive Regression (RTR).
The Quantified Approach was shaped
by the former "guerilla" method of acceptance
testing which was itself a response to testing phases which proved too costly to be
sustainable for many small/medium-scale projects.
Acceptance Testing in Extreme Programming
Acceptance testing is a term used in agile
software development methodologies,
particularly Extreme
Programming, referring to the functional
testing of a user story by the software development team during the implementation
phase.
The customer specifies scenarios
to test when a user story has been correctly implemented. A story can have
one or many Acceptance tests,
whatever it takes to ensure the functionality works. Acceptance tests are black box system tests.
Each Acceptance test represents some
expected result from the system. Customers are responsible for verifying the
correctness of the Acceptance tests
and reviewing test scores to decide which failed tests are of highest
priority. Acceptance tests are also
used as regression tests prior to a production release. A user story is not
considered complete until it has passed its Acceptance
tests. This means that new Acceptance
tests must be created for each iteration or the development team will report
zero progress.
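For illustration only, an Acceptance test for a hypothetical "customer can withdraw cash" story could be written as a black box test against the system's public interface, with the scenarios supplied by the customer:

```python
import unittest

# Minimal stand-in for the system under test; in a real project this would be
# the application's public interface, exercised as a black box.
class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount
        return amount

class WithdrawCashAcceptanceTest(unittest.TestCase):
    """Scenarios agreed with the customer for the 'withdraw cash' story."""

    def test_successful_withdrawal_reduces_balance(self):
        account = Account(balance=100)
        dispensed = account.withdraw(30)
        self.assertEqual(dispensed, 30)
        self.assertEqual(account.balance, 70)

    def test_withdrawal_beyond_balance_is_refused(self):
        account = Account(balance=20)
        with self.assertRaises(ValueError):
            account.withdraw(50)

if __name__ == "__main__":
    unittest.main()
```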
Types of Acceptance Testing
Typical types of acceptance testing include the following:
User Acceptance Testing
This may include factory acceptance testing, i.e. testing done by users at the vendor's (factory) site before the system is moved to its own site, after which site acceptance testing may be performed by the users at the site.
Operational Acceptance Testing
Also known as operational readiness testing,
this refers to the checking done to a system to ensure that processes and
this refers to the checking done to a system to ensure that processes and
procedures are in place to allow the system to be used and maintained. This may
include checks done to back-up facilities, procedures for disaster recovery,
training for end users, maintenance procedures, and security procedures.
Contract and Regulation Acceptance Testing
In
contract acceptance testing, a system is tested
against acceptance criteria as documented in
a contract, before the system is accepted. In regulation acceptance
testing, a system is tested to ensure it meets
governmental, legal and safety standards.
Alpha and Beta Testing
Alpha testing takes place at developers' sites and involves testing of the operational system by internal staff before it is released to external customers. Beta testing takes place at customers' sites and involves testing by a group of customers who use the system at their own locations and provide feedback before the system is released to other customers. The latter is often called "field testing".
System Testing
Testing the behavior of the whole software/system as defined in the software requirements specification (SRS) is known as system testing; its main focus is to verify that the customer requirements are fulfilled.
System testing is done after integration testing is complete.
System testing should cover both functional and non-functional requirements of the software.
The following types of testing should be considered during the system
testing cycle. The test types followed in system testing differ from
organization to organization; however, this list covers some of the main testing
types that need to be covered in system testing.
Sanity Testing
Sanity testing is done after thorough regression testing is over; it is done to make sure that any defect fixes or changes made after regression testing do not break the core functionality of the product. It is done towards the end of the product release phase.
Sanity testing follows a narrow and deep approach, with detailed testing of a limited set of features.
Sanity testing is like specialized testing, used to find problems in a particular piece of functionality.
Sanity testing is done with the intent of verifying whether end-user requirements are met or not.
Sanity tests are mostly not scripted.
- A sanity test is usually unscripted.
- A sanity test is used to determine whether a small section of
the application is still working after a minor change.
- Sanity testing is cursory testing; it is performed
whenever cursory testing is sufficient to prove that the application is
functioning according to specifications. This level of testing is a subset
of regression testing.
- Sanity testing verifies whether specific requirements are
met, checking a few features in depth rather than all features breadth-first.
Smoke Testing
- Smoke testing originated in the hardware testing
practice of turning on a new piece of hardware for the first time and
considering it a success if it does not catch fire and smoke. In the software
industry, smoke testing is a shallow and wide approach in which all areas
of the application are tested without going into too much depth.
- A smoke test is scripted, either using a written set of
tests or an automated test.
- A Smoke test is designed to touch every part of the
application in a cursory way. It’s shallow and wide.
- Smoke testing is conducted to ensure that the most
crucial functions of a program are working, without bothering with finer
details (it is also known as build verification testing).
- Smoke testing is a routine health check-up of a build of
an application before taking it into in-depth testing (a minimal sketch follows this list).
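A minimal smoke-test sketch, assuming a hypothetical web build served at http://localhost:8000 and invented page paths, simply touches the key areas of the application and checks that each one responds:

```python
import unittest
import urllib.request

BASE_URL = "http://localhost:8000"  # hypothetical address of the build under test

class SmokeTest(unittest.TestCase):
    """Shallow-and-wide check: touch every major area, don't go deep."""

    def test_key_pages_respond(self):
        for path in ("/", "/login", "/search", "/cart", "/help"):
            with self.subTest(path=path):
                with urllib.request.urlopen(BASE_URL + path, timeout=5) as response:
                    self.assertEqual(response.status, 200)

if __name__ == "__main__":
    unittest.main()
```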
Usability Testing
Usability means the software's capability to be learned and understood easily, and how attractive it looks to the end user.
Usability testing is a black box testing technique.
Usability testing examines the following characteristics of the software:
1. How easy it is to use the software.
2. How easy it is to learn the software.
3. How convenient the software is to the end user.
Stress Testing
Stress testing tests the software with a focus on checking that the software does not crash if hardware resources (such as memory, CPU, or disk space) are insufficient.
Stress testing puts the hardware resources under extensive levels of stress in order to ensure that the software is stable in a normal environment.
In stress testing, we load the software with a larger number of concurrent users/processes than the system's hardware resources can handle.
Stress testing is a type of performance testing and is non-functional testing.
Examples:
1. A stress test of the CPU can be done by running the software application at 100% CPU load for several days, which helps ensure that the software runs properly under normal usage conditions.
2. Suppose some software has a minimum memory requirement of 512 MB RAM; the application is then tested on a machine with 512 MB of memory under extensive load to find out how the system/software behaves.
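A rough sketch of the first example, with an invented function_under_test and an assumed 2-second acceptability threshold, saturates every CPU core with busy-loop processes and then checks that the function still completes in time:

```python
import multiprocessing
import time

def burn_cpu(seconds):
    """Busy-loop to keep one CPU core fully loaded for the given duration."""
    end = time.time() + seconds
    while time.time() < end:
        pass

def function_under_test():
    """Hypothetical workload whose behaviour under stress we want to observe."""
    return sum(i * i for i in range(200_000))

if __name__ == "__main__":
    # Saturate all cores while the function under test runs.
    stressors = [multiprocessing.Process(target=burn_cpu, args=(10,))
                 for _ in range(multiprocessing.cpu_count())]
    for p in stressors:
        p.start()

    start = time.time()
    function_under_test()
    elapsed = time.time() - start

    for p in stressors:
        p.join()

    print(f"completed in {elapsed:.3f}s under full CPU load")
    assert elapsed < 2.0, "function became unacceptably slow under CPU stress"
```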
Load Testing
Load testing tests the software or component with increasing load: the number of concurrent users or transactions is increased, and the behavior of the system is examined to determine what load the software can handle.
The main objective of load testing is to determine the response time of the software for critical transactions and to make sure that it is within the specified limit.
Load testing is a type of performance testing and is non-functional testing.
Performance Testing
Performance testing is done to determine software characteristics such as response time, throughput, or MIPS (millions of instructions per second) at which the system/software operates.
Performance testing is done by generating activity on the system/software using the performance test tools available. The tools are used to create different user profiles and inject different kinds of activity on the server, replicating end-user environments.
The purpose of performance testing is to ensure that the software meets the specified performance criteria, and to figure out which part of the software is causing performance to go down.
Performance testing tools should have the following characteristics (a minimal measurement sketch follows this list):
- They should generate load on the system under test.
- They should measure the server response time.
- They should measure the throughput.
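A very small sketch of these three characteristics, assuming a hypothetical service at http://localhost:8000/ and arbitrarily chosen load figures, generates load with a pool of concurrent workers and reports the average response time and throughput:

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8000/"   # hypothetical server under test
CONCURRENT_USERS = 20            # size of the generated load
REQUESTS_PER_USER = 10

def timed_request(_):
    start = time.time()
    with urllib.request.urlopen(URL, timeout=10) as response:
        response.read()
    return time.time() - start

if __name__ == "__main__":
    total_requests = CONCURRENT_USERS * REQUESTS_PER_USER
    wall_start = time.time()
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        durations = list(pool.map(timed_request, range(total_requests)))
    wall_elapsed = time.time() - wall_start

    avg_response = sum(durations) / len(durations)
    throughput = total_requests / wall_elapsed
    print(f"average response time: {avg_response * 1000:.1f} ms")
    print(f"throughput: {throughput:.1f} requests/second")
```

Dedicated tools such as those listed below do the same job at far larger scale and with much richer monitoring.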
Performance Testing Tools
1. IBM Rational Performance Tester
It is a performance testing tool from IBM; it supports load testing for applications such as HTTP, SAP, and Siebel. It is supported on Windows and Linux.
2. LoadRunner
LoadRunner is HP's (formerly Mercury's) load/stress testing tool for web and other applications. It supports a wide variety of application environments, platforms, and databases, and provides a large suite of network/app/server monitors to enable performance measurement of each tier/server/component and tracing of bottlenecks.
3. Apache JMeter
JMeter is a Java desktop application from the Apache Software Foundation designed to load-test functional behavior and measure performance. It was originally designed for testing web applications but has expanded to other test functions; it may be used to test performance on both static and dynamic resources (files, servlets, Perl scripts, Java objects, databases and queries, FTP servers, and more). It can also be used to simulate a heavy load on a server, network, or object to test its strength or to analyze overall performance under different load types, and it can produce a graphical analysis of performance or test server/script/object behavior under heavy concurrent load.
4. DBUnit
DBUnit is an open-source JUnit extension (also usable with Ant) targeted at database-driven projects that, among other things, puts a database into a known state between test runs. This helps avoid problems that occur when one test case corrupts the database and causes subsequent tests to fail or exacerbate the damage. It can export and import database data to and from XML datasets, can work with very large datasets when used in streaming mode, and can help verify that database data matches expected sets of values.
Regression Testing
Regression testing is done to find defects that arise from code changes made to existing code, such as functional enhancements or configuration changes.
The main intent behind regression testing is to ensure that any code changes made for software enhancements or configuration changes have not introduced new defects into the software.
Any time changes are made to existing working code, a suite of test cases is executed to ensure that the new changes have not introduced any bugs into the software.
It is necessary to have a regression test suite and to execute that suite whenever a new version of the software becomes available.
The regression test suite is an ideal candidate for automation because it needs to be executed for every new version.
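As a sketch of that automation, if the regression cases are written with Python's standard unittest module and kept under an assumed tests/ directory, the whole suite can be run against each new version with one small runner that a build pipeline can call:

```python
import sys
import unittest

# Regression-suite runner sketch: discover every test under the assumed
# "tests/" directory and run it against the new build. Exit non-zero on any
# failure so a build pipeline can reject the version automatically.
if __name__ == "__main__":
    suite = unittest.defaultTestLoader.discover("tests")
    result = unittest.TextTestRunner(verbosity=2).run(suite)
    sys.exit(0 if result.wasSuccessful() else 1)
```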
Maintenance Testing
Maintenance testing is done on already deployed software. Deployed software needs to be enhanced, changed, or migrated to other hardware, and the testing done during this enhancement, change, and migration cycle is known as maintenance testing.
Once the software is deployed in an operational environment, it needs maintenance from time to time in order to avoid system breakdown; most banking software systems, for example, need to be operational 24x7x365. So it is very necessary to do maintenance testing of software applications.
In maintenance testing, the tester should consider two aspects:
Any changes made to the software should be tested thoroughly.
The changes made to the software should not affect the existing functionality of the software, so regression testing is also done.
Why is Maintenance Testing Required?
Users may need new features in the existing software, which requires modifications to the existing software, and these modifications need to be tested.
End users might want to migrate the software to a newer hardware platform or change the environment (for example, the OS version or database version), which requires testing the whole application on the new platform and environment.
Security Testing
Security testing tests the ability of the system/software to prevent unauthorized access to resources and data.
Security testing needs to cover six basic security concepts: confidentiality, integrity, authentication, authorization, availability, and non-repudiation.
Confidentiality
A security measure which protects against the disclosure of information to parties other than the intended recipient; it is by no means the only way of ensuring security.
Integrity
A measure intended to allow the receiver to determine that the information it receives is correct. Integrity schemes often use some of the same underlying technologies as confidentiality schemes, but they usually involve adding information to a communication to form the basis of an algorithmic check, rather than encoding all of the communication.
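For example, one common algorithmic check of this kind appends a keyed hash (an HMAC) to the message, and the receiver recomputes and compares it; the sketch below uses Python's standard hmac and hashlib modules with an invented shared key and message:

```python
import hashlib
import hmac

SHARED_KEY = b"example-shared-secret"  # assumed to be known to sender and receiver

def attach_tag(message):
    """Sender side: add a keyed hash to the communication."""
    tag = hmac.new(SHARED_KEY, message, hashlib.sha256).digest()
    return message, tag

def verify(message, tag):
    """Receiver side: recompute the hash and compare to detect tampering."""
    expected = hmac.new(SHARED_KEY, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

message, tag = attach_tag(b"transfer 100.00 to account 42")
assert verify(message, tag)                               # untouched message passes
assert not verify(b"transfer 900.00 to account 7", tag)   # altered message fails
print("integrity check behaves as expected")
```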
Authentication
The process of establishing the
identity of the user. Authentication can take many forms including but not limited to:
passwords, biometrics, radio frequency identification, etc.
Authorization
The process of determining that a requester is allowed to receive a
service or perform an operation. Access control is an example of authorization.
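A small authorization test sketch (the roles, operations, and permission table are invented for illustration) checks that a requester without the right permission is refused:

```python
import unittest

# Minimal role-based access control stand-in for the system under test.
PERMISSIONS = {
    "admin": {"read_report", "delete_user"},
    "viewer": {"read_report"},
}

def is_authorized(role, operation):
    return operation in PERMISSIONS.get(role, set())

class AuthorizationTest(unittest.TestCase):
    def test_viewer_can_read_but_not_delete(self):
        self.assertTrue(is_authorized("viewer", "read_report"))
        self.assertFalse(is_authorized("viewer", "delete_user"))

    def test_unknown_role_is_refused_everything(self):
        self.assertFalse(is_authorized("guest", "read_report"))

if __name__ == "__main__":
    unittest.main()
```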
Availability
Assuring information and communications services will be ready for use
when expected. Information must
be kept available to authorized persons when they need it.
Non-repudiation
A measure intended to prevent the later denial that an action happened or that a communication took place. In communication terms, this often involves the interchange of authentication information combined with some form of provable timestamp.
Accessibility Testing
Testing that determines how easily end users with disabilities can use a system or software is known as accessibility testing.