Wednesday, October 2, 2013

Types of Software Testing


 


Static Testing

Static testing refers to testing something that is not running: examining and reviewing a work product rather than executing it. A specification is a document, not an executing program, so reviewing it is considered static testing. The work product under review may have been created as written or graphical documents, or a combination of both.

 


Informal reviews:-Informal reviews are applied many times during the early stages of the life cycle of the document. A two-person team can conduct an informal review. In later stages these reviews often involve more people and a meeting. The goal is to help the author and to improve the quality of the document. The most important thing to keep in mind about informal reviews is that they are not documented.

Technical Review:-A technical review is less formal and is led by a trained moderator, but it can also be led by a technical expert. It is often performed as a peer review without management participation. Defects are found by experts (such as architects, designers, key users) who focus on the content of the document.

Walkthrough:-In software engineering, a walkthrough or walk-through is a form of software peer review "in which a designer or programmer leads members of the development team and other interested parties through a software product, and the participants ask questions and make comments about possible errors, violation of development standards, and other problems".

Inspection:-Inspection in software engineering is the most formal review type of any work product, conducted by trained individuals who look for defects using a well-defined process.



 


Dynamic Testing

Dynamic testing may begin before the program is 100% complete in order to test particular sections of code (modules or discrete functions). Typical techniques for this are either using stubs/drivers or execution from a debugger environment. For example, spreadsheet programs are, by their very nature, tested to a large extent interactively ("on the fly"), with results displayed immediately after each calculation or text manipulation.

 


 

Specification-based (black-box) testing techniques

 

The first of the dynamic testing techniques we will look at are the specification-based testing techniques. These are also known as 'black-box' or input/output-driven testing techniques because they view the software as a black box with inputs and outputs, but they have no knowledge of how the system or component is structured inside the box. In essence, the tester is concentrating on what the software does, not how it does it.

Notice that the definition mentions functional and non-functional testing. Functional testing is concerned with what the system does, its features or functions. Non-functional testing is concerned with examining how well the system does something, rather than what it does. Non-functional aspects (also known as quality characteristics or quality attributes) include performance, usability, portability, maintainability, etc.

 

The four specification-based techniques we will cover in detail are:

 

Equivalence Partitioning

Boundary Value Analysis

Decision Tables

State Transition Testing.

 

 

Note that we will discuss the first two, equivalence partitioning and boundary value analysis, together because they are closely related.

 

Equivalence Partitioning:

In this method, the input domain data is divided into different equivalence classes. The technique is typically used to reduce the total number of test cases to a finite, testable set while still covering maximum requirements. In short, it is the process of taking all possible test cases and placing them into classes; one representative value is picked from each class while testing.

E.g.: If you are testing an input box accepting numbers from 1 to 1000, there is no use in writing a thousand test cases for all 1000 valid input numbers, plus other test cases for invalid data.

Using the equivalence partitioning method, the above test cases can be divided into three sets of input data, called classes. Each test case is a representative of its class.

So in the above example we can divide our test cases into three equivalence classes: one valid class (numbers from 1 to 1000) and two invalid classes (numbers below 1 and numbers above 1000).
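As a sketch of this idea, assuming a hypothetical validation routine `is_valid_quantity` for the 1-to-1000 input box, one representative value per class might be tested like this (the chosen representatives are illustrative):

```python
# Equivalence partitioning for an input box accepting integers 1 to 1000.
# One representative value is picked from each class (values are illustrative).
classes = {
    "valid (1 to 1000)":    500,
    "invalid (below 1)":    0,
    "invalid (above 1000)": 1001,
}

def is_valid_quantity(n):
    # Hypothetical validation routine under test.
    return 1 <= n <= 1000

for name, representative in classes.items():
    print(name, "->", is_valid_quantity(representative))
```

Three test values stand in for all possible inputs, on the assumption that any other member of the same class would behave the same way.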

 

Boundary Value Analysis:

Boundary value analysis (BVA) is based on testing at the boundaries between partitions. If you have ever done 'range checking', you were probably using the boundary value analysis technique, even if you weren't aware of it. Note that we have both valid boundaries (in the valid partitions) and invalid boundaries (in the invalid partitions).

As an example, consider a printer that has an input option for the number of copies to be made, from 1 to 99. To apply boundary value analysis, we take the minimum and maximum (boundary) values from the valid partition (1 and 99 in this case) together with the values just outside it in each invalid partition (0 and 100).
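A minimal sketch of the boundary values for the printer example, assuming a hypothetical `accepts_copies` validation routine (only the 1-to-99 valid partition comes from the text; everything else is illustrative):

```python
# Boundary value analysis for the printer's 'number of copies' field (1 to 99).
valid_min, valid_max = 1, 99
boundary_values = [
    valid_min,      # 1   - valid boundary
    valid_max,      # 99  - valid boundary
    valid_min - 1,  # 0   - invalid boundary
    valid_max + 1,  # 100 - invalid boundary
]

def accepts_copies(n):
    # Hypothetical validation routine under test.
    return valid_min <= n <= valid_max

results = {n: accepts_copies(n) for n in boundary_values}
print(results)  # {1: True, 99: True, 0: False, 100: False}
```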

 

Decision Table Testing

 

The techniques of equivalence partitioning and boundary value analysis are often applied to specific situations or inputs. However, if different combinations of inputs result in different actions being taken, this can be more difficult to show using equivalence partitioning and boundary value analysis, which tend to be more focused on the user interface. The other two specification-based techniques, decision tables and state transition testing, are more focused on business logic or business rules.

 

A decision table is a good way to deal with combinations of things (e.g. inputs). This technique is sometimes also referred to as a 'cause-effect' table. The reason for this is that there is an associated logic diagramming technique called 'cause-effect graphing' which was sometimes used to help derive the decision table.

 

 

Decision tables aid the systematic selection of effective test cases and can have the beneficial side-effect of finding problems and ambiguities in the specification. It is a technique that works well in conjunction with equivalence partitioning. The combination of conditions explored may be combinations of equivalence partitions.

 

Credit card worked example

 

Let's look at another example. If you are a new customer opening a credit card account, you will get a 15% discount on all your purchases today. If you are an existing customer and you hold a loyalty card, you get a 10% discount. If you have a coupon, you can get 20% off today (but it can't be used with the 'new customer' discount). Discount amounts are added, if applicable.


As shown in Table:

In Table, the conditions and actions are listed in the left-hand column. All the other columns in the decision table each represent a separate rule, one for each combination of conditions. We may choose to test each rule/combination and, if there are only a few, this will usually be the case. However, if the number of rules/combinations is large, we are more likely to sample them by selecting a rich subset for testing.
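The credit card rules above can be sketched as executable logic; the `discount` helper is hypothetical, and the handling of a coupon held by a new customer (taking the 20% coupon alone, since the two discounts cannot be combined) is an assumption rather than something the rules state explicitly:

```python
# Sketch of the credit card discount rules as a decision table.
# Assumption: when a coupon and the new-customer discount both apply,
# the coupon (20%) is taken alone, since the two cannot be combined.
def discount(new_customer, loyalty_card, coupon):
    if coupon and new_customer:
        return 20
    total = 0
    if new_customer:
        total += 15          # 15% discount for a new customer
    if loyalty_card and not new_customer:
        total += 10          # 10% discount for an existing loyalty-card holder
    if coupon:
        total += 20          # discounts are added, if applicable
    return total

# Each tuple of condition values is one rule (one column of the table).
rules = [(True, False, False), (False, True, False),
         (False, True, True), (False, False, True)]
for new, loyal, coup in rules:
    print(new, loyal, coup, "->", discount(new, loyal, coup))
```

Each rule exercised here corresponds to one column of the decision table, which is how the technique turns combinations of conditions into concrete test cases.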

 

State Transition Testing:

 

State transition testing is used where some aspect of the system can be described in what is called a 'finite state machine'. This simply means that the system can be in a (finite) number of different states, and the transitions from one state to another are determined by the rules of the 'machine'. This is the model on which the system and the tests are based. Any system where you get a different output for the same input, depending on what has happened before, is a finite state system.

 

For example, if you request to withdraw $100 from a bank ATM, you may be given cash. Later you may make exactly the same request but be refused the money (because your balance is insufficient). This later refusal is because the state of your bank account has changed from having sufficient funds to cover the withdrawal to having insufficient funds. The transaction that caused your account to change its state was probably the earlier withdrawal. A state diagram can represent a model from the point of view of the system, the account or the customer.
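The ATM example can be sketched as a minimal finite state machine in Python; the `Account` class and the starting balance are hypothetical:

```python
# Minimal finite state machine for the ATM example: the same withdrawal
# request produces different outputs depending on the account's state.
class Account:
    def __init__(self, balance):
        self.balance = balance  # the state: sufficient vs. insufficient funds

    def withdraw(self, amount):
        if amount <= self.balance:
            self.balance -= amount  # this transition changes the state
            return "cash dispensed"
        return "request refused"

account = Account(150)
print(account.withdraw(100))  # cash dispensed (sufficient funds)
print(account.withdraw(100))  # request refused - the first withdrawal changed the state
```

State transition tests would aim to exercise each state, each transition, and ideally the invalid transitions (e.g. withdrawing from an already-insufficient account).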

 

Use Case Testing

 

Use case testing is a technique that helps us identify test cases that exercise the whole system on a transaction by transaction basis from start to finish.

A use case is a description of a particular use of the system by an actor (a user of the system). Each use case describes the interactions the actor has with the system in order to achieve a specific task (or, at least, produce something of value to the user). Actors are generally people but they may also be other systems. Use cases are a sequence of steps that describe the interactions between the actor and the system.

 

Use cases are defined in terms of the actor, not the system, describing what the actor does and what the actor sees rather than what inputs the system expects and what the system outputs. They often use the language and terms of the business rather than technical terms, especially when the actor is a business user.

 

 

Structure-Based (White-Box) Testing Techniques

Structure-based testing techniques (which are also dynamic rather than static) use the internal structure of the software to derive test cases. They are commonly called 'white-box' or 'glass-box' techniques (implying you can see into the system) since they require knowledge of how the software is implemented, that is, how it works. For example, a structural technique may be concerned with exercising loops in the software. Different test cases may be derived to exercise the loop once, twice, and many times. This may be done regardless of the functionality of the software.

Statement Coverage and Statement Testing

Statement coverage is also known as line coverage or segment coverage. It requires only that each statement is executed, so it exercises only the true outcomes of conditions. Through statement coverage we can identify which statements have been executed and where code is never reached (for example, blocked or dead code). In this process each and every line of code needs to be checked and executed.

The statement coverage can be calculated as shown below:

Statement Coverage = (Number of statements exercised / Total number of statements) x 100%

To understand statement coverage better, let us take an example written in pseudo-code. It is not in any specific programming language, but it should be readable and understandable to you, even if you have not done any programming yourself.

Consider Code Sample (a):

READ X

READ Y

IF X > Y THEN Z = 0

ENDIF

Code Sample (a)

 

To achieve 100% statement coverage of this code segment just one test case is required, one which ensures that variable X contains a value that is greater than the value of variable Y, for example, X = 12 and Y = 10. Note that here we are doing structural test design first, since we are choosing our input values in order to ensure statement coverage.

 

Now, let's take another example where we will measure the coverage first. In order to simplify the example, we will regard each line as a statement. A statement may be on a single line, or it may be spread over several lines. One line may contain more than one statement, just one statement, or only part of a statement. Some statements can contain other statements inside them. In Code Sample (b), we have two read statements, one assignment statement, and then one IF statement on three lines, but the IF statement contains another statement (print) as part of it.

 

1 READ X

2 READ Y

3 Z =X + 2*Y

4 IF Z> 50 THEN

5 PRINT "Large Z"

6 ENDIF

Code Sample (b)

 

Although it isn’t completely correct, we have numbered each line and will regard each line as a statement. Let’s analyze the coverage of a set of tests on our six-statement program:

 

TEST SET 1

Test 1_1: X= 2, Y = 3

Test 1_2: X =0, Y = 25

Test 1_3: X =47, Y = 1

 

Which statements have we covered?

 

In Test 1_1, the value of Z will be 8, so we will cover the statements on lines 1 to 4 and line 6.

In Test 1_2, the value of Z will be 50, so we will cover exactly the same statements as Test 1_1.

In Test 1_3, the value of Z will be 49, so again we will cover the same statements.

Since we have covered five out of six statements, we have 83% statement coverage (with three tests). What test would we need in order to cover statement 5, the one statement that we haven’t exercised yet? How about this one:

 

Test 1_4: X = 20, Y = 25

 

This time the value of Z is 70, so we will print ‘Large Z’ and we will have exercised all six of the statements, so now statement coverage = 100%. Notice that we measured coverage first, and then designed a test to cover the statement that we had not yet covered.
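The coverage measurement above can be reproduced by translating Code Sample (b) into Python and instrumenting it by hand; the `program` function and the `hit` set are a sketch added here, not part of the original pseudo-code:

```python
# Python sketch of Code Sample (b): each of the six numbered statements
# records itself in `hit` when it executes.
def program(x, y, hit):
    hit.update({1, 2})           # 1 READ X / 2 READ Y (modelled as parameters)
    z = x + 2 * y; hit.add(3)    # 3 Z = X + 2*Y
    hit.add(4)                   # 4 IF Z > 50 THEN
    if z > 50:
        hit.add(5)               # 5 PRINT "Large Z"
    hit.add(6)                   # 6 ENDIF

TOTAL = 6
hit = set()
for x, y in [(2, 3), (0, 25), (47, 1)]:  # Test Set 1
    program(x, y, hit)
print(len(hit) / TOTAL * 100)  # 83.33... - statement 5 has not been exercised

program(20, 25, hit)           # Test 1_4: Z = 70, exercises statement 5
print(len(hit) / TOTAL * 100)  # 100.0
```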

 

Note that Test 1_4 on its own is more effective at achieving 100% statement coverage than the first three tests together. Taking Test 1_4 on its own is also more efficient than the set of four tests, since it uses only one test instead of four. Being more effective and more efficient is the mark of a good test technique.

 

Decision Coverage and Decision Testing

 

A decision is an IF statement, a loop control statement (e.g. DO-WHILE or REPEAT-UNTIL), or a CASE statement, where there are two or more possible exits or outcomes from the statement. With an IF statement, the exit can either be TRUE or FALSE, depending on the value of the logical condition that comes after IF. With a loop control statement, the outcome is either to perform the code within the loop or not - again a True or False exit. Decision coverage is calculated by:

 

Decision Coverage = (Number of decision outcomes exercised / Total number of decision outcomes) x 100%

 

What feels like reasonably thorough functional testing may achieve only 40% to 60% decision coverage. Typical ad hoc testing may cover only 20% of the decisions, leaving 80% of the possible outcomes untested. Even if your testing seems reasonably thorough from a functional or specification-based perspective, you may have covered only two-thirds or three-quarters of the decisions. Decision coverage is stronger than statement coverage: it 'subsumes' statement coverage, meaning that 100% decision coverage always guarantees 100% statement coverage. Any stronger coverage measure may require more test cases to achieve 100% coverage. For example, consider Code Sample (a) again.

 

We saw earlier that just one test case was required to achieve 100% statement coverage. However, decision coverage requires each decision to have had both a True and a False outcome. Therefore, to achieve 100% decision coverage, a second test case is necessary where X is less than or equal to Y. This will ensure that the decision statement 'IF X > Y' has a False outcome. So one test is sufficient for 100% statement coverage, but two tests are needed for 100% decision coverage. Note that 100% decision coverage guarantees 100% statement coverage, but not the other way around!

 

1. READ A

2. READ B

3. C = A - 2 * B

4. IF C < 0 THEN

5. PRINT "C negative"

6. ENDIF

 

 

Code Sample (c)

 

Let's suppose that we already have the following test, which gives us 100% statement coverage for Code Sample (c).

 

TEST SET 2

 



Test 2_1: A = 20, B = 15

Test 2_2: A = 10, B = 2 (for example; any values giving C >= 0 would do)

Control Flow Diagram of Code Sample (c)

This now covers both of the decision outcomes, True (with Test 2_1) and False (with Test 2_2). If we were to draw the path taken by Test 2_2, it would be a straight line from the read statement down the false exit and through the ENDIF. Note that we could have chosen other numbers to achieve either the True or False outcomes.
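A similar sketch for decision coverage of Code Sample (c), recording which outcome of the decision each test exercises; the `program` function is an added Python translation, and the values used for the False-outcome test are illustrative:

```python
# Python sketch of Code Sample (c): record which outcome (True or False)
# the decision takes on each run.
def program(a, b, outcomes):
    c = a - 2 * b
    outcomes.add(c < 0)          # the decision: IF C < 0
    return "C negative" if c < 0 else None

outcomes = set()
program(20, 15, outcomes)        # Test 2_1: C = -10, True outcome
print(len(outcomes) / 2 * 100)   # 50.0 - only the True outcome exercised
program(10, 2, outcomes)         # a Test 2_2, for example: C = 6, False outcome
print(len(outcomes) / 2 * 100)   # 100.0 decision coverage
```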

 

Experience-Based Testing Techniques

 

In experience-based techniques, people's knowledge, skills and background are a prime contributor to the test conditions and test cases. The experience of both technical and business people is important, as they bring different perspectives to the test analysis and design process. Due to previous experience with similar systems, they may have insights into what could go wrong, which is very useful for testing.

 

Error Guessing

 

Error guessing is a technique that should always be used as a complement to more formal techniques. The success of error guessing is very much dependent on the skill of the tester, as good testers know where the defects are most likely to lurk. Some people seem to be naturally good at testing, and others are good testers because they have a lot of experience, either as a tester or working with a particular system, and so are able to pinpoint its weaknesses. This is why an error-guessing approach, used after more formal techniques have been applied to some extent, can be very effective. In using more formal techniques, the tester is likely to gain a better understanding of the system, what it does and how it works. With this better understanding, he or she is likely to be better at guessing ways in which the system may not work properly.

 

There are no rules for error guessing. The tester is encouraged to think of situations in which the software may not be able to cope. Typical conditions to try include division by zero, blank (or no) input, empty files and the wrong kind of data (e.g. alphabetic characters where numeric are required). If anyone ever says of a system or the environment in which it is to operate 'That could never happen', it might be a good idea to test that condition, as such assumptions about what will and will not happen in the live environment are often the cause of failures. A structured approach to the error-guessing technique is to list possible defects or failures and to design tests that attempt to produce them. These defect and failure lists can be built based on the tester's own experience or that of other people, available defect and failure data, and from common knowledge about why software fails.
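The structured, checklist-style approach can be sketched in Python; `parse_quantity` is a hypothetical function under test, and the guessed inputs follow the typical conditions listed above (blank input, wrong kind of data, out-of-range values):

```python
# A short error-guessing checklist run against a hypothetical input parser.
def parse_quantity(text):
    # Hypothetical function under test: accept an integer quantity 1-1000.
    value = int(text)                 # non-numeric input raises ValueError
    if not 1 <= value <= 1000:
        raise ValueError("out of range")
    return value

# Inputs experience suggests are likely to expose defects.
guesses = ["", "   ", "0", "-1", "1001", "abc", "3.5"]
for guess in guesses:
    try:
        parse_quantity(guess)
        print(repr(guess), "accepted")
    except ValueError:
        print(repr(guess), "rejected")
```

Each guess encodes a suspected failure mode; a defect would show up as an input being accepted (or crashing) when the tester expected it to be rejected cleanly.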

 

 Exploratory Testing

 

Exploratory testing is a hands-on approach in which testers are involved in minimum planning and maximum test execution. The planning involves the creation of a test charter: a short declaration of the scope of a short (1- to 2-hour) time-boxed test effort, the objectives, and the possible approaches to be used.

 

The test design and test execution activities are performed in parallel typically without formally documenting the test conditions, test cases or test scripts. This does not mean that other, more formal testing techniques will not be used. For example, the tester may decide to use boundary value analysis but will think through and test the most important boundary values without necessarily writing them down. Some notes will be written during the exploratory-testing session, so that a report can be produced afterwards.

 

Test logging is undertaken as test execution is performed, documenting the key aspects of what is tested, any defects found and any thoughts about possible further testing. A key aspect of exploratory testing is learning: learning by the tester about the software, its use, its strengths and its weaknesses. As its name implies, exploratory testing is about exploring, finding out about the software, what it does, what it doesn't do, what works and what doesn't work. The tester is constantly making decisions about what to test next and where to spend the (limited) time.

 

This is an approach that is most useful when there are no or poor specifications and when time is severely limited. It can also serve to complement other, more formal testing, helping to establish greater confidence in the software.

 
 
