
UNIT - IV

Chapter VIII



8.1 Testing Web Applications
8.2 Testing the documentation
8.3 Test planning
8.4 Test Management
8.5 Test Execution
8.6 Test Reporting
8.7 Check your progress






















8.1 Testing Web Applications


Web application testing is a collection of related activities with a single goal: to uncover errors in Web App content, function, usability, navigability, performance, capacity and security. To accomplish this, a testing strategy that encompasses both reviews and executable testing is applied throughout the web engineering process.

Web engineers and other project stakeholders (managers, customers, end-users) all participate in Web App testing. If end-users encounter errors that shake their faith in the Web App, they will go elsewhere for the content and function they need, and the Web App will fail. For this reason, Web engineers must work to eliminate as many errors as possible before the Web App goes on-line.

The Web App testing process begins by focusing on user-visible aspects of the Web App and proceeds to tests that exercise technology and infrastructure. Seven testing steps are performed: content, interface, navigation, component, configuration, performance and security testing. A suite of test cases is developed for every testing step and an archive of test results is maintained for future use.


Testing concepts for Web Applications

To understand the objectives of testing within a web engineering context, we must consider the many dimensions of Web App quality. In the context of this discussion, we consider quality dimensions that are particularly relevant in any discussion of testing for Web Engineering work. We also consider the nature of errors that are encountered as a consequence of testing, and the testing strategy that is applied to uncover the errors.


Dimensions of quality

Quality is incorporated into a Web application as a consequence of good design. Both reviews and testing examine one or more of the following quality dimensions:

Content is evaluated at both a syntactic and semantic level. At the syntactic level, spelling, punctuation, and grammar are assessed for text-based documents. At a semantic level, correctness, consistency, and lack of ambiguity are all assessed.

Function is tested to uncover errors that indicate lack of conformance to customer requirements.

Structure is assessed to ensure that it properly delivers Web App content and function.

Usability is tested to ensure that each category of user is supported by the interface.

Navigability is tested to ensure all navigation syntax and semantics are exercised.

Performance is tested under a variety of operating conditions, configurations and loading.

Compatibility is tested with the intent to find errors that are specific to a unique host configuration.

Interoperability is tested to ensure that the Web App properly interfaces with other applications and databases.

Security is tested by assessing potential vulnerabilities.


The testing process

The testing process for Web Engineering begins with tests that exercise content and interface functionality that is immediately visible to end-users. The following figure shows the Web App testing process alongside the design pyramid.












Fig. The testing process: the testing steps (content testing, interface testing, navigation testing, component testing, configuration testing, performance testing and security testing) shown against the layers of the design pyramid (interface design, aesthetic design, content design, navigation design, architecture design and component design).

Content testing


Errors in Web App content can be as trivial as minor typographical errors or as significant as incorrect information, improper organization, or violation of intellectual property laws. Content testing attempts to uncover these and many other problems before the user encounters them.

Content testing has three important objectives:

To uncover syntactic errors in text-based documents, graphical representations, and other media,

To uncover semantic errors, and

To find errors in the organization or structure of content that is presented to the end user.


Database testing


Modern web applications do much more than present static content objects. A Web App interfaces with sophisticated databases and builds dynamic content objects that are created in real-time using data acquired from a database.

To accomplish this,

A large equities database is queried,

Relevant data are extracted from the database,

The extracted data must be organized as a content object, and

This content object is transmitted to the client environment for display.
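The short sketch below walks through the same query, extract, organize and transmit sequence in miniature, using Python's built-in sqlite3 module as a stand-in for the real database. The table name, columns and threshold are illustrative assumptions, not details taken from the text; a database test would compare the generated content object against the data actually stored in the database.

    # Minimal sketch of: query -> extract -> organize as a content object -> transmit.
    # The "equities" table and its columns are made up for illustration only.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE equities (symbol TEXT, price REAL)")
    conn.executemany("INSERT INTO equities VALUES (?, ?)",
                     [("ABC", 101.5), ("XYZ", 42.0)])

    # 1-2. The database is queried and the relevant data are extracted.
    rows = conn.execute(
        "SELECT symbol, price FROM equities WHERE price > ?", (50,)).fetchall()

    # 3. The extracted data are organized as a content object (an HTML fragment).
    content_object = "<ul>" + "".join(
        f"<li>{symbol}: {price:.2f}</li>" for symbol, price in rows) + "</ul>"

    # 4. The content object is "transmitted" to the client environment for display;
    #    here it is simply printed.
    print(content_object)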


User interface testing


The overall strategy for interface testing is to

Uncover errors related to specific interface mechanisms and

Uncover errors in the way the interface implements the semantics of navigation, Web App functionality, or content display.

To accomplish this strategy, a number of objectives must be achieved:

Interface features are tested to ensure that design rules, aesthetics, and related visual content are available for the user without error.

Individual interface mechanisms are tested in a manner that is analogous to unit testing.

Each interface mechanism is tested within the context of a use case.

The complete interface is tested against selected use-cases to uncover errors in the semantics of the interface.

The interface is tested within a variety of environments to ensure that it will be compatible.


Component level testing


Component level testing, also called function testing, focuses on a set of tests that attempt to uncover errors in Web App functions. Component level test cases are often driven by forms-level input. Once forms data are defined, the user selects a button or other control mechanism to initiate execution. The following test case design methods are typical:

Equivalence partitioning

Boundary value analysis

Path testing

In addition to these test case design methods, a technique called forced error testing is used to derive test cases that purposely drive the Web application component into an error condition. The purpose is to uncover errors that occur during error-handling.
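As a small illustration of how these methods yield concrete test cases, the sketch below derives equivalence-partitioning, boundary-value and forced-error inputs for a hypothetical form field that accepts an age between 18 and 60. The field, its limits and the stand-in validation function are assumptions made for the example, not part of the text.

    # Illustrative only: component-level test cases for a hypothetical age field.
    LOWER, UPPER = 18, 60

    # Equivalence partitioning: one representative value per class.
    equivalence_cases = [35, 10, 75]          # valid, below range, above range

    # Boundary value analysis: values at and around each boundary.
    boundary_cases = [LOWER - 1, LOWER, LOWER + 1, UPPER - 1, UPPER, UPPER + 1]

    # Forced error testing: inputs meant to drive the component into its
    # error-handling code.
    forced_error_cases = ["", "abc", None, 10**9]

    def is_valid_age(value):
        """Stand-in for the real forms-level component under test."""
        try:
            return LOWER <= int(value) <= UPPER
        except (TypeError, ValueError):
            return False

    for case in equivalence_cases + boundary_cases + forced_error_cases:
        print(repr(case), "->", "accepted" if is_valid_age(case) else "rejected")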


Navigation testing


A user travels through a Web App in much the same way as a visitor walks through a store or museum. The job of navigation testing is

To ensure that the mechanisms that allow the Web App user to travel through the Web App are all functional and

To validate that each navigation semantic unit can be achieved by the appropriate user category.

The first phase of navigation testing actually begins during interface testing. Navigation mechanisms are tested to ensure that each performs its intended function. Each of the following navigation mechanisms should be tested:

Navigation links

Redirects

Bookmarks

Frames and framesets

Site maps

Internal search engines

Some of the tests noted can be performed by automated tools while others are designed and executed manually. The intent throughout is to ensure that errors in navigation mechanisms are found before the Web App goes on-line. Navigation testing, like interface and usability testing, should be conducted by as many different constituencies as possible. Early stages of testing are conducted by Web engineers, but later stages should be conducted by other project stakeholders, an independent testing team, and ultimately, by non-technical users. The intent is to exercise Web App navigation thoroughly.
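One of the navigation mechanisms listed above, navigation links, lends itself well to an automated check. The hedged sketch below, written against the Python standard library, fetches a page, collects its anchors and reports any link that does not answer cleanly; the starting URL is a placeholder for the Web App under test, not an address from the text.

    # Sketch of an automated navigation-link check using only the standard library.
    from html.parser import HTMLParser
    from urllib.parse import urljoin
    from urllib.request import urlopen
    from urllib.error import URLError, HTTPError

    class LinkCollector(HTMLParser):
        """Collects href values from <a> tags on a page."""
        def __init__(self):
            super().__init__()
            self.links = []
        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def check_navigation_links(start_url):
        page = urlopen(start_url).read().decode("utf-8", errors="replace")
        collector = LinkCollector()
        collector.feed(page)
        for href in collector.links:
            target = urljoin(start_url, href)
            try:
                result = urlopen(target).getcode()      # expect 200
            except (HTTPError, URLError) as exc:
                result = exc                             # broken or unreachable link
            print(target, "->", result)

    # check_navigation_links("http://localhost:8000/")   # placeholder URL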


Configuration testing


Configuration variability and instability are important factors that make web engineering a challenge. Hardware, operating systems, browsers, storage capacity, network communication speeds, and a variety of other client-side factors are difficult to predict for each user. In addition, the configuration for a given user can change on a regular basis. The result can be a client-side environment that is prone to errors that are both subtle and significant.

The job of configuration testing is not to exercise every possible client-side configuration. Rather, it is to test a set of probable client-side and server-side configurations to ensure that the user experience will be the same on all of them and to isolate errors that may be specific to a particular configuration.
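A simple way to make the set of probable configurations concrete is to enumerate the cross-product of the client-side factors and then pick a manageable subset. The sketch below does exactly that; the browser, operating system and connection-speed values are illustrative assumptions, and the every-other-combination selection merely stands in for whatever sampling the test plan actually calls for.

    # Sketch: enumerating probable client-side configurations for testing.
    from itertools import product

    browsers = ["Chrome", "Firefox", "Edge"]
    operating_systems = ["Windows 11", "macOS", "Ubuntu"]
    connection_speeds = ["broadband", "3G"]

    all_configurations = list(product(browsers, operating_systems, connection_speeds))
    probable_subset = all_configurations[::2]   # stand-in for a deliberate selection

    for browser, os_name, speed in probable_subset:
        print(f"run the configuration test suite on {browser} / {os_name} / {speed}")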


Security testing


Web App security is a complex subject that must be fully understood before effective security testing can be accomplished. Web Apps and the client-side and server-side environments in which they are housed represent an attractive target for external hackers, disgruntled employees, dishonest competitors, and anyone else who wishes to steal sensitive information, maliciously modify content, degrade performance, disable functionality, or embarrass a person, organization, or business.

Security tests are designed to probe vulnerabilities of the client-side environment, the network communications that occur as data are passed from client to server and back again, and the server-side environment. Each of these domains can be attacked, and it is the job of the security tester to uncover weaknesses that can be exploited by those with the intent to do so.

On the client side, vulnerabilities can often be traced to pre-existing bugs in browsers, e-mail programs, or communication software.

On the server side, vulnerabilities include denial of service attacks and malicious scripts that can be passed along to the client side or used to disable server operations. In addition, server-side databases can be accessed without authorization.
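As a purely illustrative example of a security test that probes for the kind of unauthorized database access just mentioned, the sketch below submits a classic SQL-injection string to a login form and flags responses that look suspicious. The URL, form-field names and the very crude response check are all assumptions made for the example; a real security test would use the organization's own tooling and a much stronger oracle.

    # Hedged sketch of one security test case: probe a login form for SQL injection.
    from urllib.parse import urlencode
    from urllib.request import urlopen

    def probe_login_for_injection(login_url):
        payload = urlencode({"user": "admin' OR '1'='1", "password": "x"}).encode()
        body = urlopen(login_url, data=payload).read().decode("utf-8", errors="replace")
        # Crude oracle: signs that authentication succeeded or that a raw
        # database error leaked into the response.
        suspicious = ("welcome" in body.lower()) or ("sql" in body.lower())
        return "possible vulnerability" if suspicious else "no obvious issue"

    # print(probe_login_for_injection("http://localhost:8000/login"))  # placeholder URL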

To protect against these vulnerabilities, one or more of the following security elements is implemented:

Firewalls - a filtering mechanism that is a combination of hardware and software that examines each incoming packet of information to ensure that it is coming from a legitimate source, blocking any data that are suspect.

Authentication - a verification mechanism that validates the identity of all clients and servers, allowing communication to occur only when both sides are verified.

Encryption - an encoding mechanism that protects sensitive data by modifying it in a way that makes it impossible to read by those with malicious intent. Encryption is strengthened by using digital certificates that allow the client to verify the destination to which the data are transmitted.

Authorization - a filtering mechanism that allows access to the client or server environment only by those individuals with appropriate authorization codes.

The actual design of security tests requires in-depth knowledge of the inner workings of each security element and a comprehensive understanding of a full range of networking technologies. In many cases, security testing is outsourced to firms that specialize in these technologies.


Performance testing


Nothing is more frustrating than a Web App that takes minutes to load content when competitive sites download similar content in seconds. Nothing is more aggravating than trying to log in to a Web App and receiving a server-busy message, with the suggestion that you try again later. Nothing is more disconcerting than a Web App that responds instantly in some situations, and then seems to go into an infinite wait state in other situations. All of these occurrences happen on the Web every day, and all of them are performance related.

Performance testing is used to uncover performance problems that can result from lack of server-side resources, inappropriate network bandwidth, inadequate database capabilities, faulty or weak operating system capabilities, poorly designed Web App functionality, and other hardware or software issues that can lead to degraded client-server performance.

The intent is:

To understand how the system responds to loading and

To collect metrics that will lead to design modifications to improve performance.

Performance testing objectives

Performance tests are designed to simulate real-world loading situations. As the number of simultaneous Web App users grows, or the number of on-line transactions increases, or the amount of data increases, performance testing will help answer the following questions:

Does the server response time degrade to a point where it is noticeable and unacceptable?

At what point does performance become unacceptable?

What system components are responsible for performance degradation?

What is the average response time for users under a variety of loading conditions?

Does performance degradation have an impact on system security?

Is Web App reliability or accuracy affected as the load on the system grows?

What happens when loads that are greater than maximum server capacity are applied?


To develop answers to these questions, two different performance tests are conducted:

Load testing - real-world loading is tested at a variety of load levels and in a variety of combinations.

Stress testing - loading is increased to the breaking point to determine how much capacity the Web App environment can handle.

Load testing

The intent of load testing is to determine how the Web App and its server-side environment will respond to various loading conditions. As testing proceeds, permutations to the following variables define a set of test conditions:

N - the number of concurrent users

T - the number of on-line transactions per user per unit time

D - the data load processed by the server per transaction

As each test condition is run, one or more of the following measures are collected: average user response time, average time to download a standardized unit of data, or average time to process a transaction. Load testing can also be used to assess recommended connection speeds for users of the Web App. Overall throughput, P, is computed in the following manner:

P = N * T * D
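The formula is a straightforward product of the three load variables, as the tiny example below shows; the numbers and the choice of units (kilobytes per minute) are illustrative, not taken from the text.

    # Applying P = N * T * D with illustrative numbers.
    N = 200     # concurrent users
    T = 4       # on-line transactions per user per unit time (here, per minute)
    D = 2.5     # data load processed by the server per transaction, in KB

    P = N * T * D
    print(f"overall throughput P = {P} KB per minute")   # 2000.0 KB per minute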


Stress testing


Stress testing is a continuation of load testing, but in this instance the variables N, T, and D are forced to meet and then exceed operational limits. The intent of these tests is to answer each of the following questions:

Does the system degrade “gently” or does the server shut down as capacity is exceeded?

Does server software generate “server not available” messages? More generally, are users aware that they cannot reach the server?

Does the server queue requests for resources and empty the queue once capacity demands diminish?

Are transactions lost as capacity is exceeded?

Is data integrity affected as capacity is exceeded?

What values of N, T, and D force the server environment to fail?

If the system does fail, how long will it take to come back on-line?

Are certain Web App functions discontinued as capacity reaches the 80 or 90 percent level?

A variation of stress testing is sometimes referred to as spike/bounce testing. In this testing regime, load is spiked to capacity, then lowered quickly to normal operating conditions, and then spiked again. By bouncing system loading a tester can determine how well the server can marshal resources to meet very high demand and then release them when normal conditions reappear.
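To make the spike/bounce idea concrete, the sketch below generates a load schedule that alternates between a capacity-level spike and normal load; a load-generation tool would then drive the server with this schedule. The user counts and the number of cycles are assumptions chosen only for illustration.

    # Sketch of a spike/bounce load profile (number of concurrent users per step).
    NORMAL_LOAD = 100        # users under normal operating conditions
    CAPACITY_LOAD = 1000     # users at (or beyond) rated capacity

    def spike_bounce_profile(cycles=3, steps_per_phase=5):
        schedule = []
        for _ in range(cycles):
            schedule += [CAPACITY_LOAD] * steps_per_phase   # spike to capacity
            schedule += [NORMAL_LOAD] * steps_per_phase     # bounce back to normal
        return schedule

    print(spike_bounce_profile())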


8.2 Testing documentation and help facilities



The term software testing conjures images of large numbers of test cases prepared to exercise computer programs and the data that they manipulate. It is important to note that testing must also extend to the third element of the software configuration - documentation.

Errors in documentation can be as devastating to the acceptance of the program as errors in data or source code. Nothing is more frustrating than following a user guide or an on-line help facility exactly and getting results or behaviors that do not coincide with those predicted by the documentation. It is for this reason that documentation testing should be a meaningful part of every software test plan.

Documentation testing can be approached in two phases. The first phase, review and inspection, examines the documentation for editorial clarity. The second phase, live test, uses the documentation in conjunction with the use of the actual program.


The following questions should be answered during documentation testing:

Does the documentation accurately describe how to accomplish each mode of use?

Is the description of each interaction sequence accurate?

Are examples accurate?

Are terminology, menu descriptions, and system responses consistent with the actual program?

Is it relatively easy to locate guidance within the documentation?

Can troubleshooting be accomplished easily with the documentation?

Are the documentation table of contents and index accurate and complete?

Is the design of the document (layout, typefaces, indentation, graphics) conducive to understanding and quick assimilation of information?

Are all software error messages displayed for the user described in more detail in the document? Are actions to be taken as a consequence of an error message clearly delineated?

If hypertext links are used, are they accurate and complete?

If hypertext is used, is the navigation design appropriate for the information required?

The only viable way to answer these questions is to have an independent third party test the documentation in the context of program usage. All discrepancies are noted and areas of document ambiguity or weakness are defined for potential rewrite.



8.3 TEST PLANNING


Preparing a test plan



Testing - like any project - should be driven by a plan. The test plan covers the following:

What needs to be tested - the scope of testing, including clear identification of what will be tested and what will not be tested.

How the testing is going to be performed.

What resources are needed for testing - computer as well as human resources.

The time lines by which the testing activities will be performed.

Risks that may be faced in all the above, with appropriate mitigation and contingency plans.


Scope management

One single plan can be prepared to cover all phases, or there can be separate plans for each phase. In situations where there are multiple test plans, there should be one test plan which covers the activities common to all plans. This is called the master test plan.

Scope management pertains to specifying the scope of a project. For testing, scope management entails

1. Understanding what constitutes a release of a product;
2. Breaking down the release into features;
3. Prioritizing the features for testing;
4. Deciding which features will be tested and which will not be; and
5. Gathering details to prepare for estimation of resources for testing.

Knowing the features and understanding them from the usage perspective will enable the testing team to prioritize the features for testing. The following factors drive the choice and prioritization of features to be tested.

Features that are new and critical for the release

The new features of a release set the expectations of the customers and must perform properly. These new features result in new program code and thus have a higher susceptibility and exposure to defects.

Features whose failures can be catastrophic

Regardless of whether a feature is new or not, any feature the failure of which can be catastrophic has to be high on the list of features to be tested. For example, recovery mechanisms in a database will always have to be among the most important features to be tested.

Features that are expected to be complex to test

Early participation by the testing team can help identify features that are difficult to test. This can help in starting the work on these features early and lining up appropriate resources in time.

Features which are extensions of earlier features that have been defect prone

Defect-prone areas need very thorough testing so that old defects do not creep in again.


Deciding test approach/strategy

Once we have this prioritized feature list, the next step is to drill down into some more details of what needs to be tested, to enable estimation of size, effort, and schedule. This includes identifying

1. What type of testing would you use for testing the functionality?
2. What are the configurations or scenarios for testing the features?
3. What integration testing is followed to ensure these features work together?
4. What localization validations would be needed?
5. What non-functional tests would you need to do?

The test approach should result in identifying the right type of test for each of the features or combinations.


Setting up criteria for testing



There must be clear entry and exit criteria for different phases of testing. Ideally, tests must be run as early as possible so that the last-minute pressure of running tests after development delays is minimized. The entry criteria for a test specify threshold criteria for each phase or type of test. The completion/exit criteria specify when a test cycle or a testing activity can be deemed complete.

Suspension criteria specify when a test cycle or a test activity can be suspended.



Identifying responsibilities, staffing, and training needs

A testing project requires different people to play different roles. There are roles for test engineers, test leads, and test managers. The different role definitions should

1. Ensure there is clear accountability for a given task, so that each person knows what he has to do;
2. Clearly list the responsibilities for various functions to various people;
3. Complement each other, ensuring no one steps on another’s toes; and
4. Supplement each other, so that no task is left unassigned.

Staffing is done based on estimation of the effort involved and the availability of time for release. In order to ensure that the right tasks get executed, the features and tasks are prioritized on the basis of effort, time and importance.

It may not be possible to find the perfect fit between the requirements and the availability of skills; such gaps should be addressed with appropriate training programs.


Identifying resource requirements


As a part of planning for a testing project, the project manager should provide estimates for the various hardware and software resources required. Some of the following factors need to be considered.

1. Machine configuration needed to run the product under test
2. Overheads required by the test automation tool, if any
3. Supporting tools such as compilers, test data generators, configuration management tools, and so on
4. The different configurations of the supporting software that must be present
5. Special requirements for running machine-intensive tests such as load tests and performance tests
6. Appropriate number of licenses of all the software



Identifying test deliverables




The test plan also identifies the deliverables that should come out of the test cycle/testing activity. The deliverables include the following:

1. The test plan itself
2. Test case design specifications
3. Test cases, including any automation that is specified in the plan
4. Test logs produced by running the tests
5. Test summary reports



Testing tasks: size and effort estimation



The scope identified above gives a broad overview of what needs to be tested. This understanding is quantified in the estimation step. Estimation happens broadly in three phases:

1. Size estimation
2. Effort estimation
3. Schedule estimation


Size estimation


The size estimate quantifies the actual amount of testing that needs to be done. The factors that contribute to the size estimate of a testing project are as follows:

Size of the product under test - Lines of Code (LOC) and Function Points (FP) are popular methods to estimate the size of an application. A somewhat simpler representation of application size is the number of screens, reports, or transactions.

Extent of automation required

Number of platforms and inter-operability environments to be tested

Productivity data

Reuse opportunities

Robustness of processes


Activity breakdown and scheduling

Activity breakdown and schedule estimation entail translating the effort required into specific time frames. The following steps make up this translation.

Identifying external and internal dependencies among the activities

Sequencing the activities, based on the expected duration as well as on the dependencies

Monitoring the progress in terms of time and effort

Rebalancing schedules and resources as necessary


Communications management

Communications management consists of evolving and following procedures for communication that ensure that everyone is kept in sync with the right level of detail.


Risk management

Like every project, testing projects also face risks. Risks are events that could potentially affect a project’s outcome. Risk management entails

Identifying the possible risks;

Quantifying the risks;

Planning how to mitigate the risks; and

Responding to risks when they become a reality.













Fig. Aspects of risk management: risk identification, risk quantification, risk mitigation planning and risk response.


i) Risk identification consists of identifying the possible risks that may hit a project. Use of checklists, use of organizational history and metrics, and informal networking across the industry are the common ways to identify risks in testing.

ii) Risk quantification deals with expressing the risk in numerical terms. The probability of the risk happening and the impact of the risk are the two components of the quantification of risk (a small sketch of this computation follows item iii below).


iii) Risk mitigation planning deals with identifying alternative strategies to combat a risk event. To handle the effects of a risk, it is advisable to have multiple mitigation strategies.
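The sketch below shows the quantification step from item ii as the usual probability-times-impact product, used here to rank risks by exposure; the risk names come from the list that follows, while the probability and impact numbers are purely illustrative assumptions.

    # Risk quantification sketch: exposure = probability x impact, highest first.
    risks = [
        ("Unclear requirements",          0.6, 8),
        ("Insufficient time for testing", 0.5, 9),
        ("Show-stopper defects",          0.2, 10),
    ]

    for name, probability, impact in sorted(risks, key=lambda r: r[1] * r[2], reverse=True):
        print(f"{name}: exposure = {probability * impact:.1f}")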

The following are some of the common risks encountered in testing projects:

Unclear requirements,

Schedule dependence,

Insufficient time for testing,

Show-stopper defects,

Availability of skilled and motivated people for testing, and

Inability to get a test automation tool.



8.4 TEST MANAGEMENT


Choice of standards


Standards comprise an important part of planning in any organization. There are two types of standards - external standards and internal standards.

External standards are standards that a product should comply with, are externally visible, and are usually stipulated by external consortia. Compliance to external standards is usually mandated by external parties.

Internal standards are standards formulated by a testing organization to bring in consistency and predictability. They standardize the processes and methods of working within the organization. Some of the internal standards include

Naming and storage conventions for test artifacts - Every test artifact has to be named appropriately and meaningfully. Such naming conventions should enable easy identification of the product functionality that a set of tests is intended for, and reverse mapping to identify the functionality corresponding to a given set of tests.


Document standards


Most of the discussion on documentation and coding standards pertains to automated testing. Documentation standards specify how to capture information about the tests within the test scripts themselves. Internal documentation of test scripts is similar to internal documentation of program code and should include the following:

Appropriate header-level comments at the beginning of the file that outline the functions to be served by the test.

Sufficient in-line comments, spread throughout the file, explaining the functions served by the various parts of a test script.

Up-to-date change history information, recording all the changes made to the test file.


Test coding standards

Test coding standards go one level deeper into the tests and enforce standards on how the tests themselves are written. The standards may

1. Enforce the right type of initialization
2. Stipulate ways of naming variables within the scripts to make sure that a reader understands consistently the purpose of a variable
3. Encourage reusability of test artifacts
4. Provide standard interfaces to external entities like the operating system, hardware, and so on.
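The skeleton below sketches what a test script might look like when it follows the documentation and coding standards above - a header comment with purpose and change history, purposeful variable names, and initialization kept in one place. The file name, URL and test data are hypothetical examples, not prescribed by the text.

    # Test script : test_login_valid_user.py   (illustrative naming convention)
    # Purpose     : verify that a registered user can log in (header-level comment)
    # Change history:
    #   2024-01-15  initial version
    #   2024-02-02  added locked-account variant

    # Initialization is done in one place so the script does not depend on
    # leftover state from earlier runs.
    test_environment = {"base_url": "http://localhost:8000", "timeout_s": 10}
    valid_user_name = "demo_user"          # variable names spell out their purpose
    valid_user_password = "demo_pass"

    def run_test():
        # In-line comments explain the function served by each part of the script.
        print("step 1: open", test_environment["base_url"])
        print("step 2: submit credentials for", valid_user_name)
        print("step 3: verify that the home page is shown")

    if __name__ == "__main__":
        run_test()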


Test reporting standards


Since testing is tightly interlinked with product quality, all the stakeholders must get a consistent and timely view of the progress of tests. Test reporting standards provide guidelines on the level of detail that should be present in the test reports, their standard formats and contents, recipients of the report, and so on.


Test infrastructure management


Testing requires a robust infrastructure to be planned upfront. This infrastructure is made up of three essential elements.

1. A test case database (TCDB)
2. A defect repository (DR)
3. A configuration management repository and tool

A test case database captures all the relevant information about the test cases in an organization.

A defect repository captures all the relevant details of defects reported for a product. Most of the metrics classified as testing defect metrics and development defect metrics are derived from the data in the defect repository.

Yet another infrastructure element that is required for a software product organization is a Software Configuration Management (SCM) repository. An SCM repository keeps track of change control and version control of all the files that make up a software product. Change control ensures that

Changes to test files are made in a controlled fashion and only with proper approvals.

Changes made by one test engineer are not accidentally lost or overwritten by other changes.

Each change produces a distinct version of the file that is recreatable at any point of time.

At any point of time, everyone gets access to only the most recent version of the test files.

Version control ensures that the test scripts associated with a given release of a product are baselined along with the product files.

The TCDB, defect repository, and SCM repository should complement each other and work together in an integrated fashion.





















Figure: Relationship between SCM, DR and TCDB. The TCDB holds test case information, test case-product cross-references (XREF) and test case-defect cross-references; the DR holds defect details, defect fix details, defect communication and defect-test details; the SCM repository holds product test cases, product source code, environment files and product documentation.

Test people management


People management is an integral part of any project management. It requires the ability
to hire, motivate and retain
the right people. These skills are seldom formally taught.
Testing projects present several additional challenges. We believe that the success of a
testing organization depends on judicious people management skills.


The important point is that the common
goals and the spirit of teamwork have to
be internalized by all the stakeholders. Such an internalization and upfront team building
has to be part of the planning process for the team to succeed.


Integrating with product release


Ultimately, the success of a product depends on the effectiveness of integration of the development and testing activities. These job functions have to work in tight unison between themselves and with other groups such as product support, product management, and so on. The schedules of testing have to be linked directly to product release. The following are some of the points to be decided for this planning.

Sync points between development and testing as to when different types of testing can commence.

Service level agreements between development and testing as to how long it would take for the testing team to complete the testing. This will ensure that testing focuses on finding relevant and important defects only.

Consistent definitions of the various priorities and severities of the defects.

Communication mechanisms to the documentation group to ensure that the documentation is kept in sync with the product in terms of known defects, workarounds and so on.

The purpose of the testing team is to identify the defects in the product and the risks that could be faced by releasing the product with the existing defects.


8.5 TEST PROCESS


Putting together and baselining a test plan

A test plan combines all the points discussed above into a single document that acts as an anchor point for the entire testing project. An organization normally arrives at a template that is to be used across the board. Each testing project puts together a test plan based on the template. The test plan is reviewed by a designated set of competent people in the organization. It is then approved by a competent authority, who is independent of the project manager directly responsible for testing. After this, the test plan is baselined into the configuration management repository. From then on, the baselined test plan becomes the basis for running the testing project. In addition, any changes needed to the test plan template are discussed periodically among the different stakeholders so that it is kept current and applicable to the testing teams.


Test case specification

Using the test plan as the basis, the testing team designs test case specifications, which then become the basis for preparing individual test cases. A test case is a series of steps executed on a product, using a pre-defined set of input data, expected to produce a pre-defined set of outputs, in a given environment. Hence, a test case specification should clearly identify

The purpose of the test: this lists what feature or part the test is intended for.

Items being tested, along with their version/release numbers as appropriate.

Environment that needs to be set up for running the test case.

Input data to be used for the test case.

Steps to be followed to execute the test.

The expected results that are considered to be correct results.

A step to compare the actual results produced with the expected results.

Any relationship between this and other tests.
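A hedged sketch of how these items could be captured as a single record is shown below; the field names mirror the list above, and the identifiers and sample values are invented for illustration.

    # Sketch: a test case specification as a structured record.
    from dataclasses import dataclass, field

    @dataclass
    class TestCaseSpecification:
        identifier: str
        purpose: str                    # feature or part the test is intended for
        items_under_test: list          # with version/release numbers
        environment: str
        input_data: dict
        steps: list
        expected_results: list
        related_tests: list = field(default_factory=list)

        def compare(self, actual_results):
            """Step that compares actual results with the expected results."""
            return actual_results == self.expected_results

    spec = TestCaseSpecification(
        identifier="TC-LOGIN-001",
        purpose="Verify login with valid credentials",
        items_under_test=["login module v1.2"],
        environment="staging server, current browser build",
        input_data={"user": "demo", "password": "demo"},
        steps=["open login page", "enter credentials", "submit"],
        expected_results=["home page displayed"],
    )
    print(spec.compare(["home page displayed"]))   # True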



Update of traceability matrix

A traceability matrix is a tool to validate that every requirement is tested. This matrix is created during the requirements gathering phase itself by filling up the unique identifier for each requirement. When a test case specification is complete, the row corresponding to the requirement which is being tested by the test case is updated with the test case specification identifier. This ensures that there is a two-way mapping between requirements and test cases.
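A minimal way to represent this is a mapping from requirement identifiers to the test case specifications that cover them, which supports both directions of the mapping; the requirement and test case IDs below are illustrative only.

    # Sketch of a traceability matrix as requirement -> covering test cases.
    traceability = {
        "REQ-001": [],      # rows are created during requirements gathering
        "REQ-002": [],
    }

    def record_coverage(matrix, requirement_id, test_case_id):
        matrix.setdefault(requirement_id, []).append(test_case_id)

    record_coverage(traceability, "REQ-001", "TC-LOGIN-001")

    # Forward mapping: which requirements are still untested?
    untested = [req for req, tests in traceability.items() if not tests]
    # Reverse mapping: which requirements does a given test case cover?
    covered_by = [req for req, tests in traceability.items() if "TC-LOGIN-001" in tests]

    print("untested requirements:", untested)     # ['REQ-002']
    print("TC-LOGIN-001 covers:", covered_by)     # ['REQ-001']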


Identifying possible candidates for automation


Before writing the test cases, a decision should be taken as to which tests are to be automated and which should be run manually. Some of the criteria that will be used in deciding which scripts to automate include

Repetitive nature of the test

Effort involved in automation

Amount of manual intervention required for the test, and

Cost of the automation tool.


Developing and baselining test cases

Based on the test case specifications and the choice of candidates for automation, test cases have to be developed. The test cases should also have change history documentation, which specifies

What the change was

Why the change was necessitated

Who made the change

When the change was made

A brief description of how the change has been implemented, and

Other files affected by the change

All the artifacts of test cases - the test scripts, inputs, expected outputs, and so on - should be stored in the test case database and SCM.


Executing test cases and keeping the traceability matrix current

The prepared test cases have to be executed at the appropriate times during a project. For example, test cases corresponding to smoke tests may be run on a daily basis. System testing test cases will be run during system testing.

As the test cases are executed during a test cycle, the defect repository is updated with

1. Defects from the earlier test cycles that are fixed in the current build and
2. New defects that get uncovered in the current run of the tests.

During test design and execution, the traceability matrix should be kept current. When tests get designed and executed successfully, the traceability matrix should be updated.


Collecting and analyzing metrics


When tests are executed, information about the test execution gets collected in test logs and other files. The basic measurements from running the tests are then converted to meaningful metrics by the use of appropriate transformations and formulae.
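A tiny example of such a transformation is shown below: raw counts from the test logs are turned into an execution rate, a pass rate and a defect density. The counts, the product size and the particular metrics chosen are illustrative assumptions rather than a prescribed set.

    # Sketch: converting raw test-log measurements into simple metrics.
    tests_planned = 120
    tests_executed = 100
    tests_passed = 88
    defects_found = 15
    product_size_kloc = 40     # thousand lines of code, assumed for the example

    execution_rate = tests_executed / tests_planned * 100
    pass_rate = tests_passed / tests_executed * 100
    defect_density = defects_found / product_size_kloc

    print(f"test execution rate : {execution_rate:.1f}%")               # 83.3%
    print(f"pass rate           : {pass_rate:.1f}%")                    # 88.0%
    print(f"defect density      : {defect_density:.2f} defects/KLOC")   # 0.38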


Preparing test summary report


At the completion of a test cycle, a test summary report is produced. This report gives
insights to the senior management about the fitness of the product for release.



Recommending product release criteria

One of the purposes of testing is to decide the fitness of a product for release. Testing can never conclusively prove the absence of defects in a software product. What it provides is evidence of what defects exist in the product, their severity, and impact. The job of the testing team is to articulate to the senior management and the product release team

1. What defects the product has;
2. What the impact/severity of each of the defects is; and
3. What the risks of releasing the product with the existing defects would be.

The senior management can then take a meaningful business decision on whether to release a given version or not.


8.6 TEST REPORTING



Testing requires constant communication between the test team and other teams. Test reporting is a means of achieving this communication. There are two types of reports or communication that are required: test incident reports and test summary reports.

Test incident report

A test incident report is a communication that happens through the testing cycle as and when defects are encountered. A test incident report is an entry made in the defect repository. Each defect has a unique ID and this is used to identify the incident. The high-impact test incidents are highlighted in the test summary report.


Test cycle report


Test projects take place in units of test cycles. A test cycle entails planning and running certain tests in cycles, each cycle using a different build of the product. A test cycle report, at the end of each cycle, gives

1. A summary of the activities carried out during that cycle;
2. Defects that were uncovered during that cycle, based on their severity and impact;
3. Progress from the previous cycle to the current cycle in terms of defects fixed;
4. Outstanding defects that are yet to be fixed in this cycle; and
5. Any variations observed in effort or schedule.


Test summary report


The final step in a test cycle is to recommend the suitability of a product for release. A report that summarizes the results of a test cycle is the test summary report.

There are two types of test summary reports:

1. Phase-wise test summary, which is produced at the end of every phase
2. Final test summary report.

A summary report should present

A summary of the activities carried out during the test cycle or phase

Variance of the activities carried out from the activities planned

Summary of results, which includes tests that failed, with any root cause descriptions, and the severity of impact of the defects uncovered by the tests

Comprehensive assessment and recommendation for release, which should include a fit-for-release assessment and a recommendation on release.


Recommending product release


Based on the test summary report, an organization can take a decision on whether to release the product or not. Ideally, an organization would like to release a product with zero defects. However, market pressures may cause the product to be released with the defects, provided that the senior management is convinced that there is no major risk of customer dissatisfaction. Such a decision should be taken by the senior manager only after consultation with the customer support team, development team and testing team, so that the overall workload for all parts of the organization can be evaluated.


Best Practices


Best practices in testing can be classified into three categories:

1. Process related
2. People related
3. Technology related



Process related best practices


A strong process infrastructure and process culture are required to achieve better predictability and consistency. A process database, a federation of information about the definition and execution of various processes, can be a valuable addition to the tools in an organization.


People related best practices


While individual goals are required for the development and testing teams, it is very important to understand the overall goals that define the success of the product as a whole. Job rotation among support, development and testing can also increase the gelling among the teams. Such job rotation can help the different teams develop better empathy and appreciation of the challenges faced in each other’s roles and thus result in better teamwork.


Technology related best practices


A fully integrated TCDB-SCM-DR can help in better automation of testing activities. When test automation tools are used, it is useful to integrate the tool with the TCDB, defect repository and SCM tool.

A final remark on best practices: the three dimensions of best practices cannot be carried out in isolation. A good technology infrastructure should be aptly supported by an effective process infrastructure and be executed by competent people. These best practices are inter-dependent, self-supporting, and mutually enhancing. Thus, the organization needs to take a holistic view of these practices and keep a fine balance among the three dimensions.


8.7 Check your progress

1. How will you test web applications?
2. Explain the testing of documentation.
3. List down the steps involved in test planning and management.
4. How will you make a testing report?
5. Explain the test process.