8.1 Testing Web Applications


Chapter VIII


Testing Web Applications


Testing the documentation


Test planning


Test Management


Test Execution


Test Reporting


Check your progress

8.1 Testing Web Applications

Web application testing is a collection of related activities with a single goal: to uncover
errors in Web App content, function, usability, navigability, performance, capacity, and
security. To accomplish this, a testing strategy that encompasses both reviews and
executable testing is applied throughout the web engineering process.

Web engineers and other project stakeholders (managers, customers, end users) all
participate in Web App testing. If end users encounter errors that shake their faith in the
Web App, they will go elsewhere for the content and function they need, and the Web App
will fail. For this reason, Web engineers must work to eliminate as many errors as
possible before the Web App goes on-line.

The Web App testing process begins by focusing on user-visible aspects of the Web App
and proceeds to tests that exercise technology and infrastructure. Seven testing steps are
performed: content, interface, navigation, component, configuration, performance, and
security testing. A suite of test cases is developed for every testing step and an archive of
test results is maintained for future use.

Testing concepts for Web Applications

To understand the objectives of testing within a web engineering context, we must
consider the many dimensions of Web App quality. In the context of this discussion, we
consider quality dimensions that are particularly relevant to any discussion of testing for
Web Engineering work. We also consider the nature of errors that are encountered as a
consequence of testing, and the testing strategy that is applied to uncover the errors.

Dimensions of quality

Quality is incorporated into a Web application as a consequence of good design. Both
reviews and testing examine one or more of the following quality dimensions:

Content is evaluated at both a syntactic and semantic level. At the syntactic level,
spelling, punctuation, and grammar are assessed for text-based documents. At a semantic
level, correctness, consistency, and lack of ambiguity are all assessed.

Function is tested to uncover errors that indicate lack of conformance to customer
requirements.

Structure is assessed to ensure that it properly delivers Web App content and function.

Usability is tested to ensure that each category of user is supported by the interface.

Navigability is tested to ensure that all navigation syntax and semantics are exercised.

Performance is tested under a variety of operating conditions, configurations, and
loading.

Compatibility is tested with the intent to find errors that are specific to a unique host
configuration.

Interoperability is tested to ensure that the Web App properly interfaces with
other applications and databases.

Security is tested by assessing potential vulnerabilities.

The testing process

The testing process for Web Engineering begins with tests that exercise content and
interface functionality that is immediately visible to end users. The following figure
relates the Web App testing process to the design pyramid.

Fig. The testing process, mapped to the design pyramid (content design, architecture
design, component design)

Content testing

Errors in Web App content can be as trivial as minor typographical errors or as significant
as incorrect information, improper organization, or violation of intellectual property laws.
Content testing attempts to uncover these and many other problems before the user
encounters them.

Content testing has three important objectives:

To uncover syntactic errors in text-based documents, graphical representations,
and other media,

To uncover semantic errors and

To find errors in the organization or structure of content that is presented to the
end user.

Database testing

Modern web applications do much more than present static content objects. The Web App
interfaces with sophisticated databases and builds dynamic content objects that are created
in real time using the data acquired from a database.

To accomplish this,

A large equities database is queried

Relevant data are extracted from the database

The extracted data must be organized as a content object, and

This content object is transmitted to the client environment for display.
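The four steps above can be sketched end to end. In this sketch, an in-memory sqlite3 database stands in for the "large equities database", and the table, columns, and `build_content_object` helper are all hypothetical names introduced for illustration:

```python
import sqlite3

def build_content_object(symbol):
    # Steps 1-2: query the database and extract the relevant rows
    # (an in-memory sqlite3 database stands in for the equities database)
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE equities (symbol TEXT, price REAL)")
    conn.execute("INSERT INTO equities VALUES ('ACME', 12.5), ('WIDG', 7.25)")
    rows = conn.execute(
        "SELECT symbol, price FROM equities WHERE symbol = ?", (symbol,)
    ).fetchall()
    conn.close()
    # Step 3: organize the extracted data as a content object
    content = {"items": [{"symbol": s, "price": p} for s, p in rows]}
    # Step 4: render the content object for transmission to the client
    html = "".join(
        f"<tr><td>{i['symbol']}</td><td>{i['price']}</td></tr>"
        for i in content["items"]
    )
    return content, html

content, html = build_content_object("ACME")
print(html)  # <tr><td>ACME</td><td>12.5</td></tr>
```

Database testing then checks each stage: that the query is correct, that the extracted data match the database, and that the content object faithfully reflects the extracted data.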

User interface testing

The overall strategy for interface testing is to

Uncover errors related to specific interface mechanisms and

Uncover errors in the way the interface implements the semantics of navigation, Web
App functionality, or content display.

To accomplish this strategy, a number of objectives must be achieved:

Interface features are tested to ensure that design rules, aesthetics, and related
visual content are available for the user without error.

Individual interface mechanisms are tested in a manner that is analogous to unit
testing.

Each interface mechanism is tested within the context of a use case.

The complete interface is tested against selected use cases to uncover errors in the
semantics of the interface.

The interface is tested within a variety of environments to ensure that it will be
compatible.
Component level testing

Component-level testing, also called function testing, focuses on a set of tests that attempt
to uncover errors in Web App functions. Component-level test cases are often driven by
forms-level input. Once forms data are defined, the user selects a button or other control
mechanism to initiate execution. The following test case design methods are typical:

Equivalence partitioning

Boundary value analysis

Path testing

In addition to these test case design methods, a technique called forced error testing is
used to derive test cases that purposely drive the web application component into an error
condition. The purpose is to uncover errors that occur during error handling.
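A hedged sketch of how these three methods, plus forced error testing, might generate component-level test cases. The `validate_quantity` component and its 1..100 range are hypothetical:

```python
def validate_quantity(value):
    """Hypothetical component under test: accepts integers 1..100."""
    if not isinstance(value, int):
        raise TypeError("quantity must be an integer")
    if not 1 <= value <= 100:
        raise ValueError("quantity out of range")
    return value

# Equivalence partitioning: one representative value per partition
partitions = {"below": 0, "valid": 50, "above": 101}

# Boundary value analysis: values at and just around the limits
boundaries = [0, 1, 2, 99, 100, 101]

# Forced error testing: inputs chosen to drive the component
# into its error-handling paths
forced_errors = ["abc", None, 3.5]

def run_case(value):
    try:
        validate_quantity(value)
        return "accepted"
    except (TypeError, ValueError) as exc:
        return type(exc).__name__

results = {repr(v): run_case(v) for v in boundaries + forced_errors}
print(results["1"], results["101"], results["'abc'"])  # accepted ValueError TypeError
```

Path testing would additionally ensure that every branch of `validate_quantity` (non-integer, out of range, accepted) is exercised at least once, which the cases above happen to do.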

Navigation testing

A user travels through a Web App in much the same way as a visitor walks through a
store or museum. The job of navigation testing is

To ensure that the mechanisms that allow the Web App user to travel through the
Web App are all functional and

To validate that each navigation semantic unit can be achieved by the appropriate
user category.

The first phase of navigation testing actually begins during interface testing. Navigation
mechanisms are tested to ensure that each performs its intended function. Each of the
following navigation mechanisms should be tested:

Navigation links



Frames and framesets

Site maps

Internal search engines
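The first of these mechanisms, navigation links, can be checked automatically. As a minimal sketch (assuming the site's static pages are known ahead of time; the page names and `broken_links` helper are hypothetical), the links on a page can be extracted and verified against the set of pages that actually exist:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect the href of every anchor tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(v for k, v in attrs if k == "href")

def broken_links(page_html, existing_pages):
    """Return navigation links that point at pages that do not exist."""
    collector = LinkCollector()
    collector.feed(page_html)
    return [link for link in collector.links if link not in existing_pages]

# Hypothetical site: two pages exist, one link is dangling
site_pages = {"index.html", "about.html"}
home = '<a href="about.html">About</a> <a href="missing.html">Oops</a>'
print(broken_links(home, site_pages))  # ['missing.html']
```

A real tool would fetch live pages over HTTP and check response codes; the sketch shows only the link-extraction and cross-checking idea.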

Some of the tests noted can be performed by automated tools while others are designed
and executed manually. The intent throughout is to ensure that errors in navigation
mechanisms are found before the Web App goes on-line. Navigation testing, like
interface and usability testing, should be conducted by as many different constituencies
as possible. Early stages of testing are conducted by Web engineers, but later stages
should be conducted by other project stakeholders, an independent testing team, and
ultimately, by non-technical users. The intent is to exercise Web App navigation
thoroughly.

Configuration testing

Configuration variability and instability are important factors that make web engineering
a challenge. Hardware, operating systems, browsers, storage capacity, network
communication speeds, and a variety of other client-side factors are difficult to predict for
each user. In addition, the configuration for a given user can change on a regular basis.
The result can be a client-side environment that is prone to errors that are both subtle and
difficult to diagnose.

The job of configuration testing is not to exercise every possible client-side
configuration. Rather, it is to test a set of probable client-side and server-side
configurations to ensure that the user experience will be the same on all of them and to
isolate errors that may be specific to a particular configuration.
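One common way to enumerate such a set of probable configurations is a simple cross product of the factors under test. The browser, operating system, and connection-speed values below are illustrative; a real matrix would be chosen from usage data rather than tested exhaustively:

```python
from itertools import product

# Illustrative client-side factors to vary during configuration testing
browsers = ["Firefox", "Chrome", "Safari"]
operating_systems = ["Windows", "macOS"]
connection_speeds = ["broadband", "mobile"]

# Cross product: each tuple is one probable configuration to test
configurations = list(product(browsers, operating_systems, connection_speeds))
print(len(configurations))  # 12
```

Twelve configurations (3 × 2 × 2) is a tractable test matrix, whereas "every possible client-side configuration" is not.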

Security testing

Web App security is a complex subject that must be fully understood before effective
security testing can be accomplished. Web Apps and the client-side and server-side
environments in which they are housed represent an attractive target for external hackers,
disgruntled employees, dishonest competitors, and anyone else who wishes to steal
sensitive information, maliciously modify content, degrade performance, disable
functionality, or embarrass a person, organization, or business.

Security tests are designed to probe vulnerabilities of the client-side environment, the
network communications that occur as data are passed from client to server and back
again, and the server-side environment. Each of these domains can be attacked, and it is
the job of the security tester to uncover weaknesses that can be exploited by those with
the intent to do so.

On the client side, vulnerabilities can often be traced to pre-existing bugs in browsers,
e-mail programs, or communication software.

On the server side, vulnerabilities include denial of service attacks and malicious scripts
that can be passed along to the client side or used to disable server operations. In
addition, server-side databases can be accessed without authorization.

To protect against these vulnerabilities, one or more of the following security elements is
implemented:

Firewall: a filtering mechanism that is a combination of hardware and software
that examines each incoming packet of information to ensure that it is coming from a
legitimate source, blocking any data that are suspect.

Authentication: a verification mechanism that validates the identity of all clients
and servers, allowing communication to occur only when both sides are verified.

Encryption: an encoding mechanism that protects sensitive data by modifying it
in a way that makes it impossible to read by those with malicious intent. Encryption is
strengthened by using digital certificates that allow the client to verify the destination to
which the data are transmitted.

Authorization: a filtering mechanism that allows access to the client or server
environment only by those individuals with appropriate authorization codes.

The actual design of security tests requires in-depth knowledge of the inner workings
of each security element and a comprehensive understanding of a full range of
networking technologies. In many cases, security testing is outsourced to firms that
specialize in these technologies.

Performance testing

Nothing is more frustrating than a Web App that takes minutes to load content when
competitive sites download similar content in seconds. Nothing is more aggravating than
trying to log in to a Web App and receiving a server-busy message, with the suggestion
that you try again later. Nothing is more disconcerting than a Web App that responds
instantly in some situations, and then seems to go into an infinite wait state in other
situations. All of these occurrences happen on the Web every day, and all of them are
performance related.

Performance testing is used to uncover performance problems that can result from lack of
server-side resources, inappropriate network bandwidth, inadequate database capabilities,
faulty or weak operating system capabilities, poorly designed Web App functionality, and
other hardware or software issues that can lead to degraded client-server performance.

The intent is:

To understand how the system responds to loading and

To collect metrics that will lead to design modifications to improve performance.

Performance testing objectives

Performance tests are designed to simulate real-world loading situations. As the number
of simultaneous Web App users grows, or the number of on-line transactions increases,
or the amount of data increases, performance testing will help answer the following
questions:

Does the server response time degrade to a point where it is noticeable and
annoying?

At what point does performance become unacceptable?

What system components are responsible for performance degradation?

What is the average response time for users under a variety of loading conditions?

Does performance degradation have an impact on system security?

Is Web App reliability or accuracy affected as the load on the system grows?

What happens when loads that are greater than maximum server capacity are
applied?

To develop answers to these questions, two different performance tests are conducted:

Load testing

real-world loading is tested at a variety of load levels and in a
variety of combinations.

Stress testing

loading is increased to the breaking point to determine how much
the Web App environment can handle.

Load testing

The intent of load testing is to determine how the Web App and its server-side
environment will respond to various loading conditions. As testing proceeds,
permutations to the following variables define a set of test conditions:

N, the number of concurrent users

T, the number of on-line transactions per user per unit time

D, the data load processed by the server per transaction

As each test condition is run, one or more of the following measures are collected:
average user response time, average time to download a standardized unit of data, or
average time to process a transaction. Load testing can also be used to assess
recommended connection speeds for users of the Web App. Overall throughput, P, is
computed in the following manner:

P = N * T * D
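With N concurrent users, T on-line transactions per user per unit time, and D the data load per transaction, the throughput formula can be computed directly. The numbers below are illustrative, not from the text:

```python
def throughput(n_users, transactions_per_user, data_per_transaction):
    """Overall throughput P = N * T * D (e.g., bytes per second)."""
    return n_users * transactions_per_user * data_per_transaction

# Illustrative load condition: 200 concurrent users, each issuing
# 2 transactions per second, each transaction moving 4096 bytes
p = throughput(200, 2, 4096)
print(p)  # 1638400 bytes per second
```

Running the same computation across the permutations of N, T, and D gives the expected throughput for each test condition, against which measured throughput can be compared.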

Stress testing

Stress testing is a continuation of load testing, but in this instance the variables N, T, and
D are forced to meet and then exceed operational limits. The intent of these tests is to
answer each of the following questions:

Does the system degrade “gently” or does the server shut down as capacity is
exceeded?

Does server software generate “server not available” messages? More generally,
are users aware that they cannot reach the server?

Does the server queue requests for resources and empty the queue once capacity
demands diminish?

Are transactions lost as capacity is exceeded?

Is data integrity affected as capacity is exceeded?

What values of N, T, and D force the server environment to fail?

If the system does fail, how long will it take to come back on-line?

Are certain Web App functions discontinued as capacity reaches the 80 or 90
percent level?

A variation of stress testing is sometimes referred to as spike/bounce testing. In this
regime, load is spiked to capacity, then lowered quickly to normal operating
conditions, and then spiked again. By bouncing system loading, a tester can determine
how well the server can marshal resources to meet very high demand and then release
them when normal conditions reappear.
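The spike/bounce regime can be sketched as a load-profile generator that a load driver would then replay. The baseline and capacity figures, and the `spike_bounce_profile` helper, are illustrative:

```python
def spike_bounce_profile(baseline, capacity, spikes, hold=3):
    """Generate a load profile that spikes to capacity, drops back to
    the normal baseline, and spikes again (spike/bounce testing)."""
    profile = []
    for _ in range(spikes):
        profile.extend([capacity] * hold)   # spike load to capacity
        profile.extend([baseline] * hold)   # bounce back to normal
    return profile

print(spike_bounce_profile(100, 1000, spikes=2, hold=2))
# [1000, 1000, 100, 100, 1000, 1000, 100, 100]
```

Each value is the target number of concurrent users for one time slice; how well the server marshals and releases resources is judged by measuring response times across the bounces.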

Testing documentation and help facilities

The term software testing conjures images of large numbers of test cases prepared to
exercise computer programs and the data that they manipulate. It is important to note that
testing must also extend to the third element of the software configuration: documentation.


Errors in documentation can be as devastating to the acceptance of the program as errors
in data or source code. Nothing is more frustrating than following a user guide or an
on-line help facility exactly and getting results or behaviors that do not coincide with those
predicted by the documentation. It is for this reason that documentation testing should be
a meaningful part of every software test plan.

Documentation testing can be approached in two phases. The first phase, review and
inspection, examines the documentation for editorial clarity. The second phase, live test,
uses the documentation in conjunction with the use of the actual program.

The following questions should be answered during documentation testing:

Does the documentation accurately describe how to accomplish each mode of
use?
Is the description of each interaction sequence accurate?

Are examples accurate?

Are terminology, menu descriptions, and system responses consistent with the
actual program?

Is it relatively easy to locate guidance within the documentation?

Can troubleshooting be accomplished easily with the documentation?

Are the documentation table of contents and index accurate and complete?

Is the design of the document (layout, typefaces, indentation, graphics)
conducive to understanding and quick assimilation of information?

Are all software error messages displayed for the user described in more detail in
the document? Are actions to be taken as a consequence of an error message
clearly delineated?

If hypertext links are used, are they accurate and complete?

If hypertext is used, is the navigation design appropriate for the information
required?
The only viable way to answer these questions is to have an independent third party test
the documentation in the context of program usage. All discrepancies are noted and areas
of document ambiguity or weakness are defined for potential rewrite.


Preparing a test plan


Testing, like any project, should be driven by a plan. The test plan covers the following:

What needs to be tested

the scope of testing, including clear identification of what will be tested and
what will not be tested.

How the testing is going to be performed

What resources are needed for testing (computer as well as human)

The time lines by which the testing activities will be performed.

Risks that may be faced in all the above, with appropriate mitigation and
contingency plans.

Scope management:

One single plan can be prepared to cover all phases, or there can be separate plans for
each phase. In situations where there are multiple test plans, there should be one test plan
which covers the activities common to all plans. This is called the master test plan.

Scope management pertains to specifying the scope of a project. For testing, scope
management entails


Understanding what constitutes a release of a product

Breaking down the release into features

Prioritizing the features for testing

Deciding which features will be tested and which will not be, and

Gathering details to prepare for estimation of resources for testing.

Knowing the features and understanding them from the usage perspective will enable the
testing team to prioritize the features for testing. The following factors drive the choice
and prioritization of features to be tested.

Features that are new and critical for the release

The new features of a release set the expectations of the customers and must perform
properly. These new features result in new program code and thus have a higher
susceptibility and exposure to defects.

Features whose failures can be catastrophic

Regardless of whether a feature is new or not, any feature the failure of which can be
catastrophic has to be high on the list of features to be tested. For example, recovery
mechanisms in a database will always have to be among the most important features to be
tested.

Features that are expected to be complex to test

Early participation by the testing team can help identify features that are difficult to test.
This can help in starting the work on these features early and lining up appropriate
resources in time.

Features which are extensions of earlier features that have been defect prone

Defect-prone areas need very thorough testing so that old defects do not creep in again.

Deciding test approach/strategy

Once we have this prioritized feature list, the next step is to drill down into some more
details of what needs to be tested, to enable estimation of size, effort, and schedule. This
includes identifying

What type of testing would you use for testing the functionality?

What are the configurations or scenarios for testing the features?

What integration testing is followed to ensure these features work together?

What localization validations would be needed?

What non-functional tests would you need to do?

The test approach should result in identifying the right type of test for each of the features
or combinations.

Setting up criteria for testing

There must be clear entry and exit criteria for different phases of testing. Ideally,
tests must be run as early as possible so that the last-minute pressure of running tests after
development delays is minimized. The entry criteria for a test specify threshold criteria
for each phase or type of test. The completion/exit criteria specify when a test cycle or a
testing activity can be deemed complete.

Suspension criteria specify when a test cycle or a test activity can be suspended.

Identifying responsibilities, staffing, and training needs

A testing project requires different people to play different roles. There are roles
for test engineers, test leads, and test managers. The different role definitions should

1. Ensure there is clear accountability for a given task, so that each person knows
what he has to do;

2. Clearly list the responsibilities for various functions to various people;

3. Complement each other, ensuring no one steps on others’ toes; and

4. Supplement each other, so that no task is left unassigned.

Staffing is done based on the estimation of effort involved and the availability of time
for release. In order to ensure that the right tasks get executed, the features and tasks are
prioritized on the basis of effort, time, and importance.

It may not be possible to find a perfect fit between the requirements and the
availability of skills; such gaps should be addressed with appropriate training programs.

Identifying resource requirements

As a part of planning for a testing project, the project manager should provide estimates
for the various hardware and software resources required. Some of the following factors
need to be considered.

Machine configuration needed to run the product under test

Overheads required by the test automation tool, if any

Supporting tools such as compilers, test data generators, configuration
management tools, and so on

The different configurations of the supporting software that must be present

Special requirements for running machine-intensive tests such as load tests
and performance tests

Appropriate number of licenses of all the software

Identifying test deliverables

The test plan also identifies the deliverables that should come out of the test cycle/testing
activity. The deliverables include the following:

The test plan itself

Test case design specifications

Test cases, including any automation that is specified in the plan

Test logs produced by running the tests

Test summary reports

Testing tasks: size and effort estimation

The scope identified above gives a broad overview of what needs to be tested.
This understanding is quantified in the estimation step. Estimation happens broadly in
three phases.


Size estimation


Effort estimation


Schedule estimation

Size estimation

The size estimate quantifies the actual amount of testing that needs to be done. The factors
that contribute to the size estimate of a testing project are as follows:

Size of the product under test

Lines of Code (LOC) and Function Points (FP) are popular
methods to estimate the size of an application. A somewhat simpler representation of
application size is the number of screens, reports, or transactions.

Extent of automation required

Number of platforms and interoperability environments to be tested

Productivity data

Reuse opportunities

Robustness of processes

Activity breakdown and scheduling

Activity breakdown and schedule estimation entail translating the effort required into
specific time frames. The following steps make up this translation.

Identifying external and internal dependencies among the activities

Sequencing the activities, based on the expected duration as well as on the
dependencies

Monitoring the progress in terms of time and effort

Rebalancing schedules and resources as necessary

Communications management

Communications management consists of evolving and following procedures for
communication that ensure that everyone is kept in sync with the right level of detail.

Risk management

Like every project, testing projects also face risks. Risks are events that could potentially
affect a project’s outcome. Risk management entails

Identifying the possible risks;

Quantifying the risks;

Planning how to mitigate the risks; and

Responding to risks when they become a reality.

Aspects of risk management


Risk identification consists of identifying the possible risks that may hit a
project. Use of checklists, use of organizational history and metrics, and
informal networking across the industry are the common ways to identify
risks in testing.


Risk quantification deals with expressing the risk in numerical terms.
The probability of the risk happening and the impact of the risk are the
two components of the quantification of risk.










Risk mitigation planning deals with identifying alternative strategies to
combat a risk event. To handle the effects of a risk, it is advisable to have
multiple mitigation strategies.
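A common way to combine the two quantification components is to multiply the probability of a risk by its impact, giving a single exposure number that mitigation planning can rank by. The risk names and figures below are illustrative:

```python
def risk_exposure(probability, impact):
    """Quantify a risk as probability of occurrence times its impact
    (impact here in person-days of lost effort, for illustration)."""
    return probability * impact

risks = {
    "unclear requirements": risk_exposure(0.5, 20),  # likely, moderate impact
    "tool unavailable": risk_exposure(0.1, 40),      # unlikely, larger impact
}

# Rank risks so mitigation planning addresses the largest exposure first
ranked = sorted(risks, key=risks.get, reverse=True)
print(ranked)  # ['unclear requirements', 'tool unavailable']
```

Here the likelier risk dominates (exposure 10 versus 4) even though its single-occurrence impact is smaller, which is exactly the kind of comparison quantification enables.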

The following are some of the common risks encountered in testing projects:

Unclear requirements,

Schedule dependence,

Insufficient time for testing,

Show stopper defects,
Availability of skilled and motivated people for testing and

Inability to get a test automation tool.


Choice of standards

Standards comprise an important part of planning in any organization. There are two
types of standards: external standards and internal standards.

External standards
are standards that a product should comply with, are externally
visible, and are usually stipulated by external consortia. Compliance to external standards
is usually mandated by external parties.

Internal standards
are standards formulated by a testing organization to bring in
consistency and predictability. They standardize the processes and methods of working
within the organization. Some of the internal standards include

Naming and storage conventions for test artifacts

Every test artifact has to be
named appropriately and meaningfully. Such naming conventions should enable
identification of the product functionality that a set of tests is intended for, and
identification of the functionality corresponding to a given set of tests.

Document standards

Most of the discussion on documentation and coding standards pertain to automated
testing. Documentation standards specify how to capture information about the tests
thin the test scripts themselves. Internal documentation of test scripts are similar to
internal documentation of program code and should include the following:

Appropriate header level comments at he beginning of the file that outlines the
functions to be
served by the test.

Sufficient in
line comments, spread throughout the file, explaining the functions
served by the various parts of a test script.

date change history information, recording all the changes made to the test

Test coding standards

Test coding standards go one level deeper into the tests and enforce standards on how the
tests themselves are written. The standards may

Enforce the right type of initialization

Stipulate ways of naming variables within the scripts to make sure that
a reader understands consistently the purpose of a variable.

Encourage reusability of test artifacts.

Provide standard interfaces to external entities like the operating system,
hardware, and so on.

Test reporting standards

Since testing is tightly interlinked with product quality, all the stakeholders must get a
consistent and timely view of the progress of tests. The test reporting standards provide
guidelines on the level of detail that should be present in the test reports, their standard
formats and contents, recipients of the report, and so on.

Test infrastructure management

Testing requires a robust infrastructure to be planned upfront. This infrastructure is made
up of three essential elements.


A test case database (TCDB )


A defect repository (DR )


A software configuration management (SCM) repository and tool

A test case database captures all the relevant information about the test cases in an
organization.

A defect repository captures all the relevant details of defects reported for a product.
Most of the metrics classified as testing defect metrics and development defect metrics
are derived from the data in the defect repository.

Yet another infrastructure element that is required for a software product organization is a
Software Configuration Management (SCM) repository. An SCM repository keeps track
of change control and version control of all the files that make up a software product.
Change control ensures that

Changes to test files are made in a controlled fashion and only with proper
approvals.

Changes made by one test engineer are not accidentally lost or overwritten
by other changes.

Each change produces a distinct version of the file that is recreatable at
any point of time.

At any point of time, everyone gets access to only the most recent version
of the test files.

on control ensures that the test scripts associated with a given release of a product
are base lined along with the product files.

TCDB, Defect Repository, and SCM repository should complement each other and work
together in an integrated fashion.


Fig. Relationship among SCM, DR, and TCDB

Test people management

People management is an integral part of any project management. It requires the ability
to hire, motivate and retain
the right people. These skills are seldom formally taught.
Testing projects present several additional challenges. We believe that the success of a
testing organization depends on judicious people management skills.

The important point is that the common
goals and the spirit of teamwork have to
be internalized by all the stakeholders. Such an internalization and upfront team building
has to be part of the planning process for the team to succeed.

Integrating with product release

Ultimately, the success
of a product depends on the effectiveness of integration of the
development and testing activities. These job functions have to work in tight unison
between themselves and with other groups such as product support, product management,
and so on. The schedules of testing have to be linked directly to product release. The
following are some of the points to be decided for this planning.

Sync points between development and testing as to when different types of testing
can commence.

Service level agreements
between development and testing as to how long it
would take for the testing team to complete the testing. This will ensure that
testing focuses on finding relevant and important defects only.

Consistent definitions of the various priorities and severities
of the defects.

Communication mechanisms to the documentation group to ensure that the
documentation is kept in sync with the product in terms of known defects,
workarounds and so on.

The purpose of the testing team is to identify the defects in the
product and the risks that
could be faced by releasing the product with the existing defects.



Putting together and base lining a test plan

A test plan combines all the points discussed above into a single document that acts as an
anchor point for the entire testing project. An organization normally arrives at a template
that is to be used across the board. Each testing project puts together a test plan based on
the template. The test plan is reviewed by a designated set of competent
people in the
organization. It then is approved by a competent authority, who is independent of the
project manager directly responsible for testing. After this, the test plan is base lined into
the configuration management repository. From then on, the baselined test plan becomes
the basis for running the testing project. In addition, periodically, any changes needed to
the test plan templates are discussed among the different stakeholders, and the templates
are kept current and applicable to the testing teams.

Test case specification

Using the test plan as the basis, the testing team designs test case specifications, which
then become the basis for preparing individual test cases. A test case is a series of steps
executed on a product, using a pre-defined set of input data, expected to produce a
pre-defined set of outputs, in a given environment. Hence, a test case specification should
clearly identify:

The purpose of the test: this lists what feature or part the test is intended for.

Items being tested, along
with their version/release numbers as appropriate.

Environment that needs to be set up for running the test case.

Input data to be used for the test case.

Steps to be followed to execute the test

The expected results that are considered to be correct results.

A step to compare the actual results produced with the expected results

Any relationship between this and other tests
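As a sketch, the items above could be captured in a simple record; the field names below are illustrative, not prescribed by the text:

```python
from dataclasses import dataclass, field

@dataclass
class TestCaseSpec:
    """One test case specification entry (illustrative field names)."""
    test_id: str
    purpose: str                 # feature or part the test is intended for
    items_under_test: list       # items with their version/release numbers
    environment: str             # setup needed to run the test case
    input_data: dict             # predefined set of input data
    steps: list                  # steps to be followed to execute the test
    expected_results: list       # results considered to be correct
    related_tests: list = field(default_factory=list)

    def compare(self, actual_results):
        """Step that compares actual results with expected results."""
        return actual_results == self.expected_results

spec = TestCaseSpec(
    test_id="TC-001",
    purpose="Verify login with valid credentials",
    items_under_test=["auth-module v1.2"],
    environment="staging server with test database",
    input_data={"user": "alice", "password": "secret"},
    steps=["open login page", "enter credentials", "submit"],
    expected_results=["user redirected to dashboard"],
)
print(spec.compare(["user redirected to dashboard"]))  # True
```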

Update of traceability matrix

A traceability matrix is a tool to validate that every requirement is tested. This matrix is
created during the requirements gathering phase itself, by filling in the unique identifier
for each requirement. When a test case specification is complete, the row corresponding
to the requirement which is being tested by the test case is updated with the test case
specification identifier. This ensures that there is a two-way mapping between
requirements and test cases.
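A minimal sketch of such a matrix, assuming requirement and test case identifiers are simple strings:

```python
# Traceability matrix: requirement ID -> test case specification IDs.
# Rows are created during requirements gathering; test case IDs are
# filled in as each test case specification is completed.
traceability = {
    "REQ-001": ["TC-001", "TC-002"],
    "REQ-002": ["TC-003"],
    "REQ-003": [],            # no test case yet -> untested requirement
}

# Forward check: every requirement should have at least one test case.
untested = [req for req, tcs in traceability.items() if not tcs]
print("Untested requirements:", untested)   # ['REQ-003']

# Reverse mapping: test case ID -> requirements it covers (the two-way view).
reverse = {}
for req, tcs in traceability.items():
    for tc in tcs:
        reverse.setdefault(tc, []).append(req)
print("TC-001 covers:", reverse["TC-001"])  # ['REQ-001']
```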

Identifying possible candidates for automation

Before writing the test cases, a decision should be taken as to which tests are to be
automated and which should be run manually. Some of the criteria that will be used in
deciding which scripts to automate include:

Repetitive nature of the test

Effort involved in automation

Amount of manual intervention required for the test, and

Cost of the automation tool.
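These criteria can be combined into a rough ranking of automation candidates; the sketch below uses arbitrary weights purely for illustration, not a method from the text:

```python
def automation_score(runs_per_cycle, automation_effort_days,
                     manual_steps_required, tool_cost_units):
    """Rough, illustrative score: higher means a better automation candidate.
    The weights are assumptions, not part of any standard formula."""
    benefit = runs_per_cycle * 10            # repetitive tests pay back more
    cost = (automation_effort_days * 2
            + manual_steps_required * 5      # manual intervention hurts automation
            + tool_cost_units)
    return benefit - cost

# A daily smoke test vs. a rarely run, intervention-heavy test.
smoke = automation_score(30, automation_effort_days=3,
                         manual_steps_required=0, tool_cost_units=5)
rare = automation_score(1, automation_effort_days=5,
                        manual_steps_required=4, tool_cost_units=5)
print(smoke > rare)  # True: the smoke test is the better candidate
```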

Developing and baselining test cases

Based on the test case specifications and the choice of candidates for automation, test
cases have to be developed. The test cases should also have change history
documentation, which specifies

What the change was

Why the change was necessitated

Who made the change

When the change was made

A brief description of how the change has been implemented and

Other files affected by the change

All the artifacts of test cases (the test scripts, inputs, expected outputs, and so on)
should be stored in the test case database and SCM.

Executing test cases and keeping traceability matrix current

The prepared test cases have to be executed at the appropriate times during a
project. For example, test cases corresponding to smoke tests may be run on a daily basis,
while system testing test cases will be run during system testing.

As the test cases are executed during a test cycle, the defect repository is updated with


Defects from the earlier test cycles that are fixed in
the current build and


New defects that get uncovered in the current run of the tests.

During test design and execution, the traceability matrix should be kept current. When
tests get designed and executed successfully, the traceability matrix should be updated accordingly.

Collecting and analyzing metrics

When tests are executed, information about the test execution gets collected in test logs
and other files. The basic measurements from running the tests are then converted to
meaningful metrics by the use of appropriate transformations and formulae.
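As an illustration, two commonly used metrics can be derived from raw test results; the formulas here are standard examples, assumed rather than taken from the text:

```python
# Raw measurements collected from test logs (made-up data).
results = [
    {"test": "TC-001", "status": "pass"},
    {"test": "TC-002", "status": "fail"},
    {"test": "TC-003", "status": "pass"},
    {"test": "TC-004", "status": "pass"},
]
defects_found = 3
size_kloc = 12.0   # product size in thousands of lines of code

executed = len(results)
passed = sum(1 for r in results if r["status"] == "pass")
pass_rate = 100.0 * passed / executed          # percentage of tests passing
defect_density = defects_found / size_kloc     # defects per KLOC

print(f"pass rate: {pass_rate:.1f}%")                        # 75.0%
print(f"defect density: {defect_density:.2f} defects/KLOC")  # 0.25
```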

Preparing test summary report

At the completion of a test cycle, a test summary report is produced. This report gives
insights to the senior management about the fitness of the product for release.

Recommending product release criteria

One of the purposes of testing is to decide the fitness of a product for release. Testing
can never conclusively prove the absence of defects in a software product. What it
provides is evidence of what defects exist in the product, their severity, and their impact.
The job of the testing team is to articulate to the senior management and the product
release team:


What defects the product has


What is the impact/severity of each of the defects


What would be the risks of releasing the product with the existing defects.

The senior management can then take a meaningful business decision on whether to
release the given version or not.
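A small sketch of how such a report to management might be assembled; the defect fields and severity thresholds are assumed for illustration:

```python
def release_risk_summary(open_defects, max_allowed=None):
    """Summarize what testing reports to management: the open defects,
    their severities, and which severities exceed agreed thresholds.
    Thresholds and field names are illustrative assumptions."""
    if max_allowed is None:
        max_allowed = {"critical": 0, "major": 2}
    counts = {}
    for d in open_defects:
        counts[d["severity"]] = counts.get(d["severity"], 0) + 1
    # Severities whose open-defect count exceeds the agreed threshold.
    blocking = {sev: n for sev, n in counts.items()
                if n > max_allowed.get(sev, float("inf"))}
    return counts, blocking

defects = [
    {"id": "D-101", "severity": "critical"},
    {"id": "D-102", "severity": "minor"},
]
counts, blocking = release_risk_summary(defects)
print(counts)    # {'critical': 1, 'minor': 1}
print(blocking)  # {'critical': 1} -> exceeds the zero-critical threshold
```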

8.6 Test Reporting

Testing requires constant communication between the test team and other teams.
Test reporting is a means of achieving this communication. There are two types of reports
or communication that are required: test incident reports and test summary reports.

Test incident report

A test incident report is a communication that happens through the testing cycle
as and when defects are encountered. A test incident report is an entry made in the defect
repository. Each defect has a unique ID and this is used to identify the incident. The high
impact test incidents are highlighted in the test summary report.

Test cycle report

Test projects take place in units of test cycles. A test cycle entails planning and
running certain tests in cycles, each cycle using a different build of a product. A test cycle
report, at the end of each cycle, gives


A summary of the
activities carried out during that cycle;


Defects that were uncovered during that cycle, based on their severity and


Progress from the previous cycle to the current cycle in terms of defects fixed;


Outstanding defects that are yet to be fixed in this cycle; and


Any variations observed in effort or schedule.
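The cycle-over-cycle bookkeeping above can be sketched with set differences on defect IDs; the IDs and data here are made up for illustration:

```python
# Open defect IDs at the end of two consecutive test cycles.
previous_cycle_open = {"D-1", "D-2", "D-3"}
current_cycle_open = {"D-3", "D-4"}   # D-4 was uncovered this cycle

# Progress from the previous cycle: defects fixed in the current build.
fixed_since_last = sorted(previous_cycle_open - current_cycle_open)

# New defects uncovered in the current run of the tests.
newly_uncovered = sorted(current_cycle_open - previous_cycle_open)

# Outstanding defects yet to be fixed.
outstanding = sorted(current_cycle_open)

print("Fixed since previous cycle:", fixed_since_last)  # ['D-1', 'D-2']
print("New defects this cycle:", newly_uncovered)       # ['D-4']
print("Outstanding defects:", outstanding)              # ['D-3', 'D-4']
```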

Test summary report

The final step in a test cycle is to recommend the suitability of a product for release. A
report that summarizes the results of a test cycle is the test summary report.

There are two types of test summary reports:


Phase-wise test summary, which is produced at the end of every phase


Final test summary reports.

A summary report should present

A summary of the activities carried out during the test cycle or phase

Variance of the activities carried out from the activities planned

Summary of results which includes tests that failed, with any root cause
descriptions and severity of impact of the defects uncovered by the tests.

A comprehensive assessment and recommendation for release, which should include a
fit-for-release assessment and a release recommendation.

Recommending product release

Based on the test summary report, an organization can take a decision on whether to
release the product or not. Ideally, an organization would like to release a product with
zero defects. However, market pressures may cause the product to be released with the
defects, provided that the senior management is convinced that there is no major risk of
customer dissatisfaction. Such a decision should be taken by the senior manager only
after consultation with the customer support team, development team and testing team, so
that the overall workload for all parts of the organization can be evaluated.

Best Practices

Best practices in testing can be classified
into three categories.


Process related


People related


Technology related

Process related best practices

A strong process infrastructure and process culture are required to achieve better
predictability and consistency. A process database, a federation of information about the
definition and execution of various processes, can be a valuable addition to the tools in an
organization.

People related best practices

While individual goals are required for the development and testing teams, it is very
important to understand the overall goals that define the success of the product as a
whole. Job rotation among support, development and testing can also increase the gelling
among the teams. Such job rotation can help the different teams develop better empathy
and appreciation of the challenges faced in each other's roles, and thus result in better
teamwork.

Technology related best practices

A fully integrated TCDB and defect repository can help in better automation of testing
activities. When test automation tools are used, it is useful to integrate the tool with the
TCDB, defect repository and an SCM tool.

A final remark on best practices: the three dimensions of best practices cannot be carried
out in isolation. A good technology infrastructure should be aptly supported by effective
process infrastructure and be executed by competent people. These best practices are
inter-dependent, self-supporting, and mutually enhancing. Thus, the organization needs to
take a holistic view of these practices and keep a fine balance among the three.

Check your progress


How will you test the web applications?


Explain the test documentation.


List down the steps involved in test planning and management


How will you make a testing report?


Explain the test process.