Automated Testing

Philosophy, Tools & Techniques


Software developers are constantly looking for new ways to increase productivity while maintaining a level of quality in the software they produce. Automated testing has become the focus of Quality Assurance, promising relief from shrinking timelines in a very competitive market. The purpose of this study is to explain the current philosophy of automated testing, the advantages and disadvantages of removing the human element from the testing process, and to detail current automation methodologies and popular tools.


MJ Hornbuckle

W2008


Contents

1  Introduction
2  Background
3  Why Automate Testing
4  What to Automate
5  Automation Pre-Requisites
6  Writing Effective & Efficient Automated Tests
   6.1  Reusable Tests
   6.2  Understandable Tests
   6.3  Maintainable Tests
   6.4  Modular Tests
7  Automation Techniques & Tools
   7.1  xUnit Testing Frameworks
        7.1.1  YourUnit
        7.1.2  xUnit Architecture
        7.1.3  Desktop GUI Testing with xUnit
        7.1.4  Database Testing and Mock Objects with xUnit
   7.2  Web Application Testing
        7.2.1  Watir
        7.2.2  Selenium
        7.2.3  Record & Playback
8  Conclusion
9  References
10 APPENDICES
   10.1  APPENDIX A: Some xUnit Family Members (18)
   10.2  APPENDIX B: Some xUnit extensions (19)
   10.3  APPENDIX C: Employment Advertisement for Automated Tester (23)
   10.4  APPENDIX D: Sample spreadsheet for a “Key-Word” or “Test Plan Driven” test
   10.5  APPENDIX F: The Selenium TestRunner
   10.6  APPENDIX G: Review Questions


1 Introduction

Automated testing is a technique used by software developers ultimately to save time and resources while still ensuring a quality product. Automated tests include scripts that, when executed, simulate human interaction with an application, test the performance of an application under stress or high loads, and ensure an application still functions correctly after updates or additions have been made by developers. The ability to simulate human interaction with an application implies that automated testing is useful primarily for functional and acceptance testing. Although the value of simulation is much appreciated as a time saver and as more consistent than its human counterpart, it is important to know that automated testing has value beyond functional testing and testing related to the final stages of a development project.

For example, automated testing is often utilized in the realm of unit testing, an integral part of the entire software testing process encompassing continuous integration and regression testing, and an area for which many unit testing frameworks are now available to assist in the design and implementation of automated unit testing scripts. Properly implemented, automated testing helps to ensure flaws in a software application will be discovered and fixed as soon as possible prior to demo or delivery, while at the same time conserving valuable resources.

The benefits of automated testing can be fully realized by a development team only if the pros and cons of automated testing are carefully considered. These pros and cons and the consequences of each can differ widely between different types of development projects, and choices based on these considerations can make or break an automated testing effort. A developer needs to determine if and when automation is appropriate, what tool(s) will be the most efficient, and how to design tests in such a way that they can be easily maintained and ported to other relevant environments. Currently, there is an abundance of automated software tools available, both open source and commercial, to assist developers with the design and execution of automated tests. Knowledge and effective utilization of these existing tools will undoubtedly improve the value of automated testing efforts.
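To make the idea of an automated unit test concrete, here is a minimal sketch in the style of the xUnit family, using Python’s built-in unittest module. The `discount_price` function is a hypothetical example, not something from this paper; the point is that each check is a script that can be re-run automatically after every change.

```python
import unittest

def discount_price(price, percent):
    """Hypothetical function under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

class DiscountPriceTest(unittest.TestCase):
    """Automated checks that can run after every update to catch regressions."""

    def test_typical_discount(self):
        self.assertEqual(discount_price(80.00, 25), 60.00)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(discount_price(19.99, 0), 19.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            discount_price(10.00, 150)
```

Running `python -m unittest` executes every test method and reports failures, so a flaw introduced by a later change is discovered immediately rather than at demo or delivery time.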

2 Background

Automated testing existed in the days of MS-DOS and consisted of simple calls to the command line. The introduction of distributed systems required testing to include API calls across a network. (1) In 1984, Kent Beck wrote SUnit (2), which claims to be “the mother of all unit testing frameworks” (3). SUnit began as an incredibly simple framework with only four classes: SetTestCase, TestCase, TestSuite and TestResult. In 1997, Kent Beck and Erich Gamma flew from Zurich to attend the 1997 OOPSLA in Atlanta, and it was during this flight that JUnit was created, assimilating from SUnit the simplicity in design that had been so attractive to the SmallTalk community. JUnit soon became the first widely known member of a family of frameworks that spans almost every programming language: xUnit (4).

Over time the character and the methodology of automated testing have evolved significantly, reflecting the maturation of application software interfaces (1) as well as software development methods such as agile. Developer experiences with automated testing have been distributed in print and over the web, resulting in the development of prescriptive methods that promise a successful automated testing effort. The Extreme Programming (XP) development process and other agile processes sing the praises of Test Driven Development (TDD) (5). TDD requires tests to be run frequently, and agile developers find productivity greatly enhanced by utilizing test automation. Because agile methods and testing go hand in hand, it’s no wonder that the popularity of agile development methods is directly related to the increased interest in automated testing tools and methods among developers. The development and ensuing popularity of JUnit has been credited with helping improve attitudes toward automated testing in general (4).

Today, there are innumerable automated software testing tools and other resources available, both commercial and open source. Software developers are expected to know how to test efficiently and effectively. Appendix C is an employment advertisement for a position available March 9, 2008 on www.computerjobs.com, seeking test managers experienced in both QuickTest Professional (QTP) and LoadRunner (commercial automated testing tools marketed by Hewlett-Packard). The pay rate offered is between $75,000 and $95,000. Companies such as this one that are willing to pay high salaries for experience in automated quality testing truly understand both the benefits and the necessity of automated testing in order to produce high quality products and thus compete in today’s software development industry.

3 Why Automate Testing

“Theoretically, testing should never be necessary in the first place. The programs should be coded correctly first off!” (6) In the real world, there is no such thing as bug-free software, and this reality demands testing. If the proliferation of automated testing tools is not convincing enough that automated testing has earned a rightful place in software development, there are plenty of compelling arguments to rally support for the movement. Among proponents of automated testing there are three recurring themes often found in the comparison between manual and automated testing: efficiency, consistency and frequency. These three themes highlight the pros of automated testing.

In his 2006 article “Why Automate Testing?”, written for Embedded Computing Design Resource Guide, Kingston Duffie pinpoints the reasoning behind embracing an automation test plan, more now than ever before. His claim is that automated testing is an effective solution to the problems introduced to the software development process by market pressure and the need for higher levels of productivity in shorter and shorter amounts of time. These issues are further complicated by the requirements of testing more and more features added to existing software products. “Testers must deal with rapidly growing test plans in a timeline reduced by market deadlines driven by competition” (7). Automated tests enable faster execution, facilitate greater test coverage, deliver higher test accuracy and find more defects earlier (8). The efficiency of automated testing allows programmers to tackle large test plans and still keep up with the competition, and higher efficiency is generally related to a greater amount of cost control, a benefit that software producers must not ignore.

Automated testing provides a measure of consistency that improves accuracy. A software test plan often includes a level of repetitiveness that can quickly become strenuous on a manual tester, causing fatigue and increasing the chances of error, as well as the possibility that some tests will not be run at all due to time constraints or simply surrendering to the temptation to skip tests. Documenting the results is also taxing; a weary tester is prone to inaccuracy in the reporting of test results. A computer does not get tired and is not tempted to skip tests. A computer can consistently run a battery of tests and automatically record the results to a database. (9) There are some cases where automating tests is the only feasible way to fully test a feature, particularly those which require multiple variations of data for the same test case. Frequently, especially during regression testing, an entire test plan must be repeated. “Realistically, automation is the only means to repeat all tests for large, complex systems” (10). In the circumstance where there exists the overhead of a large test plan composed of a large number of test cases, many of them repetitive, the consistency and accuracy of automated testing is undoubtedly preferred over error-prone manual testing.

Because of the nature of software to change over time, testing must be performed frequently in order to make sure that new features work and that existing features haven’t been broken. One area where the frequency of testing can be daunting is when using the technique of continuous integration. Continuous integration has become recognized as a valuable practice in software development projects. A software development project utilizing continuous integration requires that developers integrate their work frequently, ideally several times per day. Each integration requires testing in order to detect errors and incompatibilities as soon as they occur, reducing the amount of programming time to find and fix newly introduced bugs. (11) Because integration is frequent, testing related to continuous integration is also frequent. Another area where the frequency of testing can become a concern is in the process of unit testing. Whenever a function or module of code has been modified, a developer may wish to perform unit tests to ensure working code. Both unit testing and the testing involved in continuous integration are candidates for the benefits that automated testing can offer, specifically that they can be run much faster and more efficiently than a manual test plan could be completed. The tests can be repeated much more frequently than a human being would be capable of, and the results of the tests are available much sooner. The ability to run tests frequently is a huge advantage of automated testing.

Efficiency, consistency and frequency are the most apparent benefits of automated testing, but there are several other reasons that developers may choose this route, many of them indirectly related to the reasons already discussed. Additional arguments for writing automated tests and performing automated testing include: the ability to allow testing to be performed by less skilled staff, the potential for developing programming skills, improving test coverage simply by increasing the number of tests that can be run in a reasonable amount of time, reducing the costs of testing by reducing manual labor, and also to simply make testing more interesting. (12)

4 What to Automate

Once the decision has been made to automate, what types of tests should be automated? Automated testing is not a panacea, and automating every test is an unrealistic expectation and a goal that can prove extremely costly to a development project. Many organizations find that they are automating around 60 percent of their total number of test cases, leaving 40 percent of tests to be conducted manually. (8) Every software project has its own goals and priorities, and these must be carefully considered in order to ensure that the benefits of automation are maximized and that automated test development does not end up being detrimental to the development effort as a whole. There are, however, some test characteristics that will clearly segregate tests into the automated and manual camps. Armed with this knowledge, one can quickly identify which tests can be automated easily and which tests would simply be more of a hassle to automate than any benefit would be worth. Two surefire indicators that automated testing will benefit a project are the existence of components that require large amounts of data entry and areas that require testing with many different variations of the same type of data. (13) These characteristics are prime examples of potential data-driven automated tests. (Data-driven testing is a method of automated testing explained in section 6.1.)
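As a rough sketch of what such a data-driven test looks like, the example below runs one test procedure against many variations of the same type of data; the ZIP-code rule and the data rows are invented for illustration.

```python
import unittest

def is_valid_zip(code):
    """Hypothetical routine under test: accept only 5-digit US ZIP codes."""
    return len(code) == 5 and code.isdigit()

# The data lives apart from the test logic, so a new variation is
# added by appending a row rather than writing another test method.
ZIP_CASES = [
    ("90210", True),
    ("1234", False),    # too short
    ("123456", False),  # too long
    ("12a45", False),   # contains a letter
]

class ZipCodeDataDrivenTest(unittest.TestCase):
    def test_all_variations(self):
        for code, expected in ZIP_CASES:
            # subTest reports each failing data row individually
            with self.subTest(code=code):
                self.assertEqual(is_valid_zip(code), expected)
```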


There are several more considerations that will help to determine which tests should be automated. For tests that are often performed manually, such as functional or system tests, the initial focus should be on tests that are currently taking the most time to run manually. Although it is tempting to automate tasks that may otherwise go untested, it is best to automate tests that have already been manually run and have an established testing procedure. By automating existing manual tests, it is less likely that an automated version will suffer from efficiency problems that are not realized until after the test was automated, often requiring an entire redesign of the automated test. (6) Acceptance tests, compatibility tests, performance tests and regression tests are all sets of tests that will be run repeatedly throughout the development process. If a test belongs in any of these categories, if the test steps are well-defined and it is useful or even necessary to repeat the sequence of steps many times, the test should be considered for automation.

Another category of tests that should almost always be automated are tests that test non-UI aspects of a program. (The unique challenges of automating UI-centered tests are addressed in section 7.1.3.) If it is not possible to automate an entire test due to some complex actions, consider automating a portion of the test. An example of automating a portion of a test would be the automation of setup actions that are required before certain manual tests can take place. Even automating portions of a test can speed up test execution time. (9) Unit tests are a large category of tests that are non-UI centric (or should be), and in general, all unit tests should be automated, especially in a continuous integration environment. Although writing an automated test is preferably done after the test has been written and performed manually, unit testing is unique in that the tests are so low-level, they may simply be impossible to perform manually. Also, to meet the needs of test documentation, the automated unit testing frameworks available today allow developers to write tests that are self-documenting. (How to write effective automated unit tests is discussed in section 6.)

On the opposite end of the spectrum are tests that probably should remain manual and be left out of the automation process. The ability to recognize these types of tests will prevent curtailing the benefits of automation in other areas. It is important to note that automating a test is often more time consuming and expensive than running a test manually once (14). If it can be determined that automating a particular test is more resource intensive than manually testing once, then the question of how much more should be a consideration before proceeding with automating the test. Tests that lie in this category include those that are currently testing highly volatile pieces of code. Until these pieces of code have stabilized, an automated test will likely fail every time it is run and be fairly useless in finding a legitimate bug. The cost of automating is not limited to the resources of time and money. The purpose of any testing, manual or automated, is to find bugs. If automating a test does not result in revealing more bugs compared to its manual counterpart, or if other manual tests are abandoned in order to provide enough resources to write the automated test, then the value of the automated test quickly comes into question. “I first learned to think about the cost of automation in this way during conversations with Cem Kaner. Noel Nyman points out that it’s a special case of John Daly’s Rule, which has you always ask this question of any activity: ‘What bugs aren’t I finding while I’m doing that?’” (14)

5 Automation Pre-Requisites

“The point of automation is not to eliminate testers but to make better use of their time.” (15) Introducing automation to a development project takes careful planning in order to fully realize its benefit. There are two pre-requisites that must be met before test automation should be implemented in any software development project. The first is documentation of detailed test cases, including the sequence of actions to be taken and the expected results for each case. (Recall the earlier discussion about self-documenting unit test cases.) The second is a test environment that is completely independent of the development environment. If a database is in use, then there should be a test database that can easily be restored with known data. These pre-requisites represent a formalized testing process that already exists in the development environment and is necessary for the success of any attempt at automation. (16)
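The restorable-database pre-requisite can be sketched with an in-memory SQLite database rebuilt from known data before every test. The schema and rows here are invented for illustration; the principle is that each test starts from the same known state, independent of the development environment.

```python
import sqlite3
import unittest

KNOWN_DATA = [(1, "alice", 900.00), (2, "bob", 250.00)]  # fixed fixture rows

def fresh_test_db():
    """Build a throwaway database restored to a known state."""
    db = sqlite3.connect(":memory:")  # never touches the development database
    db.execute("CREATE TABLE accounts (id INTEGER, owner TEXT, balance REAL)")
    db.executemany("INSERT INTO accounts VALUES (?, ?, ?)", KNOWN_DATA)
    return db

class AccountBalanceTest(unittest.TestCase):
    def setUp(self):
        # Every test sees the same known data, regardless of what ran before it.
        self.db = fresh_test_db()

    def test_known_balance(self):
        (balance,) = self.db.execute(
            "SELECT balance FROM accounts WHERE owner = 'alice'").fetchone()
        self.assertEqual(balance, 900.00)
```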

Before beginning to automate, make sure that the manual tests are well documented, including explicit input values (or guidelines on how to make them up) and expected outputs. Testers will need to know exactly how the test works in order to execute it effectively, and a future developer may need to modify the test to meet new requirements if necessary. Time is quickly swallowed up and the investment in automation is lost if automated testing scripts are hard to follow and interpret. Also, in the event there is not time to automate a particular test, there is the advantage that a well documented manual test can be performed easily by any qualified tester. (12) It is important to remember the true purpose of testing, to find bugs, and not to squander resources gratuitously on automation.

Make sure a well-developed automation test plan is in place with defined goals. “Your automated testing goal should not be automation for automation’s sake.” (15) Valid goals to be achieved from automation include speed, frequency, improved test coverage, consistency and reliability. The most successful test automation projects choose wisely which tests will be automated and which will not. If measurements for success have been established, it will be easy to gauge progress and determine where more or less time should be spent on automation. Some examples of automation measures include the following questions, which probe the actual benefits received from automation (15):

- Are the same number of features tested in less time?
- Are more features tested in the same amount of time?
- Are current testing costs less than previous testing costs?

There should be a thorough understanding of the complexities and unique requirements of each type of testing (15):

- Integration/Regression: tests that are run every build; white box unit tests
- DDT: data-driven testing utilizing multiple data sets over many iterations
- Configuration: tests performed on many different platforms
- Test bed setup: setup and configuration required on test machines prior to testing
- Backend/Non-GUI/Database: tests that verify data collected by an application
- Performance: load tests or stress tests
- Functional/System: tests that verify the interaction between modules
- GUI: black box testing of the user interface
- Acceptance: tests that determine the client’s acceptance of the product

The ability to select an appropriate automated testing tool relies on an understanding of the distinctive requirements of these different types of tests. Of the array of testing tools available, some are specific to performance testing or to data-driven methods. Some tools are limited to record/playback methods (the unique challenges associated with record/playback testing tools are discussed in section 7.2.3), and others are optimized for white box unit and integration testing. Understanding the type of test to be automated is essential in matching its automation implementation to the appropriate test tool.

6 Writing Effective & Efficient Automated Tests

Automated test scripts must be written with good programming practices in mind in order to reap any benefit. The acronym RUMM can be used to describe the following characteristics of effective and efficient automated test scripts: Reusable, Understandable, Maintainable and Modular. (15)

6.1 Reusable Tests

Automated tests should be small and focused on a single task. Unit tests are small by nature as long as the developer of the application under test has used common modular design techniques. Other types of tests can become large and unwieldy very quickly, and if a test contains a large number of steps and fails during a test run, it will be difficult to determine where and why the test failed, or even if the failure was due to a legitimate bug. Try to find logical divisions among test steps, like a set of steps that are frequently repeated, and create separate test scripts for each of these divisions. This method is especially applicable to functional, system or acceptance tests. For example, logging in to an application is an action that will certainly be repeated in multiple functional test cases. If your test requires the simulation of a user logging in to the application and afterward performing a given task, create separate automated scripts for the login portion of the test and for the task the user is to perform. This way, the login script can be called from other scripts simulating different test cases without including the same group of steps repeatedly.

The following categories are useful to keep in mind when breaking a test down into its fundamental parts (16):

- Navigation (e.g. "Access Payment Screen from Main Menu")
- Specific (Business) Function (e.g. "Post a Payment")
- Data Verification (e.g. "Verify Payment Updates Current Balance")
- Return Navigation (e.g. "Return to Main Menu")


When writing reusable test scripts, it is important to separate data from function. The use of variables and external data maximizes a script’s reusability. In Zambelich’s white paper, Totally Data-Driven Automated Testing, he describes a testing methodology and architecture that relies on external data, called the “Key-Word Driven” or “Test Plan Driven” method. This method involves creating utility scripts that perform each action in a test case. To better illustrate this method, a sample test case has been reproduced here from Zambelich’s white paper.

"Post a Payment" Test Case:

1. Access Payment Screen from Main Menu
2. Post a Payment
3. Verify Payment Updates Current Balance
4. Return to Main Menu
5. Access Account Summary Screen from Main Menu
6. Verify Account Summary Updates
7. Access Transaction History Screen from Account Summary
8. Verify Transaction History Updates
9. Return to Main Menu

The automated test is driven by parsing a data file created in Excel or some other spreadsheet program. The first column in the spreadsheet contains a ‘key-word’ that identifies a utility script to run. The rest of the columns are used as parameters that are passed to the utility script. The spreadsheet is then saved in a tab-delimited format and the automated test is driven by parsing this file. Appendix D includes an example of a spreadsheet that could be used to drive an automated version of the “Post a Payment” test case documented above.
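A rough sketch of the parse-and-dispatch loop behind the “Test Plan Driven” method is shown below. The key-words, utility functions, and logged actions are illustrative assumptions, not Zambelich’s actual scripts.

```python
import csv
import io

log = []  # stands in for real test-tool actions in this sketch

# Utility scripts: one function per key-word in the first spreadsheet column.
def access_screen(screen):
    log.append(f"navigate to {screen}")

def post_payment(amount):
    log.append(f"post payment {amount}")

def verify_balance(expected):
    log.append(f"verify balance {expected}")

KEYWORDS = {
    "AccessScreen": access_screen,
    "PostPayment": post_payment,
    "VerifyBalance": verify_balance,
}

def run_test_plan(tab_delimited_text):
    """Controller script: step through the data file, dispatching on key-word."""
    for row in csv.reader(io.StringIO(tab_delimited_text), delimiter="\t"):
        if not row or not row[0]:
            continue                  # skip blank spreadsheet rows
        keyword, *params = row
        KEYWORDS[keyword](*params)    # remaining columns are the parameters

# A fragment of the "Post a Payment" plan as a tab-delimited spreadsheet export.
PLAN = "AccessScreen\tPayment\nPostPayment\t100.00\nVerifyBalance\t900.00\n"
run_test_plan(PLAN)
```

Adding a new test case then means writing a new spreadsheet, not new code, as long as the needed key-words already exist.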

6.2 Understandable Tests

Under the pressure of the ticking clock, it is tempting for a programmer to write scripts with little documentation; however, understandable scripts are essential to making an automated test effort efficient. Developers must be able to understand the purpose of existing utility scripts and subroutines in order to effectively knit them together to satisfy a test case and avoid the bulk of repetition. Note that the Key-Word method described previously is almost entirely self-documenting. As discussed earlier, it is preferable to write automated tests based on test cases that have already been documented; however, there will be times when the test case can actually be developed at the same time it is automated. The Key-Word method is a prime example. When writing future test cases, a developer can easily pick out which utility scripts will be useful and what actions need to be defined in additional scripts.

Using good programming practices will also help to keep scripts understandable. Avoid an architecture that is difficult to conceptualize. The Key-Word method architecture contains four categories of scripts: a driver script which performs any needed initialization; a controller script which is called by the driver script and steps through the external data file; utility scripts which are called by the controller when the controller encounters a ‘key-word’ in the data file and which process the given parameters; and user-defined functions or subroutines which can be called by any of the script types to perform application-specific actions. Using a well-defined architecture will help to keep automated scripts organized so that test developers will be able to follow them and automate new test cases easily.

6.3 Maintainable Tests

“Recognize that test automation development is software development.” (17) This statement is one of the strategies for success in automated test development in Cem Kaner’s white paper, Improving the Maintainability of Automated Test Suites. Kaner goes on to explain that without clear and realistic planning, you can’t develop test suites that will have a low enough maintenance cost to justify their existence over the life of the project. Scripts that are hard to maintain are likely to be abandoned, or worse, take time away from legitimate bug finding activities. Along with the clearly defined architecture discussed previously, there must be standards to which the testing team adheres in order to help keep test scripts maintainable. These standards should include an established naming convention for variables and functions, the same structure for documenting their modules, and the same approach to error handling. (17)

In general, if test scripts have been written following the guidelines of reusability and understandability, they will likely be easy to maintain. Keeping the test code generic and separate from the data will enable developers to easily create new test cases using variations in the data. If test script libraries are well documented, developers will be able to build new automated tests easily using existing functions, maximizing the time and effort saved by reusing existing code: an immediate gain in cost effectiveness.

In addition to good programming practice, scripts representing test cases that do not depend on the results of other test cases will help to keep the entire test suite maintainable. If test cases are kept isolated from other test cases, they can be run in any order, and new scripts can easily be added in any order that follows the current organizational scheme.

6.4

Modular T
ests

The importance of the modularity of test scripts cannot be over-emphasized. Scripts that are written for specific tasks form the building blocks for complex tests that will be easy to troubleshoot. (15) If a certain function of an application under test changes, such as the way a user logs in to the system, it is expected that any test requiring the simulation of a user logging in to the system will break. If the test scripts are modular, then modifying the module containing the code necessary for logging in will be all that is needed. Modularity supports the other three characteristics of 'good' test design already discussed.
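A minimal sketch of this pattern (hypothetical names; the login is simulated rather than real) shows how a single shared logIn() module shields every test that depends on it, so a change to the login procedure touches one method only:

```java
// Hypothetical sketch of modular test design: every test that needs a
// logged-in session calls one shared module. If the login procedure
// changes, only logIn() changes; the tests that use it do not.
public class LoginModuleDemo {

    // The single place that knows how to log in (simulated here).
    static boolean logIn(String user, String password) {
        return !user.isEmpty() && !password.isEmpty();
    }

    static boolean testViewAccount() {
        if (!logIn("alice", "secret")) return false;
        // ... exercise the account screen ...
        return true;
    }

    static boolean testPostPayment() {
        if (!logIn("alice", "secret")) return false;
        // ... exercise the payment screen ...
        return true;
    }

    public static void main(String[] args) {
        System.out.println(testViewAccount() && testPostPayment()
            ? "all tests passed" : "some tests failed");
    }
}
```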


7 Automation Techniques & Tools

In Pettichord's Seven Steps to Test Automation Success, he tells a story of test automation efforts gone awry within a fictional software development firm. In this all too real account of excitement at the prospects of automation dwindling to frustration and eventually abandonment of automation efforts, Pettichord writes that one of the most common setbacks faced when pursuing an automated testing project is the use of a testing tool ill suited to the software under test. (12)

Ideally, if automated testing is to be implemented in a project, all development efforts should be aware of the automated testing environment; in other words, the software should be developed in such a way that it can be tested with a tool of choice. In reality, software development is primarily composed of activity related to the maintenance and improvement of existing software, which may or may not have been developed with automated testing in mind.

The following sections discuss several open-source software testing tools currently available to developers. The focus on open-source is purposeful, in that these testing frameworks can be modified by a programmer to fit the needs of the project currently under test and are arguably the most amenable to testing existing software. Commercial testing tools have their advantages; however, they often come with a higher learning curve, a proprietary development language, and hard limits on what can be tested. Most commercial testing tools have detailed instructions and suggestions on how to develop software with the anticipation of using the tool to automate testing. In a situation like this, the tool for which the software was designed will be the most valuable to the automation testing project; however, most often, developers will be asked to maintain and improve existing software. This implies testing of software that may have been developed without automated testing in mind, and so the test team must choose an appropriate test tool after the fact.

7.1 xUnit Testing Frameworks

xUnit is the generic name given to any Test Automation Framework for unit testing that is patterned after JUnit or SUnit (18). There are existing unit testing frameworks for almost every programming language, including but certainly not limited to C, C++, Java, Perl, Python and Ruby. A short review of some of the better known members of the xUnit family of unit testing frameworks and xUnit extensions can be found in Appendices A and B. The most popular unit frameworks, and currently the most valuable for a developer to be familiar with, include JUnit, CppUnit, NUnit, PyUnit and the xUnit extension XMLUnit (19).

This section utilizes the knowledge and experience of Paul Hamill shared in his book Unit Test Frameworks. Hamill details how frameworks are organized and explores how the most popular xUnit testing frameworks function.

7.1.1 YourUnit

In his book, Hamill begins by explaining how to write your own unit testing framework, both to appreciate their potential and to support a testing project that does not have time allotted to set up an existing framework, or has requirements that cannot be met by an existing framework, such as custom reporting or the need for closer control over how the unit tests function. Also, the most popular existing frameworks generally include much more than the minimum needed to build unit tests. The desire for simplicity may lead a developer to write their own unit testing framework; however, that same desire has also led to the availability of many simple frameworks online, which a developer can use as a starting point and customize to fit their needs.

Armed with knowledge of the basic architecture of a framework, the concept of a unit test framework is simple and straightforward. The first step in building a test framework is to create an abstract class that will serve as the parent class for actual unit tests. The simplest parent class, which we will call UnitTest, will include a single static integer that will be used to track the number of successful tests, a runTest() method that will be overridden by all unit tests, an assertTrue() method that will test a condition, and a getNumSuccess() method that will return the number of successful tests. The assertTrue() method takes two parameters: the condition to test, and a message string that will be thrown as an exception if the condition is FALSE. If the condition is TRUE, then the static integer tracking the number of successful tests run will be incremented. Below is sample code for the UnitTest class.


public abstract class UnitTest {

    protected static int num_test_success = 0;

    public abstract void runTest( ) throws Exception;

    protected void assertTrue(boolean condition, String msg)
            throws Exception {
        if (!condition)
            throw new Exception(msg);
        num_test_success++;
    }

    public static int getNumSuccess( ) {
        return num_test_success;
    }
}


The next step is to create a unit test class, which we will call PaymentTest: a possible unit test that could be associated with an accounting application similar to the one tested by Zambelich with his Key-Word data driven test discussed previously. (The "Post a Payment" test case is a functional test, not a unit test by our definition.) This class can potentially include a number of test cases all related to methods that deal with payment data; but for now, the class includes one method that serves to test the Payment object constructor and the getAmount() method. The runTest() method creates a Payment object, initializing the payment amount, and then verifies that the payment amount has been set and retrieved correctly from the getAmount() method. Here are the PaymentTest and Payment classes.


public class PaymentTest extends UnitTest {

    public void runTest( ) throws Exception {
        Payment payment = new Payment(15.87);
        assertTrue(payment.getAmount().equals(15.87),
            "Actual amount does not equal expected amount.");
    }
}

public class Payment {

    private Double amount = 0.0;

    Payment(Double amount) {
        this.amount = amount;
    }

    public Double getAmount() {
        return amount;
    }
}


The last step needed to complete this simple test framework is the creation of the TestRunner class. This is the class that contains the main() method for the test framework and will actually run the unit test. All unit tests are run within a try/catch block, and if a unit test fails, the exception is caught, the failure is reported and the stack trace is printed to the console. TestRunner will also report the number of tests that have passed successfully.


public class TestRunner {

    public static void main(String[] args) {
        TestRunner tester = new TestRunner( );
    }

    public TestRunner( ) {
        try {
            UnitTest test = new PaymentTest( );
            test.runTest( );
            System.out.println("SUCCESS!");
        }
        catch (Exception e) {
            e.printStackTrace( );
            System.out.println("FAILURE!");
        }
        System.out.println( UnitTest.getNumSuccess( ) +
            " tests completed successfully." );
    }
}


It is important to note that in running automated tests such as this, there is a difference between 'errors' and 'failures'. Unit testing failures indicate that a condition in a test has evaluated to false. Failures are intended to catch bugs in the system under test. Errors are unexpected problems such as uncaught exceptions. More sophisticated frameworks will make detailed distinctions between errors and failures and even report certain types of errors as failures. In addition, errors and failures are often reported separately. It is helpful to know what types of exceptions the framework you are using already handles so that you don't reinvent the wheel by writing tests for conditions that the framework already deals with effectively. How the framework handles errors will be more useful to know when integrating functional testing frameworks such as Watir or Selenium along with your chosen unit testing framework.
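The distinction can be made concrete in a few lines of plain Java (a hypothetical sketch, not any real framework's code): a false assertion surfaces as an AssertionError, while an unexpected exception in the test itself surfaces as a runtime exception, and a runner can count the two separately.

```java
// Hypothetical sketch of separating 'failures' (false assertions about
// the system under test) from 'errors' (unexpected exceptions raised
// by the test itself).
public class ErrorVsFailureDemo {

    // A failing test: the asserted condition is false.
    static void failingTest() {
        if (1 + 1 != 3) throw new AssertionError("expected 3");
    }

    // An erroring test: an unexpected NullPointerException.
    static void erroringTest() {
        String s = null;
        s.length();
    }

    // Runs both tests and returns {failureCount, errorCount}.
    static int[] runAll() {
        int failures = 0, errors = 0;
        Runnable[] tests = { ErrorVsFailureDemo::failingTest,
                             ErrorVsFailureDemo::erroringTest };
        for (Runnable t : tests) {
            try {
                t.run();
            } catch (AssertionError ae) {
                failures++;                  // reported as a failure
            } catch (RuntimeException e) {
                errors++;                    // reported as an error
            }
        }
        return new int[] { failures, errors };
    }

    public static void main(String[] args) {
        int[] counts = runAll();
        System.out.println(counts[0] + " failure(s), " + counts[1] + " error(s)");
    }
}
```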

7.1.2 xUnit Architecture

xUnit frameworks all utilize essentially the same architecture, patterns and concepts, and they all generally include the same set of key classes: TestCase, TestRunner, TestFixture, TestSuite, and TestResult, as well as an interface Test which both TestCase and TestSuite implement. There are some similarities between the xUnit architecture and the sample test unit framework that was created in section 7.1.1, but the xUnit architecture is a more sophisticated and robust version.

The classes TestCase and TestRunner correspond directly to the UnitTest and TestRunner classes, respectively, that were defined earlier in our simple framework. TestCase is an abstract class from which all unit tests are inherited. Using our previous example, PaymentTest would be classified as a TestCase. In our simple framework, PaymentTest was run by creating the TestRunner object, which resulted in running PaymentTest within a try/catch block. It is important to note that in the xUnit architecture, TestCase implements the Test interface and the test case is passed to TestRunner as a Test object. TestRunner then runs the test case. In xUnit, a TestCase can include many different test methods that test each function of a class separately. For example, suppose that we had a Customer class that contained definitions for a makePayment() method and a getTotalPaid() method. We may want to create a class CustomerPaymentTest that includes methods to test both of these functions related to Payments. Within CustomerPaymentTest, we would define two test methods: testMakePayment() and testGetTotalPaid().

In xUnit, TestRunner is designed to run all methods within the Test object it receives as a parameter that begin with the word 'test.' The Test interface allows the functionality of TestRunner to be extended to handle TestSuite objects, which also implement the Test interface. TestSuite is a class that represents a collection of TestCases or even other TestSuite objects, but in essence is treated by TestRunner the same as an individual TestCase. Test cases can be added to a TestSuite by using its method addTest().
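The composite relationship between Test, TestCase and TestSuite can be sketched in a few lines (a simplified, hypothetical model, not the actual JUnit classes): because a suite implements the same interface as a single test, a runner can treat both uniformly, and suites can nest.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the xUnit composite pattern: a case and a
// suite both implement Test, so a runner can treat a suite exactly
// like a single test.
interface Test {
    int run();                           // returns the number of tests run
}

class SimpleCase implements Test {       // stands in for a TestCase
    public int run() { return 1; }
}

class SimpleSuite implements Test {      // stands in for a TestSuite
    private final List<Test> tests = new ArrayList<>();
    public void addTest(Test t) { tests.add(t); }
    public int run() {
        int count = 0;
        for (Test t : tests) count += t.run();
        return count;
    }
}

public class SuiteDemo {
    public static void main(String[] args) {
        SimpleSuite suite = new SimpleSuite();
        suite.addTest(new SimpleCase());
        SimpleSuite nested = new SimpleSuite();   // suites can nest
        nested.addTest(new SimpleCase());
        nested.addTest(new SimpleCase());
        suite.addTest(nested);
        System.out.println(suite.run() + " tests run");
    }
}
```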

We must elaborate further on the CustomerPaymentTest class in order to introduce the next key class of xUnit, the TestFixture class. The CustomerPaymentTest class is a typical test class containing more than one method; however, there are some repetitive actions it would be nice to consolidate. Specifically, in order to test the makePayment() function, a number of Payment objects must be created and the amount verified. To test the getTotalPaid() function, a number of Payment objects must also be created.

Ideally, duplication of code like this should be avoided. Built into the xUnit architecture is a way to handle a scenario like this. TestCase inherits from TestFixture, which is a class that contains methods called setup() and teardown(). Our CustomerPaymentTest class can override either or both of these methods and thus behave as a TestFixture object. Including the setup() method in the CustomerPaymentTest class will enable us to create the needed Payment objects necessary to run all of the test methods. Intuitively, introducing a teardown() function would serve the same purpose as setup(), except instead of performing common actions at the beginning of each test, it would perform common actions needed at the end of each test. Operating as a TestFixture, the order of execution in the CustomerPaymentTest class will be as follows:

setup()
testMakePayment()
teardown()
setup()
testGetTotalPaid()
teardown()

The setup() and teardown() methods not only solve the code duplication problem, they ensure that each test method is isolated from the others. Also, if a test method fails, the TestRunner object can ensure that any needed cleanup is done by running the teardown() method before beginning the next test.
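This ordering can be demonstrated directly (a hypothetical sketch with invented names; a real framework would discover and invoke the test methods by reflection rather than calling them explicitly):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of fixture ordering: setUp() runs before, and
// tearDown() after, every test method, keeping each test isolated.
public class FixtureOrderDemo {

    static final List<String> log = new ArrayList<>();

    static void setUp()            { log.add("setup"); }
    static void tearDown()         { log.add("teardown"); }
    static void testMakePayment()  { log.add("testMakePayment"); }
    static void testGetTotalPaid() { log.add("testGetTotalPaid"); }

    // Runs both tests, wrapping each in the fixture methods.
    static List<String> runAll() {
        log.clear();
        setUp(); testMakePayment();  tearDown();
        setUp(); testGetTotalPaid(); tearDown();
        return log;
    }

    public static void main(String[] args) {
        System.out.println(String.join("\n", runAll()));
    }
}
```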

The last key class of xUnit is the TestResult class. The Test interface contains a run() method which receives a TestResult object as a parameter. TestResult, as its name implies, has the sole purpose of accumulating results. The methods and static variables in TestResult include addError(), addFailure(), int errorCount, int failureCount, and int runCount. Every time a test method is run, the TestResult object is passed in to collect detailed results regarding the success or failure of the test. The results for each test accumulate in the TestResult object, and when the TestRunner has completed processing all of the test methods, the TestResult object is used to report all of the failures, errors and each associated stack trace in order to assist in debugging.

The flexibility of the xUnit architecture is one of its finest features. Mock objects (discussed in section 7.1.4) can be integrated into an xUnit framework, as can performance testing, which can measure the efficiency of algorithms used in the code. Framework methods such as the assert methods of TestCase can be easily extended to provide custom functionality. The architecture of xUnit has been so successful and has become so widely known that its concepts have spread rapidly to include types of testing beyond the scope of the 'classic' unit test, including functional, system and acceptance testing. There are also plug-ins to IDEs such as Eclipse that assist in the development of automated tests using xUnit frameworks. The possibilities for xUnit are truly unlimited.

7.1.3 Desktop GUI Testing with xUnit

The increasing complexity of GUIs has introduced some interesting dilemmas in the realm of automated testing. Automated GUI testing is relatively complicated when it comes to responding to user initiated events such as mouse clicks, etc. This complication extends to the testing of GUIs displayed through a browser, which is discussed in section 7.2.

xUnit frameworks can certainly be used to test GUIs; however, different strategies must be employed, and whether or not automated testing of the GUI is viable must be carefully considered. The advantages of automated testing can quickly become overshadowed by the time requirements that delicate GUI testing can have, especially when the GUI was not designed with automated testing in mind.

Testing a GUI involves testing the elements, including buttons, text boxes and windows, that make up the interface and simulating a user's interaction with them. A GUI object can potentially respond in any number of ways to user interaction by becoming disabled, enabled, visible, invisible, highlighted, displaying values, etc. Desktop GUI elements such as dialogs are often designed with toolkits that intend to make the job easier; however, these toolkits often include application logic within the GUI element that is hard to separate from the GUI behavior for testing purposes. Ideally, a GUI object should not contain any functional state, data validation, or other nontrivial logic. (19) All of the business logic for GUI elements should reside in 'smart objects' that handle the functional behavior of the GUI element and can be easily unit tested just like any other class. (20) If this is the case, then testing the GUI object would entail simply verifying the creation, display, and layout of the GUI object as well as text values, which can all be unit tested relatively easily.
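As a sketch of the 'smart object' approach (the class and validation rule here are invented for illustration), the logic a dialog would delegate to lives in a plain class with no GUI dependency, so it can be unit tested with no window present:

```java
// Hypothetical sketch of a 'smart object': the validation logic for a
// payment form lives in a plain class with no GUI dependency. The
// dialog would merely forward the raw field text to it.
public class PaymentFormLogic {

    // Returns true if the raw text from the amount field is a
    // positive monetary amount.
    public boolean isValidAmount(String rawText) {
        try {
            return Double.parseDouble(rawText) > 0;
        } catch (NumberFormatException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        PaymentFormLogic logic = new PaymentFormLogic();
        System.out.println(logic.isValidAmount("15.87")); // valid
        System.out.println(logic.isValidAmount("abc"));   // invalid
    }
}
```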

7.1.4 Database Testing and Mock Objects with xUnit

"When there is any way to test without a database, test without the database!" - Gerard Meszaros

On average, tests that require database interaction run two orders of magnitude slower than the same tests that run without a database. (18) Also, there are times when a testing database simply isn't available, either because of a lack of resources or because it simply hasn't been developed yet. Mock objects are an elegant way of dealing with situations like this and are not limited to databases alone. Mock objects are simulations of real external objects including web servers, network services, hardware devices and databases. (19) Mocks simulate the behavior of an external object as much as needed for the purpose of testing, and then validate that the code using them is calling the right methods, in the right order, with the right parameters. This validation sets mocks apart from stubs, which are objects that simply serve as stand-ins and do not perform any validation.

Mocking a database avoids the problems associated with using persistent data in automated testing as well as the overhead of dealing with an actual database engine. This section focuses on databases specifically, but the same principles apply to other external objects.

A simple implementation of a mock object for a database would include an interface (DBConnection) that includes all methods for interacting with a real database, including connecting to and closing an existing connection; a class (CustomerDB) under test that requires interaction with a database; and a mocked implementation (MockConnection) of DBConnection that can be used in place of a real connection. The open() and close() methods of MockConnection can be used to set flags indicating that the functions were in fact called. MockConnection should include an additional method, called by the test case, that returns a value indicating whether the mocked database is in the correct state; in this case, that both the open and close methods were called.
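A sketch of this arrangement, using the class names from the text (the method bodies and the verify() method for querying the mock's state are our own, hypothetical additions):

```java
// Hypothetical sketch of the mock described above: MockConnection
// records which methods were called, and verify() reports whether the
// code under test used the connection correctly.
interface DBConnection {
    void open();
    void close();
}

class MockConnection implements DBConnection {
    private boolean opened = false;
    private boolean closed = false;
    public void open()  { opened = true; }
    public void close() { closed = true; }
    // True only if the code under test both opened and closed us.
    public boolean verify() { return opened && closed; }
}

// Class under test: it accepts any DBConnection, so a test can
// substitute the mock for a real connection.
class CustomerDB {
    void refresh(DBConnection conn) {
        conn.open();
        // ... queries would go here ...
        conn.close();
    }
}

public class MockDemo {
    public static void main(String[] args) {
        MockConnection mock = new MockConnection();
        new CustomerDB().refresh(mock);
        System.out.println("mock verified: " + mock.verify());
    }
}
```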

Mocking a database can get complicated when considering the possible use of stored procedures in the real database, and the automatic generation of id numbers or data validation that a real database provides. If the application under test relies on functionality like this, then methods that model this functionality must be included in the mock object in order to effectively simulate the activity of the real database and any side-effects that may result. Despite these complexities, mock objects are an effective strategy for the automated testing of interoperability between internal and external objects. There are a few tools available to assist with creating mock objects; however, the most popular (jMock, MockRunner and EasyMock) are limited to use with a testing framework that is written in Java.

Now that the case for mock objects has been made, it must be noted that it will eventually become necessary to run tests with the real database; however, with a reliable mock object in place, only a small number of sanity tests with the real database will be required.

7.2 Web Application Testing

Writing automated functional and acceptance tests for web applications has a unique set of challenges. As in desktop GUI testing, the biggest challenges exist in dealing with GUI objects, in this case elements that are rendered within a browser. Every element in the GUI must have some sort of identifier in order for a test tool to recognize it and interact with it. There are ways to locate elements based on where they exist in the hierarchical structure of the page, but if this structure were to change (which is frequently the case), the element will not be found by any test script that refers to it, which will undoubtedly lead to a cascade of failing tests. Other complications that arise with testing browser-based applications are timing issues due to hardware or network performance and the complexity introduced by web technologies such as Ajax and Dynamic HTML.

As already discussed in the section on desktop GUI testing, an ideal situation is one where the application was designed for automated testing; in reality, this is often not the case. Fortunately, tools that assist in the testing of web applications have improved dramatically, and there are some very good options for those who wish to pursue an automated avenue. There are also some tools that, if used in the wrong manner, will be detrimental to the automated testing effort.

7.2.1 Watir

Watir stands for Web Application Testing in Ruby. (21) As the name implies, this framework contains a library of classes and methods written in Ruby that are used to test web applications. Ruby is a language that has quickly gained popularity, especially after the release of Ruby on Rails, a framework that has proven to be a breakthrough in the rapid development of database-driven web applications. Despite being written in Ruby, Watir is not specific to testing web applications written in Ruby. Although Watir requires Ruby to be installed in order to run, it will work on any application that serves up HTML pages.

Watir scripts can be written as standalone text files saved with an .rb extension and run from the command line, or integrated with Test::Unit, the Ruby member of the family of xUnit frameworks.

Below is a simple implementation of Watir within the Test::Unit framework.


require 'test/unit'
require 'watir'

class GoogleHomePage < Test::Unit::TestCase
  def test_there_should_be_text_About_Google
    ie = Watir::IE.start "http://www.google.com"
    assert(ie.text.include?("About Google"))
  end
end


Page elements can be identified by a number of different parameters including name, id, a combination of attributes, or even a regular expression. The attributes that can be used for identification are specific to the element type. The following is an example of Watir syntax:

ie.frame(:name, "menu").link(:text, "Click Menu Item").click

This command will simulate a click on the link "Click Menu Item" located in a frame named "menu" that is contained in the current browser window.

A significant advantage of Watir is that it is highly flexible and customizable. If scripts are written with the characteristics of 'good' tests that have been previously discussed in this paper, then Watir proves to be a robust and time-saving tool. As you can see from the syntax, it is also almost entirely self-documenting. The biggest weakness that Watir has right now is its limited browser support: Watir is limited to Internet Explorer 6 and 7, and to Windows operating systems. FireWatir is another test tool that claims to be Watir's counterpart for use in the Firefox browser, but the development efforts of these two tools are independent of each other. As an active open-source project, Watir is a very promising web application functional test tool.

7.2.2 Selenium

Selenium is a web application acceptance testing tool that was developed by ThoughtWorks, a leader in Agile development methods for enterprise software development. (22) Like Watir, Selenium is an in-browser acceptance tool. Selenium is more powerful than Watir in that it supports multiple browsers and operating systems, including Internet Explorer, Mozilla and Firefox on Windows, Linux and Mac, and Safari on the Mac. The developers of Selenium are currently working on compatibility with mobile Safari as well. Unlike Watir, Selenium tests can be written in several different languages. Currently, Selenium libraries are available in Java, Ruby, Python, Perl, PHP and .Net, and Selenium tests can be integrated into xUnit frameworks. Below is an example of a Selenium test integrated with JUnit.


import com.thoughtworks.selenium.*;
import junit.framework.*;

public class GoogleTest extends TestCase {

    private Selenium browser;

    public void setUp() {
        browser = new DefaultSelenium("localhost",
            4444, "*firefox", "http://www.google.com");
        browser.start();
    }

    public void testGoogle() {
        browser.open("http://www.google.com/webhp?hl=en");
        browser.type("q", "hello world");
        browser.click("btnG");
        browser.waitForPageToLoad("5000");
        assertEquals("hello world - Google Search",
            browser.getTitle());
    }

    public void tearDown() {
        browser.stop();
    }
}


A screenshot of the Selenium TestRunner is shown in Appendix F.

7.2.3 Record & Playback

There is a plethora of automated test tools that involve recording actions performed in a user interface. Testing of the interface is then implemented by simply playing back the recording. Although many improvements have been made to the record and playback technique, it still suffers from fundamental flaws, and there are many who argue it is a technique to be abandoned. The following are some of the reasons why tools that use record and playback are not considered cost-effective. (16)

- The scripts resulting from this method contain hard-coded values, which must change if anything at all changes in the application.
- The costs associated with maintaining such scripts are not worth their value.
- These scripts often fail on replay due to pop-up windows, messages, and timing issues.
- If the tester makes an error entering data, etc., the test must be re-recorded.
- If the application changes at all, the test must be re-recorded.

The argument for record and playback software is that tests can easily be created and run by staff that is not necessarily technically skilled, therefore reducing cost. It turns out, however, that many of these software products have a high learning curve and may even require specialized training to be used effectively.


8 Conclusion

It is important not to get caught up in the excitement of automated testing without first considering the discipline required to actually reap the benefits. Automated test scripts are themselves pieces of software that require programming knowledge and should model good programming practices. If they do not, they can fall into disarray as easily as the product under test, and there are plenty of stories of test automation abandoned because of a lack of planning or understanding of what it involves. Repeatable, understandable, maintainable and modular test scripts have been found to provide the most return on an automated testing investment. Supporting these characteristics is the well-defined architecture shared by xUnit frameworks, which can be implemented in practically any language. Utilizing a test framework effectively is essential to success with automated testing.

Although automated testing can make testing more interesting, it is not a solution to every test requirement. Automated testing does not completely eliminate the human aspect of ensuring the quality of a software product. There are certain kinds of tests that are simply too complex for automation, and the results from automated tests will still require analysis and a decision on how to proceed. There are three primary goals of automated testing: increased efficiency, consistency and frequency. Recognizing how automation can achieve these goals can help determine which tests should be automated and for which tests automation would provide little benefit, if any, to the testing effort.

9 References

1. Robinson, Neil. Life, the Universe, and Testing. parallax.blogs.com. [Online] Dec. 29, 2005. [Cited: Jan. 20, 2008.] http://parallax.blogs.com/parallax_calculating_tech/2005/12/life_the_univer.html.

2. Shebanow, Andrew. A Brief History of Test Frameworks. Shebanation. [Online] August 21, 2007. [Cited: March 9, 2008.] http://shebanation.com/2007/08/21/a-brief-history-of-test-frameworks/.

3. Camp Smalltalk SUnit. [Online] [Cited: March 8, 2008.] http://sunit.sourceforge.net.

4. Fowler, Martin. Bliki: Xunit. Martin Fowler. [Online] [Cited: March 6, 2008.] http://www.martinfowler.com/bliki/Xunit.html.

5. Guillemot, Marc and Konig, Dierk. Web Testing Made Easy. Portland : OOPSLA'06, 2006. ACM 1-59593-491-X/06/0010.

6. Pettichord, Bret. Success with Test Automation. [Online] June 28, 2001. [Cited: March 25, 2008.] http://www.io.com/~wazmo/succpap.htm.

7. Duffie, Kingston. Why Automate Testing? Embedded Computing Design Resource Guide. August 2006.

8. Hewlett-Packard Development Company, L.P. Best practices for implementing automated functional testing solutions. [Whitepaper]. April 2007.

9. Kelly, Dave. Software Test Automation and the Product Life Cycle. MacTech Magazine. October 1997, Vol. 13, 10.

10. Black, Rex. Pragmatic Software Testing. Indianapolis : Wiley Publishing, Inc., 2007.

11. Fowler, Martin. Continuous Integration. MartinFowler.com. [Online] May 1, 2006. [Cited: March 22, 2008.] http://www.martinfowler.com/articles/continuousIntegration.html.

12. Pettichord, Bret. Seven Steps to Test Automation Success. [Online] June 26, 2001. [Cited: March 25, 2008.] http://www.io.com/~wazmo/papers/seven_steps.html.

13. McNaughton, Allan. Avoiding the Pitfalls of Automated Testing. Mason, OH : s.n., 2007.

14. Marick, Brian. Testing.com. [Online] 1998. [Cited: March 25, 2008.] http://www.testing.com/writings/automate.pdf.

15. Seapine Software, Inc. Automated Testing Best Practices. Mason, OH : s.n., September 2007.

16. Zambelich, Keith. Totally Data-Driven Automated Testing. Automated Testing Specialists. [Online] 1998. [Cited: March 26, 2008.] http://www.sqa-test.com/w_paper1.html.

17. Kaner, Cem. Improving the Maintainability of Automated Test Suites. [Online] 1997. [Cited: March 22, 2008.] http://www.kaner.com/pdfs/autosqa.pdf.

18. Meszaros, Gerard. xUnit Test Patterns: Refactoring Test Code. Boston : Pearson Education, Inc., 2007.

19. Hamill, Paul. Unit Test Frameworks. Sebastopol : O'Reilly Media, Inc., 2004.

20. Feathers, Michael. The Humble Dialog Box. Object Mentor. [Online] 2006. [Cited: Feb 29, 2008.] http://www.objectmentor.com/resources/articles/TheHumbleDialogBox.pdf.

21. Kohl, Jonathan and Filipin, Zeljko. Watir Tutorial. Open QA. [Online] 2008. [Cited: February 22, 2008.] http://wiki.openqa.org/display/WTR/Tutorial.

22. Open QA. Selenium Overview. [Online] 2008. [Cited: Feb 22, 2008.] http://selenium-rc.openqa.org/.

23. Lead QA - Automating Testing. ComputerJobs.com. [Online] [Cited: March 9, 2008.] http://www.computerjobs.com/job_display.aspx?jobid=2069993&siteid=139&sort=pd&view=s&searchid=116405438&page=1&published=.







10 APPENDICES

10.1 APPENDIX A: Some xUnit Family Members (18)

Note: A comprehensive list of testing software can be found at http://xprogramming.com/software.htm.


ABAP Object Unit

The member of the xUnit family for SAP's ABAP programming language. ABAP Object Unit is more or less a direct port of JUnit to ABAP, except for the fact that it cannot catch exceptions encountered within the system under test (SUT). ABAP Object Unit is available for download at http://www.abapunittests.com, along with articles about unit testing in ABAP. See ABAP Unit for versions of SAP/ABAP starting with 6.40.



ABAP Unit

The member of the xUnit family for versions of SAP's ABAP programming language start
ing with Basis version
6.40 (NetWeaver 2004s). The most notable aspect of ABAP Unit is its special support that allows tests to be stripped
from the code as the code is "transported" from the acceptance test environment to the production environment.

ABAP
Unit is available directly from SAP AG as part of the NetWeaver 2004s development tools. More
information on unit testing in ABAP is available in the SAP documentation and from
http://www.abapunittests.com
.
See ABAP Object Unit for versions of SAP/ABAP pri
or to Basis version 6.40 (NetWeaver 2004s).



CppUnit

The member of the xUnit family for the C++ programming language. It is available for download from http://cppunit.sourceforge.net.

Another option for some .NET programmers is NUnit.


CsUnit

The member of the xUnit family for the C# programming language. It is available from http://www.csunit.org. Another option for .NET programmers is NUnit.


CUnit

The member of the xUnit family for the C programming language. Details can be found at http://cunit.sourceforge.net/doc/index.html.


DbUnit


An extension of the JUnit framework intended to simplify testing of databases. It can be downloaded from http://www.dbunit.org/.


IeUnit

The member of the xUnit family for testing Web pages rendered in Microsoft's Internet Explorer browser using JavaScript and DHTML. It can be downloaded from http://ieunit.sourceforge.net/.


JBehave

One of the first of a new generation of xUnit members designed to make tests written as part of TDD more useful as Tests as Specification. The main difference between JBehave and more traditional members of the xUnit family is that JBehave eschews the "test" terminology and replaces it with terms more appropriate for specification; that is, "fixture" becomes "context," "assert" becomes "should," and so on. JBehave is available at http://jbehave.codehaus.org. RSpec is the Ruby equivalent.


JUnit

The member of the xUnit family for the Java programming language. JUnit was rewritten in late 2005 to take advantage of the annotations introduced in Java 1.5. It can be downloaded from http://www.junit.org.




MbUnit


The xUnit family member for the C# programming language. MbUnit's main claim to fame is its direct support for Parameterized Tests. It is available from http://www.mbunit.com. Other options for .NET programmers include NUnit, CsUnit, and MSTest.
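MbUnit itself is a .NET tool, but the Parameterized Test idea it popularized translates directly to other xUnit members. As a rough sketch using Python's standard-library unittest (the class and method names below are illustrative), one test body runs over many data rows via subTest:

```python
import unittest

class MultiplyTest(unittest.TestCase):
    """A Parameterized Test in the MbUnit spirit: one test body, many rows."""

    def test_doubling(self):
        # Each (value, expected) pair plays the role of one parameter row.
        cases = [(0, 0), (2, 4), (5, 10), (-3, -6)]
        for value, expected in cases:
            with self.subTest(value=value):
                self.assertEqual(value * 2, expected)

if __name__ == "__main__":
    unittest.main()
```

A failing row is reported individually with its parameter values, which is the main benefit of parameterization over a plain loop of assertions.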

MSTest

Microsoft's member of the xUnit family does not seem to have a formal name other than its namespace, Microsoft.VisualStudio.TestTools.UnitTesting, but most people refer to it as MSTest. Technically, that is just the name of the Command-Line Test Runner, mstest.exe. MSTest's main claim to fame is that it ships with Visual Studio 2005 Team System. It does not appear to be available in the less expensive versions of Visual Studio or for free download. MSTest includes a number of innovative features, such as direct support for Data-Driven Tests. Information is available on MSDN at http://msdn.microsoft.com/en-us/library/ms182516.aspx. Other (and cheaper) options for .NET programmers include NUnit, CsUnit, and MbUnit.


NUnit

The member of the xUnit family for the .NET programming languages. It is available from http://www.nunit.org. Other options for C# programmers include CsUnit, MbUnit, and MSTest.


PHPUnit

The member of the xUnit family for the PHP programming language. According to Sebastian Bergmann, "PHPUnit is a complete port of JUnit 3.8. On top of this original feature set it adds out-of-the-box support for Mock Objects, Code Coverage, Agile Documentation, and Incomplete and Skipped Tests." More information about PHPUnit can be found at http://www.phpunit.de, including the free book on PHPUnit.


PyUnit

The member of the xUnit family written to support Python programmers. It is a full port of JUnit. More information can be found at http://pyunit.sourceforge.net/.
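Because PyUnit is a direct port, the familiar JUnit pattern carries over: setUp builds the fixture, and each test method exercises it with assertions. A minimal sketch (the stack under test and all names are illustrative):

```python
import unittest

class StackTest(unittest.TestCase):
    """A minimal PyUnit test case following the JUnit pattern."""

    def setUp(self):
        # A fresh fixture is built before every test method, as in JUnit.
        self.stack = []

    def test_new_stack_is_empty(self):
        self.assertEqual(len(self.stack), 0)

    def test_push_then_pop_returns_last_item(self):
        self.stack.append("a")
        self.stack.append("b")
        self.assertEqual(self.stack.pop(), "b")

if __name__ == "__main__":
    unittest.main()
```

Running the module executes both test methods independently, each against its own fixture.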


RSpec

One of the first of a new generation of xUnit members designed to make tests written as part of TDD more useful as Tests as Specification. The main difference between RSpec and more traditional members of the xUnit family is that RSpec eschews the "test" terminology and replaces it with terms more appropriate for specification; for example, "fixture" becomes "context," Test Methods become "specify," "assert" becomes "should," and so on. RSpec is available at http://rspec.rubyforge.org. JBehave is the Java equivalent.


RUnit

One member of the xUnit family for the Ruby programming language. It is a wrapper on Test::Unit that adds additional functionality. It is available at www.rubypeople.org.


SUnit

The self-proclaimed "mother of all unit-testing frameworks." SUnit is the member of the xUnit family for the Smalltalk programming language. It is available for download at http://sunit.sourceforge.net.


Test::Unit

The member of the xUnit family for the Ruby programming language. It is available for download from http://www.rubypeople.org and comes as part of the "Ruby Development Tools" feature for the Eclipse IDE framework.


TestNG

A member of the xUnit family for Java that behaves a bit differently from JUnit. TestNG specifically supports dependencies between tests and the sharing of the test fixture between Test Methods. More information is available at http://testng.org.




utPLSQL

The member of the xUnit family for the PLSQL database programming language. You can get more information and download the source for this tool at http://utplsql.sourceforge.net/. A plug-in that integrates utPLSQL into the Oracle toolset is available at http://www.ounit.com.


VB Lite Unit

Another member of the xUnit family written to support Visual Basic and VBA (Visual Basic for Applications).

"VB Lite Unit is a reliable, lightweight unit-testing tool for Visual Basic and VBA written by Steve Jorgensen. The driving principle behind VB Lite Unit was to create the simplest, most reliable unit-testing tool possible that would still do everything that usually matters for doing test-driven development in VB 6 or VBA. Things that don't work or don't work reliably in VB and VBA are avoided, such as attempts at introspection to identify the test methods."

Another option for VB and VBA programmers is VbUnit. For VB.NET programmers, options include NUnit, CsUnit, and MbUnit.


VbUnit

The member of the xUnit family written to support Visual Basic 6.0. It was the first member of the xUnit family to support Suite Fixture Setup and introduced the concept of calling a Testcase Class a "test fixture."

One major quirk of VbUnit is that when an Assertion Method fails the test, it writes the messages into the failure log immediately rather than just raising an error that is then caught by the Test Runner. The practical implication of this behavior is that it becomes difficult to test Custom Assertions, because the messages in the logs are not prevented by the normal Expected Exception Test construct. The work-around is to run the Custom Assertion Tests inside an "Encapsulated Test Runner." Another quirk is that VbUnit is one of the few members of the xUnit family that is not free (as in beer). It is available from http://www.vbunit.org. There used to be a free version available; who knows, it may reappear some day.

Another option for VB and VBA programmers is VB Lite Unit. For VB.NET programmers, options include NUnit, CsUnit, and MbUnit.






10.2 APPENDIX B: Some xUnit Extensions (19)


XMLUnit

An xUnit extension to support XML testing. Versions exist as extensions to both JUnit and NUnit.


JUnitPerf

A JUnit extension that supports writing code performance and scalability tests. It is written in and used with Java.


Cactus

A JUnit extension for unit testing server-side code such as servlets, JSPs, or EJBs. It is written in and used with Java.


JFCUnit

A JUnit extension that supports writing GUI tests for Java Swing applications. It is written in and used with Java.


NUnitForms

An NUnit extension that supports GUI tests of Windows Forms applications. It is written in C# and can be used with any .NET language.


HTMLUnit

An extension to JUnit that tests web-based applications. It simulates a web browser, and is oriented towards writing tests that deal with HTML pages.


HTTPUnit

Another JUnit extension that tests web-based applications. It is oriented towards writing tests that deal with HTTP request and response objects.


Jester

A helpful extension to JUnit that automatically finds and reports code that is not covered by unit tests. Versions exist for Python (Pester) and NUnit (Nester). Many other code coverage tools with similar functionality exist.






10.3 APPENDIX C: Employment Advertisement for Automated Tester (23)





Lead QA - Automated Testing


We are looking for two QA Leads for a CTH or Fulltime position near downtown Dallas. You will be on the ground floor of this newly created QA Department for an established industry leader. This is the perfect opportunity for the right candidate.

Both team leads need to have a wide variety of testing experience and 6-8 years of experience. Candidates must have at least 2 years of lead experience where you have lead and directed day-to-day testing activities and report status to the Manager. One person will lead a team of manual testers; the other will lead the team of automated testers. The manual test lead must have a familiarity with the test management tool Quality Center. The Automation Test lead must have deep knowledge and experience with Quality Center, QTP and Load Runner. Mercury Certified is highly desired. Both leads need to have some experience with a version control tool or a willingness to learn. You also must be excellent at interacting with people and extracting the correct information. You will be writing test cases and test scripts.


--------------------------------------------------------------------------------


Required Skills:

REQUIRED:
6+ years software testing
2+ years leading software testing
Mercury Quality Center
Combination of QTP, Load Runner, Test Director, etc.

PREFERRED:
Mercury Certification (STRONGLY PREFERRED)
Testing in .NET Environment
Testing in AS/400 Environment
Source Control Tools (eg, SourceSafe)
Homebuilding software

Benefits:
Health Insurance, Life Insurance, Dental Insurance, Disability Insurance, Paid Vacation, Paid Sick Leave, 401(k), Tuition Reimbursement, Paid Training, Flex Time, Casual Dress, Flexible Spending Account, Vision Insurance


Industry: Manufacturing
Pay Rate: $75,000.00 - $95,000.00
Emp. Type: Full Time
Travel: No Travel
Location: Dallas, TX
Overtime Pay: Straight Time
Job Number: R093-6377
Date Posted: Wednesday, April 16, 2008



GoTechNow



Attention: Recruiter

Call: please email

Fax: please email

Email: Send an email to
jobsbb@gotechnow.com







10.4 APPENDIX D: Sample spreadsheet for a “Key-Word” or “Test Plan Driven” test


| COLUMN 1     | COLUMN 2          | COLUMN 3                | COLUMN 4               | COLUMN 5  |
| Key_Word     | Field/Screen Name | Input/Verification Data | Comment                | Pass/Fail |
|--------------|-------------------|-------------------------|------------------------|-----------|
| Start_Test:  | Screen            | Main Menu               | Verify Starting Point  |           |
| Enter:       | Selection         | 3                       | Select Payment Option  |           |
| Action:      | Press_Key         | F4                      | Access Payment Screen  |           |
| Verify:      | Screen            | Payment Posting         | Verify Screen accessed |           |
| Enter:       | Payment Amount    | 125.87                  | Enter Payment data     |           |
|              | Payment Method    | Check                   |                        |           |
| Action:      | Press_Key         | F9                      | Process Payment        |           |
| Verify:      | Screen            | Payment Screen          | Verify screen remains  |           |
| Verify_Data: | Payment Amount    | $ 125.87                | Verify updated data    |           |
|              | Current Balance   | $1,309.77               |                        |           |
|              | Status Message    | Payment Posted          |                        |           |
| Action:      | Press_Key         | F12                     | Return to Main Menu    |           |
| Verify:      | Screen            | Main Menu               | Verify return to Menu  |           |



The data in red indicates what would need to be changed if one were to copy this test case to create additional tests.

Test example reproduced from Totally Data-Driven Testing by Keith Zambelich.
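A spreadsheet like the one above is typically consumed by a small interpreter that walks the rows, dispatches on the keyword, and fills in the Pass/Fail column. The sketch below is a minimal Python illustration; the toy in-memory application model (the app dict and its keys) and the keyword handling are invented for this example, not taken from any real tool:

```python
def run_keyword_test(rows, app):
    """Walk spreadsheet rows (keyword, field, data, comment) and record a
    Pass/Fail result per row, as column 5 of the sample test implies."""
    results = []
    for keyword, field, data, comment in rows:
        if keyword == "Verify:":
            ok = app["screen"] == data          # screen name must match
        elif keyword == "Verify_Data:":
            ok = app["fields"].get(field) == data
        elif keyword == "Enter:":
            app["fields"][field] = data         # simulate data entry
            ok = True
        elif keyword == "Action:":
            app["last_key"] = data              # a real driver sends the key
            ok = True
        else:  # Start_Test: just confirms the starting screen
            ok = app["screen"] == data
        results.append((comment, "Pass" if ok else "Fail"))
    return results

# A few rows from the sample test, driven against the toy application model.
app = {"screen": "Main Menu", "fields": {}, "last_key": None}
rows = [
    ("Start_Test:", "Screen", "Main Menu", "Verify Starting Point"),
    ("Enter:", "Selection", "3", "Select Payment Option"),
    ("Action:", "Press_Key", "F4", "Access Payment Screen"),
]
print(run_keyword_test(rows, app))
```

Because the keywords, not the scripts, encode the test logic, non-programmers can add test cases by editing the spreadsheet alone, which is the main selling point of this approach.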






10.5 APPENDIX F: The Selenium TestRunner








10.6 APPENDIX G: Review Questions


1) What are some of the main reasons that automated testing is pursued by a software development team?

The primary goals of automated testing are increased efficiency, relating to savings in time and resources; consistency, which implies that bugs will be caught if introduced; and the ability to run tests frequently, which allows developers to utilize continuous integration techniques.


2) What types of tests would be considered good candidates for automation?

Tests that are repetitive, tests that require a large amount of data entry, and low-level unit tests should all be considered for automation.


3) What are the six key classes that form the xUnit architecture?

xUnit frameworks all utilize essentially the same architecture, patterns, and concepts, and they all generally include the same set of key classes: TestCase, TestRunner, TestFixture, TestSuite, and TestResult, as well as an interface, Test, which both TestCase and TestSuite implement.


4) What does the acronym RUMM stand for and how does it relate to automated testing?

R - Repeatable, U - Understandable, M - Maintainable, M - Modular. These are characteristics that automated test scripts should have in order to bring the most value to a development project.


5) Is a record and playback method a good choice for automating tests involving a GUI? Why or why not?

In general, the record and playback method of developing test scripts has not been very reliable or efficient. A recorded script depends on every element of the GUI remaining static; if any changes are made, the test will likely have to be re-recorded. Also, recorded scripts do not deal well with changes in the environment or with timing issues.