
A Data-Driven Automated Test Design Using Watir and Rasta

Neil Gregg Yows III, MSE, PMP
4901 Travis Country Circle
Austin, TX 78735 USA




Abstract

In order to remain flexible to change and provide the scalability necessary to drive hundreds of variations of the same test case through a system under test, test automation which drives software applications from the presentation layer must be designed such that variable data and application object information are separated from the automation code. Because of its flexibility, rich library of modules and enthusiastic user community, Ruby, an open-source, pure object-oriented, interpreted development language, has been adopted by many automation engineers in the Software Quality Assurance industry. Ruby extensions like Watir and Rasta have been created by the open-source community to meet these requirements for test frameworks with minimal effort and zero cost. This paper describes just one way to leverage these open-source Ruby “gems”, as well as the OO properties of Ruby, to quickly create a flexible and scalable automated test suite.



I. Introduction

Given the time and effort many organizations now devote to test automation, software test automation engineers must frequently ask themselves if they are getting the most test coverage possible out of the time spent developing their automated test solutions.

Many times, I have worked the “problem” of driving our various AUTs (Applications Under Test) through their “green path” only to realize that, after successfully accomplishing this task, I had not created very many real tests! Sure, driving the application itself through various high-level use cases is a valuable form of regression. But I tend to get so hung up in making our functional test framework flexible and scalable that sometimes I have to stop and remind myself to take advantage of the opportunities that my whiz-bang test design has provided.

As automated test developers, we need to be digging deeper into the various parts of the AUT whenever it is practical to do so. The flexible tools we create need to be leveraged for all they are worth, not just appreciated for their glorious agility. Neither the developers nor the management team care how elegant and flexible our frameworks are. As a colleague of mine, Jim Mathews, likes to say, “They just want to see lots of ‘green’ cells in a spreadsheet.”


II. Application Under Test

For the remainder of this paper, I will discuss various thought processes, decisions and automation solutions as they pertain to a particular Website that I describe in this section.

My company sells insurance policies. Our primary “Web Portal” is used by insurance agents to remotely enter data which will return to them a quote for an insurance policy that has been customized based on their selections in the application wizard. The returned quote includes an estimated premium and commission for the agent.

The key feature in the Web application is the quote-generation “wizard”. It is this feature that will enable the strategy outlined in this paper to work.

Many Websites use some sort of a “wizard” approach to finding a solution for their users. Shopping cart applications (www.amazon.com), store locators (www.pizzahut.com) and car value estimators (www.kbb.com) are just a few examples of how Web wizards drive customers to their desired information, purchase or answer. It is the uniformity of this Web design pattern that allows the test automation approach described in this paper to have a more global application.



III. Initial Solution

Recently, I completed a first pass through our latest Web application. I was quite proud of myself for successfully leveraging a new Ruby-based data-driven test framework called “Rasta” [1]. Rasta was developed for the open-source community by Hugh McGowan and Rafael Torres. The acronym stands for “Ruby Spreadsheet Test Automation”. I used Rasta to assist in driving my company’s Web portal application through the creation of an insurance claim.

Rasta is a FIT-like framework for developing data-driven test cases in Excel. It allows a test developer to extract the data needed to run various test cases from the test code which, in the case of Rasta, is known as a test “fixture”. The test fixture is the code which looks into the spreadsheet in order to determine which test methods will be executed and which data will be used in those test methods.


Test cases are executed based on the organization of the spreadsheet. The execution order runs from the top-left of the first worksheet to the bottom-right of the last worksheet in the target workbook. The individual worksheet tab names refer to the classes you develop in your Ruby-based test “fixture”. You can design your individual test procedures any way you wish within a horizontal or vertical construct in Excel. This means that your first design decision is how you want to lay out your test cases. I typically choose the vertical approach, meaning test cases are isolated into columns that get executed from left to right (see Figure 1).







Figure 1. Test case layout



This approach saves me from having to scroll left and right so much whenever I am designing and testing the framework. If you are used to writing your test frameworks from a test/unit perspective (like most Watir and test/unit tutorials describe), using Rasta will force you to think a bit differently about how you organize your test objects in your Ruby code. The test designer needs to look at the spreadsheet to set up the order in which test cases will be executed, instead of the order in which test methods are laid out in test/unit classes. Data is passed from the spreadsheet into the test fixture (which is simply a collection of Ruby classes) via standard Ruby accessors. Methods are invoked by Rasta in the order they are listed on the spreadsheet.
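
To make this concrete, a vertical worksheet tab named “Login” would map onto a fixture class like the following minimal sketch (the accessor and method names here are hypothetical, not taken from my actual framework); each row header corresponds to either an accessor (data) or a method (action):

    class Login
      # Rasta populates these from the data rows of the worksheet
      attr_accessor :username, :password

      # Rasta invokes methods in the order their row headers appear
      def open_login_page
        # navigate the browser to the AUT's login page here
      end

      def submit_credentials
        # type @username and @password and click the login button here
      end
    end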


Once again, organization in the spreadsheet matters because it reflects the order in which the test methods that perform the actions on the AUT will be executed. If you don’t get this right, you will be trying to hit a “Pay” button before logging in!



One of the neatest features about using Rasta, and this goes for any data-driven test framework that uses Excel, is that the test developer can leverage all of the built-in features of Excel to help simplify test case creation. This comes in handy when handing the test framework off to a non-developer for maintenance and additional test case creation.



For example, let’s say a form in your AUT contains several drop-down selection boxes. These selection options are, by design, limited by the configuration defined by the AUT (i.e. a user can only select “Texas”, “Montana” or “Oregon” within the “State” selection box). When designing the test scenario in Rasta, these values can be linked via Excel’s built-in cell validation feature so that only valid option values can be selected as test case attributes in that particular cell in the spreadsheet. Figure 2 shows how this “lookup” table may be laid out.




Figure 2. Select options


The column headers represent the “name” which is given to the data set listed below. This name is defined in Excel and is associated with this set of cells. Excel has pretty good instructions in the built-in help for how to create these cell range “names”. Once a name (cell range) is defined, the Excel cell validation feature can then be used in the cells of your test scenarios (Figure 1) to restrict the data in your test scenario to only those values that are valid in the AUT.
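
As a concrete illustration (the range name here is made up): select the cells containing Texas, Montana and Oregon, define a name such as StateList for that range, then, on each “State” cell of a test scenario column, open Data Validation, set “Allow” to “List” and enter =StateList as the source. That cell will then offer only the three valid states in a drop-down.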


Even though this approach takes more up-front effort, once developed, it allows a tester to simply copy the whole test scenario over to a new column, which generates a new, unique test case in a matter of seconds by merely toggling one or more of the pre-defined cell options. In this way, the test developer can generate test cases without even having access to the AUT. One can put even more logic here so that the allowable values are further restricted based on other selections made in that particular test case. This, too, could be done using Excel formulas for cell validation.



IV. Design Improvements

In the Introduction section of this paper, I described a problem pattern that many test automators, myself included, seem to be prone to repeating over and over again. That pattern is to develop an automation framework where we drive our AUT with a broad use case in mind. We are interested mostly in driving the app with “no hands” instead of designing our frameworks to dig deep into each particular feature.


Lucky for me, I am privileged to have the opportunity to work with colleagues who dwarf my test experience by many years. These are true testers at heart, whereas I tend to get carried away by the exercise of test development. When I presented my framework to Barton Layne, a fellow tester and past author of articles in Better Software Magazine, among other publications, he exposed my lack of vision for testing the AUT and my selfish desire to automate rather than test.


I knew that I had fallen into the trap I describe above. I was carried away with the “cool” factor and didn’t see the opportunity to “dig deep” that was staring me squarely in the face. I’m not one to back away from a challenge, so I went back to think more on how I could make the test framework more “vertical”.


The first problem that quickly became apparent to me was the way I had implemented my tests in the spreadsheet. In my initial design, each use case is represented by a tab (worksheet) in the spreadsheet. Rasta looks at the name of each worksheet to determine in which test fixture (class) to look in order to find the methods called in that worksheet. I had initially decided that with this approach, I would end up with fewer spreadsheets (possibly even one master spreadsheet) with one use case per tab.

I could then add variation to each use case in the form of test cases. Each use case would correspond to a single Ruby class in the fixture files.



I knew something was wrong when my first class ended up with 50 data objects, each requiring its own accessor and a corresponding definition in column two of my spreadsheet. It was obvious that the ten or so test methods required for executing this use case were trying to do too much within a single fixture. It was difficult to read and manipulate. It also seemed fragile.


It was true, however, that each test case did a lot of work in the AUT. It cranked through our entire process to create an insurance quote. I could easily copy each column into a new test case, change a bit here or there with a drop-down linked to my value list, and another path down that use case could be created.

But what if I just wanted to run through five new failure conditions at step 3? With the current architecture, this scenario would leave much of my test fixture unused. This isn’t a huge deal, but I wanted to be able to more easily identify and isolate where in the fixture I would make code changes based on future changes in the AUT. My first approach would send me back to the massive test fixture, where I would surely fat-finger a random character somewhere else in the class and break the whole thing (see Figure 3).





Figure 3. Original Design




Instead of defining each worksheet (and, subsequently, each class) in my framework as a use case, I felt that I could go much deeper using the same basic concept if I backed out a bit and defined the use case at the workbook level instead. This approach would allow me to build test fixtures that correspond to each state or step in the process of creating my insurance quote (see Figure 4). I could then drill down at each step in the process instead of being tethered to the overall use case. Plus, it would be much easier to add detailed test cases at this level. In other words, I could “go deep”. With this new design approach, the parameters for all test cases on a given worksheet/fixture are the same, which makes it easier for me to see and debug problems.







Figure 4. Final Design



The critical flaw in my original design had to do with me thinking, for some reason, that everything should be run from a single spreadsheet. Why? Who says? As a developer, one can sometimes get overly concerned with avoiding repetition, associating it with duplication of code. Since using Rasta ties the design to the spreadsheets, I was thinking that duplicating the inputs and outputs in separate spreadsheets, even though each spreadsheet would be executing an entirely different use case, would be an undesirable approach. In reality, data-driven tests by their very nature are designed to be executed in this fashion. My thoughts are validated in the book “Lessons Learned in Software Testing” [2], where the authors write, “After you’ve put together a data-driven test procedure, you can use it again and again to execute new tests. The technique is most powerful with product workflows that have lots of different data options.” This is the case in this particular situation. There are few flows, but lots of data options. Breaking out these options into logical test fixtures just makes sense to me.


My next step was to break out the test fixtures into the various states of the quote creation process so they would correspond to my new spreadsheet layout. These states are (for simplicity’s sake): Login -> Enter High-Level Info (Step 1) -> Enter Detailed Item Data (Step 2) -> Rate Quote (Step 3) -> Check Return Values (Step 4). I simply had to break my “once through” fixture into all of its smaller “states”.




It soon became apparent that by the time I reached Step 1, I still needed all of the objects that I had created in the Login fixture. This is so I can execute a positive path through Login to get me to Step 1. What to do? I figured now was as good a time as any to leverage some of the power of the object-oriented (OO) features with which, since falling in love with Ruby, I have recently become obsessed.


In a nutshell, I had each test fixture inherit all of the objects from the test fixture performing the test steps immediately before it. In this way, I was able to isolate each step into its own class/fixture, and at each step all methods are available to: 1) get to the step, and 2) execute tests on that step. The nice thing is that each fixture contains only those objects needed for executing tests at that specific step. All other objects called from Rasta to “get you there” in the AUT live in the superclass from which each fixture has inherited all of its attributes. See Figure 5 for an illustration of how this is done.


    class TestFixture < TestFixture1
      def initialize
        super
        @@gui = ApGuiMap.new   # initialize gui map
      end
    end

Figure 5. Set up inheritance and instantiate the GUI map



The “super” keyword is used to set up the parent/child hierarchy. I have not had to yet, but any inherited method can still be overridden, if necessary.


Rasta includes the FIT-like before_all, before_each, after_all and after_each methods to handle the setup and teardown of each test case. I do all of the setup and teardown in the “Login” fixture since it is always a parent class. I question this design decision and may eventually pull this out into its own namespace. Still, it has not proven to be a problem after completing a few hundred test cases. I also initialize my global variables here. While using global variables is generally a bad programming practice, in test automation it is sometimes a necessary evil.
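
As a sketch of what that setup and teardown might look like in the “Login” fixture (the browser handling is an assumption based on the classic Watir::IE API, and the URL is hypothetical, not a copy of my production code):

    require 'watir'   # classic Watir (Internet Explorer driver)

    class Login
      def before_all
        # one browser session for the whole run; global by design (see above)
        $browser = Watir::IE.start('http://portal.example.com')
      end

      def after_all
        $browser.close   # tear the session down when the tests finish
      end
    end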


Since Rasta is tied to the class objects in the test fixture, it is not possible to simply include your other test fixtures at the top of your “child” fixtures and expect Rasta to be able to recognize them. This concerned me a bit when I started thinking about it, because I began to suspect that setting up this object model was completely unnecessary after all. I was relieved when I commented out the “<” and everything broke!


Another design decision that I struggled with was whether or not to use a “GUI map”. I wanted to have one place where all of my GUI objects would be defined with their respective Watir identification data. According to the author of Watir, Bret Pettichord, “Watir is an open-source library for automating web browsers. It allows you to write tests that are easy to read and maintain. It is simple and flexible.” [5] By keeping most of the Watir calls in one place, I could more easily maintain changes to the AUT.


I decided to lift another strategy from one of my colleagues, Jim Mathews, and put the GUI map in a separate class (this could also be done as a module). All buttons, fields, menus, tables, etc. are defined as methods in this class. This class is instantiated at the beginning of each child class as an instance variable (see Figure 5). When you need to act on these objects, simply call the Watir method you want on the object itself. This approach follows the “Façade” design pattern by creating an interface class to the GUI elements and abstracting out the actual Watir code that defines these elements. I used a naming convention that identifies the objects (e.g. btn_xxx, tbl_xxx, mnu_xxx, etc.).
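
A minimal sketch of such a GUI map class (the element identifiers are hypothetical, and the calls assume the classic Watir::IE API):

    class ApGuiMap
      # only the map knows how Watir locates each element in the page;
      # $browser is the session opened in the Login fixture
      def btn_next
        $browser.button(:value, 'Next')
      end

      def txt_username
        $browser.text_field(:name, 'username')
      end
    end

A fixture method can then write @@gui.txt_username.set('agent01') followed by @@gui.btn_next.click; nothing outside the map needs to know how “Next” is actually located.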

This approach allows for reuse of objects that have already been defined. For example, I was able to reuse the “Next” button definition in each fixture that defined a step in the Wizard process. If our development team decides to change this to a hyperlink, or even change the title of the button to “Next Step”, all that has to be done is change the single reference in the GUI map.


After a while, however, the GUI map class does become large. Using a good IDE like NetBeans [3] or Eclipse [4] can make navigating your GUI map class a breeze. Also, one could add RDoc comments to this GUI map class.



V. Conclusion

This is just one of a number of ways to handle this particular problem. The nice thing is that I can now easily identify where to make my changes in the framework whenever changes to the AUT occur. This approach also makes debugging my fixtures and test cases much simpler.

As an added bonus, I can also do much more specific regression testing. If a change is deployed in just one of these steps (and that is how they usually come down), I know where to go to add tests or test methods. I can also run the regression in isolation.

I am certain there are many things I have overlooked or taken the “long way” to achieve. But it works. And it allows me to hand a spreadsheet to my boss with many more “colored” cells on it!



References

[1] Rasta. http://rasta.rubyforge.org. 2008.

[2] Kaner, Bach, Pettichord. Lessons Learned in Software Testing. New York: Wiley, 2002.

[3] Welcome to NetBeans. http://www.netbeans.org/. 2008.

[4] Eclipse.org Home. http://www.eclipse.org/. 2008.

[5] Watir. http://wtr.rubyforge.org/. 2008.