Agile Testing Tactics


Nov 5, 2013


Agile Testing Tactics for WaterFail, FrAgile, ScrumBut, & KantBan

Tim Lyon
TimLyon@GameStop.com
http://www.linkedin.com/in/timlyon


Epic Fails with Application Lifecycle Management (ALM) and QA

WaterFail Project Example

FrAgile Project Example

ScrumBut Project Example

KantBan Project Example

Learnings: No Perfect Agile QA Process


Several QA steps can help address agile development projects and risk:

- Automate, automate, & automate
- Make test planning effective instead of extensive
- Maximize test cases with probabilistic tests & data
- Scheduled session-based exploratory & performance testing
- Simple, informative reporting & tracking of testing




Agile Automation Considerations

Why Don't We Automate?

General Human-Error Probability Data in Various Operating Conditions
(Source: "Human Interface/Human Error", Charles P. Shelton, Carnegie Mellon University, 18-849b Dependable Embedded Systems, Spring 1999, http://www.ece.cmu.edu/~koopman/des_s99/human/)

The right-hand columns scale each probability to expected errors per 1000 lines of code (LOCs) and per 100 test cases (TCs).

Description | Error Probability | Errors per 1000 LOCs | Errors per 100 TCs
General rate for errors involving high stress levels | 0.3 | 300 | 30
Operator fails to act correctly in the first 30 minutes of an emergency situation | 0.1 | 100 | 10
Operator fails to act correctly after the first few hours in a high stress situation | 0.03 | 30 | 3
Error in a routine operation where care is required | 0.01 | 10 | 1
Error in simple routine operation | 0.001 | 1 | 0.1
Selection of the wrong switch (dissimilar in shape) | 0.001 | 1 | 0.1
Human-performance limit: single operator | 0.0001 | 0.1 | 0.01
Human-performance limit: team of operators performing a well designed task | 0.00001 | 0.01 | 0.001

Can We Afford Not To Automate?

[Charts: feature growth and QA effort over time. The stack of baseline existing features, new functionality, and additional functionality keeps enlarging the regression surface, so the base head count & effort needed for coverage rises (Need More QA Effort ≈ Coverage). A second chart contrasts overall manual coverage with automated test coverage.]

- Long-term increasing complexity with more manual testing is not a sustainable model
- Pace of feature addition & complexity far exceeds the pace of deprecating systems or functionality
- Automated regression tests optimize the manual QA effort

Building Functional Automation Library

- Sanity Check: application is running and rendering necessary information on key displays (a minimal sketch follows below)
- Smoke Tests: simple "Happy Path" functional tests of key features to execute and validate
- Positive Functional Tests: in-depth "Happy Path" functional tests of features to execute and validate
- Database Validation Tests: non-GUI-based data storage and data handling validation
- Alternative Functional Tests: in-depth, alternate but still "Happy Path" tests of less common features to execute and validate
- Negative Functional Tests: in-depth "UNHappy Path" functional tests of features to execute and validate proper system response and handling

[Diagram: these automation layers are used to develop the automation library over time, spanning development, post-build system tests, and integrated functional testing.]
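As an illustrative aside (not from the original deck), the lowest layer can be as simple as a scripted HTTP check. A minimal sanity-check sketch in Python; the URL and marker text are hypothetical placeholders:

```python
# Minimal sanity-check sketch (illustrative only): confirm the application
# responds and a key display renders expected content. The URL and marker
# text below are hypothetical placeholders, not real endpoints.
import urllib.request

def sanity_check(url: str, marker: str) -> None:
    with urllib.request.urlopen(url, timeout=10) as resp:
        body = resp.read().decode("utf-8", errors="replace")
        assert resp.status == 200, f"unexpected HTTP status {resp.status}"
        assert marker in body, f"key display text {marker!r} not rendered"

if __name__ == "__main__":
    sanity_check("http://example.com/storefront", "Featured Products")
    print("sanity check passed")
```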
Learnings: No Automation Costs

- Start with simple confirming tests and build up
- Work with developers to help implement, maintain, and run
- Utilize available systems after hours
- Provide time to write, execute, and code review automation

Agile Test Planning

"10 Minute Test Plan" (in only 30 minutes)

- Concept publicized on James Whittaker's blog:
  http://googletesting.blogspot.com/2011/09/10-minute-test-plan.html
- Intended to address common issues with test plans, such as:
  - Difficult to keep up-to-date, so they quickly become obsolete
  - Written ad hoc, leading to holes in coverage
  - Disorganized and difficult to consume all related information at once


How about ACC Methodology?

- Attributes (adjectives of the system)
  - Qualities that promote the product & distinguish it from the competition (e.g. "Fast", "Secure", "Stable")
- Components (nouns of the system)
  - Building blocks that constitute the system in question (e.g. "Database", "API", and "Search")
- Capabilities (verbs of the system)
  - Tie a specific component to an attribute so that it becomes testable:
    - Database is Secure: "All credit card info is stored encrypted"
    - Search is Fast: "All search queries return in less than 1 second"

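As an illustration (not from the original deck), an ACC grid can be captured as simple data: attributes as columns, components as rows, and capabilities as testable statements in the cells. The two capabilities below echo the slide's examples; empty cells are the coverage conversation starters.

```python
# Sketch of an ACC (Attributes, Components, Capabilities) grid as plain data.
# Attribute and component names follow the slide's examples; the grid itself
# is an illustrative assumption, not a prescribed format.
attributes = ["Fast", "Secure", "Stable"]
components = ["Database", "API", "Search"]

capabilities = {
    ("Database", "Secure"): ["All credit card info is stored encrypted"],
    ("Search", "Fast"): ["All search queries return in less than 1 second"],
}

# Flag component/attribute cells with no capability yet -- likely coverage holes.
for comp in components:
    for attr in attributes:
        caps = capabilities.get((comp, attr), [])
        status = "; ".join(caps) if caps else "<no capability defined>"
        print(f"{comp:9s} x {attr:7s}: {status}")
```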


Google Test Analytics

- Google open-source tool for ACC generation:
  http://code.google.com/p/test-analytics



Learnings: 10 Minutes is Not Enough

- Keep things at a high level
  - Components and attributes should be fairly vague
  - Do NOT start breaking up capabilities into tasks each component has to perform
  - Generally 5 to 12 components work best per project
- The tool is there to help generate coverage and risk focus across the project, not necessarily each and every test case
- The tool is still in its infancy (released 10/19/2011)




Combinational Testing

What is Combinational Testing?

- Combining all test factors to a certain level to increase effectiveness and the probability of discovering failures
- Pairwise / Orthogonal Array Testing (OATS) / Taguchi Methods: http://www.pairwise.org/
- Most errors are caused by interactions of at most two factors
- Efficiently yet effectively reduces test cases rather than testing all variable combinations
- Better than "guesstimation" for generating test cases by hand, with much less chance of omitting a combination

Example OATS

- Orthogonal arrays are commonly named L_Runs(Levels^Factors)
- Example, L4(2^3):
  - Website with 3 sections (FACTORS)
  - Each section has 2 states (LEVELS)
  - Results in 4 pair-wise tests (RUNS)

RUNS \ FACTORS | TOP | MIDDLE | BOTTOM
Test 1 | HIDDEN | HIDDEN | HIDDEN
Test 2 | HIDDEN | VISIBLE | VISIBLE
Test 3 | VISIBLE | HIDDEN | VISIBLE
Test 4 | VISIBLE | VISIBLE | HIDDEN
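To confirm that the four runs above really are pairwise-complete, you can enumerate every pair of factors and check that all four level combinations appear for each pair. A small self-contained sketch (not part of the original deck):

```python
# Verify the L4(2^3) array above covers every pair of factor levels.
from itertools import combinations, product

factors = ["TOP", "MIDDLE", "BOTTOM"]
runs = [
    {"TOP": "HIDDEN",  "MIDDLE": "HIDDEN",  "BOTTOM": "HIDDEN"},
    {"TOP": "HIDDEN",  "MIDDLE": "VISIBLE", "BOTTOM": "VISIBLE"},
    {"TOP": "VISIBLE", "MIDDLE": "HIDDEN",  "BOTTOM": "VISIBLE"},
    {"TOP": "VISIBLE", "MIDDLE": "VISIBLE", "BOTTOM": "HIDDEN"},
]
levels = ["HIDDEN", "VISIBLE"]

for f1, f2 in combinations(factors, 2):          # every pair of factors
    seen = {(run[f1], run[f2]) for run in runs}  # level pairs actually exercised
    missing = set(product(levels, levels)) - seen
    print(f"{f1}/{f2}: {'all 4 pairs covered' if not missing else missing}")
```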

Comparison: Exhaustive Tests

RUNS \ FACTORS | TOP | MIDDLE | BOTTOM
Test 1 | HIDDEN | HIDDEN | HIDDEN
Test 2 | HIDDEN | VISIBLE | VISIBLE
Test 3 | VISIBLE | HIDDEN | VISIBLE
Test 4 | VISIBLE | VISIBLE | HIDDEN
Test 5 | HIDDEN | HIDDEN | VISIBLE
Test 6 | HIDDEN | VISIBLE | HIDDEN
Test 7 | VISIBLE | VISIBLE | VISIBLE
Test 8 | VISIBLE | HIDDEN | HIDDEN

- Exhaustive testing increases the test cases by 100% (8 runs instead of 4)
- The exhaustive set can still be drawn from to add "interesting" test cases

Helpful when Factors Grow

Tool to Help Generate Tables

- Microsoft PICT (freeware):
  http://msdn.microsoft.com/en-us/library/cc150619.aspx
- Web script interface:
  http://abouttesting.blogspot.com/2011/03/pairwise-test-case-design-part-four.html



What is PICT Good for?

- Mixed-strength combinations
- Creating a parameter hierarchy
- Conditional combinations & exclusions
- Seeding mandatory test cases
- Assigning weights to important values



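A minimal sketch (my own, under stated assumptions) of driving PICT from a script: the parameter names and values are made up for illustration, the IF/THEN constraint follows the model syntax described in the MSDN documentation linked above, and the sketch assumes the pict executable is installed and on the PATH.

```python
# Sketch: write a small PICT-style model and generate pairwise cases from it.
# Parameter names/values are illustrative; assumes "pict" is on the PATH.
import os
import subprocess
import tempfile

# One parameter per line, then a conditional exclusion (kiosks have no PayPal).
model = """\
Platform: Web, Mobile, Kiosk
Payment:  CreditCard, GiftCard, PayPal
Account:  Guest, Registered

IF [Platform] = "Kiosk" THEN [Payment] <> "PayPal";
"""

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write(model)
    path = f.name

try:
    result = subprocess.run(["pict", path], capture_output=True, text=True, check=True)
    print(result.stdout)  # one generated test case per tab-separated row
finally:
    os.remove(path)
```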

Learnings: Tables can be BIG!

- Choosing which combinations and parameters to use takes some thoughtfulness & planning
- It can still generate an unwieldy number of test cases to run
- PICT's statistical optimization does not always generate full pair-wise combinations
- Good for test case generation as well as basic test data generation




Session Based / Exploratory Testing

Organized Exploratory Testing

- "Exploratory testing is simultaneous learning, test design, and test execution" - James Bach
- Session-Based Test Management is a more formalized approach to it:
  http://www.satisfice.com/articles/sbtm.pdf
- Key points (a small record-keeping sketch follows below):
  - Have a charter/mission objective for each test session
  - Time box it, with "interesting path" extension possibilities
  - Record findings
  - Review session runs

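As an illustrative aside (not from the deck), even a tiny record structure keeps the charter, the time box, and the findings together; the field names below are assumptions, not a prescribed SBTM schema.

```python
# Sketch of a session-based testing record: charter, time box, findings.
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import List, Optional

@dataclass
class TestSession:
    charter: str                                 # mission objective for this session
    time_box: timedelta = timedelta(minutes=60)  # agreed session length
    started: Optional[datetime] = None
    findings: List[str] = field(default_factory=list)

    def start(self) -> None:
        self.started = datetime.now()

    def note(self, finding: str) -> None:
        self.findings.append(finding)

    def over_time(self) -> bool:
        return self.started is not None and datetime.now() - self.started > self.time_box

session = TestSession(charter="Explore checkout with expired gift cards")
session.start()
session.note("Balance displayed as negative after a declined card")
```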

Learnings: Explore Options

- Make it a regular practice
- Keep a visible timer on the desktop display
- Have some type of recording mechanism so sessions can be replayed if possible
- Change mission roles and objectives to give different context
- The test lead needs to be actively involved in review
- If the implementation is too cumbersome to execute, testers won't do it





Tools That Can Help

- Screenshot: Greenshot (http://getgreenshot.org/)
- Video capture: CamStudio, open source (http://camstudio.org/)
- Web debugging proxy: Fiddler2 (http://www.fiddler2.com)
- Network protocol analyzer: WireShark (http://www.wireshark.org/)

Reporting on Agile Testing

Use Test Case Points

- A weighted measure of test case execution/coverage
- Points convey the risk/complexity of the test cases left to run
- Attempts to help answer "We still have not run X test cases, should we release?"
- Regrade each test case as it moves from new feature to regression
- Example weights (a scoring sketch follows below):
  - New Feature = +10
  - Cart/Checkout = +10
  - Accounts = +5
  - Regression Item = +1
  - Negative Test = +3
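A sketch (not from the deck) of turning the weights above into a release-readiness number by summing the points of test cases that have not yet been run; the test case names and tags are made up for illustration.

```python
# Sketch: score remaining (un-run) test cases with the point weights above.
# Weights mirror the slide; test case names/tags are illustrative.
WEIGHTS = {"new_feature": 10, "cart_checkout": 10, "accounts": 5,
           "regression": 1, "negative": 3}

def points(tags):
    """A test case's points are the sum of the weights of its tags."""
    return sum(WEIGHTS[t] for t in tags)

unrun = [
    ("Checkout with saved card", ["new_feature", "cart_checkout"]),
    ("Login with bad password",  ["accounts", "negative"]),
    ("Legacy order history",     ["regression"]),
]

remaining = sum(points(tags) for _, tags in unrun)
print(f"Unrun test case points: {remaining}")  # higher = more risk left untested
```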

Learnings: Clarify What was QA'd

- Tell them what your team did:
  - Traceability to requirements
  - List of Known Issues (LOKI)
  - Regression efforts (automation & manual)
  - Exploratory testing, etc.
- Tell them what your team did NOT do:
  - Regression sets not run
  - Environmental constraints on the effort
  - Third-party dependencies or restrictions