Testing & Software Quality

Seminar on software quality

13.5.2005

Karipekka Kaunisto

Contents


Role of testing in quality assurance


Challenges of software testing


What is test automation?


Test automation: Possible benefits


Common pitfalls of test automation


Conclusions


References

Role of testing in quality assurance


Quality controlling


Final product meets its requirements


Find potential errors and flaws


Enforce standards and good design
principles


Regression testing


Improving quality


Preventive testing


Find the cause of an error, not just the symptoms

Role of testing...


Testing as supportive action


Data collected during testing can be used
to develop various quality metrics


These can be used to some extent when evaluating system quality and maturity


However, numbers alone don’t assure good quality!

Examples of poor testing


A major U.S. retailer was hit with a large governmental fine in October of 2003 due to web site errors that enabled customers to view one another’s online orders


In early 1999 a major computer game company recalled
all copies of a popular new product due to software
problems. The company made a public apology for
releasing a product before it was ready


A retail store chain filed suit in August of 1997 against a
transaction processing system vendor (not a credit card
company) due to the software's inability to handle credit
cards with year 2000 expiration dates

Challenges of software testing


Complexity of testing


Even in a seemingly simple program there can be a potentially infinite number of possible input and output permutations to test (illustrated in the sketch below)


What about large software systems with
complex control logic and numerous
dependencies on other modules and
entire systems?
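As a rough back-of-the-envelope sketch in Python (using a hypothetical function of two 32-bit arguments, not taken from the slides), exhaustive testing is hopeless even for trivial code:

# Hypothetical: a function taking two 32-bit unsigned integer arguments.
values_per_argument = 2 ** 32
total_combinations = values_per_argument ** 2   # every (a, b) input pair

# Even at one billion test executions per second this takes centuries.
seconds = total_combinations / 1e9
years = seconds / (60 * 60 * 24 * 365)
print(f"{total_combinations:.3e} combinations, about {years:.0f} years of testing")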

Complexity of testing


=> It is not feasible to get even close to
testing all combinations and thus finding all
possible errors!


The tester needs to carefully create a test set in a way that minimises the risk of fatal errors in the final product


Related problem: How do you know when
to stop testing?


Acceptable risk level


Managing large test sets


Various general techniques have been
introduced for managing test sets


Partitioning to smaller subsets


Testing special cases (boundaries, special values, etc.; see the sketch after this list)


Testing most important functions only
(focus testing)


Invalid inputs and data


Program flow and code coverage testing
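A minimal sketch of what partitioning and boundary-value cases can look like in practice, written pytest-style; the withdraw() function and its 1–500 limit are made up for illustration and are not from the slides:

import pytest

def withdraw(amount: int) -> bool:
    """Toy stand-in for the system under test: accepts amounts 1..500."""
    return 1 <= amount <= 500

# Instead of all possible inputs, test the boundaries of each partition
# plus a few invalid values.
@pytest.mark.parametrize("amount, accepted", [
    (0, False),    # just below the lower boundary
    (1, True),     # lower boundary
    (500, True),   # upper boundary
    (501, False),  # just above the upper boundary
    (-1, False),   # invalid input
])
def test_withdraw_boundaries(amount, accepted):
    assert withdraw(amount) == accepted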

Are we ready to ship?


Even with all the techniques available, it requires the tester’s personal expertise and domain knowledge to create a test plan and make the final decision to approve the product


Business issues may also affect this: risk of errors vs. risk of delay


Planning and test effort correlate quite well with the quality-controlling role of testing


Other challenges


Testing activities require a significant amount of the project’s time and resources => Delays, hasty testing


Testing is often regarded as a dull, monotonous and laborious part of software development => Poor effort


System architecture is often quite complex, which requires special testing effort => Reliability suffers; not all tests are even possible manually

What is test automation?


”The management and performance of
test activities to include the
development and execution of test
scripts so as to verify test requirements,
using an automated test tool”.


Dustin, Rashka & Paul


”Testing supported by software tool”.


Faught, Bach

Automation in practice


The tester describes test cases for the tool using a special scripting language designed by the tool developers


Some tools may also include a graphical interface and recording options, but in practice scripting has to be used


The script should also specify how the tool is supposed to interpret the correct results of any given test case


The tool then takes care of executing the specified tests and examining the results (sketched below)
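A minimal sketch of the idea in Python; the table-driven "script" and the system_under_test() stand-in are entirely made up, since real tools use their own vendor-specific scripting languages:

def system_under_test(x, y):
    """Stand-in for the application being tested."""
    return x + y

# The tester writes the test cases and the expected (correct) results.
test_script = [
    {"name": "small numbers",  "args": (1, 2),  "expected": 3},
    {"name": "negative input", "args": (-5, 5), "expected": 0},
    {"name": "large numbers",  "args": (10**6, 10**6), "expected": 2 * 10**6},
]

# The "tool" executes the specified tests and examines the results.
for case in test_script:
    actual = system_under_test(*case["args"])
    verdict = "PASS" if actual == case["expected"] else "FAIL"
    print(f"{case['name']}: {verdict}")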

Automation in practice (cont.)


Result validation includes text outputs,
elapsed time, screen captures etc.


This can be a very challenging part to do automatically and may require human intervention in some cases!


Evaluation results are presented in clear test reports that can be used to examine the results of a test round


Produced reports can also be used to gather data for various quality metrics (a minimal sketch follows below)
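A small sketch of automated result validation and a rudimentary report, assuming a captured text output, a stored expected-output file and a timing limit; the file name and threshold are hypothetical:

import pathlib
import time

def run_case():
    """Stand-in for executing one test case and capturing its text output."""
    return "order saved\n"

EXPECTED_FILE = pathlib.Path("expected_output.txt")   # hypothetical golden file
MAX_SECONDS = 2.0                                     # hypothetical time limit

start = time.perf_counter()
actual = run_case()
elapsed = time.perf_counter() - start

output_ok = EXPECTED_FILE.exists() and actual == EXPECTED_FILE.read_text()
time_ok = elapsed <= MAX_SECONDS

# A tiny test report; real tools aggregate such data for quality metrics.
print(f"output comparison: {'PASS' if output_ok else 'FAIL'}")
print(f"elapsed {elapsed:.3f} s (limit {MAX_SECONDS} s): {'PASS' if time_ok else 'FAIL'}")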

Areas of test automation


Automation mainly suits testing that requires repeated execution of similar test cases


Regression testing


Portability testing


Performance and stress testing


Configuration testing


Smoke testing


...


Possible benefits


More reliable system


Improved requirements definition


Improved performance (load & stress)
testing


Better co-operation with developers


Quality metrics & Test optimisation


Enhanced system development life cycle

Benefits (2)


More effective testing process


Improved effort in various sub-areas like regression, smoke, configuration and multi-platform compatibility testing


Ability to reproduce errors


Dull routine tests can be executed
without human intervention


”After-hours testing”

More effective...


Execution of tests that are not possible
manually


Better focus on more advanced testing
issues


Enhanced business expertise

Benefits (3)


Reduced test effort and schedule


Initial costs of automation are usually
very high


Payback comes later on (possibly much later) when the team has adopted the process and the use of tools

Pitfalls of test automation


Automatic test planning and design


There are no tools that can generate
these automatically!


Requires human expertise and domain
knowledge


Tool just does what it is scripted to do
and nothing else

Pitfalls (2)


Immediate cost and time savings


On the contrary, the introduction of automation and new tools will increase the need for resources!


Automation process must be planned,
test architecture created, tools
evaluated, people trained, scripts
programmed...


= Lots of work

Immediate...


Potential savings will be achieved (much) later on when the organisation has ’learned’ the process and created the needed infrastructure for it


If automation is introduced poorly,
savings will never be gained at all!


In the worst case, automation can just mess things up

Pitfalls (3)


One tool does it all


Wide array of operating systems,
hardware and programming languages


Very different systems and architectures
are often used


Testing requirements differ depending on
system and project


Result analysis differs (graphical, text, time, etc.)

Pitfalls (4)


Automation requires no technical skill


Tools rely solely on scripts when
executing tests


Building maintainable and reusable scripts requires good programming skills and knowledge of the tool


Testers may have to be able to use
several different tools with different
scripting technologies!

Pitfalls (5)


100% test automation


Even if automation succeeds it cannot
completely replace manual testing


Some tests must be conducted manually
and others require at least some human
intervention


Automation is really useful only with test
cases that are executed repeatedly over
time (regression)

Other related tools


Code analyzers


Coverage analyzers (see the sketch after this list)


Memory analyzers (purifiers)


Web test tools


Test management tools
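As one concrete illustration, coverage analyzers can often be driven programmatically; the sketch below uses the coverage.py API with a made-up function, whereas normally you would measure a whole test suite:

import coverage

def branchy(x):
    if x > 0:
        return "positive"
    return "non-positive"   # this branch stays unexercised below

cov = coverage.Coverage()
cov.start()
branchy(1)        # exercise only one branch
cov.stop()
cov.report()      # prints statement coverage per file to stdout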

Conclusions


Testing has a significant role in software quality assurance


Automation, when implemented properly, can further improve the test effort and thus lead to improved quality


However, many automation attempts have failed because of unrealistic expectations and improper introduction of automation tools

References


Dustin E., Rashka J., Paul J.: Automated Software Testing: Introduction, Management and Performance. Addison-Wesley, 1999


Craig R. and Jaskiel S.: Systematic Software Testing. Artech House Publishing, 2002


Pettichord B.: Presentations and Publications. http://www.io.com/~wazmo/papers/