Development of Software Testing Ontology and Application to Test Automation

ONTOSE 2011, London, June 2011

Prof. Hong Zhu

Department of Computing and Electronics

Oxford Brookes University

Oxford OX33 1HX, UK

Email: hzhu@brookes.ac.uk

Acknowledgement

- Mr. Yufeng Zhang, MSc and PhD student at the National University of Defence Technology, China
- Mr. Qingning Huo, PhD student at Oxford Brookes University, UK
- Dr. Sue Greenwood, Oxford Brookes University, UK


Context: Web Services

- Web services are a distributed computing technique that offers greater flexibility and looser coupling, built on the internet and web infrastructure.
- Interactions are predominantly program-to-program.
- The components (service providers, requesters, registries) are:
  - Autonomous: they control their own resources and their own behaviours
  - Active: their execution is not merely triggered by messages
  - Persistent: they are computational entities that last a long time
- Interactions between components are characterised by:
  - Social ability: components discover each other and establish interactions at runtime
  - Collaboration: as opposed to control, a component may refuse a service, follow a complicated protocol, etc.


WS Technique Stack

- Basic standards:
  - WSDL: service description and publication
  - UDDI: service registration and retrieval
  - SOAP: service invocation and delivery
- More advanced standards for collaborations between service providers and requesters:
  - BPEL4WS: business process and workflow models
  - OWL-S: ontology for the description of the semantics of services

[Diagram: the registry, provider and requester triangle. The provider registers its service with the registry; the requester searches the registry for registered services, then requests the service from the provider, which delivers it.]
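To make the register / search / invoke cycle in the diagram concrete, here is a minimal, self-contained Python sketch of the three roles. Every name in it (ToyRegistry, car_insurance_quote, the 5% premium) is a hypothetical stand-in: real web services exchange SOAP messages described by WSDL and are registered in UDDI.

    # Toy model of the registry/provider/requester triangle (illustrative only).
    class ToyRegistry:
        def __init__(self):
            self._services = {}                    # keyword -> provider callable

        def register(self, keyword, provider):     # provider registers its service
            self._services[keyword] = provider

        def search(self, keyword):                 # requester searches for services
            return self._services.get(keyword)

    def car_insurance_quote(car_value):            # stands in for a provider's WS
        return round(car_value * 0.05, 2)          # hypothetical 5% premium

    registry = ToyRegistry()
    registry.register("car-insurance-quote", car_insurance_quote)

    quote_service = registry.search("car-insurance-quote")   # discovery at runtime
    print(quote_service(12000.0))                             # invocation -> 600.0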


A Typical Scenario: Car Insurance Broker

[Diagram: the Car Insurance Broker (CIB) offers its services through a GUI interface to end users and to other service users. The CIB's service requester connects, via a WS registry, to Bank B's services and to the services of insurers A1, A2, ..., An.]

- The connections could be statically integrated.
- They should, however, be dynamically integrated, for business flexibility and competitiveness, and for lower operation and maintenance costs.

Challenges to Testing WS

- Testing own-side services
  - Mostly similar to testing software components
  - Some special issues; much work has been reported
- Testing the other side's services
  - Some similarity to component testing. The differences are:
    - Lack of software artifacts
    - Lack of control over test executions
    - Lack of means of observing system behaviour
- Testing service composition
  - Static composition: mostly similar to integration testing
  - Dynamic composition: the most challenging, because of
    - The need to deal with diversity
    - The need for testing on-the-fly
    - The need for non-intrusive testing
    - The need for full automation

The Proposed Approach

- A WS should be accompanied by a testing service:
  - Functional services: the services providing the original functionality
  - Testing services: the services that enable testing of the functional services
- Testing services can be provided either
  - by the same vendor as the functional services, or
  - by a third party
- Independent testing services:
  - Providers:
    - testing tool vendors
    - companies specialising in software testing
  - The services (see the sketch below):
    - generate test cases
    - measure test adequacy
    - extract various types of diagrams from source code or from design and specification documents, etc.
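A minimal sketch of the proposed pairing. The method names (get_quote, generate_test_cases, measure_adequacy) and the pricing rule are assumptions; the slides do not fix an interface, so this only illustrates the division of labour between a functional service and an accompanying testing service.

    class CarInsuranceQuoteService:                 # the functional service
        def get_quote(self, car_value, driver_age):
            base = car_value * 0.05                 # hypothetical pricing rule
            return base * (1.5 if driver_age < 25 else 1.0)

    class QuoteTestingService:                      # the accompanying testing service
        def generate_test_cases(self):
            # boundary values around the (assumed) age-25 pricing rule
            return [(10000, 24), (10000, 25), (10000, 26)]

        def measure_adequacy(self, executed, generated):
            return len(executed) / len(generated)   # a crude adequacy ratio

    fs, ts = CarInsuranceQuoteService(), QuoteTestingService()
    cases = ts.generate_test_cases()
    results = [fs.get_quote(v, a) for v, a in cases]
    print(results, ts.measure_adequacy(results, cases))   # [750.0, 500.0, 500.0] 1.0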


Architecture


Illustration in the Typical Scenario


How Does the System Work?

The Scenario

- Suppose the car insurance broker wants to search for the web services of insurers, and to test each web service before using it to make quotes for its customers.

[Diagram: the customer sends information about the car and the user to the Car Insurance Broker (CIB), which returns insurance quotes obtained from an Insurer Web Service (IS). What must be tested is the integration of the two services.]


Collaboration Process in the Typical Scenario


Automating Test Services

- The key technical issues:
  - How to describe, publish and register test services at a WS registry
  - How to retrieve test services automatically for testing dynamically composed services
  - How to invoke test services from both a human tester and a program
  - How to report test results in forms suitable both for human beings to read and for machines to understand

These issues can be resolved by employing a software testing ontology.


STOWS: Software Testing Ontology for WS

- STOWS is based on an ontology of software testing originally developed for agent-oriented software testing (Zhu & Huo 2003, 2005).
- The concepts of software testing are represented as classes.
- Knowledge about software testing is represented as relations between concepts.


Basic Concepts of Software Testing

- Tester: who carries out a testing activity.
- Activity: the actions performed in the testing process, e.g.
  - test planning, test case generation, test execution, result validation, adequacy measurement, test report generation, etc.
- Artifact: the entities used and/or produced by a testing activity, e.g. files, data, program code and documents. Each artifact has:
  - Location: expressed by a URL or a URI
  - Format: the format in which the data are presented
- Method: the method used to perform a test activity.
  - Test methods can be classified in a number of different ways.
- Context: the context in which a testing activity is performed, e.g.
  - in a particular software development stage
  - to achieve various testing purposes
- Environment: the hardware and software configurations in which a testing activity is to be performed.
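The slide's "concepts as classes" view can be sketched directly in Python. The field names below are assumptions read off the text, not the actual OWL encoding of STOWS.

    from dataclasses import dataclass, field

    @dataclass
    class Artifact:
        name: str                  # e.g. "test cases", "program code"
        location: str              # a URL or URI
        format: str                # the format in which the data are presented

    @dataclass
    class Activity:
        name: str                  # e.g. "test case generation", "test execution"

    @dataclass
    class Method:
        name: str                  # test methods can be classified in many ways

    @dataclass
    class Context:
        purpose: str               # a development stage or testing purpose

    @dataclass
    class Environment:
        configuration: dict = field(default_factory=dict)   # HW/SW configuration

    # a hypothetical artefact instance
    tc = Artifact("test cases", "http://example.org/tc.xml", "XML")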


Structure of Basic Concepts: Examples

[Diagram: two class hierarchies. Test Activity has subclasses Test Planning, Test Case Generation, Test Execution, Result Validation, Adequacy Measurement and Report Generation. Tester has subclasses Atomic Service and Composite Service.]


Compound Concepts

- Capability: describes what a tester can do
  - the activities that the tester can perform
  - the context in which to perform the activity
  - the testing method used
  - the environment in which to perform the testing
  - the required resources (i.e. the input)
  - the output that the tester can generate

[Diagram: a Capability is composed of an Activity, a Method, Artefacts, Capability Data, a Context and an Environment.]

- Task: describes what testing service is requested
  - a testing activity to be performed
  - how the activity is to be performed:
    - the context
    - the testing method to be used
    - the environment in which the activity must be carried out
    - the available resources
    - the expected outcomes

A sketch of these two compound concepts follows.
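Capability and Task are built from the same components, which is what makes the Matches relation (defined later) meaningful. In this simplified sketch, methods, contexts and artefact formats are flattened to plain strings and environments to feature sets; the flattening is my assumption, not the STOWS representation.

    from dataclasses import dataclass
    from typing import List, Set

    @dataclass
    class Capability:              # what a tester CAN do
        activities: List[str]
        context: str
        method: str
        environment: Set[str]      # a superset models an "enhanced" environment
        inputs: List[str]          # required resources, by artefact format
        outputs: List[str]         # what the tester can generate

    @dataclass
    class Task:                    # what a service requester ASKS for
        activities: List[str]
        context: str
        method: str
        environment: Set[str]
        inputs: List[str]          # available resources
        outputs: List[str]         # expected outcomes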


Relations Between Concepts

- Relationships between concepts are a very important part of the knowledge of software testing:
  - Subsumption relations between testing methods
  - Compatibility between artefacts' formats
  - Enhancement relations between environments
  - Inclusion relations between test activities
  - Temporal ordering between test activities
- How such knowledge is used:
  - Instances of basic relations are stored in a knowledge-base as basic facts
  - The testing broker uses them to search for test services through compound relations


Compound Relations

- MorePowerful relation: between two capabilities.
  - MorePowerful(c1, c2) means that a tester with capability c1 can do all the tasks that can be done by a tester with capability c2.
- Contains relation: between two tasks.
  - Contains(t1, t2) means that accomplishing task t1 implies accomplishing task t2.
- Matches relation: between a capability and a task.
  - Matches(c, t) means that a tester with capability c can fulfil the task t.

[Diagram: MorePowerful is a many-to-many relation on Capability; Contains is a many-to-many relation on Task; Matches is a many-to-many relation between Capability and Task. A Tester has Capabilities.]


Definition of the MorePowerful Relation

A capability C1 is more powerful than C2, written MorePowerful(C1, C2), if and only if:

- C2's activities are included in C1's activities;
- C1 and C2 have the same context;
- the environment of C1 is an enhancement of the environment of C2;
- the method of C2 is subsumed by the method of C1;
- for each input artefact of C1, there is a corresponding compatible input artefact of C2;
- for each output artefact of C2, there is a corresponding compatible output artefact of C1.


Definition of the Contains Relation

A task T1 contains task T2, written Contains(T1, T2), if and only if:

- T1 and T2 have the same context;
- T1's activities include T2's activities;
- the method of T1 subsumes the method of T2;
- the environment of T2 is an enhancement of the environment of T1;
- for each input artefact of T1, there is a corresponding compatible input artefact of T2;
- for each output artefact of T2, there is a corresponding compatible output artefact of T1.


Definition of the Matches Relation

A capability C matches a task T, written Matches(C, T), if and only if:

- C and T have the same context;
- C's activities include T's activity;
- the method of C subsumes the method of T;
- the environment of T is an enhancement of the environment of C;
- for each input artefact of T, there is a corresponding compatible input artefact of C;
- for each output artefact of C, there is a corresponding compatible output artefact of T.
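The definition transcribes almost mechanically into code. A sketch over the simplified Capability and Task classes above, where subsumption, enhancement and compatibility are approximated by equality and subset checks; in STOWS proper these are relations stored in the knowledge-base.

    def subsumes(m1, m2):          # method m1 subsumes m2 (simplified to equality)
        return m1 == m2

    def enhances(e1, e2):          # environment e1 enhances e2 (simplified: superset)
        return set(e2) <= set(e1)

    def compatible(f1, f2):        # artefact formats compatible (simplified: equality)
        return f1 == f2

    def matches(c, t):             # Matches(C, T), clause by clause
        return (c.context == t.context
            and set(t.activities) <= set(c.activities)
            and subsumes(c.method, t.method)
            and enhances(t.environment, c.environment)
            and all(any(compatible(i, j) for j in c.inputs) for i in t.inputs)
            and all(any(compatible(o, p) for p in t.outputs) for o in c.outputs))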


Properties of the Compound Relations

(1) The relations MorePowerful and Contains are reflexive and transitive.

(2) ∀c1, c2 ∈ Capability, ∀t ∈ Task:
    MorePowerful(c1, c2) ∧ Matches(c2, t) ⇒ Matches(c1, t).

(3) ∀c ∈ Capability, ∀t1, t2 ∈ Task:
    Contains(t1, t2) ∧ Matches(c, t1) ⇒ Matches(c, t2).
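Property (2) is what lets a broker reuse stored MorePowerful facts instead of re-evaluating the full definition. A sketch of that inference, assuming the matches() function and capability objects above; the fact store is hypothetical.

    MORE_POWERFUL = {("full-tester", "basic-tester")}   # hypothetical stored facts

    def matches_with_inference(name, t, capabilities):
        # Match directly, or indirectly via MorePowerful(c1, c2) and Matches(c2, t).
        if matches(capabilities[name], t):
            return True
        return any(matches(capabilities[weaker], t)
                   for stronger, weaker in MORE_POWERFUL if stronger == name)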


Prototype Implementation

- Representation of STOWS in OWL
  - Both basic and compound concepts are classes in OWL, represented as XML data definitions
- Use of STOWS in Semantic Web Services
  - Compound concepts represented in OWL are transformed into OWL-S Service Profiles for registration, discovery and invocation
  - UDDI/OWL-S registry server: using the OWL-S/UDDI Matchmaker
- The environment: Windows XP, Intel Core Duo CPU 2.16GHz, JDK 1.5, Tomcat 5.5 and MySQL 5.0.


Transformation of STOWS into OWL-S

[Diagram: the components of a Capability (Activity; Context; Environment; Method; Capability data; Input Artefacts; Output Artefacts) are packed into an OWL-S Service Profile. The Activity becomes the ServiceCategory; the Context, Environment, Method and input artefacts become INPUT PARAMETERS (ContextMark, EnvironmentMark, MethodMark, Artefacts); the output artefacts become OUTPUT PARAMETERS (Artefacts). See the sketch below.]
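A sketch of that packing as a plain dictionary, so the correspondence is visible at a glance. The real transformation emits OWL-S XML, and where exactly the capability data lands is my reading of the diagram, not a quoted mapping.

    def capability_to_profile(c):
        # Pack a Capability into the slots of an OWL-S Service Profile.
        return {
            "ServiceCategory": c.activities,          # Activity -> ServiceCategory
            "INPUT PARAMETERS": {
                "ContextMark": c.context,             # Context -> ContextMark
                "EnvironmentMark": sorted(c.environment),
                "MethodMark": c.method,               # Method -> MethodMark
                "Artefacts": c.inputs,                # input artefacts
            },
            "OUTPUT PARAMETERS": {
                "Artefacts": c.outputs,               # output artefacts
            },
        }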


Ontology Management

- Motivation
  - All the terms used in capability descriptions for test service registration, discovery and invocation must first be defined in the ontology.
  - However, it is impossible to build a complete ontology of software testing, because of
    - the huge volume of software testing knowledge
    - the rapid development of new testing techniques, methods and tools.
  - Therefore, the ontology must be extendable and open to the public for updating:
    - implement a framework, rather than a complete and fixed ontology
    - provide an ontology management mechanism to enable the population of the ontology


The Ontology Management Mechanism

- It provides three services to users (sketched below):
  - AddClass: to add a new concept
  - DeleteClass: to delete a concept
  - UpdateClass: to revise a concept of the ontology
- Restrictions on the manipulation of the data model:
  - Authority Checker:
    - Elementary classes form the framework of the ontology STOWS; none of them can be pruned.
    - Extended classes are attached to the elementary classes to define new concepts and instances of the concepts; they are added by the users and can be deleted from the hierarchy.
  - Conflict Checker:
    - The new class to be added must not already exist in the ontology.
    - The class to be deleted must have no subclasses in the hierarchy.
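A sketch of the mechanism with the two checkers in place. The class name and the parent/child bookkeeping are assumptions for illustration (UpdateClass is omitted); the real OMS manipulates the OWL class hierarchy.

    class OntologyManager:
        def __init__(self, elementary):
            self.elementary = set(elementary)       # framework classes: never deletable
            self.children = {c: set() for c in elementary}

        def add_class(self, name, parent):
            if name in self.children:                         # Conflict Checker
                raise ValueError("class already exists")
            if parent not in self.children:
                raise ValueError("unknown parent class")
            self.children[parent].add(name)
            self.children[name] = set()

        def delete_class(self, name):
            if name in self.elementary:                       # Authority Checker
                raise PermissionError("elementary classes cannot be pruned")
            if self.children.get(name):                       # Conflict Checker
                raise ValueError("class still has subclasses")
            self.children.pop(name, None)
            for subs in self.children.values():
                subs.discard(name)

    oms = OntologyManager({"Method"})
    oms.add_class("MutationTesting", "Method")      # a hypothetical extended class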


Structure of OMS


Test Brokers

- A test broker is a test service that composes existing test services. It can (see the sketch below):
  - decompose test tasks into subtasks
  - search for test services to carry out the subtasks
  - select test services from the candidates
  - coordinate the selected test services:
    - invoke them in the right order
    - pass data between them
    - collect test results, etc.
- The broker is itself a test service as well.
- There may be multiple test brokers owned by different vendors.
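A sketch of the broker's loop, assuming hypothetical helpers: decompose stands in for the broker's task analyzer, matchmaker.search for the OWL-S/UDDI Matchmaker lookup, and selection is naively "first candidate".

    def run_test_task(task, matchmaker, decompose):
        results, artefacts = [], {}
        for subtask in decompose(task):                    # decompose into subtasks
            candidates = matchmaker.search(subtask)        # search via the Matches relation
            if not candidates:
                raise LookupError("no tester found for subtask: %r" % (subtask,))
            tester = candidates[0]                         # select from the candidates
            artefacts = tester.invoke(subtask, artefacts)  # invoke in order, pass data on
            results.append(artefacts)                      # collect test results
        return results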


Architecture of the Prototype Test Broker

We have developed a prototype test broker to demonstrate the feasibility of the approach.



[Diagram: the test broker's process model.]

A Running Example

- CIQS: the WS of the PingAn Insurance Company in China


[Diagram: the CIB (Car Insurance Broker) requests testing of the CIQS (Car Insurance Quote Service) from the Test Broker. The TCG (Test Case Generator) and the TCE (Test Case Executor for CIQS) register with the Matchmaker; the Test Broker searches the Matchmaker for testers and invokes them.]

Case Study: Dealing with Diversity

- Aim: to evaluate the capability of dealing with diversity
- Method: to wrap a wide range of software testing tools into test services (see the table below)


  Name                              Description
  --------------------------------  ------------------------------------------------------------
  CASCAT                            A CASOCC-based test case generation tool
  Test Case Format Translator       Translates the test cases generated by CASCAT into the
                                    format recognizable by the Calculator Test Case Executor
  Test Case Executor                Executes test cases for a numeric calculator web service
  Klee                              Generates and executes test cases from C source code by
                                    symbolic execution
  Magic                             Checks conformance between component specifications and
                                    their implementations
  XML Comparator                    Compares XML files
  Java NCSS                         Measures two standard metrics for Java programs
  Findbugs                          Finds bugs in Java programs by static analysis
  PMD                               A static analysis tool for finding potential bugs and other
                                    problems in Java source code
  WSDL Based Test Case Generator*   A WSDL-based test case generation tool
  Web Service Test Case Executor*   Executes the test cases generated by the WSDL Based Test
                                    Case Generator



Experiment 1: Dealing with Subtle Differences

- Aim:
  - To test the system's capability of accurately choosing an appropriate tester from candidates with subtle differences
- Method (a sketch follows):
  - Application of the data mutation testing technique:
    - Mutation operators: transformations of data, in this case service profiles (4 types)
    - Seeds: a set of original service profiles (11 seeds)
    - Mutants: service profiles generated from the seeds by applying the mutation operators (167 mutants)
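A sketch of data mutation on a service profile, with two illustrative operators. The experiment itself used 4 operator types over 11 seed profiles to produce 167 mutants, but the actual operators are not spelled out on the slide, so the ones below are assumptions.

    import copy

    seed = {"activity": "test case generation",      # one hypothetical seed profile
            "method": "boundary analysis",
            "context": "unit testing"}

    def swap_method(profile):                        # operator 1: perturb the method
        m = copy.deepcopy(profile)
        m["method"] = "random testing"
        return m

    def drop_field(profile, key):                    # operator 2: remove a field
        m = copy.deepcopy(profile)
        m.pop(key, None)
        return m

    mutants = [swap_method(seed), drop_field(seed, "context")]
    # the matchmaker should still pick the appropriate tester for each mutant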


Experiment 2: Scalability

- Aim:
  - To evaluate the scalability of the test broker in terms of its efficiency in dealing with test problems of practical sizes.
  - Problem size is measured in terms of:
    - the number of testers in the registry
    - the size of the knowledge-base in the test broker
    - the complexity of the test task requested
- Method:
  - Run the system a number of times
  - Calculate the average execution time spent in the various modules of the test broker


Experiment Results

- The Effect of the Number of Testers
  - The average search time increases almost linearly with the number of testers in the registry.



- The Effect of the Knowledge-Base Size
  - As the size of the knowledge-base (in terms of the number of test plan templates) increases, the time spent by the task analyzer module also increases, at an almost linear rate.



- The Effect of Task Complexity
  - The total execution time is a quadratic polynomial function of the number of different subtasks (with R² = 0.9984).


Conclusion

- The challenges of testing web services can be met by employing an ontology of software testing to enable test services to collaborate.
- Feasibility:
  - tested by case studies with the prototype implementation
- Practical usability:
  - implementable without any change to the existing standards of Semantic WS
- Motivation for wider adoption by industry:
  - business opportunities for testing tool vendors and software testing companies to provide testing services online as web services
- Scalability:
  - test services are distributed, and there is no extra burden on UDDI servers.


Future Work

- To populate the ontology of software testing (e.g. with the formats of the many different representations of testing-related artefacts)
- To devise mechanisms of certification and authentication for testing services
- Social challenges: for the above approach to be practically useful, it must be adopted by web service developers, testing tool vendors and software testing companies
- To improve the test broker, and even to generalise it to all service composition
- To address the limitations of OWL-S semantic web services


Related Works

- Tsai et al. (2004): a framework extending the function of UDDI to enable collaboration
  - Check-in and check-out testing services added to UDDI servers:
    - A service is added to the UDDI registry only if it passes a check-in test.
    - A check-out test is performed every time the service is searched for; it is recommended to a client only if it passes the check-out test.
    - To facilitate such tests, test scripts must be included in the information registered for the WS on UDDI.
  - Group testing: further investigation of how to select a service from a large number of candidates by testing,
    - with a test case ranking technique to improve the efficiency of group testing.
- Bertolino et al. (2005): the audition framework
  - admission testing when a WS is registered to UDDI
  - run-time monitoring services, on both functional and non-functional behaviours, after a service is registered in a UDDI server
- Service test governance (STG) (2009):
  - incorporates testing into a wider context of quality assurance of WS
  - imposes a set of policies, procedures and documented standards on WS development, etc.
  - Bertolino and Polini admitted (2009): "on a pure SOA based scenario the framework is not applicable".

Both lines of work recognised the need for collaboration in testing WS, but the technical details of how multiple parties collaborate in WS testing were left open.


Thank You