Unit 6:

Software Review; Software Testing; and Software Metrics

Chapter 15 Review Techniques

Chapter 17
SOFTWARE TESTING STRATEGIES

Chapter 18
TESTING CONVENTIONAL APPLICATIONS

Chapter 19
TESTING OBJECT-ORIENTED APPLICATIONS

Chapter 20
TESTING WEB APPLICATIONS



Chapter 15 Review Techniques


Q.1 What is a review? Explain review techniques


What Are Reviews?



a meeting conducted by technical people for technical people



a technical assessment of a work product created during the software engineering process



a software quality assurance mechanism



a training ground


What Reviews Are Not



A project summary or progress assessment



A meeting intended solely to impart information



A mechanism for political or personal reprisal!


What Do We Look For?



Errors and
defects



Error

a quality problem found before the software is released to end users



Defect

a quality problem found only after the software has been released to end users



We make this distinction because errors and defects have very different economic, business, psychological, and human impact



However, the temporal distinction made between errors and defects in this book is not mainstream thinking


Defect Amplification



A defect amplification model [IBM81] can be used to illustrate the generation and detection of errors during the design and code generation actions of a software process.
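
A minimal sketch (in Python) of how such a model can be exercised; the step structure and all numbers below are illustrative assumptions, not values from the text:

    # Assumed structure of a defect amplification step: errors arriving from the
    # previous step are partly passed through, partly amplified, new errors are
    # added, and a fraction is then removed by the review for that step.
    def amplification_step(passed_in, frac_amplified, amplification_ratio,
                           newly_generated, detection_efficiency):
        passed_through = passed_in * (1.0 - frac_amplified)
        amplified = passed_in * frac_amplified * amplification_ratio
        total = passed_through + amplified + newly_generated
        return total * (1.0 - detection_efficiency)   # errors sent to next step

    # Hypothetical chain: preliminary design -> detail design -> code
    errors = amplification_step(0, 0.0, 1.0, 10, 0.5)
    errors = amplification_step(errors, 0.5, 1.5, 25, 0.6)
    errors = amplification_step(errors, 0.3, 2.0, 25, 0.7)
    print(round(errors, 1))   # errors passed on to testing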


Metrics:




The total review effort and the total number of errors discovered are defined as:



Ereview = Ep + Ea + Er

Errtot = Errminor + Errmajor

Defect density represents the errors found per unit of work product reviewed:

Defect density = Errtot / WPS



where …




Preparation effort, Ep

the effort (in person-hours) required to review a work product prior to the actual review meeting



Assessment effort, Ea

the effort (in person-hours) expended during the actual review



Rework effort, Er

the effort (in person-hours) dedicated to the correction of those errors uncovered during the review



Work product size, WPS

a measure of the size of the work product that has been reviewed (e.g., the number of UML models, the number of document pages, or the number of lines of code)



Minor errors found, Errminor

the number of errors found that can be categorized as minor (requiring less than some pre-specified effort to correct)



Major errors found, Errmajor

the number of errors found that can be categorized as major (requiring more than some pre-specified effort to correct)
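
A minimal sketch (Python, with illustrative numbers) showing how the review metrics defined above combine:

    # Review effort in person-hours: preparation, assessment, rework
    E_p, E_a, E_r = 4.0, 6.0, 3.0
    E_review = E_p + E_a + E_r

    # Errors found during the review, split by severity
    Err_minor, Err_major = 12, 3
    Err_tot = Err_minor + Err_major

    # Work product size, e.g. number of document pages reviewed
    WPS = 40
    defect_density = Err_tot / WPS   # errors per page

    print(E_review, Err_tot, defect_density)   # 13.0 16 0.4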



Q.2 Give a short note on informal reviews


Informal Reviews




Informal reviews include:



a simple desk check of a software engineering work product with a colleague



a casual meeting (involving more than two people) for the purpose of reviewing a work product, or



the review-oriented aspects of pair programming



pair programming encourages continuous review as a work product (design or code) is created.



The benefit is immediate discovery of errors and better work product quality as a
consequence.


FTR (Formal Technical Reviews)





The objectives of an FTR are:



to uncover errors in function, logic, or implementation for any representation of the
software



to verify that the software under review meets its requirements



to ensure that the software has been represented according to predefined standards



to
achieve software that is developed in a uniform manner



to make projects more manageable



The FTR is actually a class of reviews that includes walkthroughs and inspections.


Q.3 Give a short note on the review meeting


The Review Meeting:




Between three and
five people (typically) should be involved in the review.



Advance preparation should occur but should require no more than two hours of work for each
person.



The duration of the review meeting should be less than two hours.



Focus is on a work product (e.g.
, a portion of a requirements model, a detailed component
design, source code for a component)


The Players:



Producer

the individual who has developed the work product



informs the project leader that the work product is complete and that a review is required



Review leader

evaluates the product for readiness, generates copies of product materials, and
distributes them to two or three
reviewers
for advance preparation.



Reviewer(s)

expected to spend between one and two hours reviewing the product, making notes, and otherwise becoming familiar with the work.



Recorder

reviewer who records (in writing) all important issues raised during the review




Q.4 Prepare Review Options Matrix





Chapter 17
SOFTWARE TESTING STRATEGIES


Q.1 What is testing? Explain the testing strategies approach

Software Testing:

Testing is the process of exercising a program with the specific intent of finding errors prior to delivery
to the end user.

What Testing Shows:






Testing strategies Approach:




To perform effective testing,

you should conduct effective technical reviews. By doing this, many
errors will be eliminated before testing commences.



Testing begins at the component level and works "outward" toward the integration of the entire computer-based system.



Different testing techniques are appropriate for different software engineering approaches and at different points in time.



Testing is conducted by the developer of the software and (for large projects) an independent
test group.



Testing and debugging are different activities, but debugging must be accommodated in any testing strategy.


V & V:




Verification

refers to the set of tasks that ensure that software correctly implements a specific
function.



Validation

refers to a different set of tasks that ensure that the software that has been built is
traceable to customer requirements. Boehm [Boe81] states this another way:



Verification:

"Are we building the product right?"



Validation:

"Are we building the rig
ht product?"

Who Tests the Software?




Testing strategies map:






Q.2 Define testing strategy issues


Testing Strategy Issues





We begin by ‘testing-in-the-small’ and move toward ‘testing-in-the-large’



For conventional software



The module
(component) is our initial focus



Integration of modules follows



For OO software



our focus when “testing in the small” changes from an individual module (the conventional view) to an OO class that encompasses attributes and operations and implies communication and collaboration



Specify product requirements in a quantifiable manner long before testing commences.



State testing objectives explicitly.



Understand the users of the software and develop a profile for each user category.



Develop a testing plan that

emphasizes “rapid cycle testing.”



Build “robust” software that is designed to test itself



Use effective technical reviews as a filter prior to testing



Conduct technical reviews to assess the test strategy and test cases themselves.



Develop a continuous improvement approach for the testing process.


Q.3 Explain Testing Strategy for conventional software


Unit Testing



Integration Testing:

Options:



the “big bang” approach



an incremental construction strategy

Top Down Integration:





Bottom-Up Integration:





Sandwich Testing:



Regression Testing:




Regression testing is the re-execution of some subset of tests that have already been conducted to ensure that changes have not propagated unintended side effects



Whenever software is corrected, some aspect of the software configuration (the program, its
documentation, or the data that support it) is changed.



Regression testing helps to ensure that changes (due to testing or for other reasons) do not introduce unintended behavior or additional errors.



Regression testing may be conducted manually, by re-executing a subset of all test cases, or using automated capture/playback tools.


Smoke Testing:



A common approach for creating “daily builds” for product software



Smoke testing steps:



Software components that have been translated into code are integrated into a “build.”



A build includes all data files, libraries, reusable modules, and engineered
components that are required to implement one or more product functions.



A series of tests is designed to expose errors that will keep the build from properly
performing its function.



The intent should be to uncover “show stopper” errors that have the highest
likelihood of throwing the software project behind schedule.



The build is integrated with other builds and the entire product (in its current form) is smoke tested daily.



The integration approach may be top down or bottom up.





Q.4 Explain the testing strategy for object-oriented software

Object-Oriented Testing:



begins by evaluating the correctness and consistency of the analysis and design models



testing strategy changes



the concept of the ‘unit’ broadens due to encapsulation



integration focuses on classes and their execution across a ‘thread’ or in the context of a usage scenario



validation uses conventional black box methods



test case design draws on conventional methods, but also encompasses special features


Broadening the View of “Testing”:


OO Testing Strategy:



class testing is the equivalent of unit testing



operations

within the class are tested



the state behavior of the class is examined



integration applies three different strategies



thread-based testing

integrates the set of classes required to respond to one input or event



use-based testing

integrates the set of classes required to respond to one use case



cluster testing

integrates the set of classes required to demonstrate one collaboration


Testing the CRC Model:


1. Revisit the CRC model and the object-relationship model.

2. Inspect the description of each CRC index card to determine if a delegated responsibility is part of the collaborator’s definition.

3. Invert the connection to ensure that each collaborator that is asked for service is receiving requests from a reasonable source.

4. Using the inverted connections examined in step 3, determine whether other classes might be required or whether responsibilities are properly grouped among the classes.

5. Determine whether widely requested responsibilities might be combined into a single responsibility.

6. Steps 1 to 5 are applied iteratively to each class and through each evolution of the analysis model.





Chapter 18
TESTING CONVENTIONAL APPLICATIONS


Q.1 Explain testing for conventional applications


What is a “Good” Test?



A good test has a high probability

of finding an error



A good test is not redundant.



A good test should be “best of breed”



A good test should be neither too simple nor too complex


Internal and External Views



to ensure that "all gears mesh," that is, internal operations are performed acco
rding to
specifications and all internal components have been adequately exercised. Any engineered
product (and most other things) can be tested in one of two ways:



Knowing the specified function that a product has been designed to perform, tests can be conducted that demonstrate each function is fully operational while at the same time searching for errors in each function;



Knowing the internal workings of a product, tests can be conducted


Test Case Design:




Exhaustive Testing:


Selective Testing:


Software Testing:













1. White-Box Testing:






What Do We Cover?




Basis Path Testing:



Cyclomatic Complexity:


Deriving Test Cases:




Summarizing:



Using the design or code as a foundation, draw a corresponding flow graph.



Determine the
cyclomatic complexity of the resultant flow graph.



Determine a basis set of linearly independent paths.



Prepare test cases that will force execution of each path in the basis set.
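
A minimal sketch (with an illustrative flow graph, not one from the text) of the complexity step above, using V(G) = E - N + 2 for a flow graph with E edges and N nodes:

    # Flow graph as an adjacency list; node 2 is the only predicate node.
    flow_graph = {1: [2], 2: [3, 4], 3: [5], 4: [5], 5: []}

    N = len(flow_graph)
    E = sum(len(successors) for successors in flow_graph.values())
    V_of_G = E - N + 2
    print(V_of_G)   # 2 -> basis set of two independent paths: 1-2-3-5 and 1-2-4-5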


Control Structure Testing:



Condition testing


a test case design method that exercises the logical conditions contained in a program module



Data flow testing


selects test paths of a program according to the locations of definitions and
uses of variables in the program



Data Flow Testing:



The data flow testing method [Fra93]
selects test paths of a program according to the locations
of definitions and uses of variables in the program.



Assume that each statement in a program is assigned a unique statement number and that each function does not modify its parameters or global variables. For a statement with S as its statement number:



DEF(S) = {X | statement S contains a definition of X}



USE(S) = {X | statement S contains a use of X}



A definition-use (DU) chain of variable X is of the form [X, S, S'], where S and S' are statement numbers, X is in DEF(S) and USE(S'), and the definition of X in statement S is live at statement S'.






Loop Testing:


2. Black-Box Testing


Black-Box Testing:



How is functional validity tested?



How is system behavior and performance tested?



What classes of input will make good test cases?



Is the system particularly sensitive to certain input values?



How are the
boundaries of a data class isolated?



What data rates and data volume can the system tolerate?



What effect will specific combinations of data have on system operation?

Graph-Based Methods:


Equivalence Partitioning:






Boundary Value Analysis:




Comparison Testing:




Used only in situations in which the reliability of software is absolutely critical (e.g., human-rated systems)



Separate software engineering teams develop independent versions of an application
using the same specification




Each version can be tested with the same test data to ensure that all provide identical output



Then all versions are executed in parallel with real-time comparison of results to ensure consistency

Orthogonal Array Testing:



Used when the number of input parameters
is small and the values that each of the parameters
may take are clearly bounded
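
A minimal sketch of the idea (the L4 array and the parameter interpretation are illustrative assumptions): for three two-valued parameters, an L4(2^3) orthogonal array covers every pairwise value combination with four test cases instead of the eight exhaustive ones:

    from itertools import combinations

    # Rows are test cases; columns could be, e.g., browser, OS, connection type.
    L4 = [(1, 1, 1), (1, 2, 2), (2, 1, 2), (2, 2, 1)]

    # Every pair of columns contains all four value combinations exactly once.
    for c1, c2 in combinations(range(3), 2):
        assert {(row[c1], row[c2]) for row in L4} == {(1, 1), (1, 2), (2, 1), (2, 2)}
    print("pairwise coverage with 4 of the 8 possible test cases")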




Model-Based Testing:



Analyze an existing behavioral model for the software or create one.



Recall that a behavioral model indicates how software will respond to external events or stimuli.



Traverse the behavioral model and specify the inputs that will force the software to make the
transition from state to state.



The inputs will trigger events that will cause the transition to occur.



Review the behavioral model and note the expected outputs as the software makes the transition from state to state.



Execute the test cases.



Compare actual and expected results and take corrective action as required.




Chapter 19
TESTING OBJECT-ORIENTED APPLICATIONS


Q.1 Explain testing for object-oriented systems


OO Testing



To adequately test OO systems, three things must be done:



the definition of testing must be broadened to include error discovery techniques applied to object-oriented analysis and design models



the strategy for unit and integration testing must change significantly, and



the design of test cases must account for the unique characteristics of OO software.


Testing OO Models:



The review of OO analysis and design models is especially useful because

the same semantic
constructs (e.g., classes, attributes, operations, messages) appear at the analysis, design, and
code level



Therefore, a problem in the definition of class attributes that is uncovered during analysis will circumvent side effects that might occur if the problem were not discovered until design or code (or even the next iteration of analysis).


Correctness of OO Models:



During analysis and design, semantic correctness can be assessed based on the model’s conformance to the real-world problem domain.



If the model accurately reflects the real world (to a level of detail that is appropriate to the stage
of development at which the model is reviewed) then it is semantically correct.



To determine whether the model does, in fact, reflect real
world requirements, it should be
presented to problem domain experts who will examine the class definitions and hierarchy for
omissions and ambiguity.



Class relationships (instance connections) are evaluated to determine whether they accurately reflect real-world object connections.


Class Model Consistency:



Revisit the CRC model and the object-relationship model.



Inspect the description of each CRC index card to determine if a delegated responsibility is part
of the collaborator’s definition.



Invert the
connection to ensure that each collaborator that is asked for service is receiving
requests from a reasonable source.



Using the inverted connections examined in the preceding step, determine whether other classes might be required or whether responsibilities are properly grouped among the classes.



Determine whether widely requested responsibilities might be combined into a single
responsibility.



OO Testing Strategies:



Unit testing



the concept of the unit changes



the smallest testable unit is the encapsulated class



a single operation can no longer be tested in isolation (the conventional view of unit
testing) but rather, as part of a class



Integration Testing



Thread-based testing

integrates the set of classes required to respond to one input or event for the system



Use-based testing begins the construction of the system by testing those classes (called independent classes) that use very few (if any) server classes. After the independent classes are tested, the next layer of classes (called dependent classes), which use the independent classes, is tested.



Cluster testing [McG94] defines a cluster of collaborating classes (determined by examining the CRC and object-relationship model), which is exercised by designing test cases that attempt to uncover errors in the collaborations




Validation Testing



details
of class connections disappear



draw upon use cases (Chapters 5 and 6) that are part of the requirements model



Conventional black-box testing methods (Chapter 18) can be used to drive validation tests


Testing Methods:



Fault-based testing




The tester looks

for plausible faults (i.e., aspects of the implementation of the system
that may result in defects). To determine whether these faults exist, test cases are
designed to exercise the design or code.



Class Testing and the Class Hierarchy



Inheritance does not obviate the need for thorough testing of all derived classes. In fact, it can actually complicate the testing process.



Scenario-Based Test Design

Scenario-based testing concentrates on what the user does, not what the product does. This means capturing the tasks (via use-cases) that the user has to perform, then applying them and their variants as tests.



Chapter 20
TESTING WEB APPLICATIONS


Q.1 Explain testing for web applications


Errors in a WebApp:



Because many types of WebApp

tests uncover problems that are first evidenced on the client
side, you often see a symptom of the error, not the error itself.



Because a WebApp is implemented in a number of different configurations and within different
environments, it may be difficult
or impossible to reproduce an error outside the environment
in which the error was originally encountered.



Although some errors are the result of incorrect design or improper HTML (or other programming language) coding, many errors can be traced to the WebApp configuration.



Because WebApps reside within a client/server architecture, errors can be difficult to trace
across three architectural layers: the client, the server, or the network itself.



Some errors are due to the static operating environment (i.e., the specific configuration in which testing is conducted), while others are attributable to the dynamic operating environment (i.e., instantaneous resource loading or time-related errors).


WebApp Testing Strategy:



The content model for the WebApp is
reviewed to uncover errors.



The interface model is reviewed to ensure that all use-cases can be accommodated.



The design model for the WebApp is reviewed to uncover navigation errors.



The user interface is tested to uncover errors in presentation and/or

navigation mechanics.



Selected functional components are unit tested.



Navigation throughout the architecture is tested.



The WebApp is implemented in a variety of different environmental configurations and is tested for compatibility with each configuration.



Security tests are conducted in an attempt to exploit vulnerabilities in the WebApp or within its
environment.



Performance tests are conducted.



The WebApp is tested by a controlled and monitored population of end-users; the results of their interaction with the system are evaluated for content and navigation errors, usability concerns, compatibility concerns, and WebApp reliability and performance



The Testing Process:



Content Testing:



Content testing has three important objectives:




to uncover syntactic errors (e.g., typos, grammar mistakes) in text-based documents, graphical representations, and other media



to uncover semantic errors (i.e., errors in the accuracy or completeness of information) in any content object presented as navigation occurs, and



to find errors in the organization or structure of content that is presented to the end-user.



[Figure: the WebApp design pyramid and its tests — interface design, aesthetic design, content design, navigation design, architecture design, and component design (spanning user and technology) mapped to content testing, interface testing, component testing, navigation testing, performance testing, configuration testing, and security testing.]


Database Testing:



User Interface Testing:



Interface features are tested to ensure that design rules, aesthetics, and related visual content are available for the user without error.



Individual interface mechanisms are tested in a manner that is analogous to unit testing.



Each interface mechanism is tested within the context of a use-case or NSU for a specific user category.



The complete interface is tested against selected use-cases and NSUs to uncover errors in the semantics of the interface.



The interface is tested within a variety of environments (e.g., browsers) to ensure that it will be
compatible.



Testing Interface Mechanisms:




Links

navigation mechanisms that link the user to some other content object or function.



Forms

a structured document containing blank fields that are filled in by the user. The data
contained in the fields are used as input to one or more WebApp functions.



Client-side scripting

a list of programmed commands in a scripting language (e.g., JavaScript) that handle information input via forms or other user interactions



Dynamic HTML

leads to content objects that are manipulated on the client side using scripting or cascading style sheets (CSS).

Client-side pop-up windows

small windows that pop up without user interaction. These windows can be content-oriented and may require some form of user interaction




CGI scripts

a common gateway interface (CGI) script implements a standard method that allows a Web server to interact dynamically with users (e.g., a WebApp that contains forms may use a CGI script to process the data contained in the form once it is submitted by the user).



Streaming content

rather than waiting for a request from the client side, content objects are downloaded automatically from the server side. This approach is sometimes called "push" technology because the server pushes data to the client.



Cookies

a block of data sent by the server and stored by a browser as a consequence of a specific user interaction. The content of the data is WebApp-specific (e.g., user identification data or a list of items that have been selected for purchase by the user).

Application-specific interface mechanisms

include one or more "macro" interface mechanisms such as a shopping cart, credit card processing, or a shipping cost calculator



Usability Tests:



Design by the WebE team … executed by end-users



Testing sequence …



Define a set of usability testing categories and identify goals for each.



Design tests that will enable each goal to be evaluated.



Select participants who will conduct the tests.



Instrument participants’ interaction with the WebApp while testing is conducted.



Develop a mechanism for assessing the usability of the WebApp



different levels of abstraction:




the usability of a specific interface mechanism (e.g., a form) can be assessed



the usability of a complete Web page (encompassing interface mechanisms, data objects, and related functions) can be evaluated


Compatibility Testing:




Compatibility testing is to define a set of “commonly encountered” client side computing
configurations and their variants



Create a tree structure identifying



each computing platform



typical display devices



the operating systems supported on the platform



the browsers available



likely Internet connection speeds



similar information.



Derive a series of compatibility validation tests



derived from existing interface tests, navigation tests, performance tests, and security tests.



intent of these tests is to uncover errors or execution problems that can be traced to
configuration differences.


Component-Level Testing:



Focuses on a set of tests that attempt to uncover errors in WebApp

functions



Conventional black-box and white-box test case design methods can be used



Database testing is often an integral part of the component-testing regime


Navigation Testing:



The following navigation mechanisms should be tested:



Navigation links

these mechanisms include internal links within the WebApp, external links to other WebApps, and anchors within a specific Web page.



Redirects

these links come into play when a user requests a non-existent URL or selects a link whose destination has been removed or whose name has changed.



Bookmarks

although bookmarks are a browser function, the WebApp should be tested
to ensure that a meaningful page title can be extracted as the bookmark is created.



Frames and framesets

tested for correct content, proper layout and sizing, download performance, and browser compatibility



Site maps

Each site map entry should be tested to ensure that the link takes the user
to the proper content or functionality.



Internal search engines

Search engine testing validates the accuracy and completeness of the search, the error-handling properties of the search engine, and advanced search features


Navigation Testing Semantics:



Is the NSU achieved in its entirety without error?



Is every navigation node (defined for a NSU) reachable within the context of the navigation paths defined for the NSU?



If the NSU can be achieved using more than one navigation path, has every relevant path been
tested?



If guidance is provided by the user interface to assist in navigation, are directions correct and understandable as navigation proceeds?



Is there a mechanism (other than the browser ‘back’ arrow) for returning to the preceding navigation node and to the beginning of the navigation path?



Do mechanisms for navigation within a large navigation node
(i.e., a long web page) work
properly?



If a function is to be executed at a node and the user chooses not to provide input, can the
remainder of the NSU be completed?


Configuration Testing:



Server-side



Is the WebApp fully compatible with the server OS?



Are system files, directories, and related system data created correctly when the WebApp is operational?



Do system security measures (e.g., firewalls or encryption) allow the WebApp to execute and service users without interference or performance degradation?



Has the WebApp been tested with the distributed server configuration (if one exists)
that has been chosen?



Is the WebApp properly integrated with database software? Is the WebApp sensitive to
different versions of database software?



Do server-side WebApp scripts execute properly?



Have system administrator errors been examined for their effect on WebApp operations?



If proxy servers are used, have differences in their configuration been addressed with on-site testing?




Client-side



Hardware

CPU, memory,
storage and printing devices



Operating systems

Linux, Macintosh OS, Microsoft Windows, a mobile-based OS



Browser software

Internet Explorer, Mozilla/Netscape, Opera, Safari, and others



User interface components

Active X, Java applets and others



Plug-ins

QuickTime, RealPlayer, and many others



Connectivity

cable, DSL, regular modem, T1



The number of configuration variables must be reduced to a manageable number


Security Testing:




Designed to probe vulnerabilities of the client-side environment, the network communications that occur as data are passed from client to server and back again, and the server-side environment



On the client side, vulnerabilities can often be traced to pre-existing bugs in browsers, e-mail programs, or communication software.



On the server side, vulnerabilities include denial-of-service attacks and malicious scripts that can be passed along to the client side or used to disable server operations


Performance Testing:



Does the server response time degrade to a point where it is noticeable and unacceptable?



At what point (in terms of users, transactions or data loading) does performance become
unacceptable?



What system components are responsible for performance degradation?



What is the average response time for users under a variety of

loading conditions?



Does performance degradation have an impact on system security?



Is WebApp reliability or accuracy affected as the load on the system grows?



What happens when loads that are greater than maximum server capacity are applied?


Load Testing:



The intent is to determine how the WebApp and its server-side environment will respond to various loading conditions



N, the number of concurrent users



T, the number of on-line transactions per unit of time



D, the data load processed by the server per transaction



Overall throughput, P, is computed in the following manner:

P = N x T x D
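
A minimal sketch of the throughput relationship above, with made-up numbers:

    N, T, D = 200, 3, 50   # concurrent users, transactions per unit time, data load per transaction
    P = N * T * D          # overall throughput, as defined above
    print(P)               # 30000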


Stress Testing:



Does the system degrade ‘gently’ or does the server shut down as capacity is exceeded?



Does server software generate "server not available" messages? More generally, are users aware that they cannot reach the server?



Does the server queue requests for resources and empty the queue once capacity demands
diminish?



Are transactions lost as capacity is exceeded?



Is data integrity affected as capacity is

exceeded?



What values of N, T, and D force the server environment to fail? How does failure manifest itself? Are automated notifications sent to technical support staff at the server site?



If the system does fail, how long will it take to come back on-line?



Are certain WebApp functions (e.g., compute intensive functionality, data streaming
capabilities) discontinued as capacity reaches the 80 or 90 percent level?




Unit 7:

Product Metrics; and Software Project Estimation

Chapter 23 Product Metrics

Chapter 26
Estimation for Software Projects

Chapter 27 Project Scheduling



Chapter 23 Product Metrics


Q.1 What is a metric? Explain metrics for web applications


The IEEE glossary defines a metric as "a quantitative measure of the degree to which a system, component, or process possesses a given attribute."


Design Metrics for WebApps:



Does the user interface promote usability?



Are the aesthetics of the WebApp appropriate for the application domain and pleasing to the
user?



Is the content designed in a manner

that imparts the most information with the least effort?



Is navigation efficient and straightforward?



Has the WebApp architecture been designed to accommodate the special goals and objectives
of WebApp

users, the structure of content and functionality, and the flow of navigation required
to use the system effectively?



Are components designed in a manner that reduces procedural complexity and enhances correctness, reliability, and performance?


Q.2 Explain function-based metrics


Function-Based Metrics:




The function point metric (FP), first proposed by Albrecht [ALB79], can be used effectively as a means for measuring the functionality delivered by a system.



Function points are derived using an empirical relationship based on countable (direct) measures of software's information domain and assessments of software complexity



Information domain values are defined in the following manner:



number of external inputs (EIs)



number of external outputs (EOs)



number of external inquiries (EQs)



number of internal logical files (ILFs)



Number of external interface files (EIFs)
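
A minimal sketch of how these information domain values are typically turned into a function point count; the average-complexity weights and the 0.65 + 0.01 x sum(Fi) adjustment are the commonly quoted form of the FP computation, and all counts here are illustrative:

    counts  = {"EI": 12, "EO": 8, "EQ": 6, "ILF": 4, "EIF": 2}   # domain counts
    weights = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}   # average complexity

    count_total = sum(counts[k] * weights[k] for k in counts)

    F = [3] * 14                               # fourteen value adjustment factors, each rated 0-5
    FP = count_total * (0.65 + 0.01 * sum(F))  # adjusted function points
    print(count_total, round(FP, 1))           # 166 177.6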





Q.3 Explain Metrics for OO Design




Whitmire [Whi97] describes nine distinct and measurable characteristics of an OO design:



Size



Size
is defined in terms of four views: population, volume, length, and
functionality



Complexity



How classes of an OO design are interrelated to one another



Coupling



The physical connections between elements of the OO design



Sufficiency



“the degree to which
an abstraction possesses the features required of it, or the
degree to which a design component possesses features in its abstraction, from
the point of view of the current application.”



Completeness



An indirect implication about the degree to which the abstraction or design component can be reused



Cohesion



The degree to which all operations working together achieve a single, well-defined purpose



Primitiveness



Applied to both operations and classes, the degree to which an operation is
atomic



Similarity



The degree to which two or more classes are similar in terms of their structure,
function, behavior, or purpose



Volatility



Measures the likelihood that a change will occur



Q.4 Explain Metrics for Testing




Testing effort can also be estimated using
metrics derived from Halstead measures



Binder [Bin94] suggests a broad array of design metrics that have a direct influence on the
“testability” of an OO system.



Lack of cohesion in methods (LCOM).



Percent public and protected (PAP).



Public access to data members (PAD).



Number of root classes (NOR).



Fan-in (FIN).



Number of children (NOC) and depth of the inheritance tree (DIT).


Q.5 Define the following terms: (1) measures, (2) metrics, and (3) indicators




A measure provides a quantitative indication of the extent, amount, dimension, capacity, or size of some attribute of a product or process



The IEEE glossary defines a

metric

as “a quantitative measure of the degree to which a system,
component, or process possesses a given attribute.”



An indicator is a metric or combination of metrics that provide insight into the software process, a software project, or the product itself


Details of Measurement Process



Formulation.

The derivation of software measures and metrics appropriate for the
representation of the software that is being considered.



Collection.

The mechanism used to accumulate data required to derive the formulated metrics.



Analysis.
The computation of metrics
and the application of mathematical tools.



Interpretation.

The evaluation of metrics results in an effort to gain insight into the quality of the
representation.



Feedback.

Recommendations derived from the interpretation of product metrics transmitted to the software team.


Metrics Attributes:



Simple and computable.

It should be relatively easy to learn how to derive the metric, and its
computation should not demand inordinate effort or time



Empirically and intuitively persuasive.

The metric should satisfy the engineer’s intuitive notions about the product attribute under consideration



Consistent and objective.

The metric should always yield results that are unambiguous.



Consistent in its use of units and dimensions.

The mathematical computation of the metric should use measures that do not lead to bizarre combinations of units.



Programming language independent.

Metrics should be based on the analysis model, the
design model, or the structure of the program itself.



Effective mechanism for quality feedback.

That is, the metric should provide a software engineer
with information that can lead to a higher quality end product


Chapter 26
Estimation for Software Projects


Q.1 What is software project planning? Explain the project planning tasks

Software Project Planning




Project Planning Task:




Establish project scope



Determine feasibility



Analyze risks




Risk analysis is considered in detail in Chapter 25.



Define required resources



Determine required human resources



Define reusable software resources



Identify
environmental resources




Estimate cost and effort



Decompose the problem



Develop two or more estimates using size, function points, process tasks, or use-cases



Reconcile the estimates



Develop a project schedule



Scheduling is considered in detail in Chapter 27.



Establish a meaningful task set



Define a task network



Use scheduling tools to develop a timeline chart



Define schedule tracking mechanisms


Q.2 What is software estimation? Explain project estimation in detail

Estimation:



Estimation of resources, cost, and schedule for a software engineering effort requires



experience



access to good historical information (metrics)



the courage to commit to quantitative predictions when qualitative information is all
that exists



Estimation
carries inherent risk and this risk leads to uncertainty



Scope of Estimation:




Software scope

describes



the functions and features that are to be delivered to end-users



the data that are input and output



the “content” that is presented to users as a
consequence of using the software



the performance, constraints, interfaces, and reliability that
bound

the system.



Scope is defined using one of two techniques:



A narrative description of software scope is developed after communication with all stakeholders.



A set of use-cases is developed by end-users.






Understand the customer's needs



understand the business context



understand the project boundaries



understand the customer’s motivation



understand the likely paths for change



understand that ...



Project Estimation:





Project Estimation Techniques:




Past (similar) project experience



Conventional estimation techniques




task breakdown and effort estimates




size (e.g., FP) estimates



Empirical models



Automated tools


1. Functional Decomposition








2. Conventional Techniques:




compute LOC/FP using estimates of information domain values



use historical data to build estimates for the project








Example: LOC Approach:




Example: FP Approach:


Process-Based Estimation:




Q.3 Explain the COCOMO II models



COCOMO Stands for Constructive Cost Model



COCOMO II is actually a hierarchy of estimation models that address the following areas:



Application composition model.
Used during the early stages of software
engineering, when prototyping of user
interfaces, consideration of software and
system interaction, assessment of performance, and evaluation of technology
maturity are paramount.



Early design stage model.

Used once requirements have been stabilized and basic software architecture has been established.



Post-architecture-stage model.

Used during the construction of the software.


The Software Equation:

A dynamic multivariable model



E = [LOC x B^0.333 / P]^3 x (1 / t^4)

where

E = effort in person-months or person-years

t = project duration in months or years

B = "special skills factor"

P = "productivity parameter"
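
A minimal sketch evaluating the software equation above; every value below is illustrative, and the units of E follow the calibration of the productivity parameter P:

    LOC = 33_200    # estimated size in lines of code
    B   = 0.39      # special skills factor
    P   = 12_000    # productivity parameter
    t   = 1.3       # project duration

    E = (LOC * B ** 0.333 / P) ** 3 * (1 / t ** 4)
    print(round(E, 2))   # estimated effort for these inputs (about 2.9)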


Q.4 Explain Estimation for OO Projects




Develop estimates using effort decomposition, FP analysis, and any other method that is
applicable for conventional applications.



Using object-oriented requirements modeling (Chapter 6), develop use-cases and determine a count.



From the analysis model, determine the number of key classes (called analysis classes in Chapter
6).



Categorize the type of interface for the application and develop a multiplier for support classes:



Interface type                 Multiplier

No GUI                         2.0
Text-based user interface      2.25
GUI                            2.5
Complex GUI                    3.0




Multiply the number of key classes (step 3) by the multiplier to obtain an
estimate for the
number of support classes.



Multiply the total number of classes (key + support) by the average number of work-units per class. Lorenz and Kidd suggest 15 to 20 person-days per class.



Cross check the class-based estimate by multiplying the average number of work-units per use-case
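
A minimal sketch of the class-based estimate described above; the counts are illustrative, while the multiplier and the 15-20 person-days per class figure come from the steps and table above:

    key_classes    = 20      # analysis (key) classes identified in the model
    multiplier     = 2.25    # text-based user interface, from the table above
    days_per_class = 17.5    # midpoint of the 15-20 person-days range

    support_classes = key_classes * multiplier
    total_classes   = key_classes + support_classes
    effort_person_days = total_classes * days_per_class
    print(support_classes, total_classes, effort_person_days)   # 45.0 65.0 1137.5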






Q.5 Give a short note on estimation for agile projects




Each user scenario (a mini use-case) is considered separately for estimation purposes.



The scenario is decomposed into the set of software engineering tasks that will be required to develop it.



Each task is estimated separately. Note: estimation can be based on historical data, an empirical
model, or “experience.”



Alternatively, the ‘volume’ of the scenario can be estimated in LOC, FP, or some other volume-oriented measure (e.g., use-case count).



Estimates for each task are summed to create an estimate for the scenario.



Alternatively, the volume estimate for the scenario is translated into effort using
historical data.

The effort estimates for all scenarios that are to be implemented for a given software increment are summed to develop the effort estimate for the increment.
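
A minimal sketch of the bottom-up summation described above (the scenario names, task names, and estimates are made up):

    # Task-level estimates (person-days) for each user scenario in one increment
    increment = {
        "search catalog": {"design": 1.0, "code": 2.0, "test": 1.0},
        "checkout":       {"design": 2.0, "code": 3.5, "test": 1.5},
    }

    scenario_effort  = {name: sum(tasks.values()) for name, tasks in increment.items()}
    increment_effort = sum(scenario_effort.values())
    print(scenario_effort)    # {'search catalog': 4.0, 'checkout': 7.0}
    print(increment_effort)   # 11.0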


The Make-Buy Decision




Computing Expected Cost:







Chapter 27 Project Scheduling


Q.1 Give a short note on project scheduling


Why Are Projects Late?



an unrealistic deadline established by someone outside the software development group



changing customer requirements that are not reflected in schedule changes;



an honest underestimate of the amount of effort and/or the
number of resources that will be
required to do the job;



predictable and/or unpredictable risks that were not considered when the project commenced;



technical difficulties that could not have been foreseen in advance;



human difficulties that could not have

been foreseen in advance;



miscommunication among project staff that results in delays;



a failure by project management to recognize that the project is falling behind schedule and a
lack of action to correct the problem


Scheduling Principles:



compartmentalization

define distinct tasks



interdependency

indicate task interrelationship



effort validation

be sure resources are available



defined responsibilities

people must be assigned



defined outcomes

each task must have an output



defined milestones

review for quality





Effort Allocation:



1. Defining Task Sets:



determine type of project



assess the degree of rigor required



identify adaptation criteria



select appropriate software engineering tasks




1.1 Define a Task Network:



1.2 Timeline Charts



Q.2 Give a short note on Earned Value Analysis (EVA)




Earned value



is a measure of progress



enables us to assess the "percent of completeness" of a project using quantitative analysis rather than relying on a gut feeling




“provides accurate and reliable readin
gs of performance from as early as 15 percent
into the project.” [Fle98]

Computing Earned Value - I




The budgeted cost of work scheduled (BCWS) is determined for each work task represented in the schedule.




BCWSi is the effort planned for work task i.




To determine progress at a given point along the project schedule, the value of BCWS is the sum of the BCWSi values for all work tasks that should have been completed by that point in time on the project schedule.



The BCWS values for all work tasks are summed to derive the budget at completion, BAC. Hence,

BAC = ∑ (BCWSk) for all tasks k





Next, the value for budgeted cost of work performed (BCWP) is computed.



The value for BCWP is the sum of the BCWS values for all work tasks that have actually

been completed by a point in time on the project schedule.



“the distinction between the BCWS and the BCWP is that the former represents the budget of
the activities that were planned to be completed and the latter represents the budget of the
activities t
hat actually were completed.” [Wil99]



Given values for BCWS, BAC, and BCWP, important progress indicators can be computed:



Schedule performance index, SPI = BCWP/BCWS



Schedule variance, SV = BCWP - BCWS



SPI is an indication of the efficiency with which
the project is utilizing scheduled
resources.
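
A minimal sketch of the earned value indicators defined above, for a hypothetical project snapshot in which all three tasks were scheduled to be finished by now but only the first two are done:

    tasks = [
        {"name": "requirements", "bcws": 10, "done": True},
        {"name": "design",       "bcws": 15, "done": True},
        {"name": "coding",       "bcws": 25, "done": False},
    ]

    BAC  = sum(t["bcws"] for t in tasks)               # budget at completion
    BCWS = sum(t["bcws"] for t in tasks)               # all tasks were scheduled by now
    BCWP = sum(t["bcws"] for t in tasks if t["done"])  # tasks actually completed

    SPI = BCWP / BCWS   # schedule performance index
    SV  = BCWP - BCWS   # schedule variance
    print(BAC, BCWP, round(SPI, 2), SV)   # 50 25 0.5 -25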







End