Object-Oriented Concepts and Principles


UNIT III:

Object-Oriented Concepts and Principles


Overview

The object-oriented software engineering process is similar to that found in the rapid prototyping or spiral paradigms. Even though object-oriented software engineering follows the same steps as the conventional approach (analysis, design, implementation, and testing), it is harder to separate them into discrete activities.

Evolutionary Object-Oriented Process Model

- Customer communication
- Planning
- Risk analysis
- Engineering construction and analysis:
  o Identify candidate classes
  o Look up classes in library
  o Extract classes if available
  o Engineer classes if not available:
    - Object-oriented analysis (OOA)
    - Object-oriented design (OOD)
    - Object-oriented programming (OOP)
    - Object-oriented testing (OOT)
  o Put new classes in library
  o Construct Nth iteration of the system
- Customer evaluation

Object-Oriented Concepts

- Object - encapsulates both data (attributes) and data manipulation functions (called methods, operations, or services)
- Class - a generalized description (template or pattern) that describes a collection of similar objects
- Superclass - a generalized collection of classes from which subclasses inherit
- Subclass - a specialization of a superclass
- Class hierarchy - attributes and methods of a superclass are inherited by its subclasses
- Messages - the means by which objects exchange information with one another
- Inheritance - provides a means for subclasses to reuse existing superclass data and procedures; also provides a mechanism for propagating changes
- Polymorphism - a mechanism that allows several objects in a class hierarchy to have different methods with the same name (instances of each subclass respond to messages by calling their own version of the method)
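As a sketch, the concepts above might look like this in Python (the Shape/Circle/Square hierarchy is illustrative, not from the source):

```python
class Shape:
    """Class (and superclass): encapsulates data (attributes) and methods."""
    def __init__(self, name):
        self.name = name          # attribute (data)

    def area(self):               # method (data manipulation function)
        raise NotImplementedError

class Circle(Shape):              # subclass: inherits from Shape
    def __init__(self, radius):
        super().__init__("circle")
        self.radius = radius

    def area(self):               # polymorphism: Circle's own version of area()
        return 3.14159 * self.radius ** 2

class Square(Shape):              # another subclass in the class hierarchy
    def __init__(self, side):
        super().__init__("square")
        self.side = side

    def area(self):
        return self.side ** 2

# Sending the same "message" (area) to different objects invokes each
# subclass's own method.
shapes = [Circle(1.0), Square(2.0)]
areas = [s.area() for s in shapes]
```

Each object here is an instance of its class; the `area` message illustrates how instances of each subclass respond with their own method.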

Advantages of Object-Oriented Architectures

- Implementation details of data and procedures are hidden from the outside world (this reduces the propagation of side effects when changes are made).
- Data structures and operators are merged in a single entity or class (this facilitates reuse).
- Interfaces among encapsulated objects are simplified (system coupling is reduced, since an object need not be concerned about the details of internal data structures).

Class Construction Options

- Build a new class from scratch without using inheritance
- Use inheritance to create a new class from an existing class that contains most of the desired attributes and operations
- Restructure the class hierarchy so that the required attributes and operations can be inherited by the newly created class
- Override some attributes or operations in an existing class and use inheritance to create a new class with (specialized) private versions of these attributes and operations
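A minimal Python sketch of the second and fourth options above, using a hypothetical Account class:

```python
class Account:                         # existing, already-tested class
    def __init__(self, balance=0.0):
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount

    def withdraw(self, amount):
        self.balance -= amount

class CheckedAccount(Account):         # new class built from Account via inheritance
    def withdraw(self, amount):        # overridden (specialized) operation
        if amount > self.balance:
            raise ValueError("insufficient funds")
        super().withdraw(amount)       # reuse the inherited behavior

a = CheckedAccount(10.0)
a.deposit(5.0)                         # deposit is inherited unchanged
a.withdraw(12.0)                       # leaves balance == 3.0
```

CheckedAccount reuses `deposit` as-is and overrides only `withdraw`, which is the usual trade-off when the existing class already contains most of the desired operations.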

Elements of Object Model:

In IT, as in many other industries, problems of productivity have often been solved by the creation and adoption of standards. More often than not these standards have emerged from a ruthless process of natural selection: e.g., Windows or TCP/IP (the Internet's data transfer protocol).

In the construction of any large software system, there are two key areas to

consider: the architecture of the system (particularly for distributed systems) and

the modelling of the data, relationships and processes.


A number of basic solutions have emerged to address the architectural issues of software development:

_ Open systems
_ Object request architectures
_ Copy based architectures
_ Standard middleware products
_ Web delivery strategies
_ Software and architecture standards

TARMS proposes to publish an Object Model for Finance, as well as an associated network data model and various business process models, placing these in Open Source (see section 3). This will make the models accessible not only to financial institutions, but also to software development companies, data providers, consultants, etc. The models will cover such areas as:

_ Deal Management
_ Instrument Modelling
_ Risk Management
_ Accounting
_ Rates
_ Dates
_ Reference Data
_ Data Management


For Software Development Houses


The benefits for software development houses would be:

_ Shorter development cycles and an easier implementation cycle. A large body of common analysis has already been provided, and interoperable libraries of common functions will be available.

_ Support for specialization. The ability to focus on the value-added parts of their products, thus creating a trend towards specialization.

_ Reduced entry hurdles. The lower risk profile of software purchase will make it easier for small players to break into the business.

_ The lower risk of purchase will shorten the sales cycle.

_ An easier demonstration process. New products can be painlessly hooked into an existing infrastructure.


What Are Software Engineering Metrics?

Metrics are units of measurement. The term "metrics" is also frequently used to mean a set of specific measurements taken on a particular item or process. Software engineering metrics are units of measurement that are used to characterize:

- software engineering products, e.g., designs, source code, and test cases,
- software engineering processes, e.g., the activities of analysis, design, and coding, and
- software engineering people, e.g., the efficiency of an individual tester, or the productivity of an individual designer.

If used properly, software engineering metrics can allow us to:

- quantitatively define success and failure, and/or the degree of success or failure, for a product, a process, or a person,
- identify and quantify improvement, lack of improvement, or degradation in our products, processes, and people,
- make meaningful and useful managerial and technical decisions,
- identify trends, and
- make quantified and meaningful estimates.

Over the years, I have noticed some common trends among software engineering metrics. Here are some observations:

- A single software engineering metric in isolation is seldom useful. However, for a particular process, product, or person, 3 to 5 well-chosen metrics seems to be a practical upper limit, i.e., additional metrics (above 5) do not usually provide a significant return on investment.

- Although multiple metrics must be gathered, the most useful set of metrics for a given person, process, or product may not be known ahead of time. This implies that, when we first begin to study some aspect of software engineering, or a specific software project, we will probably have to use a large number (e.g., 20 to 30, or more) of different metrics. Later, analysis should point out the most useful metrics.

- Metrics are almost always interrelated. Specifically, attempts to influence one metric usually have an impact on other metrics for the same person, process, or product.

- To be useful, metrics must be gathered systematically and regularly -- preferably in an automated manner.

- Metrics must be correlated with reality. This correlation must take place before meaningful decisions, based on the metrics, can be made.

- Faulty analysis (statistical or otherwise) of metrics can render metrics useless, or even harmful.

- To make meaningful metrics-based comparisons, both the similarities and dissimilarities of the people, processes, or products being compared must be known.

- Those gathering metrics must be aware of the items that may influence the metrics they are gathering. For example, there are the "terrible H's," i.e., the Heisenberg effect and the Hawthorne effect.

- Metrics can be harmful. More properly, metrics can be misused.

Object-oriented software engineering metrics are units of measurement that are used to characterize:

- object-oriented software engineering products, e.g., designs, source code, and test cases,
- object-oriented software engineering processes, e.g., the activities of analysis, design, and coding, and
- object-oriented software engineering people, e.g., the efficiency of an individual tester, or the productivity of an individual designer.

Why Are Object-Oriented Software Engineering Metrics Different?

OOSE metrics are different because of:

- localization,
- encapsulation,
- information hiding,
- inheritance, and
- object abstraction techniques.

Localization is the process of placing items in close physical proximity to each other:

- Functional decomposition processes localize information around functions.
- Data-driven approaches localize information around data.
- Object-oriented approaches localize information around objects.

In most conventional software (e.g., software created using functional decomposition), localization is based on functionality. Therefore:

- A great deal of metrics gathering has traditionally focused largely on functions and functionality.
- Units of software were functional in nature, thus metrics focusing on component interrelationships emphasized functional interrelationships, e.g., module coupling.

In object-oriented software, however, localization is based on objects. This means:

- Although we may speak of the functionality provided by an object, at least some of our metrics identification and gathering effort (and possibly a great deal of the effort) must recognize the "object" as the basic unit of software.

- Within systems of objects, the localization between functionality and objects is not a one-to-one relationship. For example, one function may involve several objects, and one object may provide many functions.

Encapsulation is the packaging (or binding together) of a collection of items:

- Low-level examples of encapsulation include records and arrays.
- Subprograms (e.g., procedures, functions, subroutines, and paragraphs) are mid-level mechanisms for encapsulation.
- In object-oriented (and object-based) programming languages, there are still larger encapsulating mechanisms, e.g., C++'s classes, Ada's packages, and Modula-3's modules.

Objects encapsulate:

- knowledge of state, whether statically maintained, calculated upon demand, or otherwise,
- advertised capabilities (sometimes called operations, method selectors, or method interfaces), and the corresponding algorithms used to accomplish these capabilities (often referred to simply as methods),
- [in the case of composite objects] other objects,
- [optionally] exceptions,
- [optionally] constants, and
- [most importantly] concepts.

In many object-oriented programming languages, encapsulation of objects (e.g., classes and their instances) is syntactically and semantically supported by the language. In others, the concept of encapsulation is supported conceptually, but not physically.

Encapsulation has two major impacts on metrics:

- the basic unit will no longer be the subprogram, but rather the object, and
- we will have to modify our thinking on characterizing and estimating systems.
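A short Python sketch of a class as the encapsulating unit, bundling several of the item kinds listed above (the Stack class and its names are hypothetical):

```python
class StackEmpty(Exception):          # [optionally] an encapsulated exception
    pass

class Stack:
    MAX_DEPTH = 100                   # [optionally] an encapsulated constant

    def __init__(self):
        self._items = []              # knowledge of state

    def push(self, item):             # advertised capability and its method
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise StackEmpty          # the exception belongs to the abstraction
        return self._items.pop()

s = Stack()
s.push(1)
s.push(2)
top = s.pop()                         # top == 2
```

Here the object, not any single subprogram, is the unit one would measure: state, operations, a constant, and an exception travel together.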

Information hiding is the suppression (or hiding) of details.

- The general idea is that we show only that information which is necessary to accomplish our immediate goals.
- There are degrees of information hiding, ranging from partially restricted visibility to total invisibility.
- Encapsulation and information hiding are not the same thing, e.g., an item can be encapsulated but still be totally visible.

Information hiding plays a direct role in such metrics as object coupling and the degree of information hiding.
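In Python, the degrees of visibility can be sketched roughly as follows; hiding here is by convention and name mangling rather than enforcement, and the Sensor class is illustrative:

```python
class Sensor:
    def __init__(self):
        self.reading = 0.0        # fully visible: encapsulated but not hidden
        self._scale = 1.5         # partially restricted: "internal" by convention
        self.__offset = 0.1       # name-mangled: the closest Python gets to invisibility

    def value(self):
        # Inside the class, all three attributes are accessible.
        return self.reading * self._scale + self.__offset

s = Sensor()
s.reading = 2.0
v = s.value()                     # 2.0 * 1.5 + 0.1 == 3.1
```

The `reading` attribute shows that encapsulation and hiding differ: it lives inside the object yet is totally visible, while `__offset` is only reachable from outside under its mangled name `_Sensor__offset`.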

Inheritance is a mechanism whereby one object acquires characteristics from one, or more, other objects.

- Some object-oriented languages support only single inheritance, i.e., an object may acquire characteristics directly from only one other object.
- Some object-oriented languages support multiple inheritance, i.e., an object may acquire characteristics directly from two, or more, different objects.
- The types of characteristics which may be inherited, and the specific semantics of inheritance, vary from language to language.

Many object-oriented software engineering metrics are based on inheritance, e.g.:

- number of children (number of immediate specializations),
- number of parents (number of immediate generalizations), and
- class hierarchy nesting level (depth of a class in an inheritance hierarchy).
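These three inheritance metrics can be sketched for Python classes using introspection. The A/B/C/D hierarchy is illustrative, and for multiple inheritance the MRO length is only an approximation of nesting level:

```python
class A: pass
class B(A): pass
class C(A): pass
class D(B): pass

def number_of_children(cls):
    """Number of immediate specializations."""
    return len(cls.__subclasses__())

def number_of_parents(cls):
    """Number of immediate generalizations (excluding the root 'object')."""
    return len([b for b in cls.__bases__ if b is not object])

def nesting_level(cls):
    """Depth of the class in the inheritance hierarchy (object = level 0)."""
    return len(cls.__mro__) - 1

# A has two children (B and C); D sits two levels below A.
```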

Abstraction is a mechanism for focusing on the important (or essential) details of a concept or item, while ignoring the inessential details.

- Abstraction is a relative concept. As we move to higher levels of abstraction we ignore more and more details, i.e., we provide a more general view of a concept or item. As we move to lower levels of abstraction, we introduce more details, i.e., we provide a more specific view of a concept or item.

- There are different types of abstraction, e.g., functional, data, process, and object abstraction.

- In object abstraction, we treat objects as high-level entities (i.e., as black boxes).

There are three commonly used (and different) views on the definition of "class":

- A class is a pattern, template, or blueprint for a category of structurally identical items. The items created using the class are called instances. This is often referred to as the "class as a 'cookie cutter'" view.

- A class is a thing that consists of both a pattern and a mechanism for creating items based on that pattern. This is the "class as an 'instance factory'" view. Instances are the individual items that are "manufactured" (created) by using the class's creation mechanism.

- A class is the set of all items created using a specific pattern, i.e., the class is the set of all instances of that pattern.

A metaclass is a class whose instances are themselves classes. Some object-oriented programming languages directly support user-defined metaclasses. In effect, metaclasses may be viewed as classes for classes, i.e., to create an instance, we supply some specific parameters to the metaclass, and these are used to create a class. A metaclass is an abstraction of its instances.

A parameterized class is a class some or all of whose elements may be parameterized. New (directly usable) classes may be generated by instantiating a parameterized class with its required parameters. Templates in C++ and generic classes in Eiffel are examples of parameterized classes. Some people differentiate metaclasses and parameterized classes by noting that metaclasses (usually) have run-time behavior, whereas parameterized classes (usually) do not have run-time behavior.
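Python's typing.Generic offers a rough analogue of a parameterized class. The Pair class is illustrative; unlike C++ templates, the type parameter carries essentially no run-time behavior and mainly serves static checkers, which matches the distinction drawn above:

```python
from typing import Generic, TypeVar

T = TypeVar("T")

class Pair(Generic[T]):               # parameterized class: Pair[T]
    def __init__(self, first: T, second: T):
        self.first = first
        self.second = second

    def swapped(self) -> "Pair[T]":
        return Pair(self.second, self.first)

p = Pair[int](1, 2)                   # "instantiating" the parameterized class
q = p.swapped()                       # q.first == 2, q.second == 1
```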

Several object-oriented software engineering metrics are related to the class-instance relationship, e.g.:

- number of instances per class per application,
- number of parameterized classes per application, and
- ratio of parameterized classes to non-parameterized classes.

Case Studies of Object-Oriented Software Engineering Metrics

We will break our look at case studies into the following areas:

- anecdotal metrics information,
- the General Electric Report,
- Chidamber and Kemerer's research,
- Lorenz's research, and
- my own experience.

Anecdotal object-oriented software engineering metrics information includes:

- It takes the average software engineer about 6 months to become comfortable with object-oriented technology.

- The average number of lines of code per method is small, e.g., typically 1-3 lines of code, and seldom more than 10 lines of code.

- The learning time for Smalltalk seems to be on the order of two months for an experienced programmer.

- Once a programmer understands a given object-oriented programming language, he or she should plan on taking one day per class to (eventually) understand all the classes in the class library.

- Object-oriented technology yields higher productivity, e.g., fewer software engineers accomplishing more work when compared to traditional teams.

Deborah Boehm-Davis and Lyle Ross conducted a study for General Electric (in 1984) comparing several development approaches for Ada software (i.e., Structured Analysis/Structured Design, Object-Oriented Design (Booch), and Jackson System Development). They found that the object-oriented solutions, when compared to the other solutions:

- were simpler (using McCabe's and Halstead's metrics),
- were smaller (using lines of code as a metric),
- appeared to be better suited to real-time applications, and
- took less time to develop.

Shyam Chidamber and Chris Kemerer have developed a small metrics suite for object-oriented designs. The six metrics they have identified are:

- weighted methods per class: This focuses on the complexity and number of methods within a class.

- depth of inheritance tree: This is a measure of how many layers of inheritance make up a given class hierarchy.

- number of children: This is the number of immediate specializations for a given class.

- coupling between object classes: This is a count of the number of other classes to which a given class is coupled.

- response for a class: This is the size of the set of methods that can potentially be executed in response to a message received by an object.

- lack of cohesion in methods: This is a measure of the number of different methods within a class that reference a given instance variable.

Mark Lorenz and Jeff Kidd have published the results of their object-oriented software engineering metrics work. Some of the more interesting items in their empirical data include:

- The ratio of key (important) classes to support classes seems to be typically 1 to 2.5, and user-interface-intensive applications tend to have many more support classes.

- The average number of person-days to develop a class is much higher with C++ than it is with Smalltalk, e.g., 10 days per Smalltalk class and 20 to 30 days per C++ class.

- The higher the number of lines of code per method, the less object-oriented the code is.

- Smalltalk applications appear to have a much lower average number of instance variables per class when compared to C++ applications.

Edward V. Berard has worked on object-oriented projects since 1983. Some observations from some of the projects include:

- On a very large (over 1,000,000 lines of code) object-oriented project, all of the source code was run through a program that reported on the metrics for that software. Some observations:

  o Over 90% of all the methods in all of the classes had fewer than 40 lines of code (carriage returns).
  o Over 95% of the methods had a cyclomatic complexity of 4 or less.

- On a small project (about 25,000 lines of code), staffed by 3 software engineers each working half-time on the project:

  o The project was completed in six calendar months, i.e., a total of 9 software engineering months were expended.
  o When the code was first compiled, 7 compilation errors were found, and no more errors were found before the code was delivered to the customer.


Object-Oriented Testing

The process of testing object-oriented systems begins with a review of the object-oriented analysis and design models. Once the code is written, object-oriented testing (OOT) begins by testing "in the small" with class testing (class operations and collaborations). As classes are integrated to become subsystems, class collaboration problems are investigated. Finally, use-cases from the OOA model are used to uncover software validation errors. OOT is similar to testing conventional software in that test cases are developed to exercise the classes, their collaborations, and behavior. OOT differs from conventional software testing in that more emphasis is placed on assessing the completeness and consistency of the OOA and OOD models as they are built. OOT tends to focus more on integration problems than on unit testing. The test plan specification template from the SEPA web site is applicable to object-oriented testing as well.

Object-Oriented Testing Activities

- Review OOA and OOD models
- Class testing after code is written
- Integration testing within subsystems
- Integration testing as subsystems are added to the system
- Validation testing based on OOA use-cases


Testing OOA and OOD Models

- Correctness of OOA and OOD models
  o syntactic correctness is judged by ensuring that proper modeling conventions and symbolism have been used
  o semantic correctness is based on the model's conformance to the real-world problem domain

- Consistency of OOA and OOD models
  o assess the class-responsibility-collaborator (CRC) model and the object-relationship diagram
  o review the system design (examine the object-behavior model to check the mapping of system behavior to subsystems, review concurrency and task allocation, and use use-case scenarios to exercise the user interface design)
  o test the object model against the object-relationship network to ensure that all design objects contain the necessary attributes and operations needed to implement the collaborations defined for each CRC card
  o review detailed specifications of the algorithms used to implement operations, using conventional inspection techniques




Assessing the Class Model

1. Revisit the CRC model and the object-relationship model.
2. Inspect the description of each CRC card to determine if a delegated responsibility is part of the collaborator's definition.
3. Invert the connection to ensure that each collaborator that is asked for service is receiving requests from a responsible source.
4. Using the inverted connections from step 3, determine whether other classes might be required or whether responsibilities are properly grouped among classes.
5. Determine whether widely requested responsibilities might be combined into a single responsibility.
6. Steps 1 to 5 are applied iteratively to each class and through the evaluation of the OOA model.



Object-Oriented Testing Strategies

- Unit testing in the OO context
  o the smallest testable unit is the encapsulated class or object
  o similar to system testing of conventional software
  o do not test operations in isolation from one another
  o driven by class operations and state behavior, not algorithmic detail and data flow across module interfaces

- Integration testing in the OO context
  o focuses on groups of classes that collaborate or communicate in some manner
  o integration of operations one at a time into classes is often meaningless
  o thread-based testing (testing all classes required to respond to one system input or event)
  o use-based testing (begins by testing independent classes first, then the dependent classes that make use of them)
  o cluster testing (groups of collaborating classes are tested for interaction errors)
  o regression testing is important as each thread, cluster, or subsystem is added to the system

- Validation testing in the OO context
  o focuses on visible user actions and user-recognizable outputs from the system
  o validation tests are based on the use-case scenarios, the object-behavior model, and the event flow diagram created in the OOA model
  o conventional black-box testing methods can be used to drive the validation tests




Test Case Design for OO Software

- Each test case should be uniquely identified and be explicitly associated with a class to be tested
- State the purpose of each test
- List the testing steps for each test, including:
  o a list of states to test for each object involved in the test
  o a list of messages and operations to be exercised as a consequence of the test
  o a list of exceptions that may occur as the object is tested
  o a list of external conditions that need to be changed for the test
  o supplementary information required to understand or implement the test
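The template above might be instantiated as a single assert-based test case like this; the Stack class under test and the test ID are hypothetical, with the template fields recorded as comments:

```python
class Stack:
    """Hypothetical class under test."""
    def __init__(self):
        self.items = []
    def push(self, item):
        self.items.append(item)
    def pop(self):
        if not self.items:
            raise IndexError("pop from empty stack")
        return self.items.pop()

def test_stack_pop():
    # Test ID: TC-Stack-01; class under test: Stack
    # Purpose: verify pop returns the most recently pushed item
    # States to test: empty -> non-empty -> empty
    # Messages/operations exercised: push, pop
    # Exceptions that may occur: IndexError on pop from the empty state
    s = Stack()                 # state: empty
    s.push(42)                  # state: non-empty
    assert s.pop() == 42        # state: empty again
    try:
        s.pop()
        assert False, "expected IndexError"
    except IndexError:
        pass

test_stack_pop()
```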



OO Test Design Issues

- White-box testing methods can be applied to testing the code used to implement class operations, but not much else

- Black-box testing methods are appropriate for testing OO systems

- Fault-based testing
  o best reserved for operations and the class level
  o uses the inheritance structure
  o the tester examines the OOA model and hypothesizes a set of plausible defects that may be encountered in operation calls and message connections, and builds appropriate test cases
  o misses incorrect specifications and errors in subsystem interactions

- Object-oriented programming brings additional testing concerns
  o classes may contain operations that are inherited from superclasses
  o subclasses may contain operations that were redefined rather than inherited
  o all classes derived from a previously tested base class need to be thoroughly tested

- Scenario-based testing
  o uses the user tasks described in the use-cases, building the test cases from the tasks and their variants
  o uncovers errors that occur when any actor interacts with the OO software
  o concentrates on what the user does, not what the product does
  o you can get a higher return on your effort by spending more time on reviewing the use-cases as they are created than by spending more time on use-case testing

- Testing surface structure (exercising the structure observable by the end-user; this often involves observing and interviewing users as they manipulate system objects)

- Testing deep structure (exercising internal program structure -- the dependencies, behaviors, and communication mechanisms established as part of the system and object design)



Class Level Testing Methods

- Random testing (requires large numbers of data permutations and combinations, and can be inefficient)

- Partition testing (reduces the number of test cases required to test a class)
  o state-based partitioning (tests are designed so that operations that cause state changes are tested separately from those that do not)
  o attribute-based partitioning (for each class attribute, operations are classified according to those that use the attribute, those that modify the attribute, and those that do not use or modify the attribute)
  o category-based partitioning (operations are categorized according to the function they perform: initialization, computation, query, termination)
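A sketch of state-based partitioning for a hypothetical Order class, separating the state-changing operations from the query operation that should leave state untouched:

```python
class Order:
    def __init__(self):
        self.state = "new"
        self.amount = 0.0

    def place(self, amount):       # state-changing operation
        self.state = "placed"
        self.amount = amount

    def cancel(self):              # state-changing operation
        self.state = "cancelled"

    def total(self):               # non-state-changing (query) operation
        return self.amount

# Partition 1: operations that cause state changes
o = Order()
o.place(9.99)
assert o.state == "placed"
o.cancel()
assert o.state == "cancelled"

# Partition 2: operations that do not change state
before = o.state
assert o.total() == 9.99
assert o.state == before           # the query left the state alone
```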



Inter-Class Test Case Design

- Multiple class testing
  o for each client class, use the list of class operators to generate random test sequences that send messages to other server classes
  o for each message generated, determine the collaborator class and the corresponding server object operator
  o for each server class operator (invoked by a client object message), determine the messages it transmits
  o for each message, determine the next level of operators that are invoked and incorporate them into the test sequence

- Tests derived from behavior models
  o test cases must cover all states in the state transition diagram
  o a breadth-first traversal of the state model can be used (test one transition at a time and only make use of previously tested transitions when testing a new transition)
  o test cases can also be derived to ensure that all behaviors for the class have been adequately exercised
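The idea of covering every transition in the state model can be sketched with a small transition table for a hypothetical Door class; the event sequence is chosen so that each new transition builds only on previously exercised ones:

```python
# State transition diagram for a door: states closed, open, locked.
TRANSITIONS = {
    ("closed", "open"):   "open",
    ("open",   "close"):  "closed",
    ("closed", "lock"):   "locked",
    ("locked", "unlock"): "closed",
}

class Door:
    def __init__(self):
        self.state = "closed"

    def fire(self, event):
        self.state = TRANSITIONS[(self.state, event)]

def test_all_transitions():
    covered = set()
    door = Door()
    # Each event exercises one new transition, reusing only tested ones.
    for event in ["open", "close", "lock", "unlock"]:
        before = door.state
        door.fire(event)
        covered.add((before, event))
    assert covered == set(TRANSITIONS)   # every transition in the diagram taken

test_all_transitions()
```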