Journal of Network and Computer Applications



CORM: A reference model for future computer networks

Hoda Hassan, Mohamed Eltoweissy

Computer Science and Engineering Department, American University in Cairo, Egypt
Cyber Systems Security, Pacific Northwest National Laboratory, Arlington, VA, USA

Article info
Article history:
Received 13 April 2011
Received in revised form 21 August 2011
Accepted 20 October 2011
Available online 25 November 2011
Keywords:
Complex adaptive systems
Computer network architecture
Computer network reference model
Protocol design

Abstract
This paper acknowledges the need for revolutionary designs to devise the Future Internet by presenting a clean-slate Concern-Oriented Reference Model (CORM) for architecting future networks. CORM is derived in accordance with the Function-Behavior-Structure engineering framework, conceiving computer networks as a distributed software-dependent complex system. CORM networks are designed along two main dimensions: a vertical dimension addressing the structure and configuration of network building blocks, and a horizontal dimension addressing communication and interaction among the previously formulated building blocks. For each network dimension, CORM factors the design space into function, structure, and behavior, applying to each the principle of separation of concerns for further systematic decomposition. Perceiving the network as a complex system, CORM constructs the network recursively in a bottom-up approach, starting from the network building block, whose structure and behavior are inspired by an evolutionary bacterium cell. Hence CORM is bio-inspired; it refutes the long-endorsed concept of layering and intrinsically accounts for emergent behavior, fostering network integrity and stability. We conjecture that networks designed according to CORM-based architectures can adapt and further evolve to better fit their contexts. To justify our conjecture, we derive and simulate a CORM-based architecture for ad hoc networks.

© 2011 Elsevier Ltd. All rights reserved.
1. Introduction

Current research in computer networks is at a critical turning point. The research community is endeavoring to devise future network architectures that address the deficiencies identified in present network realizations, acknowledge the need for a trustworthy IT infrastructure, and satisfy society's emerging and future requirements (Clark, 2010). Considering the lessons of the past and evaluating the outcomes and contributions of the contemporary research literature, the community has concluded that a trustworthy future Internet cannot be achieved along the current trajectory of incremental changes and point solutions applied to the current Internet; rather, more revolutionary paths need to be explored (Clark, 2010; Feldmann, 2007). Proposed architectures need to be grounded on well-articulated design principles that account for network operational and management complexities, embrace technology and application heterogeneity, regulate the network's inherent emergent behavior, and overcome shortcomings attributed to present network realizations.
Present network realizations are the outcome of incremental research efforts exerted during the inchoative stage of computer network design. Back then, the aim was to interconnect architecturally disparate networks into one global network. Such inter-network connection was achieved through the introduction of the Transmission Control Protocol (TCP) (Cerf and Kahn, 1974). TCP was introduced as a flexible protocol allowing for inter-process communication across networks while hiding underlying network differences. TCP was later split into TCP and IP, leading to the derivation of the layered Internet TCP/IP suite. As such, the TCP/IP suite defined the Internet system, which was regarded as a vehicle to interconnect diverse types of networks. However, the astounding success of the TCP/IP suite in interconnecting networks resulted in its adoption as the de facto standard for inter-computer communication within a single network, as well as across multiple networks. This adoption diminished the need for independent research efforts addressing the requirements and specifications for internally designing computer networks. Focusing primarily on interconnection, TCP/IP networks placed intelligence at the network edges while regarding the network core as a "dumb forwarding machine," thus introducing the end-to-end (E2E) design principle, a fundamental principle for TCP/IP networks (Saltzer et al., 1984). Influenced by the TCP/IP-layered architecture and the E2E design principle, network designers and protocol engineers conformed to a top-down design
Journal of Network and Computer Applications 35 (2012) 668–680
strategy as the approach to architecting networks. Moreover, with the introduction of the layered OSI model, the top-down layered approach in network design and protocol engineering was emphasized further, in spite of the fact that the OSI was developed as an "Interconnection Architecture," an architecture facilitating the interaction of heterogeneous computer networks rather than an architecture for building computer networks (Zimmermann, 1980).
Despite the outstanding success of its realizations, we argue that the Internet-layered model was deficient in representing essential network aspects necessary for network design and subsequent protocol engineering. First, the traditional "cloud model" derived from the E2E principle abstracts the Internet layout as core and edge; thus it fails to express the network's topological, social, and economic boundaries. Second, resource management as a function is absent from the Internet-layered model. Consequently, network designers and engineers introduced point solutions to handle resource-management functions such as admission control, traffic engineering, and quality of service. Third, the Internet-layered model prohibits vertical function integration, thus hindering the efficient engineering of performance aspects that need to span layers, a requirement that was accentuated when operating in wireless environments and that resulted in numerous proposals for cross-layer designs. Finally, the Internet-layered model neither expresses network behavior nor allows for its customization according to context and requirements. This deficiency led to two undesirable effects. First, IP-based networks exhibit a defining characteristic of unstable complex systems: a local event can have a destructive global impact (Greenberg et al., 2005). Second, they lack support for mobility, security, resilience, survivability, and other features considered essential for a pervasive trustworthy infrastructure.
Taking a broader view on the community's architectural quest, we present a clean-slate Concern-Oriented Reference Model (CORM) for architecting future computer networks. CORM has sprouted as a generalization of our concepts, design principles and methodology presented in Hassan et al. (2008a,b, 2009). Our initial endeavor, CellNet (Hassan et al., 2009), was a bio-inspired network architecture tailored to operate in accordance with the TCP/IP suite. CORM, however, is a reference model for architecting any computer network. It expresses the most fundamental design principles for engineering computer networks at the highest level of abstraction. CORM stands as a guiding framework from which several network architectures can be derived according to specific functional, contextual, and operational requirements or constraints. CORM represents a pioneering attempt within this realm, and to our knowledge, CORM is the first reference model that is bio-inspired, accounts for complex system characteristics, and is derived in accordance with the Function-Behavior-Structure (FBS) engineering framework (Gero, 1990). CORM conceives computer networks as a distributed software-dependent complex system that needs to be designed along two main dimensions: a vertical dimension addressing structure and configuration of network building blocks; and a horizontal dimension addressing communication and interactions among the previously formulated building blocks. For each network dimension, CORM factors the design space into function, structure and behavior, applying to each the principle of separation of concerns (SoC) for further systematic decomposition. Perceiving the network as a complex system, CORM constructs the network recursively in a bottom-up approach starting from the network building block, which we refer to as the network cell (NC). The NC's structure and behavior mimic an evolutionary bacterium cell (Ben-Jacob and Levine, 2005). The network is then synthesized from NCs according to a structural template that defines different topological boundaries. By constructing the network as a complex system, CORM refutes the long-endorsed concept of layering, intrinsically accounts for emergent behavior, and ensures system integrity and stability. System integrity means having, or being conceived of as having, a unified overall design, form, or structure (Software Architecture, 2002). In CORM-based networks, system integrity stems from network component congruency, as all network components are defined in terms of the NC. Stability, on the other hand, refers to the ability of the system to maintain stable global patterns in spite of the unpredictable interactions occurring at the local level, where the elements composing the system operate at conditions that are far from equilibrium (Rihani). In CORM-based networks, stability stems from the emergent behavior of the NC.
Being a reference model for computer networks, CORM can be considered a definitional model. A definitional model is more typical of conventional engineering: it expresses required characteristics of a system at an appropriate level of abstraction (Polack et al., 2008). CORM expresses the characteristics of complex adaptive systems (CAS), as well as network functionalities, within its basic abstraction unit (CORM-NC), and enforces both to be synthesized into the network fabric by construction. Therefore, to validate CORM, we validate the derivation of the CORM-NC using the Function-Behavior-Structure (FBS) framework (Gero, 1990), which is applicable in any engineering discipline for reasoning about, and explaining, the nature and process of design (Kruchten, 2005). Furthermore, we derive an architecture from CORM for ad hoc networks and evaluate its behavior through simulation.
Our contributions in this research work are: (1) formulation of core design principles that, we opine, are applicable to all computer networks regardless of their size, purpose, operational context, or capabilities; (2) development of a new bio-inspired network reference model, and introduction of a novel network abstraction unit (NC), which intrinsically accommodates behavioral modifications and evolution; (3) validation of our reference model using the FBS engineering framework; and (4) derivation, as well as simulation, of a CORM-based network architecture for ad hoc networks.

The rest of the paper is organized as follows: Section 2 presents the design axioms that have guided this research and argues for our adopted bottom-up design approach; Section 3 presents CORM; in Section 4, we validate CORM using the FBS engineering framework; in Section 4.2, we derive and simulate a CORM-based architecture; and Section 5 concludes the paper, proposing different areas for future work.
2. CORM design principles and methodology
In crafting CORM's design principles, we sought to address computer network design at the highest level of abstraction. On the one hand, distributed computer networks, such as the Internet, stand as a typical example of complex systems (Mitchell, 2006). On the other hand, the operations performed by computer networks rely fundamentally on protocols, which can be described as distributed software running on the different nodes constituting the network. Therefore, computer networks are a typical example of a distributed software-dependent system. These two observations led to our two ground design principles and design methodology, detailed in the following subsections.

(Footnotes: In the context of this research work, the terms bottom-up and top-down refer to network composability, as opposed to their more frequent use referring to layer organization in the Internet layered architecture. We will be using the terms complex systems and complex adaptive systems interchangeably.)
2.1. Principle I: a computer network is a complex system
Complex systems can be defined as a "large network of relatively simple components with no central control in which emergent complex behavior is exhibited as a result of component interactions," thus increasing the systems' success in fitting their contexts (Mitchell, 2006). A complex adaptive system (CAS) is characterized by the following properties, which computer network design needs to account for (Mitchell, 2006):

(1) Complex systems are composed of a relatively large number of autonomous entities. This property implies the distributed control and structure of complex systems, as well as their indeterminate nature. Entities in a complex system exist independent of each other (no implied global structure or imposed hierarchy), yet act interdependently, affecting and being affected by each other;

(2) Complex systems exhibit complexity. Quantitatively, complexity refers to the amount of information required to depict the system at the micro and macro levels (Mitchell, 2006). Complexity arises not only from the myriad intricate interactions occurring at the micro level among the system components but, more notably, from the feedback and influence of the macro-level resultant behavior on decisions taken at the micro level. In other words, the mapping from individual actions to collective behavior is non-trivial, giving the system its discernible emergent-behavior property; and
(3) Complex systems exhibit emergent behavior. Emergent behavior refers to the ability of the system's components to change/evolve their structures and/or functions without external intervention as a result of their interactions, and/or in response to changes in their environment (Mitchell, 2006). Emergent behavior results in global-level system stability in spite of possible local-level disequilibrium. Therefore, complex systems are frequently described as being at the "edge of chaos" (Mogul, 2006), a metaphor used to describe the system's reaction to minor context/environmental changes by shifting from one state of order to another to better cope with the induced changes. Emergent behavior can be further classified as self-organization, adaptation, or evolution (Zhang et al.):

- Self-organization refers to changes in a component's individual behavior due to inter-component communication.
- Adaptation refers to changes in components' behavior in response to changes in the surrounding environment. Both self-organization and adaptation imply information propagation and adaptive processing through feedback mechanisms.
- Evolution refers to a higher form of intelligent adaptation and/or self-organization of components in response to changes, drawing on previously recorded knowledge from past experiences. Evolution usually implies the presence of memory elements as well as monitoring functions in evolvable components.
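The three classes of emergent behavior above can be illustrated with a small sketch (ours, not the paper's; all names are hypothetical): a component changes behavior on peer messages (self-organization), on environmental readings (adaptation), and by consulting its recorded history (evolution).

```python
# Illustrative sketch only: a minimal component distinguishing the three
# classes of emergent behavior described above.
class Component:
    def __init__(self):
        self.behavior = "default"
        self.history = []          # memory element required for evolution

    def self_organize(self, peer_msg):
        # change individual behavior due to inter-component communication
        self.behavior = f"aligned-with-{peer_msg}"
        self.history.append(("peer", peer_msg))

    def adapt(self, env_state):
        # change behavior in response to the surrounding environment
        self.behavior = f"tuned-for-{env_state}"
        self.history.append(("env", env_state))

    def evolve(self):
        # intelligent adaptation drawing on previously recorded knowledge
        env_events = [e for kind, e in self.history if kind == "env"]
        if env_events:
            most_common = max(set(env_events), key=env_events.count)
            self.behavior = f"specialized-for-{most_common}"

c = Component()
c.adapt("congestion")
c.adapt("congestion")
c.self_organize("cluster-head")
c.evolve()
print(c.behavior)   # specialized-for-congestion
```

Note how evolution differs from plain adaptation only through the memory element: the component specializes for the condition it has seen most often, not merely the latest one.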
Comparable to CAS, present network realizations are characterized by complexity as well as composability of autonomous entities (whether in the physical sense, referring to computing devices, or in the logical sense, referring to running processes). Yet, in contrast to CAS, present network realizations are characterized by emergent ill behavior, where a local event may have a destructive global effect realized as cascading meltdowns that might require human intervention for correct network operation (Greenberg et al., 2005; Mogul, 2006). Furthermore, to our knowledge, adaptation in current network protocols is crude (e.g., TCP's aggressive cut-down of the congestion window size regardless of the reason for packet drops). Current network protocols lack the evolvability intrinsic to all CAS, which renders them oblivious to changing patterns and trends in operating conditions and performance requirements. Hence, we assert that for a computer network to exhibit CAS emergent behavior, adaptability, self-organization, and evolvability need to be intrinsic capabilities of network protocols.
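The crude adaptation criticized above, TCP's halving of the congestion window on any drop, can be contrasted with a context-aware response in a toy sketch (illustrative only; the loss-cause parameter is our hypothetical construct, not a facility of real TCP):

```python
# Sketch of the "crude adaptation" criticized above: classic TCP-style
# congestion response halves the window on any packet drop, while a
# hypothetical context-aware variant reacts to the drop's cause.
def tcp_style(cwnd, loss):
    return max(1, cwnd // 2) if loss else cwnd + 1

def context_aware(cwnd, loss, cause=None):
    if not loss:
        return cwnd + 1
    if cause == "wireless-bit-error":   # not congestion: don't back off hard
        return cwnd
    return max(1, cwnd // 2)            # genuine congestion: back off

print(tcp_style(20, loss=True))                                  # 10
print(context_aware(20, loss=True, cause="wireless-bit-error"))  # 20
```

The point is not the specific policy but that the crude variant has no input through which context could influence its decision, which is exactly the evolvability gap the text describes.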
2.2. Principle II: a computer network is a distributed software system
From a distributed software system perspective, the design of computer networks needs to rely on software engineering (SE) concepts and principles. To manage software functional complexity, SE defines the concept of Separation of Concerns (SoC), a general problem-solving technique that addresses complexity by cutting the problem space down into smaller, more manageable, loosely coupled, and easier-to-solve sub-problems, thus allowing for better problem understanding and design (Mili et al., 2004). SoC is a dual-facet concept addressing concerns from two different views. The first view identifies concerns according to the system requirement specifications. It classifies concerns into core concerns and crosscutting concerns. Core concerns refer to core functional requirements of a system that can be identified with clear-cut boundaries. Hence, core concerns can be represented in separable modules or components, resulting in loosely coupled systems. Crosscutting concerns, on the other hand, are nonfunctional aspects of the system that span multiple modules and/or components to manage and optimize the core concerns. If not carefully handled, crosscutting concerns result in scattering and tangling of system behavior (Mili et al., 2004). The second view of the SoC concept addresses system configuration. It differentiates between system components and system connectors (Gacemi et al., 2004). System components represent loci of computations, decisions and states, while system connectors represent component interactions that facilitate information flow and state transfer.
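As a minimal illustration of SoC's two views (our sketch; all names are assumed), core concerns sit in separable components, a crosscutting concern spans them, and a connector carries information between them:

```python
# Minimal sketch of the dual-facet SoC view: core concerns live in separable
# components; a crosscutting concern (here, monitoring) spans them; a
# connector facilitates information flow between components.
import queue

def monitored(fn):                      # crosscutting concern: spans modules
    def wrapper(*args):
        print(f"monitor: {fn.__name__} called")
        return fn(*args)
    return wrapper

class Sender:                           # component: locus of computation/state
    @monitored
    def produce(self):
        return "payload"

class Receiver:                         # another separable core component
    @monitored
    def consume(self, data):
        return f"handled:{data}"

connector = queue.Queue()               # connector: carries information flow
s, r = Sender(), Receiver()
connector.put(s.produce())
print(r.consume(connector.get()))       # handled:payload
```

Implementing monitoring as a decorator rather than inside each class is precisely what keeps the crosscutting concern from scattering and tangling across the core components.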
2.3. Design methodology: concern-oriented bottom-up design
Our proposed concern-oriented bottom-up design methodology follows directly from our design principles. The bottom-up approach is derived from property 1 in Principle I and the second view of the SoC concept in Principle II. Both accentuate the significance of the basic building block composing the network system. The concern-oriented paradigm represents our vision of network functional and structural decomposition.

In adopting a bottom-up approach, our main focus will be the vertical network dimension, thus delineating the responsibilities and capabilities of the network building block. From a CAS perspective, these network building blocks need to possess adaptability, self-organization, and evolvability as intrinsic features. The network will then be constructed by composition from these building blocks. From an SE perspective, network building blocks will represent the loci of computations, decisions, and states encompassing all network concerns (core as well as crosscutting concerns), while their interactions and communications identify network connectors, thus instantiating the network's horizontal dimension.
Our concern-oriented bottom-up design methodology does not differentiate between network core and network edge in terms of capabilities. Thus it contradicts the E2E principle that has been central to the Internet design. The E2E principle dictates that "mechanisms should not be placed in the network if it can be placed at the end nodes, and that the core of the network should provide a general service, not one that is tailored to a specific application" (Saltzer et al., 1984). It has been argued that the E2E principle has served the Internet well by keeping the core general enough to support a wide range of applications. However, we contend that, taken as an absolute rule, the E2E principle constrained core evolvability rather than fostering its capabilities, rendering the Internet biased toward those applications that can tolerate its oblivious nature and forcing designers and protocol engineers to adopt point solutions to compensate for core deficiencies (Hassan et al., 2008a,b). Another consequence of our proposed bottom-up network composition is that it contradicts the prevailing misconception of abstracting a network in terms of an inter-network. Adopting a bottom-up approach to network composition implies the recursive construction of inter-networks from networks, which are likewise recursively constructed from network components, which are in turn constructed from one or more network building blocks.
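The recursive construction just described can be sketched as follows (a hypothetical model of ours, not the paper's formalism): building blocks compose network components, components compose networks, and an inter-network, itself a network, presents its constituent networks' components as one network.

```python
# Illustrative sketch of recursive bottom-up composition: blocks -> components
# -> networks -> inter-networks, with no core/edge distinction.
class BuildingBlock:
    pass

class NetworkComponent:
    def __init__(self, blocks):
        self.blocks = blocks            # one or more building blocks

class Network:
    def __init__(self, components):
        self.components = components    # recursively constructed

class InterNetwork(Network):            # an inter-network is itself a network
    def __init__(self, networks):
        # communicating components perceive the whole as a single network
        super().__init__([c for n in networks for c in n.components])

n1 = Network([NetworkComponent([BuildingBlock()])])
n2 = Network([NetworkComponent([BuildingBlock()]),
              NetworkComponent([BuildingBlock()])])
inet = InterNetwork([n1, n2])
print(len(inet.components))   # 3
```

Making `InterNetwork` a subclass of `Network` captures the claim that an inter-network is abstracted as a network, not the other way around.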
3. CORM: a concern-oriented reference model for computer networks
As a reference model, CORM stands as an abstract representation of the network. It conveys a minimal set of unifying concepts, axioms, and relationships to be realized within the network (Reference Model for Service Oriented Architecture 1.0, 2006). CORM abstracts the network in terms of function, structure, and behavior, represented as the concern-oriented conceptual framework (ACRF), the network structural template (NST), and the information flow model (IFM), respectively. Focusing primarily on the vertical dimension of the network, our research detailed the first two components of CORM. As for the horizontal network dimension, we will provide a synopsis of the IFM.
3.1. ACRF: the conceptual framework for network concerns
The conceptual framework was derived according to our interpretation of the network requirement specifications. We postulate that the basic requirement for computer networks can be expressed by the following statement: The network is a communication vehicle allowing its users to communicate using the available communication media. Accordingly, we identify the network users, the communication media, and the communication logic as the primary requirements for any network manifestation. Applying the concept of SoC to our analysis, we identify four main network concerns: the application concern (ACn), the communication concern (CCn), the resource concern (RCn), and the federation concern (FCn). The first three are core network concerns encompassing the network functional requirements, while the fourth is a crosscutting concern (non-functional requirement) representing the area of intersection among the core concerns. Elaborating on each concern we have:

(1) The ACn encompasses the network usage semantics, i.e. the logic and motivation for building the network, where different network-based end applications (network users) can be deployed.
(2) The CCn addresses the need for network route binding to provide an end-to-end communication path allowing network elements to get connected (communication logic).
(3) The RCn focuses on network resources, whether physical or logical, highlighting the need for resource management to efficiently achieve different optimization versus performance trade-offs for creating and maintaining network resources (communication media).
(4) Finally, the FCn orchestrates interactions, resolving conflicts and managing cross-interests where areas of overlap exist among all previously mentioned concerns.
These four network concerns are manifested as CORM's conceptual framework for network concerns, referred to hereafter as ACRF. In CORM networks, the ACRF concern classification is applied throughout the network, starting from the network building blocks up to the network system as a whole. At the most elementary level, the network building block's responsibilities and capabilities are classified according to ACRF, while at the network system level, global network behavior maps to ACRF. Thus ACRF represents the network's genetic code, to be realized along both network dimensions: vertically on the network components, and horizontally among network components.
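The ACRF classification might be modeled as follows (a sketch; the concrete responsibilities listed are our illustrative examples, not the paper's):

```python
# Sketch of the ACRF concern classification: three core concerns, one
# crosscutting concern, applied to a building block's responsibilities.
from enum import Enum

class Concern(Enum):
    ACn = "application"    # network usage semantics
    CCn = "communication"  # route binding / end-to-end paths
    RCn = "resource"       # resource creation and management
    FCn = "federation"     # crosscutting: conflicts and cross-interests

CORE = {Concern.ACn, Concern.CCn, Concern.RCn}
CROSSCUTTING = {Concern.FCn}

# example classification of concrete responsibilities (illustrative only)
responsibilities = {
    "serve web content": Concern.ACn,
    "compute routes": Concern.CCn,
    "admission control": Concern.RCn,
    "arbitrate bandwidth between apps": Concern.FCn,
}
print(responsibilities["admission control"].value)   # resource
```

The same four-way partition would be applied at every level, from an individual building block's role up to global network behavior.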
Analyzing the Internet model (vertical dimension) and current network realizations (horizontal dimension) with respect to the ACRF framework, we note that both RCn and FCn are absent. Vertically, the Internet-layered model accounts for ACn and CCn. However, the model did not apply the correct concern separation; a single concern was split across two layers, as shown in Fig. 1. Moreover, the strict layered paradigm for functional decomposition curtailed all possibilities for considering FCn. As for RCn, it was assumed that resource-management functionalities are either applications of a specific type, and thus would be overlaid on top of the protocol stack, or are to be handled locally by the physical media (Zimmermann, 1980). For the horizontal dimension, current network realizations account for both ACn and CCn, while RCn and FCn are usually realized as point solutions. Servers and server farms represent ACn, while routers, switches, and DNS represent CCn. Both RCn and FCn are implemented as add-on functionalities conducted through the use of special protocols for network management and traffic engineering.
3.2. NST: the network structural template
CORM networks are composed of active network components that communicate through a passive communication substrate. CORM's network components are composed of network building blocks, which act as the loci of computations and decisions, while the communication substrate represents the interaction media where information exchange takes place. Being the primary constituents of a software-dependent CAS, CORM's network building blocks need to possess adaptability, self-organization, and evolvability as intrinsic features, thus mimicking bacterial cells in a bacterial colony, our adopted model of CAS (Ben-Jacob and Levine, 2005). Accordingly, we define the CORM Network Cell (NC) as a self-contained computational/decision entity capable of monitoring its state, adapting to perceived conditions, inferring decisions, recording its experience, and eventually evolving through self-learning and intelligent adaptations. One or more NCs make up a CORM network component (Ncomp), which we define as the basic network entity capable of end-to-end communication.

Fig. 1. ACRF mapping to Internet layered architecture.
3.2.1. NC: the network cell
The NC's structure and behavior are inspired by observations recorded in a recent study on primordial bacteria, which provided a vivid representation of the main features that need to be present in the entities composing a self-engineering CAS (Ben-Jacob and Levine, 2005). According to this study, the following four capabilities are essential for an NC to possess emergent behavior:

(1) The NC should be aware of its surrounding environment as well as of its inner states.
(2) The NC should be able to reason about its perceived states and decide on a course of action that best serves its goal function.
(3) The NC should be able to memorize previously inferred decisions and learn from past experiences.
(4) The NC should be able to communicate and cooperate with other cells to achieve the high-level goals of the whole system.
To beget these capabilities in CORM, the NC is constructed out of four units: the interface unit (IU), the monitoring unit (MU), the regulatory unit (RU), and the execution unit (EU), as shown in Fig. 2. Figure 2 represents the structure of a generic NC. By generic we mean that this structure is common to all NCs regardless of their assigned responsibilities or roles. The NC has two modes of concurrent operation: intrinsic operation and functional operation. The intrinsic operation is common to all NCs and represents an NC's genetic blueprint. It is regarded as the sequence of actions and rules that the NC must obey throughout its lifetime. The functional operation of an NC, on the other hand, is assigned to it on its creation and prescribes the role that the NC must perform, including the function assigned and the instruction set to be executed by all units. Once the NC is assigned a functional role, it becomes a specialized NC. Therefore, the generic NC is just a template out of which all specialized NCs can be derived. It is also possible for a specialized NC to change its function during its lifetime or alternate between different functions, depending on its assigned role(s). Once an NC materializes by assuming a functional role, it is assigned a portion of the system resources according to its needs. Following is an outline of the intrinsic operation of each of the units shown in Fig. 2.
(1) IU: the IU is the NC's boundary, allowing it to communicate with the outside world (environment or peers). Through the IU, the NC receives and transmits different forms/representations of data (states, instructions, control, content, etc.).

(2) MU: the MU is responsible for monitoring the different states of the NC. This includes monitoring all input and/or output flows directed into or out of the NC, all internal communications among the NC units, as well as the assigned resources. The MU extracts state information and represents this information in a quantifiable format. These quantified states are then stored in the state database to be retrieved upon requests received from the RU (see next).
(3) RU: the RU has two regulatory cycles: one is inherent, and the other is initiated. The inherent cycle is always in operation and checks that resource usage levels and performance parameters remain within the set thresholds. The initiated cycle is triggered either by requests received from the EU asking for advice and recommendations for performance enhancement, or by performance deterioration realized through monitoring. Accordingly, the RU inspects environmental parameters to infer the reasons that account for the performance deterioration. This step may lead to communication with neighboring peers, requesting their views of the environment. An RU has the capability of gaining knowledge and learning from past experiences, which it records in the knowledge/experience database. Therefore, the RU provides educated recommendations to the EU to optimize its operation. In addition, the RU may update the threshold levels according to its inferred decisions.
(4) EU: the EU is responsible for executing the role(s)/responsibilities assigned to the NC. Role(s)/responsibilities assigned to the NC are usually accompanied by a pool of libraries that can be used to formulate different ways of accomplishing the required task(s). The EU is composed of two main components: the logic component (LC) and the resource use controller component (RUCC). The LC is responsible for performing the NC's task(s). An LC starts by creating a set of instructions that best accomplishes the required task(s) using the code library pool. It also requests the RU's recommendations, thus incorporating the RU's knowledge and experience as well as accounting for environmental alterations. Once an LC receives feedback or recommendations from the RU, it might update its tailored instructions to fit the inferred operational status. The RUCC, on the other hand, is responsible for managing the resources assigned to an NC. The RUCC works together with the LC to ensure optimal internal resource usage and distribution. The RUCC is also responsible for estimating the required level of resources for the NC's operation as a whole, and thus requests the estimated resources from the system.
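The four units and the two regulatory cycles can be sketched as a toy NC (entirely illustrative; all class names, the threshold value, and the load metric are our assumptions, not the paper's specification):

```python
# Illustrative sketch of a generic NC assembled from the four units above:
# IU at the boundary, MU recording states, RU inferring recommendations
# from recorded states, EU executing the assigned functional role.
class InterfaceUnit:
    def receive(self, data): return data
    def transmit(self, data): return data

class MonitoringUnit:
    def __init__(self): self.state_db = []      # quantified states
    def record(self, state): self.state_db.append(state)

class RegulatoryUnit:
    def __init__(self, mu): self.mu, self.threshold = mu, 0.8
    def recommend(self):
        # inherent cycle: check recorded load against the set threshold
        overloaded = any(s.get("load", 0) > self.threshold
                         for s in self.mu.state_db)
        return "shed-load" if overloaded else "proceed"

class ExecutionUnit:
    def __init__(self, ru, role): self.ru, self.role = ru, role
    def run(self, data):
        advice = self.ru.recommend()             # initiated cycle: ask RU
        return f"{self.role}:{advice}:{data}"

class NetworkCell:
    def __init__(self, role):                    # role assigned at creation
        self.iu = InterfaceUnit()
        self.mu = MonitoringUnit()
        self.ru = RegulatoryUnit(self.mu)
        self.eu = ExecutionUnit(self.ru, role)
    def handle(self, data):
        self.mu.record({"load": 0.5})
        return self.iu.transmit(self.eu.run(self.iu.receive(data)))

nc = NetworkCell("forwarder")    # specialized NC derived from the template
print(nc.handle("pkt"))          # forwarder:proceed:pkt
```

The generic `NetworkCell` becomes specialized only through the role string passed at creation, mirroring the template/specialization relationship described in the text.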
The NC units communicate through the cell communication substratum, which represents the media into which information is temporarily deposited to be later consumed. The role(s)/responsibilities assigned to a specialized NC further identify the operational aspects of each of the NC units, such as the parameters to be monitored and states recorded by the MU, the threshold values to be used by the RU, and the code library pool out of which the logic is tailored and executed by the EU. Recalling that ACRF represents the network functional blueprint to be realized throughout the network, and that the NC represents the basic building block encompassing the network-concern space for CORM networks, we note that any role/responsibility assigned to the NC will be decomposed according to the ACRF, as illustrated in Fig. 3.
3.2.2. Network compositional logic
In CORM, the Network Compositional Logic (NCL) defines the bottom-up network construction out of network entities, and identifies the different interaction boundaries that can occur
Fig. 2. CORM NC.
H. Hassan, M. Eltoweissy / Journal of Network and Computer Applications 35 (2012) 668–680
among network entities. Network entities can be classified, as mentioned before, into computational/decision building blocks and a communication substrate. The computational/decision building blocks (NC and/or Ncomp) produce and/or consume network data, while the communication substrate is a passive medium through which network data can flow. The NCL stems from our bottom-up definition of network and inter-network construction. The NCL defines a computer network as two or more Ncomps connected by a communication substratum, where Ncomp interactions are sustained despite the heterogeneity of the hardware, middleware, and software of the connected Ncomps. An inter-network is defined as two or more networks connected by a communication substrate, where interactions among Ncomps residing within each of the connected networks are sustained despite the heterogeneity of the hardware, middleware, and software employed by the Ncomps composing the connected networks, as well as the heterogeneity of the communication substrate connecting the networks. Thus, in an inter-network, the communicating Ncomps perceive the connected networks as a single network. Integrating the NC, Ncomp, and NCL, we derive the CORM NST, defined as follows using EBNF:
CORM NST EBNF formal definition:

Trailing * means repeat 0 or more times
Trailing + means repeat 1 or more times

MU: Monitoring Unit
RU: Regulation Unit
EU: Execution Unit
IU: Interface Unit
NC: Network Cell
CCS: Cell Communication Substratum
Ncomp: Network Component
Net: Network
NCS: Network Communication Substratum
INet: Inter-network

Grammar definitions:
Ncomp = NC (CCS NC)*
Net = Ncomp (NCS Ncomp)+
INet = Net (NCS Net)+ = Ncomp (NCS Net NCS)+ Ncomp
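The grammar above can be checked mechanically. Below is a minimal sketch that encodes the three productions as regular expressions over space-separated token strings; the encoding and function names are ours, not part of CORM.

```python
import re

# Token patterns transcribing the CORM NST EBNF:
#   Ncomp = NC (CCS NC)*
#   Net   = Ncomp (NCS Ncomp)+
#   INet  = Net (NCS Net)+
NCOMP = r"NC(?: CCS NC)*"
NET = rf"{NCOMP}(?: NCS {NCOMP})+"
INET = rf"{NET}(?: NCS {NET})+"


def is_ncomp(s):
    """True if the token string is a valid network component."""
    return re.fullmatch(NCOMP, s) is not None


def is_net(s):
    """True if the token string is a valid network (two or more Ncomps)."""
    return re.fullmatch(NET, s) is not None


def is_inet(s):
    """True if the token string is a valid inter-network (two or more Nets)."""
    return re.fullmatch(INET, s) is not None


print(is_ncomp("NC CCS NC"))   # a two-cell component
print(is_net("NC NCS NC"))     # two single-cell components over an NCS
```

Note how the `+` in the `Net` production enforces the definition in the text: a single Ncomp is not a network; at least two must be joined by an NCS.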
3.3. IFM: the information flow model

The Information Flow Model (IFM) represents the horizontal dimension of the network, allowing for interactions among network entities and giving rise to the emergent behavior required for network adaptation and evolution. The IFM captures the aspects of information exchange by defining two sub-models: the Data Representation sub-model (DR) and the Data Communication sub-model (DC). Data representation and communication in CORM exist at both the vertical and horizontal network dimensions. Vertically, data representation and communication occur within an NC between the NC-units, and between the different NCs making up an Ncomp. Horizontally, data representation and communication occur between Ncomps in the same network or across networks. The DR provides a categorization of the different types of information flowing in the system, abiding by the ACRF framework. Therefore, the DR is mainly concerned with the "meaning" of information flowing within the network system. The DR needs to handle complexity in terms of the amount of information required to depict the system states at the macro and micro levels, taking decisions on the details that need to be exposed and those that need to be suppressed. The DC, on the other hand, is concerned with communication aspects, including interface compatibilities, data formatting across different communication boundaries and, most importantly, routing functions including addressing, naming, and forwarding. Similar to the DR, the DC will need to address characteristics of complex systems, such as the scale-free small-world layout (Mitchell, 2006), when devising the routing functions. The DC represents a major part of the CCn to be realized by CORM NCs and Ncomps.
The IFM will also define the structure of the network-data exchange unit (NDEU) in CORM. The NDEU stands as an equivalent to network-data packets in the Internet model. As a preliminary visualization, the NDEU will be composed of three main fields: a CCn field, an RCn field, and an ACn field. The CCn field will identify the destination of the NDEU. By destination we mean the NC-unit within the NC residing on the Ncomp that should receive the NDEU. The RCn field will identify how the NDEU should be handled at the network level and at the NC level. At the network level, the RCn field will dictate the transmission requirements for the NDEU in terms of network-resource usage. This may include the NDEU priority, fragmentation, FEC used, link characteristics, etc. At the NC level, the RCn field will specify RCn requirements for the data held in the ACn field of the NDEU. Collectively, the CCn field and the RCn field will identify the network-data flow to which the NDEU belongs. The ACn field will hold the data to be exchanged among the NC-units. We stress that the above presentation of the NDEU fields is at a preliminary stage; the NDEU structure will be refined as we further detail the DC and DR in our future work.
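The preliminary three-field layout described above can be sketched as a simple record type. This is a sketch of our own reading of the proposal: the field contents, the `flow_id` rule, and all names are illustrative assumptions, not a CORM specification.

```python
from dataclasses import dataclass


@dataclass
class NDEU:
    """Network-data exchange unit sketch: the three proposed fields."""
    ccn: dict   # destination: which Ncomp, NC, and NC-unit should receive it
    rcn: dict   # handling: priority, fragmentation, FEC, per-NC resource needs
    acn: bytes  # payload exchanged among the NC-units

    def flow_id(self):
        # Per the text, the CCn and RCn fields jointly identify the
        # network-data flow the NDEU belongs to.
        return (tuple(sorted(self.ccn.items())), self.rcn.get("priority"))


pkt = NDEU(ccn={"ncomp": 4, "nc": "CNC", "unit": "EU"},
           rcn={"priority": 1, "fec": "none"},
           acn=b"route-request")
```

Two NDEUs with the same CCn contents and the same RCn priority would map to the same flow identifier, which is the property the text attributes to the CCn/RCn pair.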
4. Validating CORM

Most contemporary network architectural proposals address a particular network problem or are tailored to satisfy a specific set of requirements. Therefore, their validation has been done, to a great extent, through descriptive approaches including simulation, prototyping, or mathematical modeling. In the case of CORM, however, being a reference model for computer networks, CORM can be considered a definitional model. A definitional model is more typical of conventional engineering: it expresses required characteristics of a system at an appropriate level of abstraction (Polack et al., 2008). CORM expresses the characteristics of CAS, as well as network specification requirements, within its basic abstraction unit (BAU), the CORM-NC, and enforces both to be synthesized into the network fabric by construction. Therefore, to validate CORM, we resorted to validating the derivation of the CORM-NC using the Function-Behavior-Structure (FBS) framework (Greenberg et al., 2005; Kruchten, 2005). Furthermore, to evaluate the behavior of CORM-based networks, we derived a CORM-based network architecture for ad hoc networks, and simulated an ad hoc network based on our derived architecture using the ns-2 simulator.
Fig. 3. ACRF realization within CORM-NC.
4.1. Validating CORM using the FBS engineering framework

The FBS framework, developed in Greenberg et al. (2005) and illustrated in Fig. 4, is applicable to any engineering discipline for reasoning about, and explaining the nature of, the design process (Kruchten, 2005). It clearly delineates the design process in terms of eight steps (Greenberg et al., 2005; Kruchten, 2005). The inception point for the CORM-NC design is marked by our design principles, according to which computer networks need to be designed as software-dependent CAS that exhibit emergent behavior. The CORM design principles formed our first set of requirements F1 and expected behavior Be1, as follows:
F1 = CAS (autonomous entities, complexity)
Be1 = Emergent behavior (adaptation, self-organization, …)
Shifting to the structure that can deliver F1 and Be1, we attempted a catalog lookup by exploring natural complex systems, studying their structures (S) and the individual behavior of their components (Bs). Our research led us to a recent study on primordial bacterial colonies (Ben-Jacob and Levine, 2005). This point marked our first functional reformulation. We formulated new requirements F2 for designing a network cell that mimics the bacterium cell behavior Be2. Accordingly, we synthesized the structure S2 from Be2, presenting the NC. However, F2, Be2, and S2 needed further reformulation to detail the network requirements. At this point, we defined the network requirement specification that led to the derivation of the ACRF framework for network concerns, yielding a new set of requirements F3. F3 was integrated with F2 and superimposed over Be2 and S2 to customize each towards the context of computer networks, leading to the derivation of the CORM-NC.
F2 = Self-engineering NC; Be2 → S2
Be2 = Bacterium cell behavior; F3 = ACn, CCn, RCn, FCn
The CORM-NC delineates the BAU from which the network can be recursively built. However, at this point Bs and Be are not completely defined for the CORM-NC, since this will involve defining performance variables and their range of values for the software code that will be running within each of the NC-units. This can be attributed to the fact that CORM, as a reference model, is at least three levels of abstraction away from any physical realization. Its purpose is to provide a common conceptual framework that can be used consistently across different implementations (Reference Model for Service Oriented Architecture 1.0, 2006). Hence, architectures derived from CORM, and represented using the CORM-NC, will need to define Be and Bs within the context of their application.
4.2. Deriving a CORM-based architecture

The key difference between a reference model and an architectural model is the level of concept abstraction that the model conveys, as well as the degree of requirement specification that the model addresses. CORM expresses the most fundamental design principles for engineering computer networks at the highest level of abstraction. To derive an architecture from CORM, further specifications regarding network operational context, performance requirements, and/or constraints need to be identified. In this section we identify the requirement specifications for ad hoc networks and derive a CORM-based architecture for ad hoc networks (CAHN) based on our proposed specifications. To evaluate the performance of our derived architecture, we identify the required behavior of CAHN-based networks in two case studies and illustrate their operation using simulation. The first case study calls for network stability by adapting the rate of the different data flows carried by the network to the available link capacities. The second case study calls for cross-interest management among operating protocols.
4.2.1. CAHN: a CORM-based architecture for ad hoc networks

To derive CAHN, we start by nominating the requirement specifications for architecting ad hoc networks, which we propose as follows:

(1) Minimal architecture that provides core network functionalities: at the minimum level, CAHN should be able to provide basic communication and transport services equivalent to those supported by the TCP/IP suite.
(2) Modular: CAHN abstractions should separate functions into modules with clearly defined interfaces.
(3) Dual node-functionalities: in CAHN each node acts as a router as well as an end node.
(4) Acknowledge the operational requirements of wireless transmission in terms of transmission interference and MAC contention.

Based on the CORM NST and ACRF, and guided by requirements 1, 2, and 3, we define the CAHN-Ncomp to be composed of three CORM-NCs, instantiating the core concerns defined by the ACRF. Accordingly, CAHN abstractions are the following concern-specialized CORM-NCs: the Application Network Cell (ANC), the Communication Network Cell (CNC), and the Resource Network Cell (RNC). CAHN networks will be composed of CAHN-Ncomps, each composed of one or more of the concern-specialized NCs: ANC, CNC, and RNC. Using the previously defined notations and abbreviations, CAHN's structure can be represented in EBNF as
CAHN-Ncomp = ANC CCS CNC CCS RNC
4.2.2. Engineering protocols for CAHN

Protocol engineering in CAHN needs to be classified according to the ACRF framework, and thus executed by the corresponding concern-specialized NC. Moreover, the task performed by each
Fig.4.Gero’s FBS (adapted from Greenberg et al.(2005)).
protocol or concern-specialized NC will be internally classified according to the ACRF, as defined by the CORM-NC. To clarify this recursive assignment of the ACRF framework, we present an example for the routing function in CAHN.

According to the ACRF classification, the routing function is a CCn, which will be represented as a CNC in CAHN. However, routing as a function is a composite task that can be further divided into several subtasks, such as naming, addressing, forwarding, routing table creation and maintenance, etc. These identified subtasks will be recursively classified according to the ACRF. The following is an example of such a classification:
CNC-ACn: The code for the ACn will be executed in the LC of the EU in the CNC. It is responsible for setting the routing protocol policies, indicating how the routing table should be built, i.e., what the parameters are for choosing one route over another, how routes will be maintained, when a route will be purged, how to manage neighbors, etc.
CNC-CCn: The code for the CCn will be executed in the LC of the EU in the CNC. Depending on the ACn requirements and policies, the CCn decides on the appropriate routing protocol to be instantiated. The instantiation of a routing protocol depends on the micro-protocols that implement routing-related tasks, which are available in the library code-pool associated with the EU (refer to the NC structure). Alternatively, a default routing protocol can be adapted to the ACn requirements. Since, in wireless environments, route definition depends mainly on link characteristics, deciding on the link parameters is part of the CCn responsibility. This introduces a cross interest between the CNC and the RNC, which will be handled by the FCn. Other communication tasks handled by the CCn include resolving routes, sending and receiving route requests and replies, communicating with neighbors, forwarding packets, etc.
CNC-RCn: The code for the RCn will be executed in the RUCC of the EU in the CNC; it is responsible for estimating and managing the resources assigned to the instantiated CNC.
CNC-FCn: Both the MU and the RU jointly implement the FCn. The MU will be responsible for monitoring all performance parameters, whether pertaining to the communication function assigned to the CNC or related to the operation of the EU in general. Monitoring parameters are specified once the CNC gets specialized, and are subject to adjustments/amendments if required. The RU will constantly check the performance of the communication-related functions specifically, and the EU operation in general, by comparing the values of the monitored parameters to threshold values defined in the RU. The RU interferes to adjust either the communication-related functions or the EU operation if the monitored values fall below the indicated thresholds. Furthermore, the RU will decide on any optimizations required for improving performance, and resolve any cross-interests that might arise among the core concerns within the CNC, or between the CNC and other specialized NCs residing on the same Ncomp or on different Ncomps. In the latter case, the RUs residing on all involved NCs will collaborate to resolve the cross-interest. As an example of cross-interest management within a CNC, consider the case when the memory space required by the routing table approaches the maximum space assigned to the CNC. In such a case, the RU, after consulting its knowledge database, could decide either to instruct the CCn to alter its route-purging policy, or to command the RCn to request extra memory space.
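The routing-table example above can be sketched as a decision rule inside the RU. The 90% trigger, the purge-gain estimate drawn from the knowledge database, and the function and action names are our illustrative assumptions, not part of CAHN.

```python
def ru_memory_decision(table_bytes, quota_bytes, purge_gain_estimate):
    """Illustrative RU policy for the routing-table example: when the table
    nears its memory quota, either tighten the CCn purge policy or escalate
    to the RCn, based on how much space experience says purging recovers."""
    overshoot_threshold = 0.9 * quota_bytes
    if table_bytes < overshoot_threshold:
        return "no-action"
    # Experience suggests purging frees enough space: adjust the CCn policy.
    if purge_gain_estimate >= table_bytes - overshoot_threshold:
        return "instruct-ccn-alter-purge-policy"
    # Otherwise command the RCn to request extra memory from the system.
    return "command-rcn-request-memory"
```

The two non-trivial outcomes correspond exactly to the two options the text gives the RU: alter the CCn's route-purging policy, or command the RCn to request more memory.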
We note that engineering protocols to conform to CAHN calls for new methodologies and techniques beyond the scope of this research work. The previous discussion was presented as an illustration to clarify our vision of protocol development according to the CORM-NC.
4.2.3. CAHN simulation setup

CAHN is evaluated by simulating a CAHN-based network in the ns2 simulator (The Network Simulator ns-2). Our choice of ns2 as our network simulator was driven by two factors: (1) our second case study adopts the cross-layer power-adaptation algorithm presented in Kawadia and Kumar (2005) as an example of cross-interests arising among different network concerns; since the algorithm was simulated in ns2, we were inclined to use the same simulator to compare CAHN behavior to that reported in Kawadia and Kumar (2005). (2) It has been mentioned in Kurkowski et al. that ns2 is the most commonly used simulator for MANET research; thus, by adopting ns2 as our simulator, we can, as future work, experiment with CAHN in other scenarios presented in the literature.

For each case study, the simulation is composed of two scenarios: a baseline scenario, which uses the legacy protocols provided by ns2, and a CAHN scenario, which uses the modified code that implements the CAHN-Ncomp, as will be detailed later. For each simulation scenario, we conduct 10 independent replications, seeding the random number generator (RNG) as indicated in Umlauft and Reichl (2007) to ensure the absence of correlations among the generated random numbers. For each set of replications, we calculate the confidence interval at 95%.
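Per-stage statistics of this kind can be reproduced with a standard Student-t interval over the 10 replications. The sketch below is our own illustration (the sample values are invented); the paper does not specify which interval construction it used.

```python
import statistics


def confidence_interval_95(samples):
    """95% confidence interval for the mean over independent replications,
    using the Student-t critical value for n-1 degrees of freedom
    (t = 2.262 for n = 10 replications)."""
    n = len(samples)
    mean = statistics.mean(samples)
    sem = statistics.stdev(samples) / n ** 0.5   # standard error of the mean
    t = {10: 2.262}.get(n, 1.96)                 # fall back to the normal approx.
    return mean - t * sem, mean + t * sem


# Ten invented throughput replications (Kbits/s), for illustration only.
lo, hi = confidence_interval_95([136.2, 135.8, 137.0, 136.5, 135.9,
                                 136.8, 136.1, 136.4, 135.7, 136.6])
```

The interval is centered on the grand average and shrinks with the square root of the number of replications, which is why the tables later report Av, Std, and CI together.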
In our simulations, each ns2 node is equivalent to a CAHN-Ncomp. The transport functionalities are incorporated into the ANC, while the communication functionalities are incorporated into the CNC. In ns2, power is the only system resource that could be controlled in the simulation. Therefore, transmission power is the only resource managed by the RNC. The ANC reliable transport service is simulated by adjusting the ns2 implementation of one-way TCP (Tahoe TCP) to accommodate simple regulation and monitoring. The ns2 Tahoe TCP implementation reflects a simplified model of one-way data transfer of fixed-size packets. This model attempts to capture the essence of the TCP congestion and error control behaviors, but it is not intended to be a faithful replica of any real-world TCP implementation (Floyd). However, it performs congestion control and round-trip time estimation similar to the version of TCP released with the 4.3BSD "Tahoe" UNIX system release from UC Berkeley (ns documentation, 2008). The ANC reliable transport service incorporates the MAC contention level reported by the CNC in the calculation of the congestion window size. Accordingly, the ANC reliable transport service adjusts its transmission rate according to the RTT, as well as the MAC contention level realized along the path from the source to the destination. Such a calculation of the congestion window is similar to that presented in Hamadani and Rakocevic (2008). To generate a data flow, we attach an FTP application to the ANC reliable transport service, referred to hereafter as ANC-TCP. For the unreliable transport service, we similarly adjust the code for the UDP protocol and the constant bit rate (CBR) application provided by ns2, and incorporate both into an ANC, referred to hereafter as ANC-CBR.
NCs within the same CAHN-Ncomp interact directly, while NCs residing on different CAHN-Ncomps interact through messaging. Therefore, we introduce a new packet type in ns2 (CAHN-packets) to sustain remote-NC inter-communication. The performance parameters defined in each case study are monitored, and their values are communicated through CAHN-packets among the NCs, to allow for adaptation and cross-interest management (as will be detailed later).
4.2.4. Case study I: achieving flow stability through transport protocol adaptation

Network stability is directly reliant on transport protocol congestion control mechanisms. As mentioned in (Floyd), "network stability is frequently associated with rate fluctuations or variance. Rate variations can result in fluctuations in router queue size and therefore queue overflows. These queue overflows can cause loss of synchronizations across coexisting flows and periodic under-utilization of link capacity, both of which are considered to be general signs of network instability." Thus, one way to achieve stability is to avoid network congestion by correctly estimating the network capacity and adjusting the flow rate accordingly. For legacy protocols, the flow rate is either controlled by the TCP congestion window or by the application running over UDP. Optimally, for TCP, the congestion window size should be proportional to the network capacity, derived as the product of the bandwidth and delay of the links along the path. However, TCP cannot estimate the optimal size for its congestion window, but has to wait for losses to realize that the window has exceeded the optimal size. Losses at the sender side can be realized either proactively, by receiving duplicate acknowledgments from the receiver for the last received segment, or reactively, through a retransmission timeout (RTO). To calculate the latter, TCP estimates the end-to-end round-trip time (RTT) taken by a segment and its acknowledgment; if an acknowledgment for a sent segment fails to be received within the estimated RTT, an RTO event has occurred. In this respect, we note that TCP's congestion control is reactive (packets need to be lost before TCP can realize congestion) as well as incognizant (the congestion window size is aggressively reduced regardless of the reasons imputed to the packet drops). For UDP, no congestion control mechanism is provided, leaving it to be implemented solely by the end applications, which attempt to estimate the RTT to adjust the flow rate to the available network capacity. Accordingly, RTT estimation is a determinant factor in adjusting flow rates to the network capacity, and hence in achieving network flow stability.
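The RTT/RTO bookkeeping described above is conventionally done with the classic smoothed-RTT estimator (Jacobson's algorithm, as used in BSD TCP); a minimal sketch, with the standard gain constants:

```python
def make_rto_estimator(alpha=0.125, beta=0.25):
    """Classic smoothed-RTT / RTO estimator: the sender keeps a smoothed RTT
    (srtt) and a mean-deviation term (rttvar), and declares an RTO event when
    an ACK fails to arrive within srtt + 4 * rttvar."""
    state = {"srtt": None, "rttvar": None}

    def update(sample):
        if state["srtt"] is None:
            # First measurement initializes the estimator.
            state["srtt"], state["rttvar"] = sample, sample / 2
        else:
            state["rttvar"] = (1 - beta) * state["rttvar"] \
                + beta * abs(state["srtt"] - sample)
            state["srtt"] = (1 - alpha) * state["srtt"] + alpha * sample
        return state["srtt"] + 4 * state["rttvar"]   # current RTO

    return update


update = make_rto_estimator()
rto = update(0.100)   # first sample: srtt = 0.100 s, rttvar = 0.050 s
```

Note the reactive character discussed above: the estimator only shortens the RTO as variance decays across samples; it has no notion of why an RTT sample grew, which is exactly the blind spot that hop-by-hop MAC acknowledgments exploit in ad hoc networks.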
Network instability due to flow-rate fluctuations is exacerbated in ad hoc networks due to hop-by-hop MAC acknowledgments at the wireless links (Li et al., 2001). Hop-by-hop MAC acknowledgments have the direct effect of artificially increasing the RTT, which is translated by TCP and/or end applications over UDP as an increase in network capacity. A direct consequence of a large RTT would be an increase in the load induced into the network. However, the increased load brings no gains in throughput, but rather results in more packets being queued at the intermediate nodes along the path, intensifying the contention among adjacent nodes (Hamadani and Rakocevic, 2008). Eventually, packets will be dropped due to repeated collisions, resulting in loss of synchronization across coexisting flows and periodic under-utilization of link capacity, both signs of network instability. Therefore, we argue that, to attain stability, transport protocols operating over ad hoc networks need to be aware of the contention level present at the wireless links, and thus adjust the rate of data flow accordingly. We therefore identify the MAC contention level, and the rate of data flow induced into the network by the transport function, as the performance parameters to be monitored and regulated in this case study (requirement specification 4). The MAC contention level will be measured, as suggested in Li et al. (2001), as the time required by the node to acquire the channel, while the rate of data flow will be measured as the size of the congestion window for reliable transmission, and as the rate of data transfer for streaming and real-time transmission.

4.2.4.1. Case study I simulation design. Simulation scenarios will
incorporate both reliable and unreliable data transmission over a chain topology composed of five nodes. The nodes are defined such that each node can only access its direct neighbor(s). Chain topologies have been reported to be challenging for legacy transport protocols. This can be attributed to the incognizant operation of legacy protocols, which fail to adjust the data transmission rate to the MAC contention level. Reliable data transmission is induced using an FTP application, while a CBR application is used for the unreliable transmission. In the CAHN scenario, the principal monitored parameter is the MAC contention level realized at the wireless links. At each Ncomp, the MAC contention level is measured for each direct link connecting a one-hop neighbor, as well as along the path defined for the data flow from the source Ncomp to the destination Ncomp. The MAC contention level will then be considered by the ANC when calculating the rate of data transmission. For reliable transmission, the ANC-TCP will adjust the congestion window according to the reported MAC contention level. Similarly, the ANC-CBR will incorporate the contention level in the packet transfer rate.
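One way the ANC-TCP might fold the reported contention level into its window update is sketched below. The scaling rule is our illustrative assumption, in the spirit of the contention-aware approach of Hamadani and Rakocevic (2008); it is not the paper's exact formula.

```python
def adjust_cwnd(cwnd, acked, contention, max_contention=1.0):
    """Illustrative contention-aware window update: grow additively on an ACK,
    but scale the growth down as the reported MAC contention level (0..1)
    approaches its maximum, so the flow backs off before losses occur."""
    if not acked:
        return max(1.0, cwnd / 2)          # loss: multiplicative decrease
    headroom = 1.0 - min(contention / max_contention, 1.0)
    return cwnd + headroom / cwnd          # damped additive increase


w = 4.0
w = adjust_cwnd(w, acked=True, contention=0.0)   # uncongested: full AI step
```

At zero contention the rule reduces to plain additive increase; at full contention the window freezes rather than grows, which is the proactive behavior the case study argues legacy TCP lacks.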
The total simulation time, for each scenario, is 1000 s. The simulation scenario is composed of three stages, each running for 300 s, and two intervals. In Stage 1, a single reliable FTP data flow (flow 1) runs over the chain topology from the source (node 0) to the destination (node 4) for 300 s. Stage 2 introduces another reliable FTP data flow (flow 2) from the same source to the same destination for another 300 s. Then a wait interval (Interval 1) of 100 s is introduced to allow flow 1 to stabilize. Stage 3 introduces an unreliable CBR data flow (flow 3) for another 200 s, after which flow 1 continues for another 100 s (Interval 2) before the simulation ends. The last 3 s are introduced to allow for any in-flight packets to reach the destination. The CBR flow rate has been adapted separately to the chain topology in a trial simulation to ensure that any packet dropped is due to flow interactions rather than the overload of the CBR traffic. The simulation parameters are summarized in Table 1.
The performance evaluation metrics recorded in this simulation are throughput, packet losses, and protocol stability. These metrics are derived from TMRG RFC 5166 (Floyd). Since we are interested in the steady-state performance, we excluded the values recorded for the first 5 s, during which the TCP protocol is operating in the slow-start phase in the baseline scenario, and the CAHN-Ncomps are gathering monitoring data in the CAHN scenario. For throughput and packet losses, we record the congestion window size in packets (packets are 1000 bytes long), the total number of duplicate acknowledgments, and the sink throughput in Kbits/s for the FTP transmission. As for the CBR transmission, we record packet losses and the overall throughput received by the destination node in Kbits/s. By overall throughput we mean the total amount of data bits received at the node entry point (the node entry point refers to the network layer in the baseline scenario and the CNC interface in the CAHN scenario). Thus the overall throughput includes both the CBR application data plus the unreliable
Table 1
Case study I simulation parameters.

No. of nodes: 5, arranged in a chain topology
Area (m²): 1200 × 1200
Transmission range (m): 250
Propagation model: two-ray ground
Wireless link bandwidth (Mb/s): 2
MAC protocol: CMU extension
Routing protocol: AODV
Transport protocols: Tahoe TCP, UDP
Applications inducing packet flows: CBR over UDP, FTP over TCP
No. of flows: 3 (2 FTP and 1 UDP)
CBR packet flow rate (Kbits/s): 110
transport protocol data. For stability, we calculate the coefficient of variation (CoV) for all data flows received at the receiver. For these four metrics we report the grand average (Av), the standard deviation (Std), and the 95% confidence interval (CI), calculated for each simulation stage over all replications, in Tables 2–5.

4.2.4.2. Case study I results and analysis
(1) In stages 1 and 2, as shown in Tables 2 and 3, CAHN data flows recorded better sink throughput, and lower duplicate acknowledgments and packet loss, than the baseline flows, in spite of the smaller congestion window size of the former. This indicates better network capacity estimation and resource management for CAHN networks.
(2) In stage 2, CAHN-Ncomps maintained fairness between flow 1 and flow 2, as the average throughputs of both flows and their throughput confidence intervals are within close range. Thus, regulating data flows to the network capacity did not affect the fair allocation of bandwidth among simultaneous flows.
(3) In stage 3, CAHN data throughput outperforms the baseline data flows. As shown in Tables 4 and 5, CAHN data flows 1 and 3 evenly shared the network capacity, achieving similar throughput, as indicated by the throughput averages and confidence intervals. For the baseline simulation, however, the UDP protocol acquires most of the bandwidth, leaving a very narrow strand for TCP (8 Kbits/s average TCP throughput versus 113.66 Kbits/s average UDP throughput). Furthermore, for lost packets, flow 3 in the CAHN scenario recorded fewer than in the baseline scenario. For flow 1, the number of duplicate acknowledgments in CAHN is higher than that recorded in the baseline scenario, but taking into consideration that the throughput of the latter is far below the former, this difference is expected.
(4) For flow variance, CAHN reliable data flows recorded a lower CoV than the equivalent baseline flows in all three stages. As for the unreliable flows, we note that for the baseline scenario, the flow rate of the CBR application running over legacy UDP had been previously adjusted to the network capacity. Therefore, in stage 3, being oblivious to any accompanying flows, the legacy UDP flow takes over most of the available bandwidth, thus achieving very low rate variation, indicated by the low CoV average and confidence interval. On the other hand, due to monitoring and regulation in CAHN, flow 3 is aware of the accompanying flow 1, and therefore the network bandwidth is shared between the two flows, resulting in slightly higher
Table 2
Stage 1 performance parameters.

Performance parameter                    Baseline flow 1    CAHN flow 1
Congestion window size (packets)   Av    5.57               3.06
                                   Std   0.11               0.11
                                   CI    [5.65, 5.5]        [3.14, 2.98]
Sink throughput (Kbits/s)          Av    136.09             156.23
                                   Std   1.62               0.54
                                   CI    [137.25, 134.94]   [156.62, 155.85]
Duplicate acknowledgments          Av    381                19
(ACK packets)                      Std   38                 5
                                   CI    [408, 345]         [22, 15]
CoV                                Av    0.48               0.11
                                   Std   0.02               0.01
                                   CI    [0.49, 0.46]       [0.12, 0.10]
Table 3
Stage 2 performance parameters.

Performance parameter                    Baseline flow 1   CAHN flow 1      Baseline flow 2   CAHN flow 2
Congestion window size (packets)   Av    3.83              2.37             3.92              2.25
                                   Std   0.44              0.06             0.48              0.05
                                   CI    [4.15, 3.51]      [2.41, 2.33]     [4.26, 3.58]      [2.29, 2.21]
Sink throughput (Kbits/s)          Av    66.39             73.28            67.4              75.05
                                   Std   12.59             1.33             12.84             1.76
                                   CI    [75.39, 57.38]    [74.23, 72.32]   [76.58, 58.21]    [76.31, 73.79]
Duplicate acknowledgments          Av    376               95               358               92
(ACK packets)                      Std   66                21               137               13
                                   CI    [424, 329]        [110, 81]        [456, 260]        [102, 83]
CoV                                Av    0.98              0.44             0.96              0.43
                                   Std   0.16              0.02             0.15              0.02
                                   CI    [1.1, 0.87]       [0.46, 0.42]     [1.07, 0.85]      [0.45, 0.42]
Table 4
Stage 3 performance parameters.

Performance parameter                    Baseline flow 1   CAHN flow 1
Congestion window size (packets)   Av    2.21              2.35
                                   Std   0.21              0.08
                                   CI    [2.36, 2.06]      [2.41, 2.3]
Sink throughput (Kbits/s)          Av    8.04              69.11
                                   Std   1.53              4.09
                                   CI    [9.14, 6.95]      [72.04, 66.19]
Duplicate acknowledgments          Av    63.6              71.2
(ACK packets)                      Std   45.34             11.24
                                   CI    [96.03, 31.17]    [79.24, 63.16]
CoV                                Av    2.57              0.45
                                   Std   0.3               0.07
                                   CI    [2.79, 2.36]      [0.5, 0.4]
Table 5
Stage 3 performance parameters.

Performance parameter          Baseline flow 3     CAHN flow 3
Throughput (Kbits/s)     Av    113.66              69.25
                         Std   1.25                2.5
                         CI    [114.55, 112.76]    [71.0, 67.5]
Packet loss (Kbits/s)    Av    9.23                5
                         Std   2.01                0.94
                         CI    [10.67, 7.79]       [ ]
CoV                      Av    0.15                0.27
                         Std   0.02                0.03
                         CI    [0.16, 0.13]        [0.29, 0.25]
throughput fluctuation for the CAHN flow 3 (UDP) when compared to the equivalent baseline flow.
To conclude, the presented results demonstrate that:

(1) Ad hoc networks derived according to the CAHN architecture fit the ad hoc network requirements better than networks derived according to the legacy layered model.
(2) Network components constructed based on the CORM-NC model are capable of intrinsic monitoring and regulation of the performance parameters defined by the requirement specifications.
(3) Network components constructed based on the CORM-NC model possess the adaptability to fit their operational context.
(4) Networks derived according to CORM-based architectures achieve global stability in spite of alterations occurring at the micro level.
4.2.5. Case study II: managing cross-interests among network core concerns
In the legacy Internet model, the need for cross-interest management was unforeseen until the proliferation of wireless communications, which led, among other factors, to several cross-layer design proposals. However, it was argued that cross-layer designs apply localized optimizations without considering the possible adverse consequences at the global scope, thus causing performance deterioration rather than performance gains. In CORM, cross-interest management is acknowledged by defining the FCn in the ACRF framework, and the MU and RU in the NC. In this case study, we specify cross-interest management as an additional requirement for CAHN. Accordingly, we will revisit the architectural definition of CAHN to address this newly specified requirement, and simulate a CAHN-based network to evaluate its compliance with CAHN's requirement specifications.

CAHN's architectural definition. We extend CAHN's requirements to include cross-interest management as follows:

- Cross-interest management: CAHN should provide a systematic way for dealing with cross-interests among the supported network functionalities.
Accordingly, we redefine CAHN-Ncomp to be composed of four CORM-NCs: the three previously defined core-concern NCs (ANC, CNC, RNC) and a federation NC (FNC) for cross-interest management. Using EBNF, CAHN-Ncomp is represented as follows:

CAHN-Ncomp = ANC, CNC, RNC, [FNC];

Although the FCn exists within each of the core-concern NCs by definition, through the existence of the MU and the RU, we define the FNC as a dedicated NC to handle sophisticated interaction management. Therefore, as indicated in CAHN's architectural definition, FNC instantiation is optional.

Case study II simulation design. In ad hoc networks, power
is a valuable resource that affects the overall operation of the network; therefore, it needs to be managed efficiently. From the ACRF perspective, power is managed by the RCn, shapes the definition of communication paths at the CCn, and affects performance and optimization decisions at the ACn. Power management thus presents itself as a typical cross-concern problem that needs to be addressed from different perspectives.
The simulation scenario of this case study is adapted from Kawadia and Kumar (2005), in which the authors deliberately contrived a cross-layer power-management algorithm to show the pitfalls of cross-layer adaptations that apply unalterable optimizations at a local scope without considering the effect at the global scope. The algorithm tunes the transmission power of a node according to the number of surrounding neighbors, in an effort to minimize MAC contention while maintaining network connectivity in ad hoc networks. The algorithm integrates the operation of the network, MAC, and physical layers and was shown to work well for UDP traffic. However, Kawadia and Kumar (2005) showed that applying this power-adaptation algorithm to TCP traffic resulted in TCP throughput fluctuation, as the network oscillated between connectivity and disconnection, thus adversely affecting TCP performance.
In this case study, we adopt the same simulation settings and parameters as defined in Kawadia and Kumar (2005), with the minor change of using the AODV protocol with link-layer detection instead of the DSDV protocol with Hello messages. The topology consists of 23 nodes arranged in parallel columns in a 500 m × 500 m area, as shown in Fig. 5. The cross-layer power-management algorithm is active throughout the simulation for both the baseline and CAHN-network scenarios. The algorithm is made of two nested loops. The outer loop chooses an upper bound on the number of neighbors that each node should have, referred to as the target degree. The inner adaptation loop adjusts the power level at the physical layer to achieve the previously set target degree. Each node has six transmit power levels, corresponding to ranges of 50 m, 90 m, 130 m, 170 m, 210 m, and 250 m. The outer loop sets the target degree once every 90 s. The target degree starts at 2 and is increased when the recorded throughput at the network layer is zero (signaling a disconnected network), or as long as the recorded throughput is increasing. The inner loop sets the transmission power every 15 s to adjust the number of neighbors to the preset target degree. One TCP flow is active from node 0 to node 3. The simulation runs for 500 s.
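The nested-loop adaptation just described can be sketched as follows. This is a schematic reconstruction of the behavior described above, not the actual code of Kawadia and Kumar (2005); in particular, the one-level-at-a-time power adjustment policy in `inner_step` is an assumption.

```python
# Six transmit power levels, indexed by the range they achieve (meters).
POWER_LEVELS = [50, 90, 130, 170, 210, 250]

class PowerAdapter:
    """Schematic nested-loop power adaptation: the outer loop sets the
    target degree every 90 s; the inner loop sets transmit power every 15 s."""

    def __init__(self):
        self.target_degree = 2      # the target degree starts at 2
        self.level = 0              # index into POWER_LEVELS
        self.prev_throughput = 0.0

    def outer_step(self, throughput):
        # Raise the target degree when the network is disconnected
        # (zero throughput), or as long as throughput keeps increasing.
        if throughput == 0 or throughput > self.prev_throughput:
            self.target_degree += 1
        self.prev_throughput = throughput

    def inner_step(self, neighbor_count):
        # Nudge transmit power toward the preset target degree.
        if neighbor_count < self.target_degree and self.level < len(POWER_LEVELS) - 1:
            self.level += 1
        elif neighbor_count > self.target_degree and self.level > 0:
            self.level -= 1
        return POWER_LEVELS[self.level]
```

In the simulation, `outer_step` would be invoked once every 90 s with the throughput recorded at the network layer, and `inner_step` once every 15 s per node with its current neighbor count.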
For the CAHN-network scenario, each node has the configuration of the extended CAHN-Ncomp with an FNC. The performance parameters (requirement specification 4) identified in this case study are: the power transmission level at the RNC; the MAC contention level, the next-hop neighbor, and the number of data packets received at the CNC; and the recorded throughput at the ANC, measured as the TCP congestion window size. The CAHN-network scenario is divided into two phases of 250 s each. Phase 1 is an adaptation-learning phase during which the cross-layer power-adaptation algorithm controls the transmission power. Meanwhile, the ANC at the source adjusts the TCP window according to the path capacity estimated by the CNCs along the path (as explained in case study I). Concurrently, the MU at each NC records the values assumed by the specified performance parameters. In Phase 2, the FNC on each Ncomp picks the optimal values for the performance parameters (i.e., the power level at which the least MAC contention was recorded, the corresponding next-hop neighbor on the route from source to destination, and, at the source, the corresponding TCP congestion window size). The FNC sustains these values, overriding any changes attempted by the cross-layer power-management algorithm, AODV, or TCP.

Case study II results and analysis. Table 6 compares the
performance of the baseline scenario and the CAHN-network
Fig. 5. Node setup for case study II (Kawadia and Kumar, 2005).
scenario, expressed in terms of the TCP congestion window size in packets and the sink throughput in Kbits/s, for which we report the grand average (Av) and the 95% CI. Figs. 6 and 7 show the oscillation of the TCP congestion window and the resulting sink throughput for one of the baseline scenario replications, while Figs. 8 and 9 show the corresponding CAHN-ANC reliable-transport congestion window and resulting sink throughput. Comparing the graphs in Figs. 6 and 7 to those in Figs. 8 and 9, we note that:
(1) In Phase 2, the FNC was able to manage cross-interests arising between different concerns. The FNC at the CAHN-Ncomp learned optimal performance values in Phase 1 and incorporated this knowledge at run time into CAHN-network operation in Phase 2, thus stabilizing performance in a highly variable context.
(2) If we define protocol evolution, in the context of computer networks, as a higher form of intelligent protocol adaptation that responds to context changes by drawing on previously recorded knowledge from past experience, then we argue that the FNC in the CAHN scenario has achieved a level of evolutionary behavior.
(3) In Phase 1, before the FNC became involved, the throughput achieved by CAHN networks showed far less oscillation than that achieved in the baseline scenario, in spite of the execution of the power-adaptation algorithm. This shows that even minimal monitoring and regulation enables CAHN-networks to outperform legacy networks in unstable contexts.
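The two-phase FNC operation evaluated above (recording performance parameters in Phase 1, then pinning the best observed values in Phase 2) can be sketched as follows. This is a hypothetical illustration of the mechanism, not CAHN implementation code; the `Sample` record and the least-contention selection rule mirror the description in the simulation design.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    power_level: int   # RNC transmission power level
    contention: float  # MAC contention observed at the CNC
    next_hop: int      # next-hop neighbor at the CNC
    cwnd: float        # ANC congestion window at the source

class FNC:
    """Federation NC: learn in Phase 1, override in Phase 2."""

    def __init__(self):
        self.samples = []
        self.pinned = None

    def monitor(self, sample):
        # Phase 1: the MU records values of the performance parameters.
        self.samples.append(sample)

    def enter_phase2(self):
        # Pick the sample with the least recorded MAC contention.
        self.pinned = min(self.samples, key=lambda s: s.contention)

    def regulate(self, proposed):
        # Phase 2: override changes attempted by the power-adaptation
        # algorithm, AODV, or TCP with the pinned optimal values.
        return self.pinned if self.pinned is not None else proposed
```

In Phase 1, `regulate` passes the proposed values through unchanged; after `enter_phase2`, it sustains the learned optimum regardless of what the underlying protocols attempt.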
5. Conclusion and future work

In this paper, we presented CORM, our proposed concern-oriented reference model for architecting future computer networks. CORM is bio-inspired, incorporates SE concepts, and coincides with the FBS engineering model. CORM is based on two ground principles that conceive the network as a software-dependent CAS. CORM refutes the long-endorsed concept of layering by defining the CORM-NC as the new network abstraction unit, from which networks can be recursively built. Thus, CORM intrinsically accounts for monitoring and regulation, which are fundamental characteristics for emergent behavior, and ensures network integrity and stability.
In this paper, we have also demonstrated a stepwise process for deriving a network architecture from CORM by deriving CAHN, a CORM-based architecture for ad hoc networks. To derive CAHN, we started by specifying the network requirements, including the operational context, performance requirements, and constraints. We then proceeded to define CAHN using CORM abstractions. CAHN behavior was evaluated through simulations, which illustrated the adaptability and cross-interest management ability of CAHN-networks. Furthermore, in the second case study, CAHN's
Table 6
Basic power scenario vs. CellNet scenario.

Performance parameters                    Baseline power Sc. TCP   CAHN Sc. ANC reliable transport
                                                                   Phase 1           Phase 2
Congestion window size (packets)   Av     5.46                     2.86              2.53
                                   CI     [6.19,4.74]              []                []
Sink throughput (Kbits/s)          Av     107.25                   164.15            175.12
                                   CI     [131.34,83.16]           [189.51,138.79]   [210,140.24]
Fig. 6. Baseline power scenario TCP congestion window oscillations.
Fig. 7. Baseline power scenario sink throughput oscillations.
Fig. 8. CAHN scenario ANC congestion window.
Fig. 9. CAHN scenario sink throughput.
specifications were extended, demonstrating (1) the expressiveness of CORM constructs in conveying requirements; and (2) the flexibility of CORM-based architectures to incorporate new requirements.
We believe that CORM poses several open research issues, a subset of which is listed below:

- Architectural representation of performance requirements/constraints: we need to devise a schema to clearly express the performance requirements and/or constraints required by any derived architecture.

- Network system profiling: a major constituent of the DR is devising a network knowledge-base (NKB) schema for profiling network entities, protocols, and interactions. Entities can be represented in terms of their capabilities and resources, protocols can be expressed in terms of their performance parameters, and interactions can be modeled in terms of the patterns they form (out of which behavioral descriptors can be identified). From these elements, key performance indicators (KPIs) will be extracted for measurement and evaluation.

- Network system profile representation: devise a representation schema for the NKB to serve as the base for interface design and engineering. The schema needs to address the variability, diversity, and immensity of the information existing within the NKB. The representation schema is required to support information abstraction and hiding, to handle complexity without disrupting system awareness and sensitivity to context. Furthermore, the representation schema needs to be expressive, with no implicit assumptions about information format or contents.

- Devising a network evaluation schema: what cannot be measured cannot be evaluated. Accordingly, elements of the derived NKB need to be quantified and measured; thus, calibration procedures need to be devised.

- Designing the DC: a plethora of proposals have addressed the challenging area of network routing in terms of naming, addressing, and forwarding functions. In CORM, we need to tackle these topics from an FBS perspective, taking into consideration the DR, the derived NKB, the representation schema, and the behavioral as well as structural aspects of CAS.
References

Ben-Jacob E, Levine H. Self-engineering capabilities of bacteria. Journal of the Royal Society Interface 2006;3(6):197–214.
Clark D. In: Proceedings of the NSF future internet summit, meeting summary. Washington, DC; October 12–15, 2009. Version 7.0 of January 5, 2010.
Cerf V, Kahn R. A protocol for packet network intercommunication. IEEE Transactions on Communications 1974;COM-22(5):637–48.
Feldmann A. Internet clean-slate design: what and why? ACM SIGCOMM Computer Communication Review 2007;37(3).
Floyd S, editor. Metrics for the evaluation of congestion control mechanisms. RFC 5166; March 2008.
Gacemi A, Senail A, Oussalah M. Separation of concerns in software architecture via a multi-views description. In: Proceedings of the 2004 IEEE international conference on information reuse and integration; 2004.
Gero JS. Design prototypes: a knowledge representation schema for design. AI Magazine 1990;11(4):26–36.
Greenberg A, et al. A clean slate 4D approach to network control and management. ACM SIGCOMM Computer Communication Review 2005;35(5).
Hamadani E, Rakocevic V. A cross layer solution to address TCP intra-flow performance degradation in multihop ad hoc networks. Journal of Internet Engineering 2008;2(1).
Hassan H, Eltarras R, Eltoweissy M. Towards a framework for evolvable network design. In: CollaborateCom 2008a, Orlando, FL, USA; November 2008.
Hassan H, Eltoweissy M, Youssef M. Towards a federated network architecture. In: Proceedings of IEEE INFOCOM 2008b, IEEE conference on computer communications workshops; p. 1–4.
Hassan H, et al. CellNet: a bottom–up approach to network design. In: Proceedings of NTMS 2009, third international conference on new technologies, mobility and security, Cairo, Egypt; December 2009.
Kawadia V, Kumar PR. A cautionary perspective on cross layer design. IEEE Wireless Communications 2005;12(1):3–11.
Kruchten P. Casting software design in the function–behavior–structure framework. IEEE Software 2005;22(2).
Kurkowski S, Camp T, Colagrosso M. MANET simulation studies: the incredibles. Mobile Computing and Communications Review 2005;9(4):50–61.
Li J, Blake C, De Couto DSJ, Lee HI, Morris R. Capacity of ad hoc wireless networks. In: Proceedings of ACM MOBICOM; 2001.
Mili H, Elkharraz A, Mcheick H. Understanding separation of concerns; 2004.
Mogul J. Emergent (mis)behavior vs. complex software systems. In: EuroSys'06, Leuven, Belgium; April 18–21, 2006.
Mitchell M. Complex systems: network thinking. Artificial Intelligence 2006;170(18):1194–212.
ns documentation.
Polack F, et al. Complex systems models: engineering simulations. In: ALife XI. MIT Press; 2008.
Reference Model for Service Oriented Architecture 1.0. Committee specification 1; 2 August 2006.
Rihani S. Nonlinear systems.
Saltzer J, Reed D, Clark D. End-to-end arguments in system design. ACM Transactions on Computer Systems 1984;2(4):277–88.
Software architecture: central concerns, key decisions. Architecture resources for enterprise advantage, http://www.bredemeyer.com; 2002.
The Network Simulator – ns-2.
Umlauft M, Reichl P. Experiences with the ns-2 network simulator: explicitly setting seeds considered harmful. In: Proceedings of the wireless telecommunications symposium WTS 2007, Pomona, California, USA.
Zhang Z, Jia L, Chai Y, Guo M. A study on the elementary control methodologies for complex systems. In: Proceedings of the control and decision conference.
Zimmermann H. OSI reference model: the ISO model of architecture for open systems interconnection. IEEE Transactions on Communications 1980;COM-28(4):425–32.