Shashank Agnihotri




Computer Networks




Queueing theory

Queueing theory is the mathematical study of waiting lines, or queues. The theory enables mathematical analysis of several related processes, including arriving at the (back of the) queue, waiting in the queue (essentially a storage process), and being served at the front of the queue. The theory permits the derivation and calculation of several performance measures, including the average waiting time in the queue or the system, the expected number waiting or receiving service, and the probability of encountering the system in certain states, such as empty, full, having an available server, or having to wait a certain time to be served.

Queueing theory has applications in diverse fields, [1] including telecommunications, [2] traffic engineering, computing [3] and the design of factories, shops, offices and hospitals. [4]

Overview

The word queue comes, via French, from the Latin cauda, meaning tail. The spelling "queueing" over "queuing" is typically encountered in the academic research field; in fact, one of the flagship journals of the profession is named "Queueing Systems".

Queueing theory is generally considered a branch of operations research because the results are often used when making business decisions about the resources needed to provide service. It is applicable in a wide variety of situations that may be encountered in business, commerce, industry, healthcare, [5] public service and engineering. Applications are frequently encountered in customer service situations as well as transport and telecommunication. Queueing theory is directly applicable to intelligent transportation systems, call centres, PABXs, networks, telecommunications, server queueing, mainframe computer queueing of telecommunications terminals, advanced telecommunications systems, and traffic flow.

Notation for describing the characteristics of a queueing model was first suggested by David G. Kendall in 1953. Kendall's notation introduced an A/B/C queueing notation that can be found in all standard modern works on queueing theory, for example, Tijms. [6]

The A/B/C notation designates a queueing system having A as the interarrival time distribution, B as the service time distribution, and C as the number of servers. For example, "G/D/1" would indicate a General (may be anything) arrival process, a Deterministic (constant time) service process and a single server. More details on this notation are given in the article about queueing models.
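As a small illustration of reading this notation, the following Python sketch expands the A/B/C fields into words. The helper name and code table are invented for illustration only and are not part of any standard library.

    # Hypothetical helper: expand the A/B/C part of Kendall's notation into words.
    KENDALL_CODES = {"M": "Markovian (Poisson arrivals / exponential service)",
                     "D": "deterministic (constant)",
                     "G": "general (arbitrary distribution)"}

    def describe(notation):
        a, b, c = notation.split("/")[:3]
        return ("arrivals: " + KENDALL_CODES.get(a, a) +
                ", service: " + KENDALL_CODES.get(b, b) +
                ", servers: " + c)

    print(describe("G/D/1"))   # the example used in the text
    print(describe("M/M/1"))   # the model analysed later in these notes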

History

Agner Krarup Erlang, a Danish engineer who worked for the Copenhagen Telephone Exchange, published the first paper on queueing theory in 1909. [7]





David G. Kendall introduced an A/B/C queueing notation in 1953. Important work on queueing theory used in modern packet switching networks was performed in the early 1960s by Leonard Kleinrock.


Application to telephony

The public switched telephone network (PSTN) is designed to accommodate the offered traffic intensity with only a small loss. The performance of loss systems is quantified by their grade of service, driven by the assumption that if sufficient capacity is not available, the call is refused and lost. [8] Alternatively, overflow systems make use of alternative routes to divert calls via different paths; even these systems have a finite traffic-carrying capacity. [8]

However, the use of queueing in PSTNs allows the systems to queue their customers' requests until free resources become available. This means that if traffic intensity levels exceed available capacity, customers' calls are not lost; customers instead wait until they can be served. [9] This method is used in queueing customers for the next available operator.

A queueing discipline determines the manner in which the exchange handles calls from customers. [9] It defines the way they will be served, the order in which they are served, and the way in which resources are divided among the customers. [9][10] Here are details of four queueing disciplines (a small sketch comparing them follows the list):


First in first out
This principle states that customers are served one at a time and that the customer that has been waiting the longest is served first. [10]

Last in first out
This principle also serves customers one at a time, however the customer with the shortest waiting time will be served first. [10] Also known as a stack.

Processor sharing
Customers are served equally. Network capacity is shared between customers and they all effectively experience the same delay. [10]

Priority
Customers with high priority are served first. [10]
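The following toy Python sketch compares the waiting times these disciplines produce when several jobs are already waiting at time zero. The job list and service times are made up purely for illustration.

    # Five jobs, all present at time 0; service[j] is the service time of job j,
    # and jobs are indexed in order of arrival (job 0 has waited the longest).
    service = [4, 1, 3, 2, 5]

    def waiting_times(order):
        t, waits = 0, {}
        for j in order:
            waits[j] = t          # time the job spends waiting before service starts
            t += service[j]
        return waits

    fifo = [0, 1, 2, 3, 4]                          # longest-waiting job first
    lifo = list(reversed(fifo))                     # most recent arrival first
    prio = sorted(fifo, key=lambda j: service[j])   # e.g. shortest job treated as highest priority

    for name, order in (("FIFO", fifo), ("LIFO", lifo), ("priority", prio)):
        print(name, waiting_times(order))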






Queueing is handled by control processes within exchanges, which can be modelled using state equations. [9][10] Queueing systems use a particular form of state equations known as a Markov chain that models the system in each state. [9] Incoming traffic to these systems is modelled via a Poisson distribution and is subject to Erlang's queueing theory assumptions, viz. [8]

Pure-chance traffic: call arrivals and departures are random and independent events. [8]

Statistical equilibrium: probabilities within the system do not change. [8]

Full availability: all incoming traffic can be routed to any other customer within the network. [8]

Congestion is cleared as soon as servers are free. [8]

Classic queueing theory involves complex calculations to determine waiting time, service time, server utilization and other metrics that are used to measure queueing performance. [9][10]
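The grade of service of the loss systems described above is commonly quantified with Erlang's B formula for the blocking probability. The Python sketch below uses the standard recursive form of that formula; the function name and the example figures are chosen here only for illustration.

    def erlang_b(offered_erlangs, servers):
        """Blocking probability of an M/M/c/c loss system (Erlang B), via the usual recursion."""
        b = 1.0
        for k in range(1, servers + 1):
            b = offered_erlangs * b / (k + offered_erlangs * b)
        return b

    # e.g. 5 erlangs of offered traffic onto 8 circuits
    print(round(erlang_b(5.0, 8), 4))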


Queueing networks

Networks of queues are systems which contain an arbitrary, but finite, number m of queues. Customers, sometimes of different classes, [11] travel through the network and are served at the nodes. The state of a network can be described by a vector $(k_1, \dots, k_m)$, where $k_i$ is the number of customers at queue $i$. In open networks, customers can join and leave the system, whereas in closed networks the total number of customers within the system remains fixed.

The first significant results in this area were Jackson networks, for which an efficient product-form equilibrium distribution exists, and the mean value analysis, which allows average metrics such as throughput and sojourn times to be computed. [12]
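For an open Jackson network of m single-server exponential queues, the product-form result referred to above takes the following standard shape (written out here for illustration; it is not spelled out in the text):

\[ \lambda_i = \gamma_i + \sum_{j=1}^{m} \lambda_j p_{ji}, \qquad \rho_i = \frac{\lambda_i}{\mu_i}, \qquad \pi(k_1,\dots,k_m) = \prod_{i=1}^{m} (1-\rho_i)\,\rho_i^{\,k_i}, \]

where $\gamma_i$ is the external arrival rate at node $i$, $p_{ji}$ the routing probability from node $j$ to node $i$, and the result holds when every $\rho_i < 1$.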


Role of Poisson process, exponential distributions

A useful queueing model represents a real-life system with sufficient accuracy and is analytically tractable. A queueing model based on the Poisson process and its companion exponential probability distribution often meets these two requirements. A Poisson process models random events (such as a customer arrival, a request for action from a web server, or the completion of the actions requested of a web server) as emanating from a memoryless process. That is, the length of the time interval from the current time to the occurrence of the next event does not depend upon the time of occurrence of the last event. In the Poisson probability distribution, the observer records the number of events that occur in a time interval of fixed length. In the (negative) exponential probability distribution, the observer records the length of the time interval between consecutive events. In both, the underlying physical process is memoryless.
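Because the exponential distribution is memoryless, a Poisson arrival stream can be generated simply by summing independent exponential gaps, as in this minimal Python sketch (the function name, rate and horizon are illustrative assumptions):

    import random

    def poisson_arrivals(rate, horizon, seed=1):
        """Arrival times on [0, horizon) built from exponential inter-arrival gaps."""
        random.seed(seed)
        t, times = 0.0, []
        while True:
            t += random.expovariate(rate)   # memoryless gap to the next event
            if t >= horizon:
                return times
            times.append(t)

    arrivals = poisson_arrivals(rate=2.0, horizon=10.0)
    # the count of arrivals in the window is Poisson distributed with mean rate*horizon = 20
    print(len(arrivals))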





Models based on the Poisson process often respond to inputs from the environment in a manner that mimics the response of the system being modeled to those same inputs. The analytically tractable models that result yield both information about the system being modeled and the form of their solution. Even a queueing model based on the Poisson process that does a relatively poor job of mimicking detailed system performance can be useful. The fact that such models often give "worst-case" scenario evaluations appeals to system designers who prefer to include a safety factor in their designs. Also, the form of the solution of models based on the Poisson process often provides insight into the form of the solution to a queueing problem whose detailed behavior is poorly mimicked. As a result, queueing models are frequently modeled as Poisson processes through the use of the exponential distribution.


Limitations of queueing theory

The assumptions of classical queueing theory may be too restrictive to be able to model real-world situations exactly. The complexity of production lines with product-specific characteristics cannot be handled with those models. Therefore specialized tools have been developed to simulate, analyze, visualize and optimize time-dynamic queueing line behavior.

For example, the mathematical models often assume infinite numbers of customers, infinite queue capacity, or no bounds on inter-arrival or service times, when it is quite apparent that these bounds must exist in reality. Often, although the bounds do exist, they can be safely ignored because the differences between the real world and theory are not statistically significant, as the probability that such boundary situations might occur is remote compared to the expected normal situation. Furthermore, several studies show the robustness of queueing models outside their assumptions. In other cases the theoretical solution may either prove intractable or insufficiently informative to be useful.

Alternative means of analysis have thus been devised in order to provide some insight into problems that do not fall under the scope of queueing theory, although they are often scenario-specific because they generally consist of computer simulations or analysis of experimental data. See network traffic simulation.





Birth–death process

The birth–death process is a special case of a continuous-time Markov process where the states represent the current size of a population and where the transitions are limited to births and deaths. Birth–death processes have many applications in demography, queueing theory, performance engineering, or in biology, for example to study the evolution of bacteria.

When a birth occurs, the process goes from state n to n + 1. When a death occurs, the process goes from state n to state n - 1. The process is specified by birth rates $\lambda_i$ (for i = 0, 1, 2, ...) and death rates $\mu_i$ (for i = 1, 2, ...).



Examples

A pure birth process is a birth–death process where $\mu_i = 0$ for all $i$.

A pure death process is a birth–death process where $\lambda_i = 0$ for all $i$.

A (homogeneous) Poisson process is a pure birth process where $\lambda_i = \lambda$ for all $i$.

The M/M/1 model and M/M/c model, both used in queueing theory, are birth–death processes used to describe customers in an infinite queue.
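A birth–death chain with constant rates can be simulated directly by drawing exponential holding times, as in this Python sketch (the rates, horizon and function name are illustrative assumptions):

    import random

    def simulate_birth_death(lam, mu, t_end, n0=0, seed=2):
        """Sample one path of a birth-death process with birth rate lam and death rate mu."""
        random.seed(seed)
        t, n = 0.0, n0
        while t < t_end:
            total = lam + (mu if n > 0 else 0.0)    # no deaths are possible from state 0
            t += random.expovariate(total)          # exponential holding time in the current state
            if random.random() < lam / total:
                n += 1                              # birth: n -> n + 1
            else:
                n -= 1                              # death: n -> n - 1
        return n

    print(simulate_birth_death(lam=1.0, mu=1.5, t_end=50.0))   # state reached after 50 time units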


Use in queueing theory

In queueing theory the birth–death process is the most fundamental example of a queueing model, the M/M/C/K/$\infty$/FIFO (in complete Kendall's notation) queue. This is a queue with Poisson arrivals, drawn from an infinite population, and C servers with exponentially distributed service time, with K places in the queue. Despite the assumption of an infinite population this model is a good model for various telecommunication systems.


M/M/1 queue

The M/M/1 queue is a single server queue with an infinite buffer size. In a non-random environment the birth–death process in queueing models tends to be described by long-term averages, so the average rate of arrival is given as $\lambda$ and the average service time as $1/\mu$. The birth and death process is an M/M/1 queue when

\[ \lambda_n = \lambda \quad\text{and}\quad \mu_n = \mu \quad\text{for all } n. \]

The differential equations for the probability that the system is in state k at time t are

\[ \frac{dp_0(t)}{dt} = \mu p_1(t) - \lambda p_0(t), \]
\[ \frac{dp_k(t)}{dt} = \lambda p_{k-1}(t) + \mu p_{k+1}(t) - (\lambda + \mu)\,p_k(t) \quad\text{for } k \ge 1. \]
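These equations can be integrated numerically; the sketch below applies a plain Euler step on a truncated state space (the truncation level, rates and step size are arbitrary choices for illustration, not part of the model itself):

    def mm1_transient(lam, mu, k_max=60, t_end=20.0, dt=0.001):
        """Euler integration of the M/M/1 state equations, truncated at state k_max."""
        p = [0.0] * (k_max + 1)
        p[0] = 1.0                                   # start with an empty system
        for _ in range(int(t_end / dt)):
            new = p[:]
            new[0] += dt * (mu * p[1] - lam * p[0])
            for k in range(1, k_max):
                new[k] += dt * (lam * p[k-1] + mu * p[k+1] - (lam + mu) * p[k])
            new[k_max] += dt * (lam * p[k_max-1] - mu * p[k_max])   # crude boundary at the truncation
            p = new
        return p

    p = mm1_transient(lam=0.8, mu=1.0)
    print(round(p[0], 3), "approaches the equilibrium value 1 - rho =", 1 - 0.8)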




M/M/C queue

The M/M/C queue is a multi-server queue with C servers and an infinite buffer. It differs from the M/M/1 queue only in the service rate, which now becomes

\[ \mu_k = k\mu \quad\text{for } k \le C \qquad\text{and}\qquad \mu_k = C\mu \quad\text{for } k \ge C. \]



M/M/1/K queue

The M/M/1/K queue is a single server queue with a buffer of size K. This queue has applications in telecommunications, as well as in biology when a population has a capacity limit. In telecommunications we again use the parameters from the M/M/1 queue, with

\[ \lambda_k = \lambda \quad\text{for } k < K, \qquad \lambda_k = 0 \quad\text{for } k \ge K, \qquad \mu_k = \mu. \]

In biology, particularly the growth of bacteria, when the population is zero there is no ability to grow, so $\lambda_0 = 0$. Additionally, if the capacity represents a limit at which the population dies from overpopulation, the transition rates at state K are modified accordingly.

The differential equations for the probability that the system is in state k at time t are

\[ \frac{dp_0(t)}{dt} = \mu p_1(t) - \lambda p_0(t), \]
\[ \frac{dp_k(t)}{dt} = \lambda p_{k-1}(t) + \mu p_{k+1}(t) - (\lambda + \mu)\,p_k(t) \quad\text{for } 1 \le k \le K-1, \]
\[ \frac{dp_K(t)}{dt} = \lambda p_{K-1}(t) - \mu p_K(t). \]









Equilibrium

A queue is said to be in equilibrium if the limit $\lim_{t\to\infty} p_k(t)$ exists. For this to be the case, $dp_k(t)/dt$ must be zero.

Using the M/M/1 queue as an example, the steady-state (equilibrium) equations are

\[ \lambda_0 p_0 = \mu_1 p_1, \qquad (\lambda_k + \mu_k)\,p_k = \lambda_{k-1} p_{k-1} + \mu_{k+1} p_{k+1} \quad (k \ge 1). \]

If $\lambda_k = \lambda$ and $\mu_k = \mu$ for all $k$ (the homogeneous case), this can be reduced to

\[ \lambda p_k = \mu p_{k+1} \quad\text{for } k \ge 0. \]
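Iterating this relation and normalising gives the familiar geometric equilibrium distribution, a standard consequence written out here for completeness:

\[ p_{k+1} = \frac{\lambda}{\mu}\,p_k = \rho\,p_k \;\Longrightarrow\; p_k = \rho^{k} p_0, \qquad \sum_{k\ge 0} p_k = 1 \;\Longrightarrow\; p_0 = 1-\rho, \]

so that $p_k = (1-\rho)\rho^{k}$, provided the utilisation $\rho = \lambda/\mu$ is strictly less than 1; the mean number in the system is then $\rho/(1-\rho)$.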



Limit behaviour

In a small time interval $\Delta t$, only three types of transitions are possible: one death, one birth, or neither a birth nor a death. If the rate of occurrence (per unit time) of births is $\lambda$ and that for deaths is $\mu$, then the probabilities of the above transitions are $\mu\,\Delta t$, $\lambda\,\Delta t$, and $1 - (\lambda + \mu)\,\Delta t$ respectively (up to terms that vanish faster than $\Delta t$). For a population process, "birth" is the transition towards increasing the population by 1 while "death" is the transition towards decreasing the population size by 1.





Protocol

In information technology, a protocol is the special set of rules that end points in a telecommunication connection use when they communicate. Protocols specify interactions between the communicating entities.

Protocols exist at several levels in a telecommunication connection. For example, there are protocols for the data interchange at the hardware device level and protocols for data interchange at the application program level. In the standard model known as Open Systems Interconnection (OSI), there are one or more protocols at each layer in the telecommunication exchange that both ends of the exchange must recognize and observe. Protocols are often described in an industry or international standard.




The TCP/IP Internet protocols, a common example, consist of:

Transmission Control Protocol (TCP), which uses a set of rules to exchange messages with other Internet points at the information packet level

Internet Protocol (IP), which uses a set of rules to send and receive messages at the Internet address level

Additional protocols that include the Hypertext Transfer Protocol (HTTP) and File Transfer Protocol (FTP), each with defined sets of rules to use with corresponding programs elsewhere on the Internet

There are many other Internet protocols, such as the Border Gateway Protocol (BGP) and the Dynamic Host Configuration Protocol (DHCP).

The word protocol comes from the Greek protocollon, meaning a leaf of paper glued to a manuscript volume that describes the contents.






OSI model

7. Application layer: NNTP, SIP, SSI, DNS, FTP, Gopher, HTTP, NFS, NTP, SMPP, SMTP, SNMP, Telnet, DHCP, Netconf, RTP, SPDY (more)

6. Presentation layer: MIME, XDR, TLS, SSL

5. Session layer: Named pipe, NetBIOS, SAP, PPTP, SOCKS

4. Transport layer: TCP, UDP, SCTP, DCCP, SPX

3. Network layer: IP (IPv4, IPv6), ICMP, IPsec, IGMP, IPX, AppleTalk

2. Data link layer: ATM, SDLC, HDLC, ARP, CSLIP, SLIP, GFP, PLIP, IEEE 802.2, LLC, L2TP, IEEE 802.3, Frame Relay, ITU-T G.hn DLL, PPP, X.25, Network switch

1. Physical layer: EIA/TIA-232, EIA/TIA-449, ITU-T V-Series, I.430, I.431, POTS, PDH, SONET/SDH, PON, OTN, DSL, IEEE 802.3, IEEE 802.11, IEEE 802.15, IEEE 802.16, IEEE 1394, ITU-T G.hn PHY, USB, Bluetooth, Hubs


The Open Systems Interconnection (OSI) model is a product of the Open Systems Interconnection effort at the International Organization for Standardization. It is a prescription of characterizing and standardizing the functions of a communications system in terms of abstraction layers. Similar communication functions are grouped into logical layers. A layer serves the layer above it and is served by the layer below it.

For example, a layer that provides error-free communications across a network provides the path needed by applications above it, while it calls the next lower layer to send and receive packets that make up the contents of that path. Two instances at one layer are connected by a horizontal connection on that layer.


History

Work on a layered model of network architecture was started and the International Organization for Standardization (ISO) began to develop its OSI framework architecture. OSI had two major components: an abstract model of networking, called the Basic Reference Model or seven-layer model, and a set of specific protocols.

The concept of a seven-layer model was provided by the work of Charles Bachman, Honeywell Information Services. Various aspects of OSI design evolved from experiences with the ARPANET, the fledgling Internet, NPLNET, EIN, the CYCLADES network and the work in IFIP WG6.1. The new design was documented in ISO 7498 and its various addenda. In this model, a networking system was divided into layers. Within each layer, one or more entities implement its functionality. Each entity interacted directly only with the layer immediately beneath it, and provided facilities for use by the layer above it.

Protocols enabled an entity in one host to interact with a corresponding entity at the same layer in another host. Service definitions abstractly described the functionality provided to an (N)-layer by an (N-1) layer, where N was one of the seven layers of protocols operating in the local host.

The OSI standards documents are available from the ITU-T as the X.200-series of recommendations. [1] Some of the protocol specifications were also available as part of the ITU-T X series. The equivalent ISO and ISO/IEC standards for the OSI model were available from ISO, but only some of them without fees. [2]


Description of OSI layers

According to recommendation X.200, there are seven layers, labeled 1 to 7, with layer 1 at the bottom. Each layer is generically known as an N layer. An "N+1 entity" (at layer N+1) requests services from an "N entity" (at layer N).

At each level, two entities (N-entity peers) interact by means of the N protocol by transmitting protocol data units (PDU). A Service Data Unit (SDU) is a specific unit of data that has been passed down from an OSI layer to a lower layer, and which the lower layer has not yet encapsulated into a protocol data unit (PDU). An SDU is a set of data that is sent by a user of the services of a given layer, and is transmitted semantically unchanged to a peer service user.

The PDU at layer N is the SDU of layer N-1. In effect the SDU is the 'payload' of a given PDU. That is, the process of changing an SDU to a PDU consists of an encapsulation process, performed by the lower layer. All the data contained in the SDU becomes encapsulated within the PDU. Layer N-1 adds headers or footers, or both, to the SDU, transforming it into a PDU of layer N-1. The added headers or footers are part of the process used to make it possible to get data from a source to a destination.
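The SDU-to-PDU wrapping can be illustrated with a deliberately simplified Python sketch. The header strings below are invented placeholders, not real protocol formats:

    def encapsulate(application_data: bytes) -> bytes:
        """Toy model of data moving down the stack: each layer treats what it receives
        as an SDU and wraps it with its own header (and possibly a trailer) to form a PDU."""
        segment = b"TCPHDR|" + application_data        # transport layer PDU
        packet  = b"IPHDR|" + segment                  # network layer PDU (the segment is its SDU)
        frame   = b"ETHHDR|" + packet + b"|FCS"        # data link PDU with a trailing check field
        return frame

    print(encapsulate(b"GET /index.html"))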

OSI Model

  Data unit          Layer             Function
  Host layers:
  Data               7. Application    Network process to application
  Data               6. Presentation   Data representation, encryption and decryption; convert machine-dependent data to machine-independent data
  Data               5. Session        Interhost communication, managing sessions between applications
  Segments           4. Transport      End-to-end connections, reliability and flow control
  Media layers:
  Packet/Datagram    3. Network        Path determination and logical addressing
  Frame              2. Data link      Physical addressing
  Bit                1. Physical       Media, signal and binary transmission


Some orthogonal aspects, such as management and security, involve every layer. Security services are not related to a specific layer: they can be related to a number of layers, as defined by ITU-T X.800 Recommendation. [3] These services are aimed at improving the CIA triad (confidentiality, integrity, and availability) of transmitted data. Actually the availability of a communication service is determined by network design and/or network management protocols. Appropriate choices for these are needed to protect against denial of service.


Layer 1: physical layer

The physical layer defines electrical and physical specifications for devices. In particular, it defines the relationship between a device and a transmission medium, such as a copper or fiber optical cable. This includes the layout of pins, voltages, cable specifications, hubs, repeaters, network adapters, host bus adapters (HBAs, used in storage area networks) and more.

The major functions and services performed by the physical layer are:

Establishment and termination of a connection to a communications medium.

Participation in the process whereby the communication resources are effectively shared among multiple users. For example, contention resolution and flow control.

Modulation, or conversion between the representation of digital data in user equipment and the corresponding signals transmitted over a communications channel. These are signals operating over the physical cabling (such as copper and optical fiber) or over a radio link.





Parallel SCSI buses operate in this layer, although it must be remembered that the logical SCSI protocol is a transport layer protocol that runs over this bus. Various physical-layer Ethernet standards are also in this layer; Ethernet incorporates both this layer and the data link layer. The same applies to other local-area networks, such as token ring, FDDI, ITU-T G.hn and IEEE 802.11, as well as personal area networks such as Bluetooth and IEEE 802.15.4.


Layer 2: data link layer

The data link layer provides the functional and procedural means to transfer data between network entities and to detect and possibly correct errors that may occur in the physical layer. Originally, this layer was intended for point-to-point and point-to-multipoint media, characteristic of wide area media in the telephone system. Local area network architecture, which included broadcast-capable multiaccess media, was developed independently of the ISO work in IEEE Project 802. IEEE work assumed sublayering and management functions not required for WAN use. In modern practice, only error detection, not flow control using sliding window, is present in data link protocols such as Point-to-Point Protocol (PPP); on local area networks, the IEEE 802.2 LLC layer is not used for most protocols on Ethernet, and on other local area networks its flow control and acknowledgment mechanisms are rarely used. Sliding-window flow control and acknowledgment is used at the transport layer by protocols such as TCP, but is still used in niches where X.25 offers performance advantages.

The ITU-T G.hn standard, which provides high-speed local area networking over existing wires (power lines, phone lines and coaxial cables), includes a complete data link layer which provides both error correction and flow control by means of a selective-repeat sliding window protocol.

Both WAN and LAN services arrange bits, from the physical layer, into logical sequences called frames. Not all physical layer bits necessarily go into frames, as some of these bits are purely intended for physical layer functions. For example, every fifth bit of the FDDI bit stream is not used by the layer.


WAN protocol architecture

Connection-oriented WAN data link protocols, in addition to framing, detect and may correct errors. They are also capable of controlling the rate of transmission. A WAN data link layer might implement a sliding-window flow control and acknowledgment mechanism to provide reliable delivery of frames; that is the case for Synchronous Data Link Control (SDLC) and HDLC, and derivatives of HDLC such as LAPB and LAPD.
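A minimal Python sketch of the sender-side bookkeeping in such a sliding-window scheme is shown below. The window size, frame count and cumulative-ACK convention are illustrative assumptions; real protocols such as HDLC or LAPB add timers, retransmission and sequence-number wrap-around.

    WINDOW, TOTAL = 4, 10          # window of 4 outstanding frames, 10 frames to deliver
    base, next_seq = 0, 0          # base = oldest unacknowledged frame

    def pump():
        """Send every frame the window currently allows."""
        global next_seq
        while next_seq < TOTAL and next_seq < base + WINDOW:
            print("send frame", next_seq)
            next_seq += 1

    def on_ack(ack):
        """Cumulative ACK: all frames numbered below `ack` are confirmed."""
        global base
        base = max(base, ack)
        pump()                     # the window slides forward, releasing new frames

    pump()        # frames 0-3 go out
    on_ack(2)     # frames 0-1 confirmed; frames 4-5 go out
    on_ack(6)     # frames up to 5 confirmed; frames 6-9 go out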






IEEE 802 LAN architecture

Practical, connectionless LANs began with the pre-IEEE Ethernet specification, which is the ancestor of IEEE 802.3. This layer manages the interaction of devices with a shared medium, which is the function of a media access control (MAC) sublayer. Above this MAC sublayer is the media-independent IEEE 802.2 Logical Link Control (LLC) sublayer, which deals with addressing and multiplexing on multiaccess media.

While IEEE 802.3 is the dominant wired LAN protocol and IEEE 802.11 the wireless LAN protocol, obsolescent MAC layers include Token Ring and FDDI. The MAC sublayer detects but does not correct errors.


Layer 3: network layer

The network layer provides the functional and procedural means of transferring variable-length data sequences from a source host on one network to a destination host on a different network, while maintaining the quality of service requested by the transport layer (in contrast to the data link layer, which connects hosts within the same network). The network layer performs network routing functions, and might also perform fragmentation and reassembly, and report delivery errors. Routers operate at this layer, sending data throughout the extended network and making the Internet possible. This is a logical addressing scheme; values are chosen by the network engineer. The addressing scheme is not hierarchical.

The network layer may be divided into three sublayers:

1. Subnetwork access - considers protocols that deal with the interface to networks, such as X.25;

2. Subnetwork-dependent convergence - when it is necessary to bring the level of a transit network up to the level of networks on either side;

3. Subnetwork-independent convergence - handles transfer across multiple networks.

An example of this latter case is CLNP (ISO 8473). It manages the connectionless transfer of data one hop at a time, from end system to ingress router, router to router, and from egress router to destination end system. It is not responsible for reliable delivery to a next hop, but only for the detection of erroneous packets so they may be discarded. In this scheme, IPv4 and IPv6 would have to be classed with X.25 as subnet access protocols because they carry interface addresses rather than node addresses.

A number of layer-management protocols, a function defined in the Management Annex, ISO 7498/4, belong to the network layer. These include routing protocols, multicast group management, network-layer information and error, and network-layer address assignment. It is the function of the payload that makes these belong to the network layer, not the protocol that carries them.


Layer 4: transport layer

The transport layer provides transparent transfer of data between end users, providing reliable data transfer services to the upper layers. The transport layer controls the reliability of a given link through flow control, segmentation/desegmentation, and error control. Some protocols are state- and connection-oriented. This means that the transport layer can keep track of the segments and retransmit those that fail. The transport layer also provides the acknowledgement of successful data transmission and sends the next data if no errors occurred.

OSI defines five classes of connection-mode transport protocols ranging from class 0 (which is also known as TP0 and provides the fewest features) to class 4 (TP4, designed for less reliable networks, similar to the Internet). Class 0 contains no error recovery, and was designed for use on network layers that provide error-free connections. Class 4 is closest to TCP, although TCP contains functions, such as the graceful close, which OSI assigns to the session layer. Also, all OSI TP connection-mode protocol classes provide expedited data and preservation of record boundaries. Detailed characteristics of the TP0-4 classes are shown in the following table: [4]

  Feature Name                                                               TP0  TP1  TP2  TP3  TP4
  Connection-oriented network                                                Yes  Yes  Yes  Yes  Yes
  Connectionless network                                                     No   No   No   No   Yes
  Concatenation and separation                                               No   Yes  Yes  Yes  Yes
  Segmentation and reassembly                                                Yes  Yes  Yes  Yes  Yes
  Error recovery                                                             No   Yes  Yes  Yes  Yes
  Reinitiate connection (if an excessive number of PDUs are unacknowledged)  No   Yes  No   Yes  No
  Multiplexing and demultiplexing over a single virtual circuit              No   No   Yes  Yes  Yes
  Explicit flow control                                                      No   No   Yes  Yes  Yes
  Retransmission on timeout                                                  No   No   No   No   Yes
  Reliable transport service                                                 No   Yes  No   Yes  Yes

Perhaps an easy way to visualize the transport layer is to compare it with a post office, which deals with the dispatch and classification of mail and parcels sent. Do remember, however, that a post office manages the outer envelope of mail. Higher layers may have the equivalent of double envelopes, such as cryptographic presentation services that can be read by the addressee only. Roughly speaking, tunneling protocols operate at the transport layer, such as carrying non-IP protocols such as IBM's SNA or Novell's IPX over an IP network, or end-to-end encryption with IPsec. While Generic Routing Encapsulation (GRE) might seem to be a network-layer protocol, if the encapsulation of the payload takes place only at the endpoints, GRE becomes closer to a transport protocol that uses IP headers but contains complete frames or packets to deliver to an endpoint. L2TP carries PPP frames inside transport packets.

Although not developed under the OSI Reference Model and not strictly conforming to the OSI definition of the transport layer, the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP) of the Internet Protocol Suite are commonly categorized as layer-4 protocols within OSI.

Layer 5: session layer

The session layer controls the dialogues (connections) between computers. It establishes, manages and terminates the connections between the local and remote application. It provides for full-duplex, half-duplex, or simplex operation, and establishes checkpointing, adjournment, termination, and restart procedures. The OSI model made this layer responsible for graceful close of sessions, which is a property of the Transmission Control Protocol, and also for session checkpointing and recovery, which is not usually used in the Internet Protocol Suite. The session layer is commonly implemented explicitly in application environments that use remote procedure calls. At this level, inter-process communication happens (SIGHUP, SIGKILL, End Process, etc.).

Layer 6: presentation layer

The presentation layer establishes context between application-layer entities, in which the higher-layer entities may use different syntax and semantics if the presentation service provides a mapping between them. If a mapping is available, presentation service data units are encapsulated into session protocol data units, and passed down the stack.

This layer provides independence from data representation (e.g., encryption) by translating between application and network formats. The presentation layer transforms data into the form that the application accepts. This layer formats and encrypts data to be sent across a network. It is sometimes called the syntax layer. [5]

The original presentation structure used the basic encoding rules of Abstract Syntax Notation One (ASN.1), with capabilities such as converting an EBCDIC-coded text file to an ASCII-coded file, or serialization of objects and other data structures from and to XML.
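That kind of presentation-layer character-set translation can be mimicked with Python's standard codecs, which include an EBCDIC code page (cp500); the example text is arbitrary:

    # Pretend the bytes arrived EBCDIC-encoded, then re-express them in ASCII.
    ebcdic_bytes = "HELLO, OSI".encode("cp500")
    ascii_bytes = ebcdic_bytes.decode("cp500").encode("ascii")
    print(ebcdic_bytes, "->", ascii_bytes)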

Layer 7: application layer

The application layer is the OSI layer closest to the end user, which means that both the OSI application layer and the user interact directly with the software application. This layer interacts with software applications that implement a communicating component. Such application programs fall outside the scope of the OSI model. Application-layer functions typically include identifying communication partners, determining resource availability, and synchronizing communication. When identifying communication partners, the application layer determines the identity and availability of communication partners for an application with data to transmit. When determining resource availability, the application layer must decide whether sufficient network resources for the requested communication exist. In synchronizing communication, all communication between applications requires cooperation that is managed by the application layer. Some examples of application-layer implementations include:

On the OSI stack:
FTAM - File Transfer, Access and Management protocol
X.400 Mail
Common Management Information Protocol (CMIP)

On the TCP/IP stack:
Hypertext Transfer Protocol (HTTP)
File Transfer Protocol (FTP)
Simple Mail Transfer Protocol (SMTP)
Simple Network Management Protocol (SNMP)

Cross-layer functions







There are some functions or services that are not tied to a given layer, but they can affect more than one layer. Examples include the following:

Security service (telecommunication), [3] as defined by ITU-T X.800 Recommendation.

Management functions, i.e. functions that permit configuring, instantiating, monitoring and terminating the communications of two or more entities: there is a specific application-layer protocol, the Common Management Information Protocol (CMIP), and its corresponding service, the Common Management Information Service (CMIS); they need to interact with every layer in order to deal with their instances.

Multiprotocol Label Switching (MPLS) operates at an OSI-model layer that is generally considered to lie between traditional definitions of layer 2 (data link layer) and layer 3 (network layer), and thus is often referred to as a "layer-2.5" protocol. It was designed to provide a unified data-carrying service for both circuit-based clients and packet-switching clients which provide a datagram service model. It can be used to carry many different kinds of traffic, including IP packets, as well as native ATM, SONET, and Ethernet frames.

ARP is used to translate IPv4 addresses (OSI layer 3) into Ethernet MAC addresses (OSI layer 2).

Interfaces

Neither the OSI Reference Model nor the OSI protocols specify any programming interfaces, other than deliberately abstract service specifications. Protocol specifications precisely define the interfaces between different computers, but the software interfaces inside computers, known as network sockets, are implementation-specific.

For example, Microsoft Windows' Winsock, and Unix's Berkeley sockets and System V Transport Layer Interface, are interfaces between applications (layer 5 and above) and the transport (layer 4). NDIS and ODI are interfaces between the media (layer 2) and the network protocol (layer 3).

Interface standards, except for the physical layer to media, are approximate implementations of OSI service specifications.

Examples





Example protocols at each layer, grouped by protocol suite (OSI, TCP/IP, Signaling System 7, [6] AppleTalk, IPX, SNA, UMTS, and miscellaneous examples):

Layer 7 - Application
  OSI protocols: FTAM, X.400, X.500, DAP, ROSE, RTSE, ACSE, [7] CMIP [8]
  TCP/IP protocols: NNTP, SIP, SSI, DNS, FTP, Gopher, HTTP, NFS, NTP, DHCP, SMPP, SMTP, SNMP, Telnet, RIP, BGP
  Signaling System 7: INAP, MAP, TCAP, ISUP, TUP
  AppleTalk: AFP, ZIP, RTMP, NBP
  IPX: RIP, SAP
  SNA: APPC
  Misc. examples: HL7, Modbus

Layer 6 - Presentation
  OSI protocols: ISO/IEC 8823, X.226, ISO/IEC 9576-1, X.236
  TCP/IP protocols: MIME, SSL, TLS, XDR
  AppleTalk: AFP
  Misc. examples: TDI, ASCII, EBCDIC, MIDI, MPEG

Layer 5 - Session
  OSI protocols: ISO/IEC 8327, X.225, ISO/IEC 9548-1, X.235
  TCP/IP protocols: Sockets; session establishment in TCP, RTP
  AppleTalk: ASP, ADSP, PAP
  IPX: NWLink
  SNA: DLC?
  Misc. examples: Named pipes, NetBIOS, SAP, half duplex, full duplex, simplex, RPC, SOCKS

Layer 4 - Transport
  OSI protocols: ISO/IEC 8073, TP0, TP1, TP2, TP3, TP4 (X.224), ISO/IEC 8602, X.234
  TCP/IP protocols: TCP, UDP, SCTP, DCCP
  AppleTalk: DDP
  IPX: SPX
  Misc. examples: NBF

Layer 3 - Network
  OSI protocols: ISO/IEC 8208, X.25 (PLP), ISO/IEC 8878, X.223, ISO/IEC 8473-1, CLNP, X.233
  TCP/IP protocols: IP, IPsec, ICMP, IGMP, OSPF
  Signaling System 7: SCCP, MTP
  AppleTalk: ATP (TokenTalk or EtherTalk)
  IPX: IPX
  UMTS: RRC (Radio Resource Control), Packet Data Convergence Protocol (PDCP) and BMC (Broadcast/Multicast Control)
  Misc. examples: NBF, Q.931, IS-IS

Layer 2 - Data Link
  OSI protocols: ISO/IEC 7666, X.25 (LAPB), Token Bus, X.222, ISO/IEC 8802-2 LLC Type 1 and 2 [9]
  TCP/IP protocols: PPP, SBTV, SLIP, PPTP
  Signaling System 7: MTP, Q.710
  AppleTalk: LocalTalk, AppleTalk Remote Access, PPP
  IPX: IEEE 802.3 framing, Ethernet II framing
  SNA: SDLC
  UMTS: LLC (Logical Link Control), MAC (Media Access Control)
  Misc. examples: 802.3 (Ethernet), 802.11a/b/g/n MAC/LLC, 802.1Q (VLAN), ATM, HDP, FDDI, Fibre Channel, Frame Relay, HDLC, ISL, PPP, Q.921, Token Ring, CDP, NDP, ARP (maps layer 3 to layer 2 address), ITU-T G.hn DLL, CRC, bit stuffing, ARQ, Data Over Cable Service Interface Specification (DOCSIS), interface bonding

Layer 1 - Physical
  OSI protocols: X.25 (X.21bis, EIA/TIA-232, EIA/TIA-449, EIA-530, G.703) [9]
  Signaling System 7: MTP, Q.710
  AppleTalk: RS-232, RS-422, STP, PhoneNet
  SNA: Twinax
  UMTS: UMTS physical layer (L1)
  Misc. examples: RS-232, full duplex, RJ45, V.35, V.34, I.430, I.431, T1, E1, 10BASE-T, 100BASE-TX, POTS, SONET, SDH, DSL, 802.11a/b/g/n PHY, ITU-T G.hn PHY, Controller Area Network, Data Over Cable Service Interface Specification (DOCSIS)





Comparison with TCP/IP model

In the TCP/IP model of the Internet, protocols are deliberately not as rigidly designed into strict layers as in the OSI model. [10] RFC 3439 contains a section entitled "Layering considered harmful." However, TCP/IP does recognize four broad layers of functionality which are derived from the operating scope of their contained protocols, namely the scope of the software application, the end-to-end transport connection, the internetworking range, and the scope of the direct links to other nodes on the local network.

Even though the concept is different from the OSI model, these layers are nevertheless often compared with the OSI layering scheme in the following way: the Internet application layer includes the OSI application layer, presentation layer, and most of the session layer. Its end-to-end transport layer includes the graceful close function of the OSI session layer as well as the OSI transport layer. The internetworking layer (Internet layer) is a subset of the OSI network layer (see above), while the link layer includes the OSI data link and physical layers, as well as parts of OSI's network layer. These comparisons are based on the original seven-layer protocol model as defined in ISO 7498, rather than refinements in such things as the internal organization of the network layer document.

The presumably strict peer layering of the OSI model as it is usually described does not present contradictions in TCP/IP, as it is permissible that protocol usage does not follow the hierarchy implied in a layered model. Such examples exist in some routing protocols (e.g., OSPF), or in the description of tunneling protocols, which provide a link layer for an application, although the tunnel host protocol may well be a transport or even an application layer protocol in its own right.







Data Link Layer (Layer 2)

The second-lowest layer (layer 2) in the OSI Reference Model stack is the data link layer, often abbreviated "DLL" (though that abbreviation has other meanings as well in the computer world). The data link layer, also sometimes just called the link layer, is where many wired and wireless local area networking (LAN) technologies primarily function. For example, Ethernet, Token Ring, FDDI and 802.11 ("wireless Ethernet" or "Wi-Fi") are all sometimes called "data link layer technologies". The set of devices connected at the data link layer is what is commonly considered a simple "network", as opposed to an internetwork.

Data Link Layer Sublayers: Logical Link Control (LLC) and Media Access Control (MAC)

The data link layer is often conceptually divided into two sublayers: logical link control (LLC) and media access control (MAC). This split is based on the architecture used in the IEEE 802 Project, which is the IEEE working group responsible for creating the standards that define many networking technologies (including all of the ones I mentioned above except FDDI). By separating LLC and MAC functions, interoperability of different network technologies is made easier, as explained in our earlier discussion of networking model concepts.

Data Link Layer Functions

The following are the key tasks performed at the data link layer:

o Logical Link Control (LLC): Logical link control refers to the functions required for the establishment and control of logical links between local devices on a network. As mentioned above, this is usually considered a DLL sublayer; it provides services to the network layer above it and hides the rest of the details of the data link layer to allow different technologies to work seamlessly with the higher layers. Most local area networking technologies use the IEEE 802.2 LLC protocol.

o Media Access Control (MAC): This refers to the procedures used by devices to control access to the network medium. Since many networks use a shared medium (such as a single network cable, or a series of cables that are electrically connected into a single virtual medium) it is necessary to have rules for managing the medium to avoid conflicts. For example, Ethernet uses the CSMA/CD method of media access control, while Token Ring uses token passing.

o Data Framing: The data link layer is responsible for the final encapsulation of higher-level messages into frames that are sent over the network at the physical layer.

o Addressing: The data link layer is the lowest layer in the OSI model that is concerned with addressing: labeling information with a particular destination location. Each device on a network has a unique number, usually called a hardware address or MAC address, that is used by the data link layer protocol to ensure that data intended for a specific machine gets to it properly.

o Error Detection and Handling: The data link layer handles errors that occur at the lower levels of the network stack. For example, a cyclic redundancy check (CRC) field is often employed to allow the station receiving data to detect if it was received correctly.
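As a concrete illustration of CRC-based error detection, the sketch below uses Python's zlib.crc32, a 32-bit CRC of the same family as Ethernet's frame check sequence; the payload bytes are arbitrary:

    import zlib

    payload = b"example data link frame payload"
    fcs = zlib.crc32(payload)                      # value a sender would append to the frame

    received = bytearray(payload)
    print(zlib.crc32(bytes(received)) == fcs)      # True: frame arrived intact

    received[3] ^= 0x01                            # flip one bit in transit
    print(zlib.crc32(bytes(received)) == fcs)      # False: the receiver detects the corruption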






Physical Layer Requirements Definition and Network Interconnection Device Layers

As I mentioned in the topic discussing the physical layer, that layer and the data link layer are very closely related. The requirements for the physical layer of a network are often part of the data link layer definition of a particular technology. Certain physical layer hardware and encoding aspects are specified by the DLL technology being used. The best example of this is the Ethernet standard, IEEE 802.3, which specifies not just how Ethernet works at the data link layer, but also its various physical layers.

Since the data link layer and physical layer are so closely related, many types of hardware are associated with the data link layer. Network interface cards (NICs) typically implement a specific data link layer technology, so they are often called "Ethernet cards", "Token Ring cards", and so on. There are also a number of network interconnection devices that are said to "operate at layer 2", in whole or in part, because they make decisions about what to do with data they receive by looking at data link layer frames. These devices include most bridges, switches and routers, though the latter two also encompass functions performed by layer three.

Some of the most popular technologies and protocols generally associated with layer 2 are Ethernet, Token Ring, FDDI (plus CDDI), HomePNA, IEEE 802.11, ATM, and TCP/IP's Serial Line Internet Protocol (SLIP) and Point-to-Point Protocol (PPP).

Key Concept: The second OSI Reference Model layer is the data link layer. This is the place where most LAN and wireless LAN technologies are defined. Layer two is responsible for logical link control, media access control, hardware addressing, error detection and handling, and defining physical layer standards. It is often divided into the logical link control (LLC) and media access control (MAC) sublayers, based on the IEEE 802 Project that uses that architecture.


The Data-Link layer is the protocol layer in a program that handles the moving of data in and out across a physical link in a network. The Data-Link layer is layer 2 in the Open Systems Interconnect (OSI) model for a set of telecommunication protocols.

The Data-Link layer contains two sublayers that are described in the IEEE-802 LAN standards:

Media Access Control (MAC)
Logical Link Control (LLC)

The Data-Link layer ensures that an initial connection has been set up, divides output data into data frames, and handles the acknowledgements from a receiver that the data arrived successfully. It also ensures that incoming data has been received successfully by analyzing bit patterns at special places in the frames.





Physical Layer (Layer 1)

The lowest layer of the OSI Reference Model is layer 1, the physical layer; it is commonly abbreviated "PHY". The physical layer is special compared to the other layers of the model, because it is the only one where data is physically moved across the network interface. All of the other layers perform useful functions to create messages to be sent, but they must all be transmitted down the protocol stack to the physical layer, where they are actually sent out over the network.

Note: The physical layer is also "special" in that it is the only layer that really does not apply specifically to TCP/IP. Even in studying TCP/IP, however, it is still important to understand its significance and role in relation to the other layers where TCP/IP protocols reside.


Understanding the Role of the Physical Layer

The name "physical layer" can be a bit problematic. Because of that name, and because of what I just said about the physical layer actually transmitting data, many people who study networking get the impression that the physical layer is only about actual network hardware. Some people may say the physical layer is "the network interface cards and cables". This is not actually the case, however. The physical layer defines a number of network functions, not just hardware cables and cards.

A related notion is that "all network hardware belongs to the physical layer". Again, this isn't strictly accurate. All hardware must have some relation to the physical layer in order to send data over the network, but hardware devices generally implement multiple layers of the OSI model, including the physical layer but also others. For example, an Ethernet network interface card performs functions at both the physical layer and the data link layer.

Physical Layer Functions

The following are the main responsibilities of the physical layer in the OSI Reference Model:

o Definition of Hardware Specifications: The details of operation of cables, connectors, wireless radio transceivers, network interface cards and other hardware devices are generally a function of the physical layer (although also partially the data link layer; see below).

o Encoding and Signaling: The physical layer is responsible for various encoding and signaling functions that transform the data from bits that reside within a computer or other device into signals that can be sent over the network (a small encoding sketch follows this list).

o Data Transmission and Reception: After encoding the data appropriately, the physical layer actually transmits the data, and of course, receives it. Note that this applies equally to wired and wireless networks, even if there is no tangible cable in a wireless network!

o Topology and Physical Network Design: The physical layer is also considered the domain of many hardware-related network design issues, such as LAN and WAN topology.
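As a small example of the encoding-and-signaling role, the Python sketch below applies Manchester line coding under one common convention (the IEEE 802.3 one, where a 0 maps to a high-to-low half-bit pair and a 1 to low-to-high); the bit pattern is arbitrary:

    def manchester(bits):
        """Return two signal levels per data bit (IEEE 802.3 convention)."""
        half_bits = {0: (1, 0), 1: (0, 1)}
        signal = []
        for b in bits:
            signal.extend(half_bits[b])
        return signal

    print(manchester([1, 0, 1, 1, 0]))   # -> [0, 1, 1, 0, 0, 1, 0, 1, 1, 0]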





In general, then, physical layer technologies are ones that are at the very lowest level and deal with the actual ones and zeroes that are sent over the network. For example, when considering network interconnection devices, the simplest ones operate at the physical layer: repeaters, conventional hubs and transceivers. These devices have absolutely no knowledge of the contents of a message. They just take input bits and send them as output. Devices like switches and routers operate at higher layers and look at the data they receive as being more than voltage or light pulses that represent one or zero.

Relationship Between the Physical Layer and Data Link Layer

It's important to point out that while the physical layer of a network technology primarily defines the hardware it uses, the physical layer is closely related to the data link layer. Thus, it is not generally possible to define hardware at the physical layer "independently" of the technology being used at the data link layer. For example, Ethernet is a technology that describes specific types of cables and network hardware, but the physical layer of Ethernet can only be isolated from its data link layer aspects to a point. While Ethernet cables are "physical layer", for example, their maximum length is related closely to message format rules that exist at the data link layer.

Furthermore, some technologies perform functions at the physical layer that are normally more closely associated with the data link layer. For example, it is common to have the physical layer perform low-level (bit-level) repackaging of data link layer frames for transmission. Error detection and correction may also be done at layer 1 in some cases. Most people would consider these "layer two functions".

In many technologies, a number of physical layers can be used with a data link layer. Again here, the classic example is Ethernet, where dozens of different physical layer implementations exist, each of which uses the same data link layer (possibly with slight variations).

Physical Layer Sublayers

Finally, many technologies further subdivide the physical layer into sublayers. In order to increase performance, physical layer encoding and transmission methods have become more complex over time. The physical layer may be broken into layers to allow different network media to be supported by the same technology, while sharing other functions at the physical layer that are common between the various media. A good example of this is the physical layer architecture used for Fast Ethernet, Gigabit Ethernet and 10-Gigabit Ethernet.

Note: In some contexts, the physical layer technology used to convey bits across a network or communications line is called a transport method. Don't confuse this with the functions of the OSI transport layer (layer 4).


Key Concept: The lowest layer in the OSI Reference Model is the physical layer. It is the realm of networking hardware specifications, and is the place where technologies reside that perform data encoding, signaling, transmission and reception functions. The physical layer is closely related to the data link layer.






The ALOHA protocol

Pure ALOHA

[Figure: Pure ALOHA protocol. Boxes indicate frames. Shaded boxes indicate frames which have collided.]

The first version of the protocol (now called "Pure ALOHA", and the one implemented in ALOHAnet) was quite simple:

If you have data to send, send the data.
If the message collides with another transmission, try resending "later".

Note that the first step implies that Pure ALOHA does not check whether the channel is busy before transmitting. The critical aspect is the "later" concept: the quality of the backoff scheme chosen significantly influences the efficiency of the protocol, the ultimate channel capacity, and the predictability of its behavior.
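The effect of this uncoordinated transmission can be seen in a small Monte Carlo sketch. The Python code below assumes, as the analysis that follows does, that transmission attempts form a Poisson stream of G attempts per frame-time; the function name, horizon and seed are illustrative choices.

    import math, random

    def pure_aloha_throughput(G, horizon=200000, seed=0):
        """Fraction of frame-times carrying a successful (collision-free) frame."""
        random.seed(seed)
        t, starts = 0.0, []
        while t < horizon:
            t += random.expovariate(G)             # attempts arrive as a Poisson stream of rate G
            starts.append(t)
        ok = 0
        for i, s in enumerate(starts):             # a frame of length 1 starting at s succeeds
            clear_before = i == 0 or s - starts[i - 1] >= 1.0
            clear_after = i == len(starts) - 1 or starts[i + 1] - s >= 1.0
            ok += clear_before and clear_after     # ... only if no other start lies in (s-1, s+1)
        return ok / horizon

    for G in (0.25, 0.5, 1.0):
        print(G, round(pure_aloha_throughput(G), 3), "theory:", round(G * math.exp(-2 * G), 3))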

To assess Pure ALOHA, we need to predict its throughput, the rate of (successful) transmission of frames. (This discussion of Pure ALOHA's performance follows Tanenbaum. [9]) First, let's make a few simplifying assumptions:

All frames have the same length.

Stations cannot generate a frame while transmitting or trying to transmit. (That is, if a station keeps trying to send a frame, it cannot be allowed to generate more frames to send.)

The population of stations attempts to transmit (both new frames and old frames that collided) according to a Poisson distribution.

Let "T" refer to the time needed to transmit one frame on the channel, and let's define "frame-time" as a unit of time equal to T. Let "G" refer to the mean used in the Poisson distribution over transmission-attempt amounts: that is, on average, there are G transmission-attempts per frame-time.
time.







[Figure: Overlapping frames in the pure ALOHA protocol. Frame-time is equal to 1 for all frames.]

Consider what needs to happen for a frame to be transmitted successfully. Let "t" refer to the time at which we want to send a frame. We want to use the channel for one frame-time beginning at t, and so we need all other stations to refrain from transmitting during this time. Moreover, we need the other stations to refrain from transmitting between t-T and t as well, because a frame sent during this interval would overlap with our frame.

For any frame-time, the probability of there being k transmission-attempts during that frame-time is:

\[ \frac{G^{k} e^{-G}}{k!} \]

[Figure: Comparison of Pure ALOHA and Slotted ALOHA shown on a Throughput vs. Traffic Load plot.]

The average number of transmission-attempts over 2 consecutive frame-times is 2G. Hence, for any pair of consecutive frame-times, the probability of there being k transmission-attempts during those two frame-times is:

\[ \frac{(2G)^{k} e^{-2G}}{k!} \]

Therefore, the probability ($\mathrm{Prob}_{pure}$) of there being zero transmission-attempts between t-T and t+T (and thus of a successful transmission for us) is:

\[ \mathrm{Prob}_{pure} = e^{-2G} \]






The throughput can be calculated as the rate of transmission-attempts multiplied by the probability of success, and so we can conclude that the throughput ($S$) is:

\[ S = G e^{-2G} \]

The maximum throughput is $0.5/e$ frames per frame-time (reached when $G = 0.5$), which is approximately 0.184 frames per frame-time. This means that, in Pure ALOHA, only about 18.4% of the time is used for successful transmissions.
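The quoted maximum follows by differentiating the throughput expression with respect to the offered load:

\[ \frac{dS}{dG} = (1 - 2G)\,e^{-2G} = 0 \;\Longrightarrow\; G = \tfrac{1}{2}, \qquad S_{\max} = \tfrac{1}{2} e^{-1} = \frac{1}{2e} \approx 0.184. \]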

Slotted ALOHA



Slotted ALOHA protocol. Boxes indicate frames. Shaded boxes indicate frames which are in the
same slots.

An improvement to the original ALOHA protocol was "Slotted ALOHA", which introduced discrete timeslots and increased the maximum throughput.[10] A station can send only at the beginning of a timeslot, and thus collisions are reduced. In this case, we only need to worry about the transmission-attempts within 1 frame-time and not 2 consecutive frame-times, since a frame can only collide with frames sent in the same timeslot. Thus, the probability of there being zero transmission-attempts in a single timeslot is:

    \Pr(0) = e^{-G}

The probability of there being k transmission-attempts in a single timeslot is:

    \Pr(k) = \frac{G^k e^{-G}}{k!}

The throughput is:

    S = G e^{-G}


The maximum throughput is 1/e frames per frame-time (reached when G = 1), which is approximately 0.368 frames per frame-time, or 36.8%.
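A quick Monte Carlo check of this result is sketched below in Python, under the Poisson-attempts assumption made earlier (the sampler and all parameter values are my own illustration): the number of attempts in each slot is drawn from Poisson(G), and a slot counts as a success only when exactly one station transmits in it.

```python
import random
from math import exp

def simulate_slotted_aloha(G: float, slots: int = 200_000, seed: int = 0) -> float:
    """Fraction of slots carrying exactly one transmission-attempt."""
    rng = random.Random(seed)

    def poisson(lam: float) -> int:
        # Knuth-style Poisson sampler: count uniform draws while their
        # running product stays above e^(-lam).
        k, p, threshold = 0, rng.random(), exp(-lam)
        while p > threshold:
            k += 1
            p *= rng.random()
        return k

    successes = sum(1 for _ in range(slots) if poisson(G) == 1)
    return successes / slots

for G in (0.5, 1.0, 2.0):
    print(G, round(simulate_slotted_aloha(G), 3), round(G * exp(-G), 3))
```

For G = 1 the simulated fraction should land close to 1/e, matching the formula S = G e^{-G}.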





Slotted ALOHA is used in low-data-rate tactical satellite communications networks by military forces, in subscriber-based satellite communications networks, in mobile telephony call setup, and in contactless RFID technologies.


Other Protocols

The use of a random access channel in ALOHAnet led to the development of Carrier Sense Multiple Access (CSMA), a 'listen before send' random access protocol which can be used when all nodes send and receive on the same channel. The first implementation of CSMA was Ethernet, and CSMA was extensively modeled in [11].

ALOHA and the other random-access protocols have an inherent variability in their throughput and delay performance characteristics. For this reason, applications which need highly deterministic load behavior often used polling or token-passing schemes (such as token ring) instead of contention systems. For instance, ARCNET was popular in embedded data applications in the 1980s.


Design

Network architecture

Two fundamental choices which dictated much of the ALOHAnet design were the two-channel star configuration of the network and the use of random accessing for user transmissions.

The two-channel configuration was primarily chosen to allow for efficient transmission of the relatively dense total traffic stream being returned to users by the central time-sharing computer. An additional reason for the star configuration was the desire to centralize as many communication functions as possible at the central network node (the Menehune), minimizing the cost of the original all-hardware terminal control unit (TCU) at each user node.

The random access channel for communication between users and the Menehune was designed specifically for the traffic characteristics of interactive computing. In a conventional communication system a user might be assigned a portion of the channel on either a frequency-division multiple access (FDMA) or time-division multiple access (TDMA) basis. Since it was well known that in time-sharing systems [circa 1970] computer and user data are bursty, such fixed assignments are generally wasteful of bandwidth because of the high peak-to-average data rates that characterize the traffic.

To achieve a more efficient use of bandwidth for bursty traffic, ALOHAnet developed the random access packet switching method that has come to be known as a pure ALOHA channel. This approach dynamically allocates bandwidth to a user as soon as that user has data to send, using the acknowledgment/retransmission mechanism described earlier to deal with occasional access collisions. While the average channel loading must be kept below about 10% to maintain a low collision rate, this still results in better bandwidth efficiency than when fixed allocations are used in a bursty traffic context.

Two 100 kHz channels in the experimental UHF band were used in the implemented system, one for the user-to-computer random access channel and one for the computer-to-user broadcast channel. The system was configured as a star network, allowing only the central node to receive transmissions in the random access channel. All user TCUs received each transmission made by the central node in the broadcast channel. All transmissions were made in bursts at 9600 bit/s, with data and control information encapsulated in packets.

Each packet consisted of a 32-bit header and a 16-bit header parity check word, followed by up to 80 bytes of data and a 16-bit parity check word for the data. The header contained address information identifying a particular user so that when the Menehune broadcast a packet, only the intended user's node would accept it.
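To make the framing concrete, here is a hedged Python sketch that packs a packet with this overall shape. The internal header layout (a user address plus a payload length) and the 16-bit check function are my own placeholders; the text above does not specify the actual cyclic parity-check code or any header fields beyond the address.

```python
import struct

def check16(data: bytes) -> int:
    """Placeholder 16-bit check word (ones'-complement sum of 16-bit words).
    The real ALOHAnet cyclic parity-check code is not specified here."""
    total = 0
    for i in range(0, len(data), 2):
        total = (total + int.from_bytes(data[i:i + 2].ljust(2, b"\x00"), "big")) & 0xFFFF
    return (~total) & 0xFFFF

def build_packet(user_address: int, payload: bytes) -> bytes:
    """32-bit header + 16-bit header check + up to 80 data bytes + 16-bit data check."""
    if len(payload) > 80:
        raise ValueError("ALOHAnet packets carried at most 80 bytes of data")
    header = struct.pack(">HH", user_address & 0xFFFF, len(payload))  # 4 bytes = 32 bits
    return (header + struct.pack(">H", check16(header))
            + payload + struct.pack(">H", check16(payload)))

pkt = build_packet(user_address=0x0042, payload=b"hello, menehune")
print(len(pkt), pkt.hex())
```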


Remote units

The original user interface developed for the system was an all-hardware unit called an ALOHAnet Terminal Control Unit (TCU), and was the sole piece of equipment necessary to connect a terminal into the ALOHA channel. The TCU was composed of a UHF antenna, transceiver, modem, buffer and control unit. The buffer was designed for a full line length of 80 characters, which allowed handling of both the 40 and 80 character fixed-length packets defined for the system. The typical user terminal in the original system consisted of a Teletype Model 33 or a dumb CRT user terminal connected to the TCU using a standard RS-232C interface. Shortly after the original ALOHA network went into operation, the TCU was redesigned with one of the first Intel microprocessors, and the resulting upgrade was called a PCU (Programmable Control Unit).

Additional basic functions performed by the TCUs and PCUs were generation of a cyclic-parity-check code vector and decoding of received packets for packet error-detection purposes, and generation of packet retransmissions using a simple random interval generator. If an acknowledgment was not received from the Menehune after the prescribed number of automatic retransmissions, a flashing light was used as an indicator to the human user. Also, since the TCUs and PCUs did not send acknowledgments to the Menehune, a steady warning light was displayed to the human user when an error was detected in a received packet. Considerable simplification was thus incorporated into the initial design of the TCU as well as the PCU, making use of the fact that it was interfacing a human user into the network.






The Menehune

The central node communications processor was an HP 2100 minicomputer called the Menehune, which is the Hawaiian language word for “imp”, or dwarf people,[12] and was named for its similar role to the original ARPANET Interface Message Processor (IMP), which was being deployed at about the same time. In the original system, the Menehune forwarded correctly-received user data to the UH central computer, an IBM System 360/65 time-sharing system. Outgoing messages from the 360 were converted into packets by the Menehune, which were queued and broadcast to the remote users at a data rate of 9600 bit/s. Unlike the half-duplex radios at the user TCUs, the Menehune was interfaced to the radio channels with full-duplex radio equipment.


Later developments

In later versions of the system, simple radio relays were placed in operation to connect the main network on the island of Oahu to other islands in Hawaii, and Menehune routing capabilities were expanded to allow user nodes to exchange packets with other user nodes, the ARPANET, and an experimental satellite network. More details are available in [3] and in the technical reports listed in the Further Reading section below.





Carrier sense multiple access

Carrier Sense Multiple Access (CSMA) is a probabilistic Media Access Control (MAC) protocol in which a node verifies the absence of other traffic before transmitting on a shared transmission medium, such as an electrical bus or a band of the electromagnetic spectrum.

"
Carrier Sense
" describes the fact that a

transmitter

uses

feedback

from a receiver that detects
a

carrier wave

before trying to send. That is, it tries to detect the pres
ence of an
encoded

signal

from another station before attempting to transmit. If a carrier is sensed, the
station waits for the transmission i
n progress to finish before initiating its own transmission. In
other words, CSMA is based on the principle "sense before transmit" or "listen before talk".

"
Multiple Access
" describes the fact that multiple stations send and receive on the medium.
Transmi
ssions by one node are generally received by all other stations using the medium.

Protocol modifications

Carrier sense multiple access with collision detection (CSMA/CD) is a modification of CSMA. CSMA/CD is used to improve CSMA performance by terminating transmission as soon as a collision is detected, and reducing the probability of a second collision on retry.

Carrier sense multiple access with collision avoidance (CSMA/CA) is a modification of CSMA. Collision avoidance is used to improve the performance of CSMA by attempting to be less "greedy" on the channel. If the channel is sensed busy before transmission then the transmission is deferred for a "random" interval. This reduces the probability of collisions on the channel.
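As an illustration of why detecting collisions helps, here is a small, self-contained toy model in Python (not from the text; the slotted structure and all parameter values are my own simplifications). With plain CSMA a collision wastes a whole frame's worth of channel time, whereas with collision detection the colliding stations abort after roughly one sensing slot:

```python
import random

FRAME_SLOTS = 10      # slots occupied by one complete frame (assumed)
ATTEMPT_PROB = 0.05   # per-station attempt probability in an idle slot (assumed)
STATIONS = 20

def utilisation(collision_detection: bool, slots: int = 100_000, seed: int = 1) -> float:
    """Fraction of channel time spent on successfully delivered frames."""
    rng = random.Random(seed)
    useful = t = 0
    while t < slots:
        attempts = sum(rng.random() < ATTEMPT_PROB for _ in range(STATIONS))
        if attempts == 1:                         # exactly one sender: success
            useful += FRAME_SLOTS
            t += FRAME_SLOTS
        elif attempts > 1:                        # collision
            t += 1 if collision_detection else FRAME_SLOTS
        else:                                     # idle slot
            t += 1
    return useful / t

print("CSMA    utilisation:", round(utilisation(False), 3))
print("CSMA/CD utilisation:", round(utilisation(True), 3))
```

Under these assumed parameters the CSMA/CD figure should come out noticeably higher, since it reclaims most of the channel time that plain CSMA loses to collisions, which is exactly the motivation stated above.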

CSMA access modes

1-persistent

When the sender (station) is ready to transmit data, it checks if the physical medium is busy. If so, it senses the medium continually until it becomes idle, and then it transmits a piece of data (a frame). In case of a collision, the sender waits for a random period of time and attempts to transmit again. 1-persistent CSMA is used in CSMA/CD systems including Ethernet.
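A minimal sketch of this access rule in Python is shown below; `channel_idle`, `transmit_frame`, and the uniform backoff are hypothetical stand-ins for the physical layer and are not part of any real driver API:

```python
import random
import time

def one_persistent_send(channel_idle, transmit_frame, max_retries: int = 16) -> bool:
    """1-persistent CSMA as described above: sense continually until the
    medium is idle, transmit immediately, and on a collision back off for a
    random period before retrying. transmit_frame() is assumed to return
    True on success and False if a collision was reported."""
    for _ in range(max_retries):
        while not channel_idle():               # keep sensing while the medium is busy
            pass
        if transmit_frame():                    # send as soon as the channel is idle
            return True
        time.sleep(random.uniform(0.0, 0.05))   # placeholder random backoff
    return False
```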





P-persistent

This is a sort of trade-off between 1-persistent and non-persistent CSMA access modes. When the sender is ready to send data, it checks continually if the medium is busy. If the medium becomes idle, the sender transmits a frame with a probability p. If the station chooses not to transmit (the probability of this event is 1-p), the sender waits until the next available time slot and transmits again with the same probability