Raamkanna Saranathan

27 Oct 2013

Distributed Computing Question Bank


1)

What are some typical tasks performed in the transfer of a file between two computers
attached to a network?


There must be a data path between the two computers, either directly or via a communication network. Typical tasks to be performed are:


1)

The source system must either activate the direct data communication path or inform the
communication network of the identity of the desired destination system.

2)

The source system must ascertain that the destination system is prepared to receive data.

3)

The file transfer application on the source system must ascertain that the file management
program on the destination system is prepared to accept and store the file for this particular
user.

4)

If the file formats used on the two systems are incompatible, one or the other system must perform a format translation function.


2)

Suggest a way in which a file transfer facility could be implemented for computers
attached to a network.
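One simple approach is to run a small file-transfer protocol over a reliable byte stream: the sender frames the file name and contents with length fields, and the receiver parses the frame and stores the file. A minimal sketch of such framing in Python (the function names and field layout are illustrative, not from any standard):

```python
import struct

def pack_file(name: str, data: bytes) -> bytes:
    """Frame a file as: 2-byte name length, name, 4-byte data length, data
    (all lengths in network byte order)."""
    name_b = name.encode("utf-8")
    return (struct.pack("!H", len(name_b)) + name_b
            + struct.pack("!I", len(data)) + data)

def unpack_file(frame: bytes):
    """Recover (name, data) from a frame produced by pack_file."""
    (nlen,) = struct.unpack("!H", frame[:2])
    name = frame[2:2 + nlen].decode("utf-8")
    (dlen,) = struct.unpack("!I", frame[2 + nlen:6 + nlen])
    data = frame[6 + nlen:6 + nlen + dlen]
    return name, data
```

The frame could then be written to a TCP connection by the sender and parsed by the receiving file-management program; the length prefixes perform the "format translation" boundary work mentioned in Question 1.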






3)

What is the motivation or need for a

communication architecture?

The motivation for a communication architecture was to promote interoperability by creating a guideline for network data transmissions between computers that have different hardware vendors, software, operating systems, and protocols.


4)

Explain the role of network, transport and application layers in the communication
architecture. What are service access points?


Network:
The software used at this layer depends on the type of network and the standard corresponding to that type: circuit switching, packet switching, etc.



Transport:
The mechanisms for providing reliability in communication are independent of the nature of the applications. These mechanisms are therefore collected in a common layer called the transport layer.



Application:

Consists of logic needed to support various user applications. For each type of
application, a separate module is needed.




Service Access Points:
For successful communication, two levels of addressing are required. Each computer on the network has a unique address that allows the network to deliver data to the right computer. Each application on a computer must have a unique address within that computer, which allows the transport layer to deliver data to the right application. These latter addresses are known as service access points.



5)

What are protocols? What information does a protocol specification provide?

A protocol is a set of rules governing the way in which two entities cooperate to exchange data.

A protocol specification details the control functions that may be performed, the formats and control codes used to communicate those functions, and the procedures that the two entities must follow.


6)

An application associated with a service access point at one computer wishes to send a message to another application associated with a service access point at another computer in the network. Describe the operation, indicating the part played by protocol data units.


7)

What information is contained in the transport and network protocol data units?


Transport PDU:




Destination SAP



Sequence Number



Error detecting code


Network PDU:



Destination Computer Address



Request for network services


8)

A file transfer module in computer X is transferring a file one record at a time to
computer Y. Each record is handed
over to the transport layer module. Picture this action
as being in the form of a procedure call and trace it.


9)

What is Open Systems Interconnection based on? What is it concerned with? What is its objective? What does the term “open” denote?

OSI is based on the ISO protocol model, designed to promote interoperability by creating a guideline for network data transmissions between computers that have different hardware vendors, software, operating systems, and protocols.


OSI is concerned with the exchange of information between a pair of open systems, and not with the internal functioning of each individual system.


The objective of OSI is to define a set of standards that will enable open systems located anywhere in the world to cooperate by being interconnected through some standardized communications facility and by executing standardized OSI protocols.


The term “open” in OSI denotes the ability of any two systems conforming to the OSI model and the associated standards to communicate.


10)

List the seven OSI layers and briefly define their roles.


Application Layer


Presentation Layer


Session Layer


Transport Layer


Network Layer


Data Link Layer


Physical Layer



Application Layer

The application layer refers to a set of rules that an application can use to accomplish a task, such as a word processor application requesting a file transfer. This layer is responsible for defining how interactions occur between applications and the network. Services that function at the application layer include file, print, and messaging services. The application layer may also support error recovery.


Presentation Layer

The presentation layer is responsible for formatting data for exchange. In this layer, character sets are converted and data is encrypted. Data may also be compressed in this layer, and this layer usually handles the redirection of data streams.



Session Layer

The session layer defines how two computers establish, synchronize, maintain, and end a session. Practical functions such as security authentication, connection ID establishment, data transfer, acknowledgements, and connection release take place here. Any communications that require milestones (an answer to “Have you got the data that I sent?”) are performed here. These milestones are called checkpoints.


Once a checkpoint has been crossed, any data not received needs retransmission only from the last good checkpoint. Adjusting checkpoints to account for very reliable or unreliable connections can greatly improve the actual throughput of data transmission.



Transport Layer

The mechanisms used by the transport protocol are very similar to those used by data link control protocols such as HDLC: the use of sequence numbers, error-detecting codes, and retransmission after timeout. The reason for this duplication is that the data link layer deals with only a single direct link, whereas the transport layer deals with a chain of network nodes and links; even though each link in that chain may be reliable, data can still be lost somewhere along the chain, and it is the transport protocol that addresses this problem.



Network Layer

The network layer provides for the transfer of information between computers across some communication network. It relieves higher layers of the need to know everything about the underlying data transmission and switching technologies used to connect systems. The network service establishes, maintains, and terminates connections across the intervening network. In this layer the computer system engages in a dialog with the network to specify the destination address and to request certain network facilities, such as priority.



Data Link Layer

Makes the physical link reliable. It provides the means to activate, maintain, and deactivate the link. It provides error detection and error control. Data is transmitted in blocks called frames. Each frame consists of a header, a trailer, and an optional data field. The header and the trailer contain control information used to manage the link.




Physical Layer

Specifies the mechanical, electrical, functional, and procedural requirements for the interface between a data transmission device and a transmission medium.

It is also concerned with transmitting the 0s and 1s, stating how many volts are to be used for 0 and 1. The key issues in the physical layer are how many bits per second can be sent and whether transmission can take place in both directions simultaneously.




11)

How does the concept of layering contribute to the process of standardization?

Layering contributes to the process of standardization because a layered system is prepared to communicate with any other open system by using standard rules that govern the format, contents, and meaning of the messages sent and received. These rules are formalized into what are called protocols.

A protocol is an agreement between the communicating parties on how communication is to proceed.


12)

What is HDLC? What are the three modes of operation defined with it?

HDLC (High-level Data Link Control) is an ISO standard used on computer-to-computer and computer-to-terminal links.



Modes of operation are:

Normal Response Mode: For point-to-point or multipoint links with a primary and one or more secondary systems. The primary initiates transfers and polls the secondaries.


Asynchronous Response Mode:
Similar to normal response but secondary can respond without
being asked.


Asynchronous Balanced Mode: Used exclusively on point-to-point links. Each station can play both primary and secondary, and each can initiate.


13)

What are frames? Describe the three types of frames used along with their control field
format.


A frame is a term for the unit of data transferred on a LAN. It is something larger than a cell, i.e. depending on the type of LAN, hundreds or thousands of bytes long (any particular type of LAN will have a limit on the frame size, e.g. Ethernet's 1500 byte/octet limit). It is roughly equivalent to the term packet, but "frame" is a LAN term whereas "packet" is a term used for higher-level protocols such as TCP/IP, IPX, and AppleTalk.


Information Frames:


Sequentially Numbered

Carry Data, message acknowledgements, poll and final bits.


Supervisory Frames:

Perform link supervisory control

Message acknowledgements

Retransmit requests

Signal a temporary hold on receipt of I-frames (if the secondary is busy)


Unnumbered Frames:


Provide flexible format for additional link control

Do not have sequence numbers


14)

What is piggybacked acknowledgment? What are the four supervisory frames?

When a station receives a valid information frame, it may acknowledge that frame the next time it sends its own information frame, by setting the N(R) field (called the receive sequence number) to the number of the next frame it expects to receive. This is known as piggybacked acknowledgement.


Four supervisory frames are:

Receive Ready (RR):

Used to acknowledge correct receipt of information frames up to but not
including N(R).


RNR (Receive Not Ready):

Used to indicate temporary busy condition.


REJ (Reject):

Used to indicate an error in information frame N(R) requesting a retransmission of
that and all subsequent frames.


SREJ (Selective Reject):

Used to request retransmission of a single frame.
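A receiver's choice among RR, RNR, and REJ can be sketched as a small decision function. This is a simplified go-back-N model with 3-bit sequence numbers; the function name and return convention are my own, not part of the HDLC standard:

```python
def supervisory_response(expected, received, busy=False):
    """Return (frame_type, n_r) that a receiver might send for an incoming
    I-frame in a go-back-N scheme.

    expected: sequence number the receiver is waiting for (mod 8 in HDLC).
    received: N(S) of the arriving frame.
    busy:     receiver temporarily cannot accept more I-frames.
    """
    if busy:
        return ("RNR", expected)           # temporary hold on I-frames
    if received == expected:
        # RR acknowledges frames up to, but not including, N(R)
        return ("RR", (expected + 1) % 8)
    return ("REJ", expected)               # resend from `expected` onward
```

(SREJ would instead name only the single missing frame, leaving later correctly received frames buffered.)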


15)

What is pipelining? How is it useful?

Pipelining is the overlapping of the execution of two or more operations.

Pipelining is used with processors by prefetching instructions on the assumption that no branches are going to stop their execution; in vector processors, in which application of a single operation to the elements of a vector or vectors may be pipelined to decrease the time needed to complete the aggregate operation; and in multiprocessors and multicomputers, in which a process may send a request for values before it reaches the computation that requires them.


16)

Explain the use of sequence numbers.

The use of sequence numbers accomplishes the following:


Flow Control: Once a station has sent 7 frames, it can send no more until the 1st frame has been acknowledged. This prevents the sender from overwhelming the receiver.


Error Control: If a frame is received in error, a station can send a REJ or SREJ to identify the frame that was received in error.


Pipelining: More than one frame may be in transit at a time. This allows efficient use of links with a high propagation delay, e.g. satellite links.



17)

What is sliding-window protocol? Explain with diagrams.

The sliding window protocol is used to obtain high throughput rates. The sender and receiver are programmed to use a fixed window size, which is the maximum amount of data that can be sent before an acknowledgment arrives.


For example, the sender and receiver might agree on a window size of four packets. The sender begins with the data to be sent, extracts the data to fill the first window, and transmits a copy. If reliability is needed, the sender retains a copy in case retransmission is needed. The receiver must have buffer space ready to receive the entire window. When a packet arrives in sequence, the receiver passes the packet to the receiving application and transmits an ACK to the sender. When the ACK reaches the sender, the sender discards its retained copy of that packet and sends the next packet.




Caption: A 4-packet window sliding through outgoing data. The window is shown (a) when transmission begins, (b) after two packets have been acknowledged, and (c) after eight packets have been acknowledged. The sender can transmit all packets in the window.




Caption: Messages required to send a sequence of four packets using (a) stop-and-go flow control, and (b) a 4-packet sliding window. Time proceeds down the page, and each arrow shows one message sent from one computer to the other.
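The window behaviour described above can be simulated in a few lines. This is an idealized, lossless model assuming in-order delivery (no timeouts or retransmission), intended only to show the window filling and sliding:

```python
def sliding_window_transfer(packets, window=4):
    """Simulate a lossless sliding-window transfer: the sender keeps up to
    `window` unacknowledged packets in flight; each in-order arrival is
    delivered and ACKed, which slides the window forward by one."""
    delivered = []
    in_flight = []          # sequence numbers sent but not yet ACKed
    next_to_send = 0
    acks = 0
    while acks < len(packets):
        # sender: fill the window
        while len(in_flight) < window and next_to_send < len(packets):
            in_flight.append(next_to_send)
            next_to_send += 1
        # receiver: accept the oldest in-flight packet and ACK it
        seq = in_flight.pop(0)
        delivered.append(packets[seq])
        acks += 1
    return delivered
```

With window=1 this degenerates to the stop-and-go scheme of part (a) of the caption; larger windows keep the link busy during the round-trip delay.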


18)

Compare the communication architectures of OSI and TCP/IP.


OSI divides network communications into seven connected layers, whereas TCP/IP uses the Department of Defense (DoD) model, which has four layers.







19)

Briefly explain the operation of TCP and IP.

TCP

TCP is the transport layer of the protocol and serves the purpose of reliable end-to-end transmission. TCP breaks data into pieces, wrapping each piece with the information needed to route it to its destination and reassembling them at the destination. These data pieces are called datagrams.




Source Port
The port number on the transmitting computer which corresponds to the process
(application program) that generated the data.


Destination Port
The port number on the receiving computer which corresponds to the process that should receive the data.


Sequence Number
When TCP makes datagrams, each is assigned a sequence number for the receiving computer to use in reassembling the datagrams into the original data. The sequence numbers simply tell the order in which the datagrams must be reassembled.


Acknowledgment Number
Indicates that the datagram was received successfully. If the datagram was damaged, the receiver throws the data away and does not send an ACK to the sender. After a predefined time, the sender resends the datagram for which no ACK was received.


Offset
Specifies the length of the header


Reserved

Set aside for future use.


Flags

Indicates end of data or urgent data



Window

Provides a way to increase or decrease the packet size


Urgent Pointer

Pointer to urgent data


Options

For future use or special options defined by TCP


Padding

Ensures the header ends on a 32 bit boundary


Checksum

Error detection: the data is treated as a sequence of integers which are added together, and the total is placed in the checksum field.
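The "add the data as integers" idea can be illustrated with the classic Internet-style one's-complement checksum. This is a simplified sketch in the style of RFC 1071; real TCP also sums a pseudo-header, which is omitted here:

```python
def internet_checksum(data: bytes) -> int:
    """One's-complement sum of 16-bit words, folded to 16 bits.
    The data is padded to an even length; carries wrap around."""
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF                          # one's complement
```

A receiver verifies by summing the data together with the transmitted checksum: the folded one's-complement result is then all ones, so the complement is zero.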


IP

IP is the network layer of the protocol and is responsible for routing. IP is connectionless and hence does not use handshakes.




Version
Defines the IP version number. Version 4 is the current standard, and values of 5 and 6 indicate that special protocols are being used. IP version 6 is currently supported by the newest equipment and is quickly becoming the new standard.


IHL (Internet Header Length)
Defines the length of the header information. The header length can vary; the default header is five 32-bit words, and the sixth word is optional.


TOS (Type of Service)
Indicates the kind of priority of the required service.


Total Length

Specifies the total length of the datagram, which can be a minimum of 576 bytes and a maximum of 65,535 bytes.


Identification

Provides information that the receiving system can use to reassemble fragmented datagrams.


Flags

The first flag bit specifies that the datagram should not be fragmented and must therefore travel over subnetworks that can handle its size without fragmenting it; the second flag bit indicates whether further fragments of the packet follow.


Fragmentation Offset

indicates the original position of the data and is used during reassembly.


Time To Live

Originally, the time in seconds that the datagram
could be in transit; if this time was
exceeded, the datagram was considered lost.


20)

What is client/server computing? What are the essential characteristics of the
client/server environment?


A client/server environment is defined by three components:


Clients

Clients are usually single-user PCs or workstations that provide users with an easy-to-use GUI such as the ones provided by Microsoft and Macintosh.

Servers

The servers are usually more powerful computers that have plentiful hardware resources and run operating systems that facilitate the provision of services for clients.

Network

The network provides the linkage between the clients and servers.



Characteristics of a client



Client is invoked directly by the user and is executed for one session.



Resides on the user’s local machine.



Actively initiates contact with the server.



A single client can access multiple services when needed but actively contacts one remote
server at a time.



Does not require special hardware or a sophisticated OS.


Characteristics of a
server



A server program is always dedicated to provide one service but can handle multiple clients
at the same time.



Invoked automatically when a system boots and continues to execute through many
sessions.



Runs on a shared computer.



The server program waits passively for contact from remote clients.



Requires powerful hardware and a sophisticated OS.


21)

How does the client/server configuration differ from any other distributed processing
solution?


A client/server is unique in the following ways:




Great emphasis is placed on making the clients user-friendly and easily customized.




There is an emphasis on centralized corporate databases (even though applications are dispersed), enabling management to maintain better control over computer investments and relieving departments of the overhead involved in maintaining a database.




There is commitment between user organizations and vendors to open modular systems. This
gives users great choice in selecting products from different vendors.




Because networking is paramount to the operation of client/server computing, network management and security have a high priority in organizing and operating information systems.


22)

Describe a generic client/server architecture. What is the central feature of client/server
architecture?

The clients and servers may use different hardware and most likely different operating systems, but these differences are irrelevant when a common communication protocol is used. It is the communication software that allows the client and server to interoperate. The functions performed by the application can be split between client and server in a way that optimizes platform and network resources and the ability of users to perform various tasks and cooperate with each other in using shared resources.


The central feature of a client/server architecture is the communications software, for example TCP/IP.


23)

What enables the client and server to interoperate? What is one essential factor in the success of a client/server environment? What is the difference between the presentation services module in the client workstation and the presentation layer of the OSI model?


It is the communications software (a protocol, for example TCP/IP) that enables clients and servers to interoperate. One essential factor in the success of a client/server environment is the way in which user interactions take place with the system as a whole.


The success of a client/server environment is largely determined by the design of the user interface of the client computers. There is always great emphasis on making the client interface graphical to enhance ease of use.


The presentation services module in the client workstation is responsible for providing a user-friendly interface to the distributed applications available in the environment, whereas in the OSI model the presentation layer at the transmitting end is responsible for the format of the file to be transferred: ASCII or other character sets are chosen, and if the file is to be encrypted or compressed, the appropriate encryption and compression algorithms are applied.


24)

Briefly describe the different classes of client/server applications.


Host Based Processing

A host-based system is not true client/server computing. All processing is done on a central host. In most cases, the user interface is a dumb terminal (keyboard and monitor). In this type of setup, even if the client user is using a workstation computer system, its functionality is reduced to terminal emulation.


Server Based Processing

In this class of application the client simply provides a user interface while the server provides all the processing. This setup is typical of early client/server setups and was commonly used at the department level.


Client-Based Processing

A client-based application does most application processing at the client system, except for data validation and database logic functions, which are optimized to be performed on server-class machines.


Cooperative Processing

A cooperative-processing client/server application is the most intuitive: it optimizes performance by taking advantage of the strengths of both the client and the server.


25)

What is a file cache? What is principle of locality? Explain distributed file cacheing used
in the Sprite operating system.

A cache is an area of memory where a recently accessed file is stored so that future accesses to the file take less time.


The principle of locality states that the use of a local file cache should reduce the number of remote
server access that must be made.


In the Sprite OS, any number of remote processes may open a file for reading and create their own client caches, but when an open request to a server asks for write access while other processes have the file open for reading, the server takes two actions:

First, it notifies the writing process that, although it may maintain a cache, it must write back all altered blocks immediately upon update. There can be only one client requesting write access at a time.

Second, the server notifies all reading processes that have the file open that the file is no longer cacheable.


26)

Explain the cache consistency problem and how it is handled in Sprite?

Cache consistency exists when caches contain exact copies of remote data. Cache consistency problems occur when remote data are changed and the corresponding obsolete local copies are not discarded. Inconsistency arises at two levels:





If a client adopts a policy of immediately writing any changes to a file back to the server, then any other client that has a cached copy of the relevant portion of the file will have obsolete data.




If the client delays writing back changes to the server, the server will have obsolete data and
any other client requesting that file from the server will receive an outdated copy.


The method used to handle inconsistency in Sprite is described in the answer to Question 25.



27)

What is middleware and why is it needed? Explain the role of middleware in
client/server architecture.

Middleware is a set of tools that provides a uniform means and style of access to system resources across all platforms.


Middleware is needed because different vendors implement different features in their
hardware/software that makes it impossible for communications to take place.


Middleware enables different types of hardware, software, and networks to interface and communicate in a client/server architecture despite their differences.



28)

Illustrate how middleware enables the realization of the promise of distributed
client/server computing.




Middleware has both client and server components and enables an application or user at a client to access a variety of services on servers without being concerned with differences among servers. All applications operate over a uniform applications programming interface; the middleware cuts across all client/server platforms and is responsible for routing client requests to the appropriate server.


29)

How can middleware be employed to overcome network and operating system
incompatibilities? What are the underlying mechanisms on which m
iddleware is based?


A backbone network links, for example, Novell and TCP/IP networks. Middleware, running on each network component, ensures that all network users have transparent access to the applications and resources on either of the two networks.


Middleware is based on one of two underlying mechanisms:



Message Passing



Remote Procedure Calls


30)

What is message passing? Where is it used?

Message passing is the use of functions to facilitate request/response communication.


It is used in communication between clients and servers.



31)

Describe a simple client/server model indicating the protocols used and system calls
provided by its microkernel.



32)

Explain how client and server processes work using sample code.
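A minimal sketch of the idea, assuming Python's standard socket and threading modules (the echo-style service and helper names are illustrative): the server waits passively on a port, while the client actively initiates contact, sends a request, and reads the reply.

```python
import socket
import threading

def server(listening_sock):
    """Server process: wait passively, answer one request in upper case."""
    conn, _addr = listening_sock.accept()
    request = conn.recv(1024)
    conn.sendall(request.upper())      # the "service" performed for the client
    conn.close()

def run_once(message: bytes) -> bytes:
    """Start a server thread, then act as the client against it."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))         # port 0: let the OS pick a free port
    srv.listen(1)
    t = threading.Thread(target=server, args=(srv,))
    t.start()

    cli = socket.socket()
    cli.connect(srv.getsockname())     # client actively initiates contact
    cli.sendall(message)
    reply = cli.recv(1024)
    cli.close()
    t.join()
    srv.close()
    return reply
```

Note how the roles match the characteristics listed in Question 20: the server is bound to a well-known address and waits passively; the client initiates, uses the service, and exits.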


33)

Describe
the methods used for addressing processes.


The two methods are:



Direct Addressing

This method sends messages which include a specific identifier of the destination process. The receive primitive can be handled in one of two ways:

1)

The process explicitly designates a sending process. The process must know ahead of time from which process a message is expected.

2)

In some cases it is impossible to specify the anticipated source process. An example is a printer-server process, which will accept a print request message from any other process. For such an application, a more effective approach is the use of implicit addressing: the source parameter of the receive primitive possesses a value returned when the receive operation has been performed.



Indirect Addressing

In this case, messages are not sent directly from sender to receiver but rather are sent to a shared data structure consisting of queues that can temporarily hold messages. Such queues are generally referred to as mailboxes. Thus, for two processes to communicate, one process sends a message to the appropriate mailbox and the other process picks up the message from the mailbox.
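Mailbox-style indirect addressing can be sketched with a simple shared queue (an illustrative model, not a kernel implementation; the class name is my own):

```python
import queue

class Mailbox:
    """Indirect addressing: senders deposit messages into a shared queue,
    and receivers pick them up later, so sender and receiver never need to
    rendezvous or know each other's identity."""

    def __init__(self):
        self._q = queue.Queue()

    def send(self, msg):
        self._q.put(msg)               # sender returns immediately

    def receive(self, timeout=None):
        return self._q.get(timeout=timeout)
```

A printer-server, for example, could own one mailbox and serve print requests deposited there by any number of client processes, in arrival order.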


34)

Explain the operation of synchronous and asynchronous ‘send’ primitives.

In asynchronous operation, processes are not suspended after a send or receive operation. Synchronous operations, on the other hand, do not return control to the sending process until the message has been transmitted (unreliable protocol) or until the message has been sent and an acknowledgement received (reliable protocol).


35)

In the case of unbuffered message passing how is the following situation handled: “The
‘send’ is done before the ‘receive’.”

In unbuffered message passing, when the send is done before the receive, the server’s kernel does not know which of its processes is using the address and therefore does not know where to copy the message.


There are two ways of solving this problem:


1)



Discard the message



Let the client timeout



Hope the server calls receive before the client retransmits



2)



Have the receiving kernel keep incoming messages around for a little while just in case an
appropriate receive is done shortly.



Whenever an unwanted message arrives, a timer is started.



If the timer expires before a suitable “receive” happens, the message is discarded.

36)

Explain the use of mailbox in buffered message passing.

The problem of the send done before the receive in unbuffered message passing is solved by
having the receiving kernel keep incoming message around for a little while. Even
though this
method reduces the chance that a message will have to be discarded, it introduces the problem of
storing and managing prematurely arriving messages.


Buffers are needed and have to be allocated, freed, and generally managed. A mailbox is a data

structure that conceptually deals with buffer management. Mailboxes are used in indirect message
addressing. They store messages from sending stations until the receiving station collects them.


37)

Tabulate the design issues for process communication primitives and some of the principal choices available.


Item        | Option 1                                   | Option 2                                             | Option 3
Addressing  | Machine number                             | Sparse process address                               | ASCII names looked up via server
Blocking    | Blocking primitives                        | Nonblocking with copy to kernel                      | Nonblocking with interrupt
Buffering   | Unbuffered, discarding unexpected messages | Unbuffered, temporarily keeping unexpected messages  | Mailboxes
Reliability | Unreliable                                 | Request-ACK-Reply-ACK                                | Request-Reply-ACK


38)

What are the packet types used in client/server models? Indicate the code, source, destination, and purpose of these packet types.


39)

With a diagram trace the steps of an RPC mechanism.

1.

The client procedure calls a client stub passing parameters in the normal way.

2.

The client stub marshals the parameters, builds the message, and calls the local OS.

3.

The client's OS sends the message (using the transport layer) to the remote OS.

4.

The server's OS gives the transport-layer message to the server stub.

5.

The server stub demarshals the parameters and calls the desired server routine.

6.

The server routine does work and returns result to the server stub via normal procedures.

7.

The server stub marshals the return values into the message and calls local OS.

8.

The server OS (using the transport layer) sends the message to the client's OS.

9.

The client's OS gives the message to the client stub.

10.

The client stub demarshals the result, and execution returns to the client.
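The ten steps can be sketched in miniature. Here marshalling is simulated with pickle and the "network" is just a shared list, so the stub structure, not the transport, is the point (all names below are illustrative):

```python
import pickle

_network = []                       # stand-in for the OS transport layer

def server_routine(a, b):
    """Step 6: the actual remote procedure, ordinary code."""
    return a + b

def server_dispatch():
    """Steps 4-8 on the server side."""
    params = pickle.loads(_network.pop())     # step 5: stub demarshals
    result = server_routine(*params)          # step 6: do the work
    _network.append(pickle.dumps(result))     # steps 7-8: marshal and reply

def client_stub(*params):
    """Steps 1-3 and 9-10 on the client side; looks like a local call."""
    _network.append(pickle.dumps(params))     # steps 1-3: marshal and "send"
    server_dispatch()                         # the remote side runs
    return pickle.loads(_network.pop())       # steps 9-10: demarshal result
```

The caller simply writes `client_stub(2, 3)` as if invoking a local procedure, which is the whole point of the stub mechanism.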





Diagram: Page 73 Tanenbaum


40)

What is parameter marshalling and unmarshalling? What are some issues involved in it? How can they be solved?

Marshalling is the packing of parameters into a message.

Unmarshalling, or demarshalling, is the unpacking of parameters from a message.


Issues involved are



Parameter passing



Parameter representation


Parameter passing can simply be done by call-by-value (the values are copied into the message sent to the destination) or call-by-reference (achieved by copy/restore; it cannot be implemented otherwise because of the large overhead required to maintain a distributed shared variable). Call-by-value is simple for a remote procedure call.


This issue is solved by using call-by-value wherever possible, as the parameter is simply copied into the message and sent to the remote system, which reduces overhead.


Parameter representation issues arise when different operating systems use different encoding and representation schemes (such as ASCII or EBCDIC).


This issue is solved by providing a standardized format for common objects such as integers, floating-point numbers, characters, and character strings.
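Such a standardized on-the-wire representation can be sketched with fixed big-endian encodings (an illustrative scheme using Python's struct module, not any particular RPC standard):

```python
import struct

def marshal(n: int, x: float, s: str) -> bytes:
    """Encode parameters in a fixed, machine-independent form:
    big-endian ("network order") int32, float64, then a
    length-prefixed UTF-8 string."""
    s_b = s.encode("utf-8")
    return struct.pack("!idI", n, x, len(s_b)) + s_b

def unmarshal(msg: bytes):
    """Recover (int, float, str) from a message built by marshal."""
    n, x, slen = struct.unpack("!idI", msg[:16])   # 4 + 8 + 4 header bytes
    return n, x, msg[16:16 + slen].decode("utf-8")
```

Because both sides agree on byte order and widths, a big-endian and a little-endian machine decode the same message identically, which is exactly what the standardized format buys.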


41)

What is dynamic binding? What is meant by ‘registering’ the server and how is it done?

Dynamic binding is the process of automatically matching up the client and the server at run time, so that the client is matched to the server despite address changes.


The process of making the existence of a server known by passing a message to a program called a
binder is known as registering the server.


To register the server, the server gives the binder its name, version number, a unique identifier (32 bits long), and a handle used to locate it.


When the client calls one of the remote procedures for the first time, say read, the client stub sees that it is not yet bound to a server, so it sends a message to the binder asking to import version 3.1 of the file_server interface. The binder checks to see if one or more servers have already exported an interface with this name and version number. If no server is currently willing to support this interface, the read call fails.
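The register/import exchange can be sketched with an in-memory table. The record fields mirror the description above, but the function names and the handle format are hypothetical:

```python
binder = {}  # (interface name, version) -> record for the exporting server

def register(name, version, unique_id, handle):
    # the server exports its interface by registering with the binder
    binder[(name, version)] = {"id": unique_id, "handle": handle}

def lookup(name, version):
    # the client stub imports the interface on its first call; if no
    # server currently exports it, the call fails (None stands for that)
    return binder.get((name, version))

register("file_server", "3.1", 0x1234ABCD, "node7:9000")
assert lookup("file_server", "3.1")["handle"] == "node7:9000"
assert lookup("file_server", "2.0") is None  # wrong version: import fails
```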



42)

Describe the advantages and disadvantages in using a binder to export and import interfaces.



Advantages



Load balancing



Polling servers and deregistering them and providing fault tolerance



Assists in authentication



Version checking


Disadvantages



Overhead of exporting/importing interfaces (uses up time)



Bottleneck



Replication of binders


updating replicated binders requires a large number of messages.


43)

Discuss the issues involved in implementing RPC protocols.

Choice of RPC protocol

Theoretically, any old protocol will do as long as it gets the bits from the client’s kernel to the server’s kernel. Practically, there are several decisions to be made, and the choice made can have a major impact on the performance of the system. The first decision is between a connection-oriented protocol and a connectionless protocol.


Use of a standard general purpose protocol or a specific designed protocol for RPC

Since there are no standards in this area, using a custom RPC protocol often means designing your
own.

Some distributed systems use IP (or UDP, which is built on IP) as the basic protocol. This choice has several advantages:



The protocol is already designed, saving considerable work.



Many implementations are available, again saving work.



These packets can be sent and received by nearly all UNIX systems.



IP and UDP packets are supported by many existing networks.


Packet and message length

Doing an RPC has a large fixed overhead, independent of the amount of data sent. Thus, reading a 64K file in a single 64K RPC is vastly more efficient than reading it in 64 1K RPCs. It is therefore important that protocols and networks allow large transmissions. Some RPC systems are limited to small sizes.


44)

What problems do we need to consider when implementing acknowledgements in
client/server systems?


There are two primary approaches to implementing acknowledgements:



Stop-and-wait protocol

Blast protocol


Stop-and-wait protocol

For example, a client wants to write a 4K block of data to a file server, but the system cannot handle packets larger than 1K. In this instance, the stop-and-wait protocol can be used. The client sends packet 0 with the first 1K, then waits for an acknowledgement from the server; after receiving this acknowledgement, the client sends the second 1K, waits for another acknowledgement, and so on, until the transfer is complete.


Using this protocol, if a packet is damaged or lost, the client fails to receive an acknowledgement on time, so it retransmits the one bad packet.


Blast protocol

The client sends all the packets as fast as it can. With this method, the server acknowledges the entire message when all packets have been received, not one by one.



Using this protocol, the server is faced with a decision when, say, packet 1 is lost but packet 2 subsequently arrives correctly: it can abandon everything and do nothing, waiting for the client to time out and retransmit the entire message; or, alternatively, it can buffer packet 2 (along with 0), hope that 3 comes in correctly, and then specifically ask the client to resend packet 1.
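The stop-and-wait case above can be sketched as a simulation. The in-memory "link", the drop_first_try parameter, and the packet size are all assumptions made for illustration, not part of any real protocol stack:

```python
def send_stop_and_wait(data, chunk=1024, drop_first_try=None):
    """Send `data` in `chunk`-byte packets; each packet must be
    acknowledged before the next one is sent.  `drop_first_try`
    simulates losing that packet number once, so only that one
    packet is retransmitted."""
    received = []
    attempts = 0
    packets = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    for seq, packet in enumerate(packets):
        while True:
            attempts += 1
            if seq == drop_first_try:
                drop_first_try = None  # lost: no ack arrives, so retransmit
                continue
            received.append(packet)    # delivered: receiver acknowledges
            break
    return b"".join(received), attempts

data = bytes(4096)                     # the 4K block from the example
out, attempts = send_stop_and_wait(data, drop_first_try=2)
assert out == data and attempts == 5   # 4 packets plus 1 retransmission
```

Note the contrast with blast: here a lost packet costs exactly one extra transmission, at the price of a full round trip per packet.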


45)

What is meant by the term ‘critical path’ from client to server?


The critical path is the sequence of instructions that is executed on every RPC.


It starts when the client calls the client stub, proceeds through the trap to the kernel, the message transmission, and the interrupt on the server side, through the server stub, and finally arrives at the server, which does the work and sends the reply back the other way.


46)

Explain the problem of copying in client/server systems.


Copying is an issue that frequently dominates RPC execution times. A message may be copied several times on a single RPC: from the client stub's buffer to a kernel buffer, from the kernel to the network, and the same again on the server side. Each copy consumes memory bandwidth and adds latency.


47)

What are sweep algorithms? What are two ways by which timer management is implemented in client/server systems?

Sweep algorithms are algorithms that operate by periodically making a sequential pass through a table.


48)

What is process migration? What is the motivation for process migration?

Process migration is the transfer of a sufficient amount of the state of a process from one computer
to another for the process to be executed on the target computer.


The following reasons provide motivation for process migration:

Load sharing

Load sharing involves moving processes from heavily loaded to less loaded computers. Load
balancing improves the overall performance of a distributed computer system.


Communication performance

If there are several processes that interact intensively, they can be moved to the same node to reduce the cost (overhead) of communication for the duration of their interactions. If a process will be performing operations on a data file or set of files that are significantly larger than the process itself, it is more feasible to move the process to the location of the data than to move the data to the location of the process.


Availability

Processes that run for extended (long) times may need to move to survive faults for which advance notice can be achieved. The operating system may be able to forecast a machine's or resource's downtime and alert the process to migrate in order to continue execution.


Utilizing special capabilities

If there is a unique hardware or software resource that a process needs during its execution which is not at its current execution node, the process can be migrated to a node which has the required facilities.


49)

Discuss the issues that need to be addressed in designing a process migration facility.

Issues are:



Initiation of migration



What is migrated



Initiation of migration

If the OS initiates the migration process, it needs to monitor the load on the current computer as well as receive load reports from other computers.



If the process initiates migration, it needs to be aware of the facilities available on the distributed network.


What is migrated?

Which portion of the process is migrated depends on the situation; the alternatives are listed below:

a.

Eager (all): transfers the entire address space at the time of migration. This is the cleanest approach, but it takes long if the address space is large and may waste time if the process does not need the whole of its address space.

b.

Precopy: execution continues on the source node while the address space is transferred. Pages modified on the source during the precopy have to be copied again. This reduces the time a process is frozen for the migration.

c.

Eager (dirty): transfers only the pages of the address space that are in main memory and have been modified. Other pages are transferred only on demand. The problem is that the source machine has to stay continuously involved in the life of the process.

d.

Copy on reference: pages are transferred only when they are referenced.

e.

Flushing: all pages are flushed from main memory and placed on the disk, and are then accessed from the disk as needed. This immediately frees main memory.


50)

Explain with a suitable diagram the mechanism of ‘negotiating’ a migration.

The decision to migrate a process must be reached jointly by two starter processes: one on the source node and one on the destination node.


a.

The starter that controls the source system (S) decides that a process P should be migrated to a particular destination (D). It sends a message to D’s starter requesting the transfer.

b.

If D’s starter is prepared to receive the process, it sends back a positive acknowledgement.

c.

S’s starter communicates this decision to S’s kernel via a service call (if the starter is running on S) or to the KernJob of machine S (if the starter runs on another machine). KernJob is used to convert messages from remote processes into service calls.

d.

The kernel on S then offers to send the process to D. The offer includes statistics about P such
as its age, processor and communication loads.

e.

If D is short on resources, it may reject the offer. Otherwise the kernel on D relays the offer to its controlling starter. The relay includes the same information received from S.

f.

The starter’s policy decision is communicated to D by a MigrateIn call.

g.

D reserves the necessary resources to avoid deadlock and flow-control problems and then sends an acceptance to S.


See page 613, Stallings


51)

What is eviction? How is this capability used in Sprite?

Eviction occurs when a target computer throws out (evicts) a process that has been migrated to it. Eviction may occur if the target computer was idle when the process was migrated, but after migration it has to deal with user input. It then evicts the migrated process in order to serve the user.


Sprite uses the method of making the computer on which a process originated its home (home node). If the process gets migrated, it becomes a foreign process on the destination computer. If for any reason the destination computer evicts the foreign process, it is forced to migrate back to its home node.


Here is how the process works in Sprite:

a.

A monitor process at each node keeps track of the current load to determine when to accept new foreign processes. If the monitor detects activity at the computer, it initiates the eviction procedure on each foreign process.

b.

When a process is evicted, it is migrated back to its original home node. It may be migrated to another node if one is available.


c.

When a process is selected to be evicted, it is suspended. Even though this extends the time during which a process is frozen, it makes the workstation better able to respond to user input.

d.

The entire address space of an evicted process is transferred to the home node. The time to evict a process and migrate it back to its home node could be reduced substantially by retrieving the memory image of the evicted process from its previous foreign host on reference, but this procedure would compel the foreign host to dedicate resources and honor requests for an evicted process longer than necessary.


52)

Explain the following terms in the context of distributed global states:



Channel



State



Snapshot



Global state



Distributed snapshot


Channels exist between two processes if they exchange messages. A channel is the path by which messages are transferred. For convenience, channels are thought of as being unidirectional; if two processes exchange messages, two channels are required, one for each direction.


State of a process is the sequence of messages that have been sent and received along channels incident with the process.


Snapshot
records the state of a process. Each snapshot includes a record of all messages sent and
received
on all channels since the last snapshot.


Global State
is the combined state of all processes


Distributed snapshot is a collection of snapshots, one for each process.


53)

Explain using diagrams the difference between consistent and inconsistent global states.


See page 617, Stallings

A global state is consistent if for every process state that records the receipt of the message, the
sending of that message is recorded in the process state of the process that sends the message.


An inconsistent global state arises if a process has recorded the receipt of a message but the corresponding sending process has not recorded that the message has been sent.
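The consistency rule can be stated directly in code. This sketch assumes each recorded process state carries sets of sent and received message identifiers, a representation chosen here purely for illustration:

```python
def is_consistent(states):
    """states: list of dicts, one per process, each holding the sets
    'sent' and 'received' of message identifiers recorded locally."""
    all_sent = set().union(*(s["sent"] for s in states))
    all_received = set().union(*(s["received"] for s in states))
    # every recorded receipt must have a matching recorded send
    return all_received <= all_sent

consistent = [{"sent": {"m1"}, "received": set()},
              {"sent": set(), "received": {"m1"}}]
inconsistent = [{"sent": set(), "received": set()},
                {"sent": set(), "received": {"m1"}}]  # receipt, no send
assert is_consistent(consistent)
assert not is_consistent(inconsistent)
```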


54)

Distinguish between the migration scenarios of self-migration and one in which another process initiates the migration.


When a process migrates itself:


a.

It selects a target machine and sends a remote tasking message. The message carries a
part of the process image and open file information

b.

At the receiving site, the kernel server process forks a child, giving it this information.

c.

The new process pulls over data, environment, arguments or stack information as needed to
complete its operation. Program text is copied if it is dirty or demand paged from the global
file system if it is clean.

d.

The originating process is signaled on the completion of the migration. This process sends a final done message to the new process and destroys itself.


When another process initiates migration:

The same procedure is carried out, with the exception that the process to be migrated must be suspended so that it can be migrated in a non-running state.

a.

The process image and the entire address space are copied to a file.

b.

The original process is then destroyed

c.

The process is then recreated on the target machine from the file.



55)

Describe the Chandy-Lamport Distributed Snapshot algorithm and its uses with an example.


56)

List the requirements of a facility that needs to support mutual exclusion.

Mutual exclusion means granting access to a contended resource to only one process at a time.




Mutual exclusion must be enforced: only one process at a time is allowed into its critical section, among all processes that have critical sections for the same resource or shared object.



A process that halts in its non-critical section must do so without interfering with other processes.



It must not be possible for a process requiring access to a critical section to be delayed indefinitely: no deadlock or starvation.



When no process is in a critical section, any process that requests entry to its critical section must be permitted to enter without delay.



No assumptions are made about relative process speeds or number of processors.



A process remains inside its critical section for a finite time only.


57)

Describe a model for mutual exclusion in distributed process management.

A model for mutual exclusion in distributed process management is the centralized algorithm, which uses one node as a control node for all access to shared objects. If a process requires access to a critical resource, it issues a request to the resource-controlling process. This process will send a request to the control node, which will reply with a permission message when the shared object becomes available. The process that uses the shared resource then sends a release message to the control node.


In this system only the control node makes resource-allocation decisions, and all necessary information is concentrated in the control node (such as the identity and location of all resources and the allocation status of each resource).


The drawback of this model lies in the fact that services are disrupted if something goes wrong with the control node, and there may be bottlenecks at the control node because of the message passing used for communication between the processes, the control node, and the shared resources.


58)

Compare centralized and distributed algorithms for mutual exclusion.

The centralized algorithm uses one node as a control node for all access to shared objects. If a process requires access to a critical resource, it issues a request to the resource-controlling process. This process will send a request to the control node, which will reply with a permission message when the shared object becomes available. The process that uses the shared resource then sends a release message to the control node.


In this system only the control node makes resource-allocation decisions, and all necessary information is concentrated in the control node (such as the identity and location of all resources and the allocation status of each resource).

The drawback of this model lies in the fact that services are disrupted if something goes wrong with the control node, and there may be bottlenecks at the control node because of the message passing used for communication between the processes, the control node, and the shared resources.


A distributed algorithm differs from the centralized algorithm in the following areas:



All nodes have an equal amount of information (there is no controlling node).



Each node has only a partial picture of the total system and must make decisions based on this information.



All nodes bear equal responsibility for the final decision.



Failure at one node does not bring the entire system down.



There is no system-wide common clock with which to regulate the timing of events.



59)

What is time-stamping? How is it useful in the ordering of events in a distributed system? Explain with an example and diagram.


Time-stamping is a method which orders events in a distributed system without using physical clocks.


In a distributed system, processes interact using messages; by extension, events are associated with messages. To avoid ambiguity, events are associated with the sending of messages, not their receipt. Each time a process transmits a message, an event is defined that corresponds to the time t at which the message leaves the process.


Time-stamping is used to order events consisting of the transmission of messages.


See page 624, Stallings
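The mechanism usually used here is Lamport's logical clock: each process increments a counter on every send and stamps the message, and a receiver advances its own counter past the stamp. A minimal sketch (in practice, equal timestamps are further ordered by process number):

```python
class LamportClock:
    def __init__(self):
        self.time = 0

    def send(self):
        # a send event: increment the counter and stamp the message
        self.time += 1
        return self.time

    def receive(self, stamp):
        # advance past the incoming timestamp, then count this event
        self.time = max(self.time, stamp) + 1
        return self.time

a, b = LamportClock(), LamportClock()
t1 = a.send()        # a.time = 1
t2 = b.receive(t1)   # b.time = max(0, 1) + 1 = 2
t3 = b.send()        # b.time = 3
assert t1 < t2 < t3  # causally related events get increasing stamps
```

This gives every message a timestamp consistent with causality, which is exactly what the distributed-queue algorithms below rely on.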


60)

What is a distributed queue? How can distributed mutual exclusion be provided using this concept? Indicate the assumptions made.

A distributed queue is an early approach for providing distributed mutual exclusion. It works as
follows:



A distributed system consists of N nodes, uniquely numbered from 1 to N. Each node contains one process that makes requests for mutually exclusive access to a resource on behalf of other processes; this process also serves as an arbitrator to resolve incoming requests that overlap in time.



Messages sent from one process to another are received in the same order in which they are
sent.



Every message is correctly delivered to its destination in a finite amount of time.



The network is fully connected so that every process can send messages directly to every other
process without requiring an intermediate process to forward the message.


Distributed mutual exclusion is provided by extending the concept of a centralized queue to a distributed system. All sites have a copy of the queue, and time-stamping is used to determine the order in which resource requests are to be granted.


The assumptions made are that messages are received in the same order in which they are sent, and that every message is delivered in a finite amount of time.


61)

Describe the three types of messages used in the algorithm used to provide mutual exclusion using a distributed queue. Indicate how these messages are used by the processes in this algorithm.


(Request, Ti, i): a request for access to a resource is made by Pi.

(Reply, Tj, j): Pj grants access to a resource under its control.

(Release, Tk, k): Pk releases a resource previously allocated to it.


The algorithm is as follows:

1)

When Pi requires access to a resource, it issues a request (Request, Ti, i), time-stamped with the current local clock value. It puts this message in its own array at q[i] and sends the message to all other processes.

2)

When Pj receives (Request, Ti, i), it puts this message in its own array at q[i]. If q[j] does not contain a request message, then Pj transmits (Reply, Tj, j) to Pi. It is this action that implements the rule described previously, which assures that no earlier request message is in transit at the time of a decision.

3)

Pi can access a resource (enter its critical section) when both of these conditions hold:

a)

Pi’s own request message in array q is the earliest request message in the array; because messages are consistently ordered at all sites, this rule permits one and only one process to access the resource at any instant.

b)

All other messages in the local array are later than the message in q[i]; this rule guarantees that Pi has learned about all requests that preceded its current request.


4)

Pi releases a resource by issuing a release (Release, Ti, i), which it puts in its own array and transmits to all other processes.

5)

When Pi receives (Release, Tj, j), it replaces the current contents of q[j] with this
message.

6)

When Pi receives (Reply, Tj, j), it replaces the current contents of q[j] with this message.
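Rule 3 above can be written as a predicate over the local array q. This sketch assumes each slot holds the last message seen from that site as a (kind, timestamp, site) tuple, with timestamp ties broken by site number (a convention the text leaves implicit):

```python
def can_enter(q, i):
    """q[j] holds the last message seen from site j as a tuple
    (kind, timestamp, site).  Site i may enter its critical section
    when its own Request is the earliest message in the whole array."""
    kind, ts, site = q[i]
    if kind != "Request":
        return False
    # comparing (timestamp, site) pairs breaks timestamp ties deterministically
    return all(j == i or (ts, site) < (q[j][1], q[j][2])
               for j in range(len(q)))

q = [("Request", 4, 0), ("Reply", 7, 1), ("Release", 9, 2)]
assert can_enter(q, 0)        # site 0's request precedes all other messages
q[1] = ("Request", 2, 1)      # an earlier request from site 1 now exists
assert not can_enter(q, 0)
```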


62)

Show how an algorithm based on the distributed queue enforces mutual exclusion, is fair, and avoids deadlock and starvation. Indicate a measure of efficiency for this algorithm.

Mutual exclusion: all requests for entry into critical sections are handled according to the order of messages imposed by the time-stamping mechanism. Once a process Pi decides to enter its critical section, there can be no other request message in the system that was transmitted earlier. Mutual exclusion is enforced because Pi will have received a message from all other nodes and determined that its own message is the oldest.

Fair: because the timestamp mechanism is used, all processes have an equal opportunity.

Deadlock free: deadlock cannot occur because the message ordering imposed by the timestamps is consistently maintained at all sites.

Starvation free: once a process is finished with its critical section, it transmits the release message, so that another process can enter its critical section.


The number of messages required by this algorithm is a measure of its efficiency: 3(N - 1) messages are required, (N - 1) each of Request, Reply, and Release.



63)

Explain the refinement proposed by Ricart to Lamport’s algorithm for providing mutual exclusion based on the distributed queue. Compare the efficiencies of the two algorithms.

The refinement reduces the number of messages by eliminating the release messages.


When a process wants to enter its critical section, it sends a time-stamped Request message to all other processes. When it receives a Reply from all other processes, it enters its critical section. When a process receives a Request from another process, it must send a Reply. This Reply is sent immediately if the receiving process does not wish to enter its own critical section. If the receiving process does want to enter its critical section, it compares the timestamp of its own request with that of the incoming request: if the incoming request is older, it sends the Reply; if its own request is older, it defers the Reply and enters its critical section first.


This refinement is more efficient because it uses fewer messages, 2(N - 1): (N - 1) requests and (N - 1) replies.
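The decision made on receiving a request is the heart of this refinement; it can be sketched as a single function. The state names and the tie-breaking by site number are assumptions made for illustration:

```python
def on_request(my_state, my_ts, my_site, req_ts, req_site):
    """Decide how to answer an incoming request.
    my_state is 'idle', 'wanted', or 'held'; timestamp ties are broken
    by comparing site numbers, as in Ricart-Agrawala."""
    if my_state == "idle":
        return "reply now"          # not competing: reply immediately
    if my_state == "held":
        return "defer"              # inside the critical section: defer
    # both want the critical section: the older request wins
    if (req_ts, req_site) < (my_ts, my_site):
        return "reply now"
    return "defer"

assert on_request("idle", None, 1, 5, 2) == "reply now"
assert on_request("wanted", 3, 1, 5, 2) == "defer"      # my request is older
assert on_request("wanted", 8, 1, 5, 2) == "reply now"  # incoming is older
```

Deferred replies are sent when the process leaves its critical section, which is what makes explicit Release messages unnecessary.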


64)

How can mutual exclusion be provided by passing a token among the participating processes? Explain the algorithm and compare its efficiency with the algorithms proposed by Lamport and later by Ricart.

(Similar to token ring networking concept)


A token is an entity that can be owned by only one process at a time (consider the token to be permission). The process holding the token can enter its critical section without asking permission (it already has permission). When the process completes its critical section, it passes the token to another process.


Initially the token is arbitrarily assigned to a process. When a process wishes to enter its critical section, it may do so if it holds the token; if not, it broadcasts a timestamped request message to all other processes and waits until it receives the token. When process Pj finishes using the token, it decides which process should get it next by searching the request array (in the order j+1, j+2, ..., 1, 2, ..., j-1) for the first entry request[k] such that the timestamp of Pk's last request for the token is greater than the value recorded in the token for Pk's last holding of the token; that is, request[k] must be greater than token[k].



This algorithm is more efficient than the distributed-queue algorithms because it uses only N messages (N - 1 to broadcast the request and 1 to transfer the token) when the requesting process does not hold the token.


There is no need for messages if the process already holds the token.
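The search for the next token holder can be sketched directly from the description above; the array representation is an assumption made for illustration:

```python
def next_holder(j, request, token, n):
    """Site j holds the token and has left its critical section.
    request[k] is the timestamp of Pk's last request; token[k] is the
    timestamp recorded in the token for Pk's last holding of it.
    Scan sites in the order j+1, j+2, ..., j-1 and pick the first one
    with an unserviced request (request[k] > token[k])."""
    for step in range(1, n):
        k = (j + step) % n
        if request[k] > token[k]:
            return k
    return j  # no outstanding request: site j simply keeps the token

request = [0, 5, 0, 9]  # P1 has requested since it last held the token
token   = [0, 2, 0, 9]  # the token's record of each site's last holding
assert next_holder(0, request, token, 4) == 1
```

The circular scan order is what makes the scheme starvation-free: every requester is eventually reached.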


65)

Why is deadlock handling difficult in distributed systems? Explain the two types of
distributed deadlock.

Deadlock is difficult to handle in a distributed computing system because no node has accurate and timely knowledge of the current state of the overall system. The root of this problem is that communication takes place by messages, and message passing involves delays which are unpredictable.
le.


Two types of deadlocks are:



Resource allocation deadlocks



Communication deadlocks


Resource allocation deadlock occurs when a process tries to access a resource that is already in use.


Communication deadlock occurs when processes have to wait for messages from other processes.


66)

What are the conditions that lead to deadlock? Explain the phenomenon of ‘phantom’
deadlock. How is it caused?


Resource Deadlock occurs if the following conditions exist:

Mutual exclusion: only one process can use a resource at a time.

Hold and wait: a process may hold allocated resources while awaiting assignment of other resources.

No preemption: no resource can be forced away from a process holding it.

Circular wait: a closed chain of processes exists such that each process holds at least one resource needed by the next process in the chain.


Phantom deadlock occurs when a deadlock is falsely detected in a distributed computing system due to the lack of a global state such as would exist in a centralized system. Phantom deadlock happens as a result of unpredictable delays in message passing, which cause the cycle-detection process to observe an incorrect overall state of the system.


Consider three processes P1, P2, P3 and two resources Ra, Rb. Suppose P3 owns Ra and P1 owns Rb. P3 issues a release of Ra and then requests Rb. If the second message (the request for Rb) reaches the cycle-detecting process before the first (the release of Ra), a deadlock will be detected where none exists.


67)

Describe the ‘wait-die’ and ‘wound-wait’ methods proposed by Rosenkrantz et al. to prevent transactions in a distributed database getting deadlocked over shared data objects.

These two methods utilize timestamps generated when the transactions are first created (transactions rather than processes, because the methods were developed for databases). A transaction always keeps its original timestamp. If two transactions compete for a resource, their timestamps are compared and a determination is made.


The methods are as follows:


Wait-die: if a transaction T1 currently holds a resource R and transaction T2 makes a request for the same resource, the timestamps of the two transactions, e(T1) and e(T2), are compared. If T2 is older, it is blocked until T1 releases R (either by actively issuing a release or by being killed when requesting another resource). If T2 is younger, T2 is restarted using its original timestamp.


Older transactions have greater priority in conflicts. A killed transaction is revived with its original timestamp, so as it ages it gains priority.


if (e(T2) < e(T1))
    halt_T2(‘wait’);
else
    kill_T2(‘die’);


Wound-wait: this method compares the timestamps and immediately grants the request of an older transaction by killing a younger transaction that is using the required resource. In contrast to the wait-die method, a transaction never has to wait for a resource being used by a younger transaction.


if (e(T2) < e(T1))
    kill_T1(‘wound’);
else
    halt_T2(‘wait’);
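The two rules can be put side by side in runnable form; e() is the creation timestamp, with a smaller value meaning an older transaction, matching the pseudocode above:

```python
def wait_die(e_holder, e_requester):
    # the requester waits only if it is older than the holder; else it dies
    return "wait" if e_requester < e_holder else "die"

def wound_wait(e_holder, e_requester):
    # an older requester wounds (kills) the holder; a younger one waits
    return "wound holder" if e_requester < e_holder else "wait"

# T1 (created at time 10) holds R; T2 requests it
assert wait_die(e_holder=10, e_requester=3) == "wait"   # T2 older: waits
assert wait_die(e_holder=10, e_requester=20) == "die"   # T2 younger: restarted
assert wound_wait(e_holder=10, e_requester=3) == "wound holder"
assert wound_wait(e_holder=10, e_requester=20) == "wait"
```

In both schemes the older transaction can never be forced to restart, so with fixed timestamps no transaction can be restarted forever: deadlock and starvation are both prevented.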


68)

Compare and analyze the strengths and weaknesses
of distributed deadlock detection
strategies.

Centralized control


Pro: in this approach only one site is responsible for deadlock detection. All request and release messages are sent to this process as well as to the process that controls the particular resource. Having overall knowledge of the distributed system, the central process can detect deadlock.

Con: there can be a large communications overhead because of messages to and from the controlling node, and any failure of the controlling node has devastating effects on the system.


Hierarchical control


Pro: different sites are organized into a tree structure with one site serving as the root of the tree. At each non-leaf node, information about the resource allocation of all dependent nodes is collected. This allows deadlock detection to be done at lower levels than the root node: a deadlock is detected by the node that is common to all sites whose resources are among the objects in conflict. There is no single point that can cause system-wide failure, and deadlock-resolution activity is limited if most potential deadlocks are relatively localized.


Con: it may be difficult to configure systems so that most potential deadlocks are localized; otherwise there will be more overhead than in other approaches.


Distributed control


All processes cooperate in the deadlock detection function.

Pro:

No single node makes the entire system vulnerable to failure and no node is bombarded with
deadlock detection activity.

Con: this approach may be cumbersome because several sites may detect the same deadlock, and the algorithms are difficult to design because of the timing of the necessary messages.


See page 635


69)

Explain Johnston et al’s distributed algorithm for resource deadlock detection. Give an example.

The algorithm deals with a distributed database system in which each site maintains a portion of the database, and transactions may be initiated from each site. A transaction can have at most one outstanding resource request. If a transaction needs more than one data object, the second data object can be requested only after the first data object has been granted.


Associated with each data object i at a site are two parameters: a unique identifier Di, and the variable Locked_by(Di). This latter variable has the value nil if the data object is not locked by any transaction; otherwise its value is the identifier of the locking transaction.


Associated with each transaction j at a site are four parameters:

A unique identifier Tj.

The variable Held_by(Tj), which is set to nil if transaction Tj is executing or in a Ready state. Otherwise, its value is the transaction that is holding the data object required by transaction Tj.

The variable Wait_for(Tj), which has the value nil if transaction Tj is not waiting for any other transaction. Otherwise, its value is the identifier of the transaction that is at the head of an ordered list of transactions that are blocked.

A queue Request_Q(Tj), which contains all outstanding requests for data objects being held by Tj. Each element in the queue is of the form (Tk, Dk), where Tk is the requesting transaction and Dk is the data object held by Tj.


For example, suppose that transaction T2 is waiting for a data object held by T1, which is, in turn, waiting for a data object held by T0. Then the relevant parameters have the following values:

Transaction   Wait_for   Held_by   Request_Q
T0            nil        nil       T1
T1            T0         T0        T2
T2            T0         T1        nil

This example highlights the difference between Wait_for(Ti) and Held_by(Ti). Neither waiting process can proceed until T0 releases the data object needed by T1, which can then execute and release the data object needed by T2.
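Following the Held_by chain is exactly how a deadlock (a cycle) would be found in such a structure. This sketch builds the example's table and checks for a cycle; the dictionary representation is an assumption made for illustration:

```python
def has_cycle(held_by, start):
    """Follow the Held_by chain from `start`; revisiting a transaction
    means a cycle, i.e. deadlock.  held_by maps Tj -> holder or None."""
    seen = set()
    t = start
    while t is not None:
        if t in seen:
            return True
        seen.add(t)
        t = held_by[t]
    return False

# The example: T2 waits on T1, T1 waits on T0, T0 runs freely
held_by = {"T0": None, "T1": "T0", "T2": "T1"}
assert not has_cycle(held_by, "T2")  # a chain, not a cycle: no deadlock
held_by["T0"] = "T2"                 # now T0 also waits on T2
assert has_cycle(held_by, "T2")
```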


70)

When does a deadlock occur in message communication? What is the dependence set of a process? Define deadlock in a set S of processes.

Deadlock occurs in message communication when eac
h of
a

group of process
es

is waiting for a
message from another member of the group and there are no messages in transit.


A dependence set (DS) of a process consist of all the processes from which P is expecting a
message. In a common case P can proceed i
f any of the expected messages arrives. There are
cases where P can only proceed if all the messages arrive.


Deadlock in a set S of processes can be defined as follows:

a. All the processes in S are halted, waiting for messages

b. S contains the dependence set of all processes in S

c. No messages are in transit between members of S

Any process in S is deadlocked because it can never receive a message that will release it.
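The three conditions can be checked mechanically. The following Python sketch assumes the OR model (a single arriving message is enough to unblock a process); all names are invented for illustration.

```python
# Hypothetical checker for the three deadlock conditions just listed.

def is_deadlocked(S, dependence, in_transit, halted):
    """S: set of process ids; dependence: pid -> its dependence set DS;
    in_transit: set of (sender, receiver) pairs; halted: halted pids."""
    if not S <= halted:                # (a) all processes in S are halted
        return False
    if not all(dependence[p] <= S for p in S):
        return False                   # (b) S contains every DS
    if any(s in S and r in S for (s, r) in in_transit):
        return False                   # (c) no messages in transit within S
    return True

# Three processes waiting on each other in a ring, nothing in transit:
S = {"P1", "P2", "P3"}
deps = {"P1": {"P2"}, "P2": {"P3"}, "P3": {"P1"}}
print(is_deadlocked(S, deps, in_transit=set(), halted=S))  # True
```

A single message in transit between two members of S, or one member still running, falsifies the respective condition and the check returns False.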


71)

Illustrate by a process graph the difference between a message deadlock and a resource deadlock. What is a 'knot'?

In a resource deadlock, a cycle in the process graph is sufficient: each process in the cycle waits for a resource held by the next. In message communication under the OR model, a cycle is necessary but not sufficient; deadlock corresponds to a knot. A knot is a set of vertices such that every vertex reachable from any vertex in the set is itself in the set, so no path leads out of it.
72)

Distinguish between direct and indirect store-and-forward deadlock. Explain how a structured buffer pool is used.

Direct store-and-forward deadlock can occur when a common buffer pool is used, with packets assigned to buffers on demand. Even with just two nodes, A and B, a situation can arise in which node A is filled with packets destined for node B and node B is filled with packets destined for node A. Neither node can forward packets, because neither can accept any more.

A solution is to use separate fixed-size buffers for each link, or not to allow any one link to acquire all the buffer space.



Indirect store-and-forward deadlock occurs when the queue to the adjacent node in one direction is full with packets destined for the next node beyond.


A structured buffer pool organizes buffers in a hierarchical fashion. The pool of memory at level 0 is unrestricted; any incoming packet can be stored there. From level 1 to level N, where N is the maximum number of hops (routers), buffers are reserved as follows:

a. Buffers at level k are reserved for packets that have already traveled k hops

Buffers fill up progressively from level 0 to N. If all buffers up to level k are filled, arriving packets that have covered k or fewer hops are discarded. This implementation eliminates both direct and indirect store-and-forward deadlocks.
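The admission rule just described can be sketched as follows. This is a hypothetical Python model (the class and method names are invented): a packet that has traveled h hops may occupy a buffer at any level from 0 up to h, and is discarded only when every one of those levels is full.

```python
# Hypothetical model of the structured buffer pool. Levels fill from
# the bottom up; a packet with more hops behind it is entitled to
# higher levels, so packets nearer their destination always find room.

class StructuredBufferPool:
    def __init__(self, max_hops, slots_per_level):
        # One buffer count per level 0..N, where N = max_hops.
        self.free = [slots_per_level] * (max_hops + 1)

    def admit(self, hops_traveled):
        top = min(hops_traveled, len(self.free) - 1)
        for level in range(top + 1):       # lowest eligible level first
            if self.free[level] > 0:
                self.free[level] -= 1
                return True
        return False                       # discard: all eligible levels full

pool = StructuredBufferPool(max_hops=2, slots_per_level=1)
print(pool.admit(0), pool.admit(0))  # True False (level 0 full, 0-hop packet dropped)
```

Because a packet with k hops behind it can never be crowded out by packets with fewer hops, buffer occupancy cannot form the circular wait that causes either form of store-and-forward deadlock.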


73)

With a diagram describe the five-state process model.



New: A process that has just been created but not yet admitted to the pool of executable processes by the OS.

Ready: Processes that are prepared to execute when given the opportunity.

Blocked: A process that cannot execute until some event occurs, such as the completion of an I/O operation.

Running: The process that is currently being executed. In a computer with a single processor, only one process can be in this state at a time.

Exit: A process that has been released from the pool of executable processes by the OS, either because it halted or because it aborted for some reason.
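The legal transitions among these states can be captured in a small table. This Python sketch is illustrative only; the event labels follow the standard form of the diagram, and the Blocked-to-Ready label ("Event Occurs") is an assumption from the usual version of the model.

```python
# Illustrative transition table for the five-state process model.
TRANSITIONS = {
    ("New", "Admit"): "Ready",
    ("Ready", "Dispatch"): "Running",
    ("Running", "Timeout"): "Ready",
    ("Running", "Event Wait"): "Blocked",
    ("Blocked", "Event Occurs"): "Ready",   # assumed label, not in the text
    ("Running", "Release"): "Exit",
}

def move(state, event):
    """Return the next state, or raise if the transition is not legal."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"{event!r} is not permitted in state {state!r}")

print(move("Ready", "Dispatch"))      # Running
print(move("Running", "Event Wait"))  # Blocked
```

Encoding the model this way makes the constraints explicit: for example, only a Running process can move to Blocked, which is exactly the transition question 74 below asks about.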


74)

Explain when a process transitions from ‘running’ state to ‘blocked’ state.

A state change from running to blocked occurs when the process requires something that is not currently available: it may request a service of the OS that cannot be provided immediately, a file that is unavailable, or an I/O device that is unavailable.


75)

When is a process suspended? How is swapping useful? What are the typical elements of a process image?

A process is said to be suspended when it is removed from main memory and placed in secondary memory to make room for a new process, which keeps the CPU from sitting idle.

A process is suspended for the following reasons:

	Swapping

	Other OS reasons

	Interactive user requests

	Timing

	Parent process request

Swapping involves moving a part or all of a process from main memory to disk.


Swapping is itself an I/O operation.

[Diagram for Q73: five-state process model. States: New, Ready, Running, Blocked, Exit. Transitions: Admit, Dispatch, Timeout, Event Wait, Release.]



Swapping allows execution of ready processes while others are blocked, instead of having the CPU idle while waiting for processes in memory to move from blocked to ready state.

Elements of a process image are:

	User Data

	User Program

	System Stack

	Process Control Block



76)

List the elements of a process control block. What is the role of a process control block?

Elements of a process control block are:

	Process identification

	Processor state information

	Process control information

The PCB is the most important data structure in an OS. Each PCB contains all the information about a process that is needed by the OS. The blocks are read and modified by every module in the OS, including those involved with scheduling, resource allocation, interrupt processing, and performance monitoring and analysis. One can say that the set of PCBs defines the state of the OS. All routines in the OS go through a handler routine whose only job is to protect the PCBs and which is the sole arbiter for reading and writing these blocks.
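As a rough illustration, the three element classes can be grouped into a single structure. This Python sketch is purely hypothetical: the field names are invented for the example, and no real OS lays out its PCB this way.

```python
from dataclasses import dataclass, field

@dataclass
class ProcessControlBlock:
    # Process identification
    pid: int
    parent_pid: int
    user_id: int
    # Processor state information (saved when the process loses the CPU)
    program_counter: int = 0
    registers: dict = field(default_factory=dict)
    # Process control information (used by scheduling and resource modules)
    state: str = "New"
    priority: int = 0
    open_files: list = field(default_factory=list)

pcb = ProcessControlBlock(pid=42, parent_pid=1, user_id=1000)
print(pcb.state)  # New
```

Grouping all three element classes in one record is what lets a dispatcher suspend a process and later resume it exactly where it left off: everything the OS needs is in the block.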