Lecture One


Dec 14, 2013



Chapter 1 Introduction

1.1 Essentials of Operating Systems

1.1.1 What is an operating system?

System software that aims to:

1) manage computer resources.

2) enhance the computer performance.

3) control the execution of programs.

4) provide a more friendly user interface.


1) An operating system is also simply called an OS.

2) Difference between system software and application software:

1.1.2 Resources

Includes hardware and software.

Hardware: CPU, memory and storage, I/O devices.

Software: programs, data.

Notes: files are data.

1.1.3 Overview of a computer system

A hierarchical overview of a computer system is shown in Fig. 1.

4 layers; each layer has its own functions and an interface to the upper layers.

Where is the operating system? It sits just above the hardware, to hide the complexity of the hardware.

1.2 Formation and development of Operating Systems

1.2.1 Formation

1st generation computer period:

Manual operation: the programmer is the operator.

machine language or assembly language.

sequential execution.

Drawbacks: simple control, error-prone, slow processing.

2nd generation computer period:

manager program: the programmer is not the operator.

high-level languages such as Fortran and ALGOL.

management of computer resources.

concurrent execution.

3rd generation computer period:

extending the functionality of the manager program.

emergence of operating systems.

1.2.2 Development

General evolution:

mainframe to microcomputer.

single-user to multi-user.

batch processing to time-sharing.

Development of OS:

single-user batch processing =>

multi-user batch processing =>

multi-user time-sharing =>

real-time =>

distributed or network.

1.2.3 Industry of operating systems

Microsoft: 80X86; MS-DOS 7.0, Windows 1.0 to Windows NT. The most widely used.

Unix: for workstations; from Berkeley and Bell Laboratories; the most popular in the academic field.

IBM: OS/2: an academic success and a commercial failure.

Apple: Macintosh, System 7. Seldom heard of here but very common in the USA.

Future: OS for networks and distributed systems.

1.3 Functions of Operating Systems

Look at the table of contents to see the management of processor, memory, file, device and job.

1.4 Taxonomy of Operating Systems

1.4.1 Single-user operating system

at the early stage, and again later on due to the progress in microcomputer technology.

1.4.2 Batch-processing operating system

to enhance the effectiveness.

1.4.3 Real-time operating system

1.4.4 Time-sharing operating system

to get a quick response.

1.4.5 Network operating system

1.4.6 Distributed operating system

1.5 Why Study Operating Systems

One of the core courses of computer science.

A course for the Graduate Entrance Examination.

A course heavily required for the subject GRE.

A course mandatorily selected by second-major students.

Wording errors:

assemble = assembler ?


manage program?

memory and storage?

Lecture Two (2 hrs)

Chapter 2 Management of Processor

2.1 Interrupts

Interruption in our daily life:

interrupt, priority, shielding, interrupt handler, interrupt service routine.

How does the computer deal with the interrupt? By

hardware (2.1.1 interrupt device) and

software (2.1.2 interrupt handler).

2.1.1 Interrupt Device

In this section we see what the hardware does, and how.

3 main things: What is the interrupt device? What are the jobs of the interrupt device? How does the interrupt device do its jobs?

Functions of the Interrupt Device

1) find the interrupt source.

2) protect the site information.

3) initiate the interrupt handler.

To know how the computer 2) protects the site information and 3) initiates the interrupt handler, we must know the PSW; see Fig. 2.

PSW (program status word)

It depicts the program status.

OS uses it to control the execution of program.

1. hiding bits for system interrupts (i.e., hardware interrupts).

2. memory-protecting bits.

3. CMWP: Control, Machine, Waiting and suPervision.

Control bit is used for the software compatibility such as in the case from Windows 3.1 to
Windows 95.

Machine bit: machine error checking.

4. interrupt code.

5. instruction length.

6. condition code.

to indicate the execution result of various kinds of instructions.

7. hiding bits for program interrupts.

Compare it with the first field.

Look at Fig. 2 to see how the interrupt device performs functions 2 and 3.


A good question from a student: what is the difference between a multi-user system and an interactive time-sharing system? See p. 36.

Lecture Three (3 hrs)

Interrupt Source

What is the interrupt source?

They are classified into 1) forced interrupts (= unexpected), and 2) voluntary interrupts.

Forced interrupts include 1. processor interrupts (from hardware closely related to the processor), 2. program interrupts, 3. external interrupts (from the external hardware), 4. I/O interrupts.

Voluntary interrupts:

initiated by a special instruction in a program.

That is Trap in UNIX and INT in DOS.

It is called voluntary because it is arranged intentionally by the user.

It is used in order to call the system calls: programs already written by the OS to finish specified functions.

So it is also called a system-call instruction.

Interrupt Word

a word stored in the interrupt register in order to indicate the interrupt source.

Notes: 1) only for the forced interrupts. (Why? Left as an exercise; for the voluntary interrupts, see the parameters of a system-call instruction.) 2) one such register for each kind of interrupt.

Now it may be a good point to discuss the first job of the interrupt device: the OS finds the interrupt source by using the interrupt registers.
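As a hypothetical sketch (the register layout, bit order, and priority rule are illustrative assumptions, not the textbook's), finding the interrupt source from an interrupt register might look like:

```python
# Hypothetical sketch: finding the interrupt source from an interrupt word.
# Bit i of the interrupt word is set when source i is pending; bit i of the
# hiding (mask) word prohibits the response to source i. We assume a lower
# bit index means a higher priority.

def find_interrupt_source(interrupt_word, hiding_word):
    """Return the highest-priority pending, non-hidden source, or None."""
    pending = interrupt_word & ~hiding_word  # drop prohibited sources
    if pending == 0:
        return None
    # the lowest set bit is the highest-priority source
    return (pending & -pending).bit_length() - 1

# Sources 1 and 3 pending; source 1 is hidden, so source 3 is chosen.
assert find_interrupt_source(0b1010, 0b0010) == 3
# Everything hidden: no response.
assert find_interrupt_source(0b1010, 0b1010) is None
```

This also illustrates interrupt hiding: a set hiding bit simply removes the source from consideration without clearing its pending bit.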

Let us review the whole work process of the interrupt device.

2.1.2 Interrupt Handling

In this section we’ll see what the software does to process an interrupt, and how.

2.1.2.1 Interrupt handler

1) protection: save the site information.

2) recognition: look at the figure in the section above; the handler analyzes the old PSW to know the reason.

3) handling

4) restoration

Whether or not to draw a figure here to reflect the whole procedure of the interrupt handler?

Processor Interrupt

Happens due to hardware faults close to the processor, and so called processor interrupts.

1) power fault 2) memory fault.

Program Interrupt

Voluntary Interrupt

See Section

Format of a system-call instruction: INT/TRAP.


The parameter, equivalent to the interrupt word, will be stored in the PSW. The remaining work is similar to before.

External Interrupt

1. clock interrupt: absolute clock and periodic clock.

2. interrupt from the central controller (or keyboard in a microcomputer).

Priority of Interrupt

What: to indicate the importance of an interrupt.

Why: more often than not, multiple interrupts occur simultaneously. At this moment, the interrupt device responds to them according to their importance, the so-called interrupt priority.

1) the response order of Interrupt device when multiple interrupts occur simultaneously.

2) whether or not to allow the embedded (nested) interrupt.

Interrupt Hiding

to allow or prohibit the response to an interrupt by setting the corresponding hiding bit in the PSW.

Notes: 1) Sometimes, the condition of an interrupt will be reserved in the interrupt register even when the interrupt is prohibited. This is for the future response, when the interrupt is allowed again. 2) Voluntary interrupts cannot be prohibited. Why? (Leave it as an exercise.)

Multiple Interrupts

1) More than one interrupt occur simultaneously.

2) When one interrupt is being processed, another interrupt occurs.

1) If multiple interrupts of the same sort occur, they are processed by the same interrupt handler according to the predetermined order.

2) If multiple interrupts of different sorts occur, how do we handle them? We further divide into 3 cases:

Case 1: Hiding: to prohibit some interrupts by setting the hiding bits.

Case 2: Embedding: when one interrupt is being processed by the interrupt handler, allow another interrupt to happen.

Case 3: Program interrupt: when an interrupt handler is executing, program interrupts can still occur. (Why? As an exercise.)

Wording errors:

assemble = assembler? “assemble” (or “assembling language”) should be “assembly language”.

batch processing? should be batch system.

manage program? should be manager program.

memory and storage?

Lecture Four

2.2 Multiprogramming

2.2.1 Concept

We explain what multiprogramming is and why it is needed, by case study.

Examples at pp. 33

We’ll give the gymnasium example from our daily life to facilitate the understanding of multiprogramming. In a gymnasium, there are parallel bars, .....


1) For one specific program, its overall time is prolonged.

2) See the difference between multiprogramming and time-sharing.

2.2.2 Implementation of Multiprogramming

Three problems:

Protection of memory and floating of programs: see Chapter 3 for more details.

Processor scheduling: see Section 2.4 for more details.

Resource scheduling: see the following chapters.

Of course, the processor is also one of the main resources, and maybe the most important one.

2.3 Process

2.3.1 Concept


An execution of a program on a data set.


1: process = program + data

2: program : process = 1 : n

4: concurrency of processes.

2.3.2 Notation

PCB: Process Control Block.

Draw a Figure to show the brief data structure of PCB as in the textbook by Tan Yaoming.

2.3.3 Creating and Revoking

When to create or revoke: 1) at the very beginning ; 2) parent process forks a child process.

What to do: 1) assign a program block and a data block in memory; 2) assign a PCB.

2.3.4 States and Transitions

A very important section: to know the three basic states of a process, and as many events as possible that trigger the transitions among the three states.
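A minimal sketch of the three basic states and the legal transitions among them (the ready/running/blocked names and transition set are the usual model, assumed here rather than quoted from the textbook):

```python
# The three basic process states and the legal transitions among them.
LEGAL = {
    ("ready", "running"),    # dispatched by the scheduler
    ("running", "ready"),    # time slice used up / preempted
    ("running", "blocked"),  # waits for an event, e.g. I/O completion
    ("blocked", "ready"),    # the awaited event occurs
}

def transition(state, target):
    """Move a process to a new state, rejecting illegal jumps
    (e.g. blocked -> running must go through ready)."""
    if (state, target) not in LEGAL:
        raise ValueError(f"illegal transition {state} -> {target}")
    return target

s = "ready"
s = transition(s, "running")
s = transition(s, "blocked")
s = transition(s, "ready")
assert s == "ready"
```

Note that a blocked process cannot jump directly to running: the awaited event only makes it ready, and the scheduler must dispatch it.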

2.4 Processor Scheduling

2.4.1 Functions

2.4.2 Queuing Mechanism

linked queue and doubly linked queue: the trade-off between overhead and flexibility.

2.4.3 Scheduling Policies

1) Priority-first: note there are many ways to decide the priority for a program.

2) Round-robin or time-slicing.

3) Multi-level queues.
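A small illustrative simulation of the round-robin policy (the job format and quantum are assumptions of this sketch, not the textbook's notation):

```python
from collections import deque

# Round-robin scheduling with a fixed time slice (quantum).
# Each job is (name, remaining_time); the completion order is returned.

def round_robin(jobs, quantum):
    queue = deque(jobs)
    finished = []
    while queue:
        name, remaining = queue.popleft()
        if remaining <= quantum:
            finished.append(name)  # job completes within this slice
        else:
            # slice used up: go to the back of the queue with less work left
            queue.append((name, remaining - quantum))
    return finished

assert round_robin([("A", 3), ("B", 1), ("C", 5)], 2) == ["B", "A", "C"]
```

The short job B finishes first even though A arrived earlier, which is exactly the responsiveness time-slicing buys.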

Lecture Five

Chapter 3 Memory Management

Memory is classified into 1) primary/main and 2)secondary.

The former is called memory and the latter is called storage.

This chapter is devoted to primary memory management.

The storage management can be found in Chapter 5.

3.1 Functions of MM

1) Address transformation/relocation: a mapping from logical address to physical address. So it is important to know the two concepts, logical address and physical address. See also Section 3.2.1.

2) Allocation and reclamation: to be finished by maintaining and operating a table. We’ll
discuss many techniques in this chapter for Allocation and reclamation.

3) Sharing: Why?

4) Protection: to avoid the interference among different programs by restricting the access rights. The two main rights, write and read, are restricted differently for 3 different areas: private, shared, and others’.

5) Expanding or extending: to allow the size of a program to be greater than the size of the whole memory, by using virtual memory.

3.2 Continuous segments

3.2.1 Relocation

Definition: already given in Section 3.1.

Two methods to relocate: 1) static: finished while loading the program; the program cannot be floated,

and 2) dynamic: finished while executing the program; floatable.

Give a detailed example to explain the difference (pros and cons).

3.2.2 Single Continuous Segment

Principle: one segment, one program.

Relocation: static or dynamic

Protection: by fence register.

Drawbacks: low efficiency.

3.2.3 Multiple Fixed Continuous Segments

Principle: multiple segments, multiple programs, fixed, one allocation table.

Example: Fig. 3-3 and the next figure.

Relocation: only static method because programs cannot be moved.

Drawbacks: broken segments.

Notes: “How many segments and how large “ are predetermined by knowing the properties
of future programs.

3.2.4 Multiple Floatable Continuous Segments

Principle (how to allocate and reclaim): multiple segments, multiple programs, floatable, two tables for the used segments and the free segments.

Example: Fig. 3-6 and the next figure.

Allocation Algorithms:

1) First-fit: waste of the broken segments, so:

2) Best-fit: the broken segments are too small to use, so:

3) Worst-fit: the leftover broken segments are still large enough to be used by another program.

Relocation and Protection

Dynamic relocation, so as to allow the program to float: a pair of registers, the base register and the length register, is used for relocation and protection.

Sharing

To be done by setting up two or more pairs of base & length registers.

Floating Technology

When to float: 1) |all free segments| >= |a program|; 2) one program needs self-expansion.

Floating can lead to deadlock.

To avoid the deadlock, we need a secondary storage with size (1/2 + 1/3 + ... + 1/n)V.
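The three allocation algorithms of Section 3.2.4 (first-fit, best-fit, worst-fit) can be sketched over a free-segment table; the table layout here is an assumption of the sketch, not the textbook's format:

```python
# Allocation over a free-segment table. Each free segment is (start, size),
# kept in address order; returns the chosen start address or None.

def allocate(free, need, policy):
    candidates = [(start, size) for start, size in free if size >= need]
    if not candidates:
        return None
    if policy == "first":
        return candidates[0][0]                        # first large enough
    if policy == "best":
        return min(candidates, key=lambda s: s[1])[0]  # tightest fit
    if policy == "worst":
        return max(candidates, key=lambda s: s[1])[0]  # largest segment
    raise ValueError(policy)

free = [(0, 30), (40, 10), (60, 50)]
assert allocate(free, 10, "first") == 0   # lowest address that fits
assert allocate(free, 10, "best") == 40   # exact fit, no leftover
assert allocate(free, 10, "worst") == 60  # leftover stays usable
```

The three asserts mirror the section's trade-off: best-fit minimizes leftover, worst-fit keeps the leftover large enough to reuse.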

Lecture Six (Sept. 21)

3.3 Virtual Memory

Section 3.2 introduced three continuous memory managements.

Section 3.3 will introduce several discontinuous memory managements.

3.3.1 Introduction

What: logical memory whose size is determined by address format.

to allow |program| > |physical memory|.

method: to load part of program in memory.

reason: 1. temporal locality.

2. spatial locality.

3. mutual exclusion.

3.3.2 Paged Virtual Memory

We first introduce Paged Memory as an independent management method which does not support virtual memory, then introduce Paged Virtual Memory, which can support virtual memory management.

Paged Memory

1. Principle:
1. | page | = | block |: two important concepts.

2. address format: at page 67. |page number| determines how many pages; |unit number| determines the page size.

3.whole loading of the program.

4. page table: a mapping of page number onto block number.

5. dynamic relocation.

6. associative memory and fast table.

The fast table is one part of the page table, stored in associative memory in order to speed up the look-up process of the page table. The fast table and the page table (or associative memory and main memory) are a 2-level hierarchical structure, which has the same swap-in/out problem as in 2-level memory and storage.

2. Structure of paged virtual memory.

3. Allocation and reclamation

Done with the help of a bitmap.

4. Multiprogramming

. Each program has one page table.

. One page register.

. Fig. 3

5. Sharing and protection

Sharing: 1) data and 2) instructions. 1) is easy and 2) is difficult.

Protection: 1) a special tag in the page table. 2) protection key: refer to the PSW at pp. 21-22.

Paged Virtual Memory

1. Principle:
1. Partial loading of the program in memory.

2. To modify the page table by adding a tag field.

3. Page interrupt.

4. Others remain the same as paged memory.

5. Fig. 3-15 at page 75.
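The principle above can be sketched as dynamic relocation through a tagged page table (the page size, table layout, and fault signal are illustrative assumptions of this sketch):

```python
PAGE_SIZE = 1024  # an assumed page/block size for illustration

# page_table maps page number -> (present_tag, block_number); the tag field
# added for paged virtual memory marks whether the page is loaded. A page
# interrupt (page fault) is raised when the page is absent.

def translate(logical_address, page_table):
    page, unit = divmod(logical_address, PAGE_SIZE)
    present, block = page_table[page]
    if not present:
        raise LookupError(f"page fault on page {page}")
    return block * PAGE_SIZE + unit  # physical address

table = {0: (True, 5), 1: (False, None), 2: (True, 0)}
assert translate(100, table) == 5 * 1024 + 100   # page 0 -> block 5
assert translate(2 * 1024 + 7, table) == 7       # page 2 -> block 0
try:
    translate(1024, table)   # page 1 is not loaded
except LookupError:
    pass                     # the OS would now schedule the page in
```

Everything except the tag check is identical to plain paged memory, which is exactly point 4 of the principle.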

2. Page scheduling

. the “jolt” (thrashing) phenomenon.

. hit ratio and miss ratio. The miss ratio means the probability of page interrupts.

Factors to affect the miss ratio: 1) number of blocks.

2) page size.

3) programming style.

4) page replacement algorithms .

. Four replacement algorithms:

1. random.

2. FIFO.

3. LRU.

4. LFU.

For each replacement algorithm, we should know its definition and how to implement it.
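For FIFO and LRU, a small simulation makes the definitions concrete (the reference-string format and frame count are assumptions of this sketch):

```python
# FIFO and LRU page replacement over a fixed number of blocks (frames).
# Returns the number of page interrupts (misses) for a reference string.

def count_faults(refs, blocks, policy):
    frames = []  # loaded pages: oldest (FIFO) / least recently used (LRU) first
    faults = 0
    for page in refs:
        if page in frames:
            if policy == "LRU":
                frames.remove(page)
                frames.append(page)  # a hit refreshes recency under LRU
            continue                 # under FIFO a hit changes nothing
        faults += 1
        if len(frames) == blocks:
            frames.pop(0)            # evict the front page
        frames.append(page)
    return faults

refs = [1, 2, 3, 1, 4, 1, 2]
assert count_faults(refs, 3, "FIFO") == 6
assert count_faults(refs, 3, "LRU") == 5
```

The same reference string shows the point of LRU: by keeping the recently reused page 1 loaded, it saves a fault that FIFO pays.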

Lecture Seven (Sept. 23)

3.3.3 Segmentation and Paged Segmentation

Segmentation = segmented memory

Principle: 1) one program consists of multiple segments.

2) Address format: Fig.3

3) Segment table: Fig. 3-18, to finish the relocation.

4) One segment-table register: to remember where the segment table is.

To summarize, explain Fig. 3-19.

Segmented Virtual Memory

Like the transition from paged memory to paged virtual memory, segmentation can also support virtual memory by a proper modification of the segment table.

Principle: 1) segment table at page 85: see how the segment table is enriched.

2) Relocation: if in memory, the same as before; if out of memory, a segment fault.

3) Segment fault -> interrupt.

4) Segment extending.

To summarize, explain Fig. 3-20.

Dynamic Linking

Skimmed.

Paged Segmentation

The most complicated memory management among all the MMs discussed.

Segmentation can support virtual memory, i.e., the program size can be greater than the
memory size.

However, one segment cannot be larger than the memory size.

Hence paged segmentation appeared.

Principle: 1) address format at page 90.

2) one segment table and many page tables. They constitute a 2-level hierarchical table.

3) fast table: 3 memory accesses to fetch one datum, and hence a fast table is needed.

4) relocation: Fig 3

5) interrupts due to: 1. segment fault, 2. page fault, 3. dynamic link, and 4. limit fault.

Wording errors:

1) “page and block” should be “page and page frame”.

2) “Single continuous segment, multiple fixed continuous segments, multiple floatable continuous segments” should be “monoprogramming with one partition, multiprogramming with fixed partitions, multiprogramming with variable partitions”.

3) “Base/length” register should be “base/limit” register.

4) “Process statuses” should be “process states”.

5) “Continuous” should be “contiguous”. Continuous is temporal while contiguous is spatial.

Lecture Eight (Sept. 28)

In this lecture, we’ll review Chapter 3 and possibly Chapter 2 if time allows. We will have some periodical reviews and thus will not have a mid-term or final review. I think the following table can greatly help students obtain an overview of this chapter.











Way 1: single continuous segment = monoprogramming with one partition.

Way 2: multiple fixed continuous segments = multiprogramming with fixed partitions.

Way 3: multiple floatable continuous segments = multiprogramming with variable partitions.

Way 4: paging.

Way 5: virtual paging.

Way 6: segmentation.

Way 7: virtual segmentation.

Way 8: paged segmentation.

Lecture Series (starting from November 8, 1998)

Chapter 7 Concurrent Process

Chapter 6 is the shortest and simplest, and also the least important chapter. Chapter 7 is, however, the longest and most difficult, and also the most important.


7.1 Concurrency

7.1.1 Sequence and concurrency

Difference between sequence and concurrency: the former is for one process; the latter is for
multiple processes.

Two processes: may be related or unrelated.

7.1.2 Time-related errors

For two related processes: be careful with their interleaved execution, otherwise time-related errors would occur.


7.2 Mutual exclusion of processes

7.2.1 Mutual exclusion and critical section

What is the reason for the time-related errors? How to remove such a reason, so as to avoid the time-related errors?

Reason: critical sections and their interleaving.


Solution: to keep the critical sections mutually exclusive from each other.

Three conditions for the management of critical sections: page 213.

7.2.2 Management of critical sections

What measures can we take to provide the solution, mutual exclusion of critical sections?

Software measures/methods: 4 tries/examples, especially the last method, i.e., Peterson's algorithm.

Peterson's Algorithm:

1) a combination of the 2nd and 3rd tries.

2) How to overcome the strict alternation of the 3rd try?

3) How to overcome the simultaneous waiting of the 2nd try?

4) How to overcome the simultaneous entering of the 1st try?
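A runnable sketch of Peterson's algorithm for two processes (threads stand in for processes, the iteration counts are arbitrary, and the shortened switch interval only keeps the busy waits brief):

```python
import sys
import threading

sys.setswitchinterval(1e-4)  # frequent thread switches keep busy waits short

# flag[i] says process i wants to enter; turn yields to the other on a tie.
flag = [False, False]
turn = 0
counter = 0  # the shared variable the critical section protects

def enter(i):
    global turn
    j = 1 - i
    flag[i] = True   # declare intent
    turn = j         # give the other process priority on a tie
    while flag[j] and turn == j:
        pass         # busy wait: the drawback noted for all these methods

def leave(i):
    flag[i] = False

def worker(i):
    global counter
    for _ in range(2000):
        enter(i)
        counter += 1  # critical section
        leave(i)

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert counter == 4000  # no increment was lost: mutual exclusion held
```

Setting `turn = j` after raising one's own flag is what defeats both the strict alternation of try 3 and the simultaneous waiting of try 2.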


Hardware measures:



However, all the methods above have the busy-waiting problem, and thus we need the PV primitives.

7.2.3 PV primitives
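A minimal sketch of P and V on a counting semaphore. Building it on a condition variable is an implementation assumption of this sketch; a real kernel would block the process and queue its PCB instead of spinning or waiting in user space:

```python
import threading

# P decrements s, waiting while the count is exhausted; V increments s and
# wakes one waiter. No busy waiting: a blocked caller sleeps on the condition.

class Semaphore:
    def __init__(self, s):
        self.s = s
        self.cond = threading.Condition()

    def P(self):
        with self.cond:
            while self.s <= 0:   # nothing left: the caller must wait
                self.cond.wait()
            self.s -= 1

    def V(self):
        with self.cond:
            self.s += 1
            self.cond.notify()   # wake one waiting process

mutex = Semaphore(1)  # s = 1 gives mutual exclusion
entered = []

def worker(n):
    mutex.P()
    entered.append(n)  # critical section
    mutex.V()

ts = [threading.Thread(target=worker, args=(i,)) for i in range(5)]
for t in ts:
    t.start()
for t in ts:
    t.join()
assert sorted(entered) == [0, 1, 2, 3, 4]
```

With s initialized to 1 the semaphore is a lock; larger initial values count available resources.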

7.3 Synchrony of process

7.3.1 Synchronous mechanism

7.3.2 Using PV primitives to keep synchrony
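The classic use of PV primitives for synchrony is the bounded buffer; here Python's built-in semaphores play P (acquire) and V (release), and the buffer size is an arbitrary assumption:

```python
import threading

BUFFER_SIZE = 4
buffer = []
empty = threading.Semaphore(BUFFER_SIZE)  # counts free slots
full = threading.Semaphore(0)             # counts filled slots
mutex = threading.Semaphore(1)            # the buffer is a critical section

consumed = []

def producer():
    for item in range(10):
        empty.acquire()   # P(empty): wait for a free slot
        mutex.acquire()
        buffer.append(item)
        mutex.release()
        full.release()    # V(full): one more filled slot

def consumer():
    for _ in range(10):
        full.acquire()    # P(full): wait for an item
        mutex.acquire()
        consumed.append(buffer.pop(0))
        mutex.release()
        empty.release()   # V(empty): one more free slot

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start(); p.join(); c.join()
assert consumed == list(range(10))
```

Two semaphores keep the counts in synchrony while a third keeps the buffer operations mutually exclusive; swapping the order of the two P operations in either role is a classic route to deadlock.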

7.3.3 Monitor


Why do we need the monitor?

1) central and thus better management of concurrency control. 2) a more sophisticated mechanism for concurrency control; the monitor gives us a programmable tool. See page 229.

A Typical example:

How to implement the monitor: two levels of exclusion: 1) only one process can enter the monitor at any time. 2) Shared resources are managed by the monitor so that processes operate on them exclusively in their critical sections. From another perspective, there are three levels of blocking and unblocking: 1) check and release, 2) wait and signal, and 3) W(s) and R(s). Wait and signal are like the PV primitives in the sense that they are all based on W(s) and R(s), but heeding the subtle difference between them is very important for programming with them. The executor of wait(s) must W(s), while the executor of P(s) does not have to W(s). signal(s) is sometimes a null operation, while V(s) is always s := s + 1.

7.3.4 Monitor implementation: the Hansen method

4 primitives in two pairs:

check and release; wait and signal.

Public data structures:

interf and s. “interf” is a semaphore to control which process can enter the monitor. “s” is a semaphore to control which process can use the shared resource.

Close investigation into the 4 primitives:

check and release are for the 1st level of exclusion, while wait and signal are for the 2nd level of exclusion.


1) 2) 3) same as the book. 4) Difference between the two kinds of semaphores: interf is attached with 2 other counters, while s is also a counter like the semaphore a in the PV primitives. 5) All processes = Count1 + Count2 + s.

Students must completely digest such examples in order to really understand the monitor. Actually, the monitor is like a new programming style, such as Lisp and Prolog. Only after many exercises can they master such a new programming technique.



When a process calls the procedures in a monitor in order to enter its critical section, its program usually appears in two formats:

1st format:

call the monitor (e.g., wait);
critical section;
call the monitor (e.g., signal);

2nd format:

call a monitor procedure whose body contains the critical section.


In the 1st format, as in examples 1 and 3 at pages 235-237 and 240-242, the monitor body and the critical section are two different exclusively-called areas, which are separately managed by the two semaphores, interf and s. In the 2nd format, as in example 2 at page 240, the monitor body and the critical section are unified: once a process resides in its critical section, it is also within the monitor body.

7.3.5 Monitor implementation: the Hoare method

It is omitted in our course. Just know that the Hansen method is not the only way to implement a monitor.

7.4 Communication of process

7.4.1 Concept

Mutual exclusion and synchronization are kinds of process communication, but they are at a low level, based on shared memory. A high level of process communication is needed in many cases in order to finish a task in a cooperative fashion, in which the message is the unit of communication. High-level interprocess communication is usually by means of message passing, which is especially important in distributed systems.

7.4.2 Direct communication

No buffer, synchronous, and two primitives: send and receive.

7.4.3 Indirect communication

With buffer, asynchronous and still two primitives but with different parameters.

Close investigation into send( ) and receive( ): like the buffer problem at page 226, it is also called the producer-consumer problem with message passing. Also note the cyclic buffer management used in the mailbox.

Management of mailbox:

by users or by the OS.

Hansen method:

the mailbox buffer is managed by the OS.

Unix pipes:

a pipe is some kind of unbounded mailbox.

7.4.4 Related problems


1st problem: the buffer: 1) no buffer, also called direct communication as in CSP; 2) bounded buffer; and 3) unbounded buffer, completely asynchronous, no waiting at any time, such as pipes in Unix.


2nd problem: parallelism: 1) synchronous, and 2) asynchronous, of high parallelism but with two more primitives.

7.5 Deadlock

7.5.1 Cause


Why we need such assumptions: the analysis is meaningful only under such assumptions.

See the different causes of deadlock: the number of resources, resource assignment, process requirements, and the execution scenario.

7.5.2 Deadlock prevention

Four necessary conditions of deadlock: 1) mutual exclusion, 2) hold and wait, 3) no preemption, 4) cyclic waiting.

Some assignments to prevent deadlock: 1) static assignment, 2) acyclic assignment: hierarchical and ordered. 2) is better than 1) in terms of efficiency of resource usage.

7.5.3 Deadlock avoidance

Difference between prevention and avoidance:
pessimistic and optimistic. Avoidance is
better than prevention in terms of efficiency.

Banker’s Algorithm:
assumptions, algorithms and key point: at least one process is
executable at any time.
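The key point — at least one process must be executable at any time — is exactly the safety check inside the Banker's Algorithm; a sketch of that check (the matrix layout is an assumption of the sketch):

```python
# Safety check at the heart of the Banker's Algorithm: a state is safe iff
# the processes can all finish in some order, each running only when its
# remaining need fits in the currently available resources.

def is_safe(available, allocation, need):
    work = list(available)
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finished):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # process i can run to completion, then returns its allocation
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progress = True
    return all(finished)

# Single-resource example: 2 units free, remaining needs of 2, 4, 7.
assert is_safe([2], [[2], [3], [0]], [[2], [4], [7]]) is True
assert is_safe([1], [[2], [3], [0]], [[2], [4], [7]]) is False
```

A request is granted only if the state after granting it would still pass this check.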

7.5.4 Deadlock detection:

Two tables:
at page 261.

Warshall's algorithm for transitive closure.
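A sketch of Warshall's transitive closure applied to a wait-for graph (the adjacency-matrix encoding of the two tables is an assumption of this sketch): a deadlock shows up as a cycle, i.e. some process that transitively waits for itself.

```python
# Warshall's algorithm: reach[i][j] becomes 1 iff j is reachable from i.

def transitive_closure(adj):
    n = len(adj)
    reach = [row[:] for row in adj]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])
    return reach

def deadlocked(adj):
    reach = transitive_closure(adj)
    # a process on the diagonal waits (transitively) for itself: a cycle
    return any(reach[i][i] for i in range(len(adj)))

# P0 waits for P1, P1 waits for P2: a chain, no deadlock.
assert deadlocked([[0, 1, 0], [0, 0, 1], [0, 0, 0]]) == False
# Add P2 waits for P0: a cycle, hence deadlock.
assert deadlocked([[0, 1, 0], [0, 0, 1], [1, 0, 0]]) == True
```

The O(n^3) closure is cheap at detection time because the wait-for graph has one node per process.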

Measures after detecting a deadlock: look at page 263.
look at page 263.

7.5.5 Mixed policy

Omitted. Just know that prevention, avoidance, or detection each has its own pros and cons, and therefore we usually take different policies for different resources.

This section provides a formal verification system to prove that an operating system is safe
in the sense that deadlock never occurs at any time.

7.6 Concurrent programming

7.6.1 Languages

Relations of OS and compiler: a chicken-and-egg problem.


Concurrent Pascal, CSP, and Ada.


in interprocess communication.

7.6.2 Skills

Two main properties:
modularity and synchronization.

Concurrent Pascal as an example.

Process, Class, and Monitor: it is better if students can know the difference between process and thread by knowing the difference between process and class (or monitor). Thread is a concept in class or monitor.

A Detailed Example: see how Fig. 7-16 reflects modularity and synchronization: the whole system program is divided into several modules; synchronization is managed by special monitors.

Wording errors:

Exclusiveness should be exclusion.

Course Review

Chapter 1

What is operating system, and its main functions?

Classification of operating systems?

Batch system?

Chapter 2: processor management


interrupt device and interrupt handler, PSW.

Supervision/kernel mode and user mode.

Classification of interrupt in terms of interrupt source.


Process: what is it and why is it needed?

Properties, PCB, states and their mutual transitions, and scenarios.

Chapter 3: Memory management

MM functions.

MM methods.

Relocation: static and dynamic.

Virtual memory:

Page scheduling:


Chapter 4

File organization: physical and logical: methods.

Access method:

sequential, direct, index and

Storage media: seek, locate, rotate and transfer.

Course Reform



Too many occurrences of IBM 360, tape and magnetic drum, console.


Delete device management and job management.


New materials about CD-ROM, scanner, pictures.


Some advanced topics of OS, such as the equivalence of primitive, monitor and IPC; RPC and stub procedure; the client-server model,


and some new algorithms in
distributed systems.


Combining pure theory and a practical OS system. Either have a special chapter to discuss it, or have a series of experiment classes devoted to it.


How do you think of the depth of the present OS course? Somebody would think that it is already deep and should be simplified by leaving the difficult materials to the graduate level. I don't agree. It is a problem of what kinds of students we want to train: if we want to cultivate more top students, the course should be further deepened instead.