lecture notes - Rensselaer Polytechnic Institute


Rensselaer Polytechnic Institute

CSCI-4210 Operating Systems

David Goldschmidt, Ph.D.

[Memory hierarchy: ranges from very small, very fast, volatile memory to very large, very slow, non-volatile storage]


Based on the von Neumann architecture, data and program instructions exist in physical memory.

Repeatedly perform fetch-decode-execute cycles.


The execute part often results in data fetch and store operations.
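A conceptual sketch of the fetch-decode-execute cycle for a toy machine is shown below; the instruction format and opcodes are invented for illustration and are not part of the slides. It shows how executing an instruction can itself trigger data fetch and store operations against the same memory that holds the program.

```c
/* Toy fetch-decode-execute loop (hypothetical instruction set). */
#include <stdio.h>
#include <stdint.h>

enum { OP_HALT = 0, OP_LOAD = 1, OP_ADD = 2, OP_STORE = 3 };

int main(void) {
    /* Physical memory holds both instructions and data (von Neumann). */
    uint16_t memory[16] = {
        (OP_LOAD  << 8) | 10,   /* acc = memory[10]   */
        (OP_ADD   << 8) | 11,   /* acc += memory[11]  */
        (OP_STORE << 8) | 12,   /* memory[12] = acc   */
        (OP_HALT  << 8),
        [10] = 7, [11] = 5      /* data */
    };

    uint16_t pc = 0, acc = 0;
    for (;;) {
        uint16_t instr  = memory[pc++];        /* fetch   */
        uint8_t  opcode = instr >> 8;          /* decode  */
        uint8_t  addr   = instr & 0xFF;
        if (opcode == OP_HALT)  break;         /* execute */
        if (opcode == OP_LOAD)  acc = memory[addr];       /* data fetch */
        if (opcode == OP_ADD)   acc += memory[addr];      /* data fetch */
        if (opcode == OP_STORE) memory[addr] = acc;       /* data store */
    }
    printf("memory[12] = %d\n", memory[12]);   /* prints 12 */
    return 0;
}
```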


Locations in memory are identified by memory addresses.


When compiled, programs consist of relocatable code.

Other compiled modules also consist of relocatable code.

At load time, any additional libraries also consist of relocatable code.



At run time, memory addresses of all object files are mapped to a single memory space in physical memory.


Using dynamic loading, external libraries are not loaded when a process starts.

Libraries are stored on disk in relocatable form.

Libraries are loaded into memory only when needed.
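A minimal sketch of dynamic loading using the POSIX dlopen/dlsym interface; the library name libexample.so and the function compute are hypothetical placeholders, and on older glibc systems the program must be linked with -ldl.

```c
/* Load a shared library from disk only at the point of use. */
#include <dlfcn.h>
#include <stdio.h>

int main(void) {
    /* The library is brought into memory when dlopen() is called,
       not when the process starts. */
    void *handle = dlopen("libexample.so", RTLD_LAZY);
    if (handle == NULL) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* Look up a symbol in the newly loaded library. */
    int (*compute)(int) = (int (*)(int)) dlsym(handle, "compute");
    if (compute != NULL) {
        printf("compute(42) = %d\n", compute(42));
    }

    dlclose(handle);   /* unload when no longer needed */
    return 0;
}
```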



Using dynamic linking, external libraries can be preloaded into shared memory.

When a process calls a library function, the corresponding physical address is determined.
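As a sketch of how dynamic linking looks to a programmer: the call to sqrt below is left unresolved in the executable and is bound by the dynamic linker to the shared libm image already in memory (on typical ELF systems this happens lazily through the procedure linkage table). The file name demo.c is just an example.

```c
/* Dynamically linked call into the shared math library.
   Build with:  gcc demo.c -lm */
#include <math.h>
#include <stdio.h>

int main(void) {
    /* The address of sqrt in the shared library is resolved by the
       dynamic linker, not hard-coded into this executable. */
    printf("sqrt(2.0) = %f\n", sqrt(2.0));
    return 0;
}
```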


Main memory is partitioned and allocated to the resident operating system and to user processes (fixed partitioning scheme).


A pair of base and limit registers define the logical address space.

Also known as relocation registers.



The CPU generates logical memory addresses.

A Memory-Management Unit (MMU) maps logical memory addresses to the physical address space.

User programs never see physical memory addresses.

Hardware protects against memory access outside of a process's valid memory space.
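A conceptual sketch of base/limit translation and protection, assuming a simple relocation-register MMU; the register values used in main are made up for illustration.

```c
/* Base/limit (relocation-register) address translation sketch. */
#include <stdio.h>
#include <stdint.h>

typedef struct {
    uint32_t base;   /* start of the process's block in physical memory */
    uint32_t limit;  /* size of the process's logical address space      */
} mmu_registers;

/* Translate a CPU-generated logical address to a physical address,
   returning -1 here to stand in for a protection trap. */
int64_t translate(mmu_registers mmu, uint32_t logical) {
    if (logical >= mmu.limit) {
        return -1;                          /* out of range: trap to the OS */
    }
    return (int64_t) mmu.base + logical;    /* relocation */
}

int main(void) {
    mmu_registers mmu = { .base = 0x40000, .limit = 0x10000 };
    printf("%lld\n", (long long) translate(mmu, 0x0100));   /* valid access */
    printf("%lld\n", (long long) translate(mmu, 0x20000));  /* trap (-1)    */
    return 0;
}
```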


Variable-length or dynamic partitions:

When a new process enters the system, the process is allocated to a single contiguous block.

The operating system maintains a list of allocated partitions and free partitions.

[Diagram: dynamic partitioning over time, showing the OS plus processes 1, 2, 5, 8, and 9 entering and leaving memory]


How can we place new process Pi in memory?

First-fit algorithm: allocate the first free block that's large enough to accommodate Pi (see the sketch after this list)

Best-fit algorithm: allocate the smallest free block that's large enough to accommodate Pi

Next-fit algorithm: allocate the next free block, searching from the last allocated block

Worst-fit algorithm: allocate the largest free block that's large enough to accommodate Pi
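A minimal first-fit sketch, assuming a fixed array of free-block sizes; the block sizes and the 212 KB request are illustrative only.

```c
/* First-fit placement over a list of free blocks. */
#include <stdio.h>

#define NUM_BLOCKS 5

int main(void) {
    int free_block[NUM_BLOCKS] = { 100, 500, 200, 300, 600 };  /* sizes in KB */
    int request = 212;                                         /* process size in KB */

    /* First-fit: scan from the beginning, take the first block that fits. */
    for (int i = 0; i < NUM_BLOCKS; i++) {
        if (free_block[i] >= request) {
            printf("allocate %d KB from block %d (%d KB); %d KB hole remains\n",
                   request, i, free_block[i], free_block[i] - request);
            free_block[i] -= request;   /* leftover hole stays on the free list */
            return 0;
        }
    }
    printf("no free block is large enough\n");
    return 0;
}
```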



Memory is wasted due to fragmentation, which can cause performance issues.

Internal fragmentation is wasted memory within a partition or process memory.

External fragmentation can reduce the number of runnable processes: total memory space exists to satisfy a memory request, but the memory is not contiguous.

[Diagram: externally fragmented memory, with the OS and processes 2, 3, 5, 6, 7, 8, 9, and 12 interleaved with free holes]

Reduce external fragmentation by compaction or defragmentation.

Rearrange memory contents to organize all free memory blocks together into one large contiguous block.

Compaction is possible only if relocation is dynamic and is done at execution time.

Compaction is expensive.

[Diagram: memory before and after compaction, with all free blocks coalesced into one contiguous block]

A noncontiguous memory allocation scheme avoids the external fragmentation problem.

Slice up physical memory into fixed-sized blocks called frames.

Sizes are powers of 2 (e.g. 2^14).

Slice up logical memory into fixed-sized blocks called pages.

Allocate pages into frames.

Note that frame size equals page size.


When a process of size n pages is ready to run, the operating system finds n free frames.

The OS keeps track of pages via a page table.
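A small sketch of this allocation step: the OS scans for n free frames and records each page-to-frame mapping in a page table. The free-frame bitmap and the process size are illustrative.

```c
/* Allocate n free frames to a process and build its page table. */
#include <stdio.h>

#define NUM_FRAMES 8

int main(void) {
    int frame_free[NUM_FRAMES] = { 0, 1, 1, 0, 1, 1, 1, 0 };  /* 1 == free */
    int n = 3;                        /* process needs 3 pages */
    int page_table[3];

    int page = 0;
    for (int f = 0; f < NUM_FRAMES && page < n; f++) {
        if (frame_free[f]) {
            frame_free[f] = 0;        /* claim the frame           */
            page_table[page++] = f;   /* record page -> frame map  */
        }
    }

    for (int p = 0; p < n; p++)
        printf("page %d -> frame %d\n", p, page_table[p]);
    return 0;
}
```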


[Diagram: main memory divided into frames, with the frames holding pages of process Pi marked in use and the remaining frames marked free]
Page tables map logical memory addresses to physical memory addresses.

Example: process Pi needs 16MB of logical memory.

Page size is 4MB.

Logical memory is mapped to a 32MB physical memory.

Frame size is 4MB.


binary:
 0 ==> 000000
 4 ==> 000100
 8 ==> 001000
12 ==> 001100
16 ==> 010000
20 ==> 010100
24 ==> 011000
28 ==> 011100

Every logical address is sliced into two distinct components:

Page number (p): used as an index into the page table to obtain the base physical memory address.

Page offset (d): combined with the base address to identify the physical memory address.

[Address layout: page number p | page offset d]

Covers a logical address space of size 2^m with page size 2^n.

[Address layout: page number p is the high-order (m - n) bits; page offset d is the low-order n bits]
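A minimal sketch of the translation itself: split the logical address into p and d, look up the frame in the page table, and rebuild the physical address. The page size (2^12) and the page table contents are hypothetical, not the 4MB example above.

```c
/* Paging address translation: physical = frame(p) concatenated with d. */
#include <stdio.h>
#include <stdint.h>

#define PAGE_BITS 12                      /* n: page size 2^12 = 4 KB */
#define PAGE_SIZE (1u << PAGE_BITS)

int main(void) {
    uint32_t page_table[4] = { 5, 6, 1, 2 };   /* page -> frame (made up) */

    uint32_t logical = 0x2ABC;                 /* example logical address */
    uint32_t p = logical >> PAGE_BITS;         /* high-order m - n bits   */
    uint32_t d = logical & (PAGE_SIZE - 1);    /* low-order n bits        */

    uint32_t frame    = page_table[p];
    uint32_t physical = (frame << PAGE_BITS) | d;

    printf("logical 0x%x -> page %u, offset 0x%x -> physical 0x%x\n",
           logical, p, d, physical);
    return 0;
}
```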


The page table is in main memory.

Every memory access request actually requires two memory accesses: one to fetch the page table entry and one to access the data itself.

Use page table caching at the hardware level to speed address translation.



Hardware-level translation look-aside buffer (TLB)


Given:

Memory access time is 100 nanoseconds

TLB access time is 20 nanoseconds

TLB hit ratio is 80%



The effective memory-access time (EMAT) is

0.80 x 120 ns + 0.20 x 220 ns = 140 ns


What is the effective memory-access time given a hit ratio of 99%? 50%?
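A quick sketch that reproduces the calculation and answers the question above, assuming the same cost model as the slide: a TLB hit costs TLB + memory (120 ns), a miss costs TLB + page-table access + memory (220 ns).

```c
/* Effective memory-access time (EMAT) for several TLB hit ratios. */
#include <stdio.h>

int main(void) {
    double tlb = 20.0, mem = 100.0;           /* nanoseconds             */
    double hit_cost  = tlb + mem;             /* 120 ns on a TLB hit     */
    double miss_cost = tlb + mem + mem;       /* 220 ns on a TLB miss    */

    double ratios[] = { 0.80, 0.99, 0.50 };
    for (int i = 0; i < 3; i++) {
        double h = ratios[i];
        double emat = h * hit_cost + (1.0 - h) * miss_cost;
        printf("hit ratio %.2f -> EMAT %.1f ns\n", h, emat);
    }
    /* Prints 140.0 ns, 121.0 ns, and 170.0 ns respectively. */
    return 0;
}
```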


For large page tables, use multiple page table levels.

Slice up the logical address into multiple page indicators.
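A sketch of a two-level split of a logical address into an outer page table index, an inner page table index, and an offset; the field widths (10/10/12 bits) are illustrative, not taken from the slides.

```c
/* Two-level logical address decomposition. */
#include <stdio.h>
#include <stdint.h>

#define OFFSET_BITS 12    /* d: offset within a 4 KB page     */
#define INNER_BITS  10    /* p2: index into an inner table    */
#define OUTER_BITS  10    /* p1: index into the outer table   */

int main(void) {
    uint32_t logical = 0x00ABCDEF;

    uint32_t d  = logical & ((1u << OFFSET_BITS) - 1);
    uint32_t p2 = (logical >> OFFSET_BITS) & ((1u << INNER_BITS) - 1);
    uint32_t p1 = logical >> (OFFSET_BITS + INNER_BITS);

    /* p1 selects an entry in the outer page table, which points to an
       inner page table; p2 selects the frame there; d is the offset
       within that frame. */
    printf("p1 = %u, p2 = %u, d = 0x%x\n", p1, p2, d);
    return 0;
}
```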


Processes in the ready queue have memory images waiting on disk.

Processes are swapped in and out of memory.

Swapping can suffer from slow data transfer times.