Memory Management


Memory Management
One of the most important OS jobs.
Review memory hierarchy – sizes, speed, cost.
Growing memories -> growing programs -> need for swapping and paging.
The simplest way to use memory is to have one program in memory, sharing it with the OS. No swapping or paging.
We want multiprogramming – have more than one program in memory at a time.


First idea is to divide memory into fixed-sized chunks (partitions).
The partitions may be equal in size or be of different sizes.
A program is given one of the slots and the rest of the partition is wasted.
Put it into the smallest chunk it will fit into.
A single queue of all processes is better than a queue for each size.
Memory Management


FIFO vs. pick the largest job so as not to waste memory (poor for small jobs).
CPU utilization = 1 – p^n (p = fraction of time a process waits for I/O, n = degree of multiprogramming). See the sketch below.
More processes = more efficient use of the CPU.
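
A minimal sketch of the formula above, assuming an 80% I/O-wait fraction (an illustrative value, not from the slides); it prints how utilization grows with the degree of multiprogramming:

#include <stdio.h>
#include <math.h>

int main(void) {
    double p = 0.80;              /* assume each process waits for I/O 80% of the time */
    for (int n = 1; n <= 5; n++) {
        double util = 1.0 - pow(p, n);   /* CPU utilization = 1 - p^n */
        printf("n = %d  utilization = %.0f%%\n", n, util * 100.0);
    }
    return 0;
}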
Memory Management


Absolute (real) memory addresses vs. relative memory addresses.
Relocation problem.
Programs are written as if they start at address 0 and are relocated by an offset (base address) when loaded.
Also must protect each address space from other programs.
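
A minimal sketch of relocation and protection, assuming the classic base-and-limit register scheme (the slides do not name a specific mechanism):

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

struct region { uint32_t base; uint32_t limit; };   /* one loaded program */

/* Every relative address is checked against the limit, then offset by the base. */
static bool translate(const struct region *r, uint32_t rel, uint32_t *phys) {
    if (rel >= r->limit)          /* outside this program's space: trap */
        return false;
    *phys = r->base + rel;        /* relocate the relative address */
    return true;
}

int main(void) {
    struct region prog = { .base = 0x40000, .limit = 0x10000 };
    uint32_t phys;
    if (translate(&prog, 0x1234, &phys))
        printf("relative 0x1234 -> physical 0x%x\n", (unsigned)phys);
    if (!translate(&prog, 0x20000, &phys))
        printf("relative 0x20000 -> protection fault\n");
    return 0;
}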
Memory Management


If there isn't enough RAM to hold all the processes, secondary memory + swapping can be used.
With swapping, whole processes move back and forth between RAM and disk.
A process may be brought back into a different place in memory.
Use variable-length partitions for better memory utilization.
May get many small holes in memory - do memory compaction (slow).
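
A minimal sketch of compaction under a toy model: allocated blocks are slid toward address 0 so the scattered holes become one contiguous hole (the block table and sizes are illustrative assumptions, not from the slides).

#include <stdio.h>
#include <string.h>

#define MEM_SIZE 64
static char memory[MEM_SIZE];

struct block { size_t start, len; };    /* allocated regions, sorted by start */

static void compact(struct block *blocks, int n) {
    size_t next = 0;                    /* next free address after sliding */
    for (int i = 0; i < n; i++) {
        if (blocks[i].start != next) {
            memmove(&memory[next], &memory[blocks[i].start], blocks[i].len);
            blocks[i].start = next;     /* the process now lives at a new address */
        }
        next += blocks[i].len;
    }
    /* everything from 'next' to MEM_SIZE is now one contiguous hole */
}

int main(void) {
    struct block blocks[] = { {10, 8}, {30, 4}, {50, 6} };
    compact(blocks, 3);
    for (int i = 0; i < 3; i++)
        printf("block %d now at %zu (len %zu)\n", i, blocks[i].start, blocks[i].len);
    return 0;
}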
Memory Management


How much memory should be allocated to a process?
- Fixed size – give the exact amount needed.
- Growth and shrinking
- grow into a hole, or be moved, or swap somebody out.
- allocate some space to allow growth.
Need a way to keep track of allocated memory (+ to which process) + free memory.
Memory Management


Bit maps are one way to keep track of allocated and free memory (see the sketch after this list).
- A 1 indicates used and a 0 unused.
- Each bit represents one allocation unit (e.g., 4 bytes or 8 bytes).
Linked lists are another way to keep track of memory.
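
A minimal sketch of bitmap bookkeeping, assuming one bit per allocation unit (the unit count and helper names are illustrative, not from the slides):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define UNITS 64                        /* assume 64 allocation units */
static uint8_t bitmap[UNITS / 8];       /* one bit per unit: 1 = used, 0 = free */

static void set_used(int unit, int used) {
    if (used) bitmap[unit / 8] |=  (1u << (unit % 8));
    else      bitmap[unit / 8] &= ~(1u << (unit % 8));
}

static int is_used(int unit) {
    return (bitmap[unit / 8] >> (unit % 8)) & 1u;
}

/* Find the first run of n consecutive free units; returns -1 if none. */
static int find_free_run(int n) {
    int run = 0;
    for (int i = 0; i < UNITS; i++) {
        run = is_used(i) ? 0 : run + 1;
        if (run == n) return i - n + 1;
    }
    return -1;
}

int main(void) {
    memset(bitmap, 0, sizeof bitmap);
    for (int i = 0; i < 10; i++) set_used(i, 1);   /* pretend units 0-9 are allocated */
    printf("first hole of 4 units starts at unit %d\n", find_free_run(4));
    return 0;
}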
Memory Management


Need an algorithm to decide which hole to give to a process.
Four fit algorithms (see the sketch after this list).
- First Fit – give the process the first hole in the list that is big enough. Break it into two parts (process and a smaller hole). Fast.
- Next Fit – similar to First Fit except that instead of searching from the beginning, start where the last search left off. Performs slightly worse than First Fit.
Memory Management



- Best Fit – find the smallest hole that will work. Slower than First Fit. Wastes more memory because it creates small holes.
- Worst Fit – use the biggest hole and leave the biggest possible new hole. Doesn't work well in simulation.
Another idea is Quick Fit. Have lists of the more commonly requested sizes. Can find a good-sized hole fast. Merging free memory back is hard.
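
A minimal sketch of First Fit over a singly linked hole list, splitting a hole when it is larger than the request (the structure and names are illustrative assumptions, not from the slides):

#include <stdio.h>
#include <stdlib.h>
#include <stddef.h>

struct hole {
    size_t start;          /* start address of the free region */
    size_t len;            /* length of the free region */
    struct hole *next;
};

/* Returns the start address of the allocation, or (size_t)-1 on failure. */
static size_t first_fit(struct hole **list, size_t req) {
    for (struct hole **pp = list; *pp; pp = &(*pp)->next) {
        struct hole *h = *pp;
        if (h->len < req)
            continue;                  /* too small, keep searching */
        size_t addr = h->start;
        if (h->len == req) {           /* exact fit: remove the hole */
            *pp = h->next;
            free(h);
        } else {                       /* split: process part + smaller hole */
            h->start += req;
            h->len   -= req;
        }
        return addr;
    }
    return (size_t)-1;
}

int main(void) {
    /* One initial 1000-unit hole starting at address 0. */
    struct hole *list = malloc(sizeof *list);
    list->start = 0; list->len = 1000; list->next = NULL;

    printf("alloc 100 -> %zu\n", first_fit(&list, 100));  /* 0   */
    printf("alloc 300 -> %zu\n", first_fit(&list, 300));  /* 100 */
    printf("remaining hole: start %zu len %zu\n", list->start, list->len);
    return 0;
}

For Next Fit, the only change would be keeping a roving pointer into the list and resuming the scan there instead of at the head.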
Memory Management


Data Structure Thoughts
- One list with both processes and holes
- slower to find a hole
- faster to free up memory.
- Two lists – one for processes and one for holes.
- can order the hole list and speed up Best Fit.
- harder to free up memory (deallocation).
Any other ideas?
Memory Management


Memory blocks are available of size 2^k, L <= k <= U
- 2^L = smallest size block allowed
- 2^U = largest block allocated (the entire available memory)
Start by considering available memory as one block. Then break it down into two buddies of equal size. Continue until the smallest block that fits the process is created or found (think binary search).
The Buddy System


When memory is released, empty buddies can be merged back into a single block. Two free blocks can only be combined if they are buddies, and buddies have addresses that differ only in one bit. Two one-byte blocks are buddies iff they differ in the last bit, two two-byte blocks are buddies iff they differ in the second-to-last bit, and so on.
(Go over an example)
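
A minimal sketch of that address relationship: the buddy of a block of size 2^k at offset addr is found by flipping bit k, i.e. addr XOR 2^k (offsets are measured from the start of the managed memory; the helpers are illustrative, not from the slides).

#include <stdio.h>
#include <stddef.h>
#include <stdbool.h>

/* Buddy of the block of size 2^k that starts at offset addr. */
static size_t buddy_of(size_t addr, unsigned k) {
    return addr ^ ((size_t)1 << k);    /* flip bit k of the address */
}

static bool are_buddies(size_t a, size_t b, unsigned k) {
    return buddy_of(a, k) == b;
}

int main(void) {
    /* Two 64-byte (2^6) blocks at offsets 0 and 64 are buddies... */
    printf("%d\n", are_buddies(0, 64, 6));     /* 1 */
    /* ...but 64-byte blocks at offsets 64 and 128 are not. */
    printf("%d\n", are_buddies(64, 128, 6));   /* 0 */
    return 0;
}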
The Buddy System


Buddy System Example
Start off with 1 MB to manage: 1 MB free
A requests 100k
A=128k, 128k free, 256k free, 512k free
B requests 240k
A=128k, 128k free, B=256k, 512k free
C requests 64k
A=128k, C=64k, 64k free, B=256k, 512k free


Buddy System Example
D requests 256k
A=128k, C=64k, 64k free, B=256k, D=256k, 256k free
Release B
A=128k, C=64k, 64k free, 256k free, D=256k, 256k free
Release A
128k free, C=64k, 64k free, 256k free, D=256k, 256k free
E requests 75k
E=128k, C=64k, 64k free, 256k free, D=256k, 256k free


Buddy System Example
Release C
E=128k, 128k free, 256k free, D=256k, 256k free
Release E
512k free, D=256k, 256k free
Release D
1 MB free


Buddy System
Internal and External Fragmentation
- 129k example: a 129k request must be given a 256k block, so nearly half the block is wasted (internal fragmentation).
Good compromise to overcome the disadvantages of fixed and variable partitioning.
Used in Linux for some kernel memory allocation.


Buddy System in Linux
Uses 11 lists of blocks of contiguous page frames
- block sizes: 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024 page frames
- 1024 page frames = 4 MB
Allocates page frames instead of bytes.
To merge two buddies:
- both are size n (page frames) and contiguous
- the physical address of the first page frame of the first block is a multiple of 2 * n * 2^12 (2^12 bytes = 4 KB page size).
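
A minimal sketch of that merge test, assuming 4 KB pages and physical byte addresses; this is an illustrative helper in the spirit of the rule above, not Linux kernel source.

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define PAGE_SHIFT 12                       /* 2^12 = 4 KB page */

/* Can two blocks of n contiguous page frames be merged as buddies? */
static bool can_merge(uint64_t addr_a, uint64_t addr_b, uint64_t n_frames) {
    uint64_t block_bytes = n_frames << PAGE_SHIFT;
    /* contiguous: the second block starts exactly where the first ends */
    if (addr_b != addr_a + block_bytes)
        return false;
    /* alignment: the first block starts on a 2 * n * 2^12 boundary */
    return (addr_a % (2 * block_bytes)) == 0;
}

int main(void) {
    /* Two 4-frame (16 KB) blocks at 0x00000 and 0x04000 can merge... */
    printf("%d\n", can_merge(0x00000, 0x04000, 4));   /* 1 */
    /* ...but the pair at 0x04000 and 0x08000 cannot (wrong alignment). */
    printf("%d\n", can_merge(0x04000, 0x08000, 4));   /* 0 */
    return 0;
}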


Slab Allocator
Jeff Bonwick
First appeared in Solaris 2.4
Used by Linux
User processes get pages filled with zeroes
- Linux: get_zeroed_page()
View memory as objects
- each object has a constructor and a destructor

Slab Allocator
Saves objects that have been released for later use (no reinitialization needed)
Kernel can reuse areas quickly when processes are created/destroyed (cache)
Areas can be grouped by frequency
- have standard sizes
Each call to the buddy system “dirties” the cache
Slab allocator groups objects into caches
A slab is one or more contiguous page frames
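
A minimal sketch of the object-cache idea in user space: released objects sit on a free list and are handed back without re-running the constructor (the struct and function names are illustrative assumptions, not the Solaris or Linux slab API).

#include <stdio.h>
#include <stdlib.h>

struct obj_cache {
    size_t obj_size;
    void (*ctor)(void *);          /* run once per newly acquired object */
    void **free_list;              /* stack of released, still-constructed objects */
    size_t free_count, free_cap;
};

static void *cache_alloc(struct obj_cache *c) {
    if (c->free_count > 0)                     /* reuse a released object, no re-init */
        return c->free_list[--c->free_count];
    void *o = malloc(c->obj_size);             /* otherwise get fresh memory */
    if (o && c->ctor)
        c->ctor(o);                            /* construct exactly once */
    return o;
}

static void cache_free(struct obj_cache *c, void *o) {
    if (c->free_count == c->free_cap) {        /* grow the free list */
        c->free_cap = c->free_cap ? c->free_cap * 2 : 8;
        c->free_list = realloc(c->free_list, c->free_cap * sizeof(void *));
    }
    c->free_list[c->free_count++] = o;         /* keep it constructed for reuse */
}

struct task { int state; };
static void task_ctor(void *p) { ((struct task *)p)->state = 0; }

int main(void) {
    struct obj_cache tasks = { sizeof(struct task), task_ctor, NULL, 0, 0 };
    struct task *t = cache_alloc(&tasks);
    cache_free(&tasks, t);                     /* released, not destroyed */
    struct task *u = cache_alloc(&tasks);      /* same memory, constructor skipped */
    printf("reused: %s\n", (t == u) ? "yes" : "no");
    return 0;
}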


Linux memory
Kernel is highest priority
- if it requests memory then it must need it
- the kernel trusts itself
- assume error-free
User process memory requests are not as urgent
User programs cannot be trusted
Kernel must prepare to catch all addressing errors.
Uses the brk() system call to change the size of the process's data segment.
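
A minimal sketch of moving the program break from user space with sbrk()/brk(), assuming a Linux/glibc environment; real programs normally reach brk() indirectly through malloc().

#define _DEFAULT_SOURCE
#include <stdio.h>
#include <unistd.h>

int main(void) {
    void *start = sbrk(0);            /* current end of the data segment */
    if (sbrk(4096) == (void *)-1) {   /* ask the kernel for 4 KB more */
        perror("sbrk");
        return 1;
    }
    void *end = sbrk(0);
    printf("break moved from %p to %p\n", start, end);
    brk(start);                       /* give the memory back */
    return 0;
}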


Linux memory
A list is not efficient for searching by memory address.
Linux 2.6 uses red-black trees to store memory region descriptors.