Memory Management, Recent Systems

streambaby · Software Development · Dec 14, 2013


Memory Management, Recent Systems
• Paged Memory Allocation
• Demand Paging
• Page Replacement Policies
• Segmented Memory Allocation
• Segmented/Demand Paged Memory Allocation
• Virtual Memory
Memory Management
• Early schemes were limited to storing the entire program in memory.
• Problems:
– Fragmentation.
– Overhead due to relocation.
• More sophisticated memory schemes now exist that:
– Eliminate need to store programs contiguously.
– Eliminate need for entire program to reside in memory
during execution.
More Recent Memory Management Schemes
• Paged Memory Allocation
• Demand Paging Memory Allocation
• Segmented Memory Allocation
• Segmented/Demand Paged Allocation
Paged Memory Allocation
• Divides each incoming job into pages of equal size.
• Works well if page size = memory block size (page frame) = disk
section size (sector, block).
Before executing a program, memory manager:
1. Determines number of pages in program.
2. Locates enough empty page frames in main memory.
3. Loads all of the program’s pages into them.
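The three steps above can be sketched in a few lines of Python (a minimal illustration; `load_job`, the 100-line page size, and the memory-map layout are assumptions for the example, not from the text):

```python
# Sketch of paged memory allocation: split a job into equal-sized pages
# and load each page into any free page frame (frames need not be
# contiguous). All names here are illustrative.
import math

PAGE_SIZE = 100          # lines per page, as in the Job 1 example

def load_job(job_lines, memory_map):
    """memory_map: list where None marks a free page frame."""
    # 1. Determine the number of pages in the program.
    num_pages = math.ceil(job_lines / PAGE_SIZE)
    # 2. Locate enough empty page frames in main memory.
    free_frames = [f for f, owner in enumerate(memory_map) if owner is None]
    if len(free_frames) < num_pages:
        raise MemoryError("not enough free page frames")
    # 3. Load all of the program's pages into them (building the PMT).
    page_map_table = {}
    for page in range(num_pages):
        frame = free_frames[page]
        memory_map[frame] = ("Job 1", page)
        page_map_table[page] = frame
    return page_map_table

# A 350-line job needs 4 pages (the last page wastes 50 lines).
mmt = [None] * 12
mmt[0] = "OS"
pmt = load_job(350, mmt)
print(len(pmt))   # 4
```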
Programs Are Split Into Equal-sized Pages (Figure 3.1)

[Figure 3.1: Job 1 (350 lines) is split into four 100-line pages; the
last page holds only the remaining 50 lines, so 50 lines of its frame
are wasted space. In main memory the operating system occupies page
frame 0, and Job 1's pages are loaded into noncontiguous frames:
Page 2 in frame 5, Page 0 in frame 8, Page 1 in frame 10, and Page 3
in frame 11.]
Job 1 (Figure 3.1)
At compilation time every job is divided into pages:
– Page 0 contains the first hundred lines.
– Page 1 contains the second hundred lines.
– Page 2 contains the third hundred lines.
– Page 3 contains the last fifty lines.
• Program has 350 lines.
– Referred to by system as line 0 through line 349.
Paging Requires 3 Tables to Track a Job's Pages
1. Job table (JT) - 2 entries for each active job.
– Size of job & memory location of its page map table.
– Dynamic – grows/shrinks as jobs loaded/completed.
2. Page map table (PMT) - 1 entry per page.
– Page number & corresponding page frame memory
address.
– Page numbers are sequential (Page 0, Page 1 …)
3. Memory map table (MMT) - 1 entry for each page frame.
– Location & free/busy status.
Job Table Contains 2 Entries for Each Active Job (Table 3.1)

  Job Table (a)         Job Table (b)         Job Table (c)
  Size  PMT Location    Size  PMT Location    Size  PMT Location
  400   3096            400   3096            400   3096
  200   3100                                  700   3100
  500   3150            500   3150            500   3150

(a) The JT initially has three entries, one for each job in process.
(b) The second job ends and its entry in the table is released.
(c) The freed entry is replaced by information about the next job to be
processed.
Job 1 Is 350 Lines Long & Divided Into 4 Pages (Figure 3.2)
Displacement (Figure 3.2)
• Displacement (offset) of a line -- how far away a line is
from the beginning of its page.
– Used to locate that line within its page frame.
– A relative factor, measured from the start of the page.
• For example, lines 0, 100, 200, and 300 are first lines for
pages 0, 1, 2, and 3 respectively so each has displacement
of zero.
To Find the Address of a Given Program Line
• Divide the line number by the page size, keeping the remainder as an
integer:

    page number  = line number ÷ page size   (quotient)
    displacement = line number mod page size (remainder)
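The division above maps directly onto Python's `divmod`. A minimal sketch (the function name is illustrative; the PMT values are the frame assignments shown in Figure 3.1):

```python
# Address resolution for paged memory: line number -> (page, displacement),
# then a PMT lookup turns the page into a frame and yields the absolute
# address. Names are illustrative.
PAGE_SIZE = 100

def resolve(line_number, pmt):
    page_number, displacement = divmod(line_number, PAGE_SIZE)
    frame = pmt[page_number]
    return frame * PAGE_SIZE + displacement

# PMT from Figure 3.1: Page 0 -> frame 8, Page 1 -> 10,
# Page 2 -> 5, Page 3 -> 11.
pmt = {0: 8, 1: 10, 2: 5, 3: 11}
print(resolve(214, pmt))   # line 214 = page 2, displacement 14 -> 514
```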
Address Resolution
Each time an instruction is executed or a data value is used, the OS
(or hardware) must:
– Translate the job space address (relative) into a physical address
(absolute).
Pros & Cons of Paging
• Allows jobs to be allocated in non-contiguous memory
locations.
– Memory used more efficiently; more jobs can fit.
• Size of page is crucial (not too small, not too large).
• Increased overhead occurs.
• Reduces, but does not eliminate, internal fragmentation.
Demand Paging
• Bring a page into memory only when it is needed, so less
I/O & memory needed.
– Faster response.
• Takes advantage of the fact that programs are written sequentially,
so not all pages are needed at once. For example:
– User-written error handling modules.
– Mutually exclusive modules.
– Certain program options are either mutually exclusive
or not always accessible.
– Many tables assigned fixed amount of address space
even though only a fraction of table is actually used.
Demand Paging - 2
• Demand paging made virtual memory widely available.
– Can give the appearance of an almost infinite amount of physical
memory.
• Requires use of a high-speed direct access storage device
that can work directly with CPU.
• How and when the pages are passed (or “swapped”)
depends on predefined policies that determine when to
make room for needed pages and how to do so.
Tables in Demand Paging
• Job Table.
• Page Map Table (with 3 new fields).
1. Determines if requested page is already in memory.
2. Determines if page contents have been modified.
3. Determines if the page has been referenced recently.
– Used to determine which pages should remain in main
memory and which should be swapped out.
• Memory Map Table.
Page Map Table

Page   Status bit   Referenced bit   Modified bit   Page frame
0      1            1                1              5
1      1            0                0              9
2      1            0                0              7
3      1            1                0              12
Hardware Instruction Processing Algorithm
1. Start processing instruction
2. Generate data address
3. Compute page number
4. If page is in memory
Then
get data and finish instruction
advance to next instruction
return to step 1
Else
generate page interrupt
call page fault handler
Page Fault Handler Algorithm
1. If there is no free page frame
Then Select page to be swapped out using page removal algorithm
Update job’s page map table
If content of page had been changed then
Write page to disk
End if
End if
2. Use the page number from step 3 of the hardware instruction processing
algorithm to get the disk address where the requested page is stored.
3. Read page into memory.
4. Update job’s page map table.
5. Update memory map table.
6. Restart interrupted instruction.
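The two algorithms above can be combined into a toy access routine (a sketch only: the PMT is reduced to an ordered dictionary, disk I/O is simulated by comments, and FIFO stands in for the page removal algorithm):

```python
# Toy demand-paging access routine: on a hit, just set the referenced
# state; on a page fault, evict a page if no frame is free (writing back
# only modified pages), then read the page in and update the tables.
from collections import OrderedDict

def access_page(page, pmt, capacity, modified=False):
    """pmt: OrderedDict mapping resident page -> {"mod": bit}; insertion
    order doubles as the FIFO removal order. Returns True on a page fault."""
    if page in pmt:                        # status bit set: page in memory
        pmt[page]["mod"] |= modified       # no page fault
        return False
    if len(pmt) >= capacity:               # no free page frame: evict one
        victim, bits = pmt.popitem(last=False)  # page removal algorithm (FIFO)
        if bits["mod"]:
            pass                           # would write the victim page to disk
    pmt[page] = {"mod": int(modified)}     # read page in; update PMT and MMT
    return True                            # caller restarts the instruction

pmt = OrderedDict()
print(access_page("A", pmt, capacity=2))   # True: first reference faults
print(access_page("A", pmt, capacity=2))   # False: page already in memory
```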
Thrashing Is a Problem With Demand Paging
• Thrashing – an excessive amount of page swapping back and forth
between main memory and secondary storage.
– Operation becomes inefficient.
– Caused when a page is removed from memory but is
called back shortly thereafter.
– Can occur across jobs, when a large number of jobs are
vying for a relatively few number of free pages.
– Can happen within a job (e.g., in loops that cross page
boundaries).
• Page fault – a failure to find a page in memory.
Page Replacement Policies
• Policy that selects page to be removed is crucial to system
efficiency.
– Selection of algorithm is critical.
• First-in first-out (FIFO) policy* – best page to remove is
one that has been in memory the longest.
• Least-recently-used (LRU) policy* – chooses pages least
recently accessed to be swapped out.
• Most recently used (MRU) policy.
• Least frequently used (LFU) policy.
* Most well known policies
FIFO Policy
When the program calls for Page C, Page A is moved out of the first page
frame to make room for it (solid lines). When Page A is needed again, it
replaces Page B in the second page frame (dotted lines).

[Figure: requested pages (A, B, C, A, D, B) pass through two page frames;
each new request pushes the oldest resident page out to the swapped-pages
area.]
How each requested page is swapped into 2 available page frames using
FIFO (Figure 3.8). When the program is ready to be processed, all 4 pages
are on secondary storage. Throughout the program, 11 page requests are
issued. When the program calls a page that isn't already in memory, a
page interrupt is issued (shown by *). 9 page interrupts result.

Page requested:  A  B  A  C  A  B  D  B  A  C  D
Page frame 1:    A  A  A  C  C  B  B  B  A  A  D
Page frame 2:    -  B  B  B  A  A  D  D  D  C  C
Interrupt:       *  *     *  *  *  *     *  *  *
Time:            1  2  3  4  5  6  7  8  9  10 11
FIFO
• High failure rate shown in previous example caused by:
– limited amount of memory available.
– order in which pages are requested by program (can’t
change).
• There is no guarantee that buying more memory will
always result in better performance (FIFO anomaly or
Belady's anomaly).
LRU Policy
For the program in Figure 3.8. Throughout the program, 11 page requests
are issued, but they cause only 8 page interrupts.

Page requested:  A  B  A  C  A  B  D  B  A  C  D
Page frame 1:    A  A  A  A  A  A  D  D  A  A  D
Page frame 2:    -  B  B  C  C  B  B  B  B  C  C
Interrupt:       *  *     *     *  *     *  *  *
Time:            1  2  3  4  5  6  7  8  9  10 11
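The two traces can be reproduced by a small simulation (a sketch; `count_faults` is an illustrative name): the same 11 requests with 2 page frames cause 9 page interrupts under FIFO but only 8 under LRU.

```python
# Count page faults for a reference string under FIFO or LRU with a
# fixed number of page frames.
def count_faults(requests, capacity, policy):
    frames = []                       # pages currently in memory
    faults = 0
    for page in requests:
        if page in frames:
            if policy == "LRU":       # refresh recency on every reference
                frames.remove(page)
                frames.append(page)
            continue
        faults += 1                   # page interrupt
        if len(frames) >= capacity:
            frames.pop(0)             # evict oldest-loaded (FIFO) or
                                      # least-recently-used (LRU) page
        frames.append(page)
    return faults

requests = list("ABACABDBACD")
print(count_faults(requests, 2, "FIFO"))   # 9
print(count_faults(requests, 2, "LRU"))    # 8
```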
LRU
• The efficiency of LRU is only slightly better than that of FIFO.
• LRU is a stack algorithm removal policy – increasing main memory
causes either a decrease in or the same number of page interrupts.
– LRU doesn't have the same anomaly that FIFO does.
Mechanics of Paging: Page Map Table
• Status bit indicates if page is currently in memory or not.
• Referenced bit indicates if page has been referenced recently.
– Used by LRU to determine which pages should be swapped out.
• Modified bit indicates if page contents have been altered.
– Used to determine if page must be rewritten to secondary storage
when it’s swapped out.
Page Status bit Referenced bit Modified bit Page frame
0 1 1 1 5
1 1 0 0 9
2 1 0 0 7
3 1 1 0 12

Four Possible Combinations of Modified and Referenced Bits

         Modified  Referenced  Meaning
Case 1   0         0           Not modified AND not referenced
Case 2   0         1           Not modified BUT was referenced
Case 3   1         0           Was modified BUT not referenced (impossible?)
Case 4   1         1           Was modified AND referenced
Page Replacement: The Working Set
• Working set – set of pages residing in memory that can be
accessed directly without incurring a page fault.
– Improves performance of demand page schemes.
• Locality of reference occurs with well-structured
programs.
– During any phase of its execution program references only a small
fraction of its pages.
• System must decide:
– How many pages comprise the working set?
– What’s the maximum number of pages the operating system will
allow for a working set?
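One common way to answer the first question is to take the distinct pages touched in the last w references (a sketch under that assumption; the text leaves the exact window choice to the system):

```python
# Approximate a job's working set as the set of distinct pages referenced
# within the last `window` memory references.
def working_set(reference_string, window):
    recent = reference_string[-window:]
    return set(recent)

refs = list("AABACABBBCD")
print(sorted(working_set(refs, 4)))   # ['B', 'C', 'D']
```

A larger window captures more of the job's locality but ties up more page frames, which is exactly the trade-off the operating system must settle.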
Pros & Cons of Demand Paging
• First scheme in which a job was no longer constrained by
the size of physical memory (virtual memory).
• Uses memory more efficiently than previous schemes
because sections of a job used seldom or not at all aren’t
loaded into memory unless specifically requested.
• Increased overhead caused by tables and page interrupts.
Segmented Memory Allocation
• Based on common practice by programmers of structuring
their programs in modules (logical groupings of code).
– A segment is a logical unit such as: main program,
subroutine, procedure, function, local variables, global
variables, common block, stack, symbol table, or array.
• Main memory is not divided into page frames because size
of each segment is different.
– Memory is allocated dynamically.
Segment Map Table (SMT)
• When a program is compiled, segments are set up
according to program’s structural modules.
• Each segment is numbered and a Segment Map Table
(SMT) is generated for each job.
– Contains segment numbers, their lengths, access rights,
status, and (when each is loaded into memory) its
location in memory.
Tables Used in Segmentation
• Memory Manager needs to track segments in memory:
1. Job Table (JT) lists every job in process (one for whole
system).
2. Segment Map Table lists details about each segment (one
for each job).
3. Memory Map Table monitors allocation of main memory
(one for whole system).
Pros & Cons of Segmentation
• Compaction.
• External fragmentation.
• Secondary storage handling.
• Memory is allocated dynamically.
Segmented/Demand Paged Memory Allocation
• Evolved from combination of segmentation and demand
paging.
– Logical benefits of segmentation.
– Physical benefits of paging.
• Subdivides each segment into pages of equal size, smaller
than most segments, and more easily manipulated than
whole segments.
• Eliminates many problems of segmentation because it uses
fixed length pages.
4 Tables Are Used in Segmented/Demand Paging
1. Job Table lists every job in process (one for whole
system).
2. Segment Map Table lists details about each segment (one
for each job).
– E.g., protection data, access data.
3. Page Map Table lists details about every page (one for
each segment).
– E.g., status, modified, and referenced bits.
4. Memory Map Table monitors allocation of page frames in
main memory (one for whole system).
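Address translation under this scheme takes two table lookups. A minimal sketch (the 256-line page size and the table layout are assumptions for illustration, not from the text):

```python
# Segmented/demand paged address translation: a virtual address is
# (segment, displacement); the SMT locates the segment's PMT, and the
# displacement splits into a page number and an offset within the page.
PAGE_SIZE = 256

def translate(segment, displacement, smt):
    pmt = smt[segment]                  # SMT entry -> this segment's PMT
    page, offset = divmod(displacement, PAGE_SIZE)
    frame = pmt[page]                   # PMT entry -> page frame number
    return frame * PAGE_SIZE + offset   # absolute address

smt = {0: {0: 3, 1: 7},   # segment 0 has two pages, in frames 3 and 7
       1: {0: 5}}         # segment 1 has one page, in frame 5
print(translate(0, 300, smt))   # page 1, offset 44 -> 7*256 + 44 = 1836
```

The double lookup on every reference is the extra overhead the next slide mentions, and the reason many systems add associative memory to speed it up.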
Pros & Cons of Segmented/Demand Paging
• Overhead required for the extra tables.
• Time required to reference segment table and page table.
• Logical benefits of segmentation.
• Physical benefits of paging.
• To minimize number of references, many systems use
associative memory to speed up the process.
Virtual Memory (VM)
• Even though only a portion of each program is stored in
memory, virtual memory gives appearance that programs
are being completely loaded in main memory during their
entire processing time.
• Shared programs and subroutines are loaded “on demand,”
reducing storage requirements of main memory.
• VM is implemented through demand paging and
segmentation schemes.
Comparison of VM With Paging and With Segmentation

Virtual memory with paging              Virtual memory with segmentation
Allows internal fragmentation           Doesn't allow internal
within page frames                      fragmentation
Doesn't allow external                  Allows external fragmentation
fragmentation
Programs are divided into               Programs are divided into
equal-sized pages                       unequal-sized segments
Absolute address calculated using       Absolute address calculated using
page number and displacement            segment number and displacement
Requires PMT                            Requires SMT
Advantages of VM
• Works well in a multiprogramming environment because most
programs spend a lot of time waiting.
• Job’s size is no longer restricted to the size of main memory (or the
free space within main memory).
• Memory is used more efficiently.
• Allows an unlimited amount of multiprogramming.
• Eliminates external fragmentation when used with paging and
eliminates internal fragmentation when used with segmentation.
• Allows a program to be loaded multiple times occupying a different
memory location each time.
• Allows the sharing of code and data.
• Facilitates dynamic linking of program segments.
Disadvantages of VM
• Increased processor hardware costs.
• Increased overhead for handling paging interrupts.
• Increased software complexity to prevent thrashing.
Key Terms
• address resolution
• associative memory
• demand paging
• displacement
• FIFO anomaly
• first-in first-out (FIFO) policy
• Job Table (JT)
• least-recently-used (LRU) policy
• locality of reference
• Memory Map Table (MMT)
• page
• page fault
• page fault handler
• page frame
• Page Map Table (PMT)
• page replacement policy
• page swap
• paged memory allocation
• reentrant code
• segment
• Segment Map Table (SMT)
Key Terms - 2
• segmented memory allocation
• segmented/demand paged
memory allocation
• thrashing
• virtual memory
• working set