Managing processes in main memory and moving data to and from secondary memory
Responsibilities
- Organization of main memory
- Allocation of memory to processes
- Dynamic relocation
- Sharing
- Protection
Memory Management
Ideally, programmers want memory that is large, fast, and non-volatile.

Memory hierarchy:
- a small amount of fast, expensive cache memory
- some medium-speed, medium-price main memory
- gigabytes of slow, cheap disk storage

The memory manager handles the memory hierarchy.
Aims
- Small access time
- Large size
- Cost effective

These lead to the design constraints: wasted memory, time complexity, and memory access overhead.
Initial approaches
- Dynamic loading
- Dynamic linking / shared libraries
- Overlays / swapping
Virtual Memory
Monoprogramming
[Figure: three monoprogramming layouts — the OS in RAM at the bottom of memory, the OS in ROM at the top, or device drivers in ROM at the top with the OS in RAM at the bottom; the user program occupies the rest of RAM.]

Simple: no special hardware, no address translation.
Problems:
- Poor CPU utilization in the presence of I/O waiting
- Poor memory utilization with many processes
- Processes must fit into memory
Idea
Subdivide memory and run more than one process at once: multiprogramming, multitasking.
[Figure: memory layout with the OS and a single 100K user process.]

How many processes do we need to keep the CPU fully utilized? This helps determine how much memory we need.
Dynamic Partitioning
[Figure: dynamic partitioning over time — processes A, B, C, and D are loaded above the OS, and as processes terminate they leave holes of varying sizes between the remaining partitions.]

[Figure: layout of a single process — code at the bottom, data above it, and the stack at the top growing downward, leaving room for the process to grow.]
Unresolved problems
1. The entire program must be in main memory
2. Maximum program size < physical memory size
3. Storage must be contiguous

Solutions:
- 1 & 2: use secondary memory as a logical extension of main memory
- 3: map virtual addresses onto a discontiguous store
Virtual Memory
Problems:
- Programs too big for main memory
- Large programs limit the degree of multiprogramming
Solution:
Keep only those parts of the programs in main memory that are currently in use
Basic idea:
A map between program-generated addresses (virtual address space) and main memory
[Figure: the MMU sits with the CPU on the CPU chip; the CPU issues virtual addresses, the MMU translates them, and only physical addresses travel on the bus to memory and the disk controller.]

The MMU is usually on the same chip as the CPU; only physical addresses leave the CPU/MMU chip.
Paging
- Physical memory is conceptually divided into fixed-size blocks called page frames.
- The user program is likewise divided into blocks of the same size, called pages.
- Allocating memory means finding enough unused frames to load the user program.
- An address translation mechanism maps virtual pages to frames via the page table.
[Figure: a 64K virtual address space divided into 4K pages, mapped onto a 32K physical memory divided into 4K page frames; some pages map to frames (e.g. 6, 5, 1, 3, 0, 4, 7) and the rest are not resident.]
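A minimal sketch of this translation in Python (the page size and page-table contents are hypothetical, chosen for illustration):

```python
PAGE_SIZE = 4096          # 4K pages => 12 offset bits
OFFSET_BITS = 12
OFFSET_MASK = PAGE_SIZE - 1

# Hypothetical page table: key = virtual page number, value = frame number.
page_table = {0: 6, 1: 5, 2: 1, 3: 3}

def translate(vaddr):
    """Split a virtual address into (page, offset) and map page -> frame."""
    page = vaddr >> OFFSET_BITS
    offset = vaddr & OFFSET_MASK
    frame = page_table[page]          # a real MMU raises a page fault if missing
    return (frame << OFFSET_BITS) | offset

# Virtual address 0x1ABC lies in page 1 (frame 5), offset 0xABC.
print(hex(translate(0x1ABC)))  # -> 0x5abc
```

Note that the offset passes through unchanged; only the page number is replaced by the frame number.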
Page-table entry bits:
- Dirty (modified) bit
- Referenced bit
- Valid bit
[Figure: 16 bytes with 4-bit addresses 0000–1111, grouped first into 2 sets (selected by the top bit), then into 4 sets (selected by the top two bits).]

With the 16 bytes divided into 8 sets of 2 bytes each, the byte at address 1011₂ falls in set 101₂ at offset 1: the high three address bits select the set and the low bit gives the offset within the set.
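The set/offset split is plain bit arithmetic; a small sketch, assuming the 8-sets-of-2-bytes layout above:

```python
# 16 bytes, 8 sets of 2 bytes each: the low bit is the offset within the set,
# the remaining high bits select the set.
def set_and_offset(addr, offset_bits=1):
    return addr >> offset_bits, addr & ((1 << offset_bits) - 1)

s, off = set_and_offset(0b1011)
print(bin(s), off)  # -> 0b101 1
```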
[Figure: a virtual address split into a page number p and a 12-bit offset d; translation replaces p with the frame number f while the offset d passes through unchanged.]
Page number (32 − 12 = 20 bits): an index into the page table; the page table holds the base address of the page in physical memory.

Page offset (12 bits): added to the base address to form the actual physical memory address.

Page size = 2^d bytes, where d is the number of offset bits.
[Figure: the CPU issues a virtual address (p, d); entry p of the page table supplies frame f, and (f, d) addresses physical memory.]
[Figure: a two-level page table — entries in the 1st-level page table point to 2nd-level page tables, whose entries hold the physical page numbers used to reach main memory.]

The 1st-level page table has pointers to 2nd-level page tables; the 2nd-level page tables hold the actual physical page numbers.
System characteristics: 8 KB pages; a 32-bit logical address divided into a 13-bit page offset and a 19-bit page number. The page number is further divided into a 10-bit first-level index p1 and a 9-bit second-level index p2. p1 indexes the 1st-level page table; p2 indexes the 2nd-level page table that p1's entry points to.
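The field extraction is bit arithmetic; a minimal sketch using the widths above:

```python
# Field widths from the example: 10-bit p1, 9-bit p2, 13-bit offset (8 KB pages).
P1_BITS, P2_BITS, OFFSET_BITS = 10, 9, 13

def split(vaddr):
    """Break a 32-bit logical address into (p1, p2, offset)."""
    offset = vaddr & ((1 << OFFSET_BITS) - 1)
    p2 = (vaddr >> OFFSET_BITS) & ((1 << P2_BITS) - 1)
    p1 = vaddr >> (OFFSET_BITS + P2_BITS)
    return p1, p2, offset

# The all-ones address yields the maximum index in every field.
print(split(0xFFFFFFFF))  # -> (1023, 511, 8191)
```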
[Figure: two-level address translation — p1 indexes the 1st-level table, p2 indexes the selected 2nd-level table, and the resulting frame number combined with the 13-bit offset forms the physical address into main memory.]
If the desired logical page number is found in the TLB, the frame number comes straight from the TLB. If it isn't found:
- Get the frame number from the page table in memory
- Replace an entry in the TLB with the logical and physical page numbers from this reference
[Figure: an example TLB with a handful of entries (one unused) mapping logical page numbers to frame numbers.]
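A sketch of this lookup-then-refill behaviour (the page-table contents, TLB size, and LRU eviction policy are illustrative assumptions; a real TLB matches all entries in parallel in hardware):

```python
import collections

page_table = {n: n * 2 for n in range(64)}   # hypothetical page table
tlb = collections.OrderedDict()              # logical page -> frame number
TLB_SIZE = 4

def lookup(page):
    """Return (frame, hit): consult the TLB first, fall back to the page table."""
    if page in tlb:
        tlb.move_to_end(page)                # mark as most recently used
        return tlb[page], True
    frame = page_table[page]                 # slow path: page table in memory
    if len(tlb) >= TLB_SIZE:
        tlb.popitem(last=False)              # evict the least recently used entry
    tlb[page] = frame                        # refill the TLB with this mapping
    return frame, False

print(lookup(5))   # first reference misses and refills the TLB
print(lookup(5))   # second reference hits
```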
Address translation
[Figure: address-translation flow — the virtual address is split into page number and offset; the TLB (and page table) is consulted, a protection check may raise a memory violation, and the referenced word may already be in the cache.]
Demand Paging
Bring a page into memory only when it is needed:
- Less I/O needed
- Less memory needed
- Faster response
- More users
Page Replacement
Page replacement completes the separation between logical memory and physical memory: a large virtual memory can be provided on top of a smaller physical memory.
Page-Replacement Algorithms
We want the lowest page-fault rate. An algorithm is evaluated by running it on a particular string of memory references (a reference string) and counting the number of page faults on that string.
[Figure: FIFO page replacement traces for the reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 — 9 page faults with 3 frames but 10 page faults with 4 frames.]

Belady's Anomaly: with some replacement algorithms (such as FIFO), the page-fault rate can increase as the number of frames increases.
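The anomaly can be reproduced with a short FIFO simulation (a sketch using the classic reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5):

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults under FIFO replacement."""
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.discard(queue.popleft())   # evict the oldest resident page
            frames.add(page)
            queue.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # -> 9
print(fifo_faults(refs, 4))  # -> 10: more frames, yet more faults
```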
Optimal Algorithm
Replace the page that will not be used for the longest period of time. Example with 4 frames and the reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5, 2, 3
[Figure: optimal-replacement trace for the reference string above with 4 frames: 6 page faults.]
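The optimal policy can likewise be simulated by evicting the resident page whose next use lies farthest in the future (a sketch using the example's reference string):

```python
def opt_faults(refs, nframes):
    """Count page faults under optimal (farthest-future-use) replacement."""
    frames, faults = set(), 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == nframes:
            future = refs[i + 1:]
            # Evict the resident page whose next use is farthest away
            # (or that is never used again).
            victim = max(frames,
                         key=lambda p: future.index(p) if p in future else len(future))
            frames.discard(victim)
        frames.add(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5, 2, 3]
print(opt_faults(refs, 4))  # -> 6
```

In practice the optimal algorithm is unrealizable, since it needs the future reference string; it serves as a lower bound for comparing real algorithms.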
Allocating memory
Search through the free-region list to find a large enough hole. If there are several choices, which one should be used?
- First fit: the first suitable hole on the list
- Next fit: the first suitable hole after the previously allocated one
- Best fit: the smallest hole that is larger than the desired region (wastes the least space?)
- Worst fit: the largest available hole (leaves the largest fragment)
- Option: maintain separate queues for different-size holes

[Figure: a free list of (address, size) holes with example allocations — 20 blocks first fit, 12 blocks next fit, 13 blocks best fit, 15 blocks worst fit.]
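The placement policies can be sketched as follows (the hole sizes are hypothetical; each function returns the index of the chosen hole and ignores hole splitting and free-list maintenance):

```python
def first_fit(holes, size):
    """Index of the first hole large enough, or None."""
    for i, h in enumerate(holes):
        if h >= size:
            return i
    return None

def best_fit(holes, size):
    """Index of the smallest hole that still fits, or None."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(fits)[1] if fits else None

def worst_fit(holes, size):
    """Index of the largest hole, if it fits, or None."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(fits)[1] if fits else None

holes = [15, 6, 25, 9]          # hypothetical free-hole sizes, in list order
print(first_fit(holes, 8))      # -> 0: the 15-unit hole comes first
print(best_fit(holes, 8))       # -> 3: the 9-unit hole wastes least
print(worst_fit(holes, 8))      # -> 2: the 25-unit hole is largest
```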
Shared Pages
Shared code:
- Read-only (reentrant) code shared among processes
- The shared code appears at the same location in the logical address space of each process

Private code and data:
- Each process keeps a separate copy of its code and data (e.g., the stack)
- A private page can appear anywhere in the physical address space

Copy on write:
- Pages may initially be shared upon a fork
- They are duplicated upon a write
External Fragmentation
[Figure: free blocks scattered between processes — a new process cannot fit even though enough total free memory exists; shifting processes together (compaction) creates one large hole.]

Problem: total memory space exists to satisfy a request, but it is not contiguous.
Solutions:
- Compaction: shuffle the memory contents to place all free memory together in one large block; requires relocatable code and is expensive
- Paging: allow a noncontiguous logical-to-physical mapping
Internal Fragmentation
[Figure: processes 0, 1, and 2 each occupy several pages; full pages waste nothing, but the last page of each process is only partially used.]

Problem: a process's logical address space is not always an exact multiple of the page size, so the last page has an unused portion.
Solution: minimize the page size.
Side effect: small pages cause frequent page faults and TLB misses.
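The wasted space in the last page is simple arithmetic; a sketch with hypothetical sizes:

```python
import math

def internal_fragmentation(program_bytes, page_size):
    """Unused bytes in the last page allocated to the program."""
    pages = math.ceil(program_bytes / page_size)
    return pages * page_size - program_bytes

# A 10,000-byte program with 4096-byte pages needs 3 pages;
# the last page wastes 12288 - 10000 = 2288 bytes.
print(internal_fragmentation(10_000, 4096))  # -> 2288
```

On average the waste is about half a page per process, which is why shrinking the page size reduces internal fragmentation.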
Protection
By using the PMTLR (Page Map Table Limit Register) and PMTBR (Page Map Table Base Register)
Sharing
Make an entry for the sharable page in every process's Page Map Table
Paging : Advantages
1. Does not require a consecutive set of memory locations to load the program into main memory.
2. Does not suffer from external fragmentation.
3. Memory utilization is quite high.
4. Supports multiprogramming.
Paging : Disadvantages
1. Wastage of memory due to a per-process PMT and a system-wide MMT.
2. Internal fragmentation.
3. Extra hardware is required to speed up memory access.
More
4. The program is viewed as a linear array of bytes.
5. The user views the program as a collection of variable-sized logical units.
6. Sharing and protection are physical, not logical.
[Figure: in a single address space, growing tables such as the symbol table and the source text can collide with each other's allocated regions.]
Solution: segmentation
Give each unit its own address space
Using segments
- Each region of the process has its own segment
- Each segment can start at address 0
- Addresses within the segment are relative to the segment start
[Figure: separate segments for the symbol table, source text, constants, and call stack (segments 0–3), each starting at address 0 with its own size.]
Paging vs. segmentation:
- Need the programmer be aware of the technique? Paging: no; Segmentation: yes
- How many linear address spaces? Paging: 1; Segmentation: many
- Can the total address space exceed physical memory? Paging: yes; Segmentation: yes
- Can procedures and data be distinguished and separately protected? Paging: no; Segmentation: yes
- Is sharing of procedures between users facilitated? Paging: no; Segmentation: yes
- Why was the technique invented? Paging: more address space without buying more memory; Segmentation: break programs into logical pieces that are handled separately
Implementing segmentation
[Figure: checkerboarding — as segments of different sizes are loaded and removed over time, external fragmentation develops and must be compacted away.]

In paged segmentation, the offset is added to the segment's base address; the result is a virtual address that is then translated by paging.
Other benefits of VM
- Shared pages / memory-mapped files
- Copy on write
Copy-On-Write
Copy-on-Write (COW) allows both parent and child processes to initially share the same pages in memory. If either process modifies a shared page, only then is the page copied.
COW allows more efficient process creation as only modified pages are copied.
Free pages are allocated from a pool of zeroed-out pages.
Review
What is meant by memory management? What are the responsibilities of the memory manager? What are the design constraints? Difference between:
- physical memory and real memory
- virtual and physical memory

Different techniques to track memory usage. Different memory allocation policies. Why is multiprogramming needed? What restricts the degree of multiprogramming?
Review
Advantages, disadvantages, and how to overcome the drawbacks of the following techniques:
- Monoprogramming
- Fixed partitioning
- Dynamic partitioning

What is meant by virtual memory? Why is it needed? What is meant by internal and external fragmentation? What is meant by compaction?
Review
Paging:
- Basic principle
- Address translation
- Page-table entry details
- Benefits and drawbacks

What is meant by a 2-level page table? How is it useful? What is meant by an inverted page table? How is it useful? What is meant by associative memory? How is it useful? What hardware support is needed for paging? What are the benefits and drawbacks of big and small page sizes? How are sharing and protection achieved in paging? What is meant by PMT and MMT? What is meant by demand paging, page fault, page replacement, and Belady's anomaly?
Review
Segmentation:
- Basic principle
- Address translation
- Benefits and drawbacks
- Paged segmentation: benefits, address translation
- How are sharing and protection achieved in segmentation?
- Hardware support needed for segmentation
Review
Compare the features of paging and segmentation. What are the other benefits of virtual memory? What is meant by:
- Thrashing
- COW (copy on write)
- Memory-mapped files
Numerical Problems
Memory allocation and Page replacement
Prob #1
Consider a page size of 100 bytes and the following memory-address reference string:
120, 220, 312, 423, 211, 115, 543, 653, 234, 167, 278, 190, 225, 321, 765, 666, 333, 222, 111, 249, 339, 666
How many page faults would occur for each of the following replacement algorithms, assuming 4 page frames?
- LRU replacement
- FIFO replacement
- Optimal replacement
Prob #2
Consider a swapping system in which memory consists of the following hole sizes, in memory order: 10K, 4K, 20K, 18K, 7K, 9K, 12K, and 15K. Which hole is taken for successive memory requests of 12K, 10K, 9K, and 14K under the best-fit, first-fit, and worst-fit policies?
Prob #3
Given memory partitions of 100K, 500K, 200K, 300K, 150K, and 600K (in that order), how would the first-fit and best-fit algorithms place processes of 156K, 120K, 212K, 417K, 112K, and 426K (in that order)? Which algorithm makes the most efficient use of memory?
END