Main Memory
· No external fragmentation
· But internal fragmentation exists
o Page size: 4 bytes
o Program size: 25 bytes
o No. of pages: 7 = 6 full pages (24 bytes) + 1 page holding just 1 byte
o Internal fragmentation: 4 - 1 = 3 bytes (worked out just below)
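The same calculation, written out in full:

    pages needed           = ceil(25 / 4) = 7
    memory allocated       = 7 × 4 = 28 bytes
    internal fragmentation = 28 - 25 = 3 bytes (all of it in the last page)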
· Expected fragmentation: on average, one half page per process
· When the page size is small, the overhead is higher (more pages per process, hence larger page tables), although internal fragmentation is lower
· Multiple page sizes
o Example: Solaris
· Research: variable, on-the-fly page size
· The user program views memory as one single contiguous space but, in fact, the user program is scattered throughout physical memory.
· Frame table: a data structure recording which frames are allocated, which frames are available, and the total number of frames (a minimal sketch is given below).
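A minimal sketch of such a frame table, assuming a simple allocated-flag-plus-counter model; the names and sizes here are illustrative, not taken from any particular OS:

    #include <stdbool.h>
    #include <stddef.h>

    #define NUM_FRAMES 1024            /* total number of physical frames (illustrative) */

    struct frame_table {
        bool   allocated[NUM_FRAMES];  /* true if the frame is in use          */
        size_t free_frames;            /* how many frames are still available  */
        size_t total_frames;           /* total number of frames               */
    };

    /* Allocate one free frame; returns its index, or (size_t)-1 if none is free. */
    size_t allocate_frame(struct frame_table *ft)
    {
        for (size_t i = 0; i < ft->total_frames; i++) {
            if (!ft->allocated[i]) {
                ft->allocated[i] = true;
                ft->free_frames--;
                return i;
            }
        }
        return (size_t)-1;
    }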
Hardware Support:
· Most operating systems maintain a page table per process.
· A pointer to the page table is stored in the process control block, along with the other register values.
· Hardware implementation of the page table
o The page table can be implemented as a set of dedicated registers (problematic when the page table is large)
o Alternatively, the page table is kept in main memory and the PTBR (Page Table Base Register) points to it
§ Problem: 2 memory accesses are needed to access a byte
§ Memory access is slowed down by a factor of 2 (see the arithmetic just below)
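As a quick check, using the 100 ns memory access time from the TLB example further below:

    one access to fetch the page-table entry + one access to fetch the byte
    = 100 ns + 100 ns = 200 ns, i.e. 2 × the plain memory access time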
o Standard solution: the TLB (Translation Look-aside Buffer)
o The TLB is associative, high-speed memory.
o Each entry consists of 2 parts:
§ Key (or tag)
§ Value
· TLB HIT and TLB MISS
· Associative memory: parallel search
Address translation for (p, d):
§ If p is in an associative register, take the frame # from it
§ Otherwise, get the frame # from the page table in memory (a sketch of this lookup follows below)
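A minimal sketch of that lookup path, assuming a small software model of a TLB; the structure names and sizes are illustrative only, and real hardware searches all TLB entries in parallel rather than in a loop:

    #include <stdbool.h>
    #include <stddef.h>

    #define TLB_SIZE 16

    struct tlb_entry {
        bool   valid;
        size_t page;    /* key/tag: page number p */
        size_t frame;   /* value:   frame number  */
    };

    struct tlb_entry tlb[TLB_SIZE];
    size_t page_table[1024];   /* in-memory page table, indexed by page number */

    /* Translate page number p to a frame number. */
    size_t translate(size_t p)
    {
        for (int i = 0; i < TLB_SIZE; i++) {
            if (tlb[i].valid && tlb[i].page == p)
                return tlb[i].frame;    /* TLB hit                           */
        }
        return page_table[p];           /* TLB miss: one extra memory access */
    }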
· Some TLBs store address-space identifiers (ASIDs) in each TLB entry; an ASID uniquely identifies each process and provides address-space protection for that process
· If the TLB does not support separate ASIDs, then every time a new page table is selected (e.g. on a context switch), the TLB must be flushed (erased)
· HIT RATIO
o The percentage of times that a particular page number is found in the TLB
o Example
§ Hit ratio = 80%
· TLB HIT
o Time to search the TLB = 20 nanoseconds
o Time to access memory = 100 nanoseconds
o Mapped memory access = 20 + 100 = 120 nanoseconds
· TLB MISS
o Time to search the TLB = 20 ns
o To get the frame no. from the page table in memory = 100 ns
o To access the byte = 100 ns
o Total = 20 + 100 + 100 = 220 ns
· Effective access time = 0.80 × 120 + 0.20 × 220 = 140 ns
· We suffer a 40% slowdown in memory access time (from 100 ns to 140 ns)
§ Hit ratio = 98%
· Effective access time = 0.98 × 120 + 0.02 × 220 = 122 ns
o The increased hit rate produces only a 22% slowdown in access time (the general formula is given below)
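In general, writing h for the hit ratio, t_TLB for the TLB search time (20 ns here) and t_mem for the memory access time (100 ns here), the effective access time is:

    EAT = h × (t_TLB + t_mem) + (1 - h) × (t_TLB + 2 × t_mem)
        = 0.80 × 120 + 0.20 × 220 = 140 ns   (for h = 0.80)
        = 0.98 × 120 + 0.02 × 220 = 122 ns   (for h = 0.98)

The symbols h, t_TLB and t_mem are just labels introduced for this formula; the numbers are the ones from the example above.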
Protection
· In paging, memory is protected by protection bits associated with each frame.
· The protection bits are kept in the page table.
· For every memory reference, during translation (logical to physical), the protection bits can be checked to verify that no writes are being made to a read-only page (if one is attempted, the hardware traps to the OS).
· Separate hardware support can be provided for read-only, read-write, or execute-only access by providing separate protection bits for each kind of access.
· In general, one additional bit is attached to each entry in the page table: the valid-invalid bit
· Invalid: the page is not in the process's logical address space.
· Example
o 14-bit address space => addresses 0 to 16383
o Program addresses => 0 to 10468
o Page size = 2 KB
o Any address reference to pages 6 or 7 is invalid and causes a trap.
o According to the program => addresses after 10468 are invalid.
o According to paging => addresses from 10469 to 12287 are still marked valid, because references to page 5 are valid; this is internal fragmentation (the page arithmetic is worked out below).
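Worked page arithmetic for the example above (pages numbered from 0):

    pages in a 14-bit space = 16384 / 2048 = 8 pages (pages 0 to 7)
    page 5 covers addresses 5 × 2048 = 10240 up to 6 × 2048 - 1 = 12287
    the program ends at 10468, so bytes 10469 to 12287 of page 5 are internal
    fragmentation, while pages 6 and 7 are marked invalid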
· Some systems provide hardware, a page-table length register (PTLR), to indicate the size of the page table; it is used to check whether the address is in the valid range for the process (a combined sketch of these checks follows below).
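A minimal sketch of these checks, assuming a software model of a page-table entry; the field names, the trap helper and the function are illustrative, not taken from any real kernel:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct pte {
        size_t frame;       /* frame number      */
        bool   valid;       /* valid-invalid bit */
        bool   read_only;   /* protection bit    */
    };

    /* Hypothetical trap: real hardware would transfer control to the OS here. */
    static void trap(const char *reason)
    {
        fprintf(stderr, "trap: %s\n", reason);
        exit(1);
    }

    /* Check a reference to page p (write = true for a store) and translate it.
     * ptlr models the page-table length register. */
    size_t check_and_translate(const struct pte *page_table, size_t ptlr,
                               size_t p, bool write)
    {
        if (p >= ptlr)                          /* outside the page table  */
            trap("address out of valid range");
        if (!page_table[p].valid)               /* valid-invalid bit check */
            trap("invalid page reference");
        if (write && page_table[p].read_only)   /* protection bit check    */
            trap("write to a read-only page");
        return page_table[p].frame;
    }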
Shared Pages:
· Advantage of paging: the possibility of sharing code.
· Mainly useful in time-sharing systems
o Example: 40 users running the same program, with re-entrant code = 150 KB and data = 50 KB per user
o Total requirement without sharing: 40 × (150 + 50) = 8000 KB
o Re-entrant code (pure code) never changes during execution
· In paging,
o The code is the same for all 40 users: 150 KB of shared pages (e.g. ed1, ed2, ed3)
o The data is different for each user: 50 KB × 40 = 2000 KB
o Total: 150 + 2000 = 2150 KB (instead of 8000 KB); the saving is summarised below
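The saving from sharing, written out with the figures above:

    without sharing: 40 × (150 KB + 50 KB) = 8000 KB
    with sharing:    150 KB + 40 × 50 KB   = 2150 KB
    saving:          8000 KB - 2150 KB     = 5850 KB (roughly 73%)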
· Note: heavily used programs such as editors, compilers, run-time libraries, and database systems can be shared.