
Wednesday, September 30, 2015

FIFO - Page Replacement Algorithm


Program for FIFO Page Replacement Algorithm

#include <stdio.h>

/* Print the current contents of the page frames. */
void print(int a[], int n)
{
    int i;
    for (i = 0; i < n; i++)
        printf(" %d ", a[i]);
    printf("\n");
}

/* Return the frame index holding page e, or -1 if the page is not in memory. */
int search(int e, int a[], int n)
{
    int i;
    for (i = 0; i < n; i++)
        if (a[i] == e)
            return i;
    return -1;
}

int main(void)
{
    int prs[30], mm[10], prss, nf, i, loc, fi;
    int pf = 0;

    printf("enter Main Memory Size(no. of frames):");
    scanf("%d", &nf);
    printf("enter page ref stream size:");
    scanf("%d", &prss);
    printf("enter page ref stream:");
    for (i = 0; i < prss; i++)
        scanf("%d", &prs[i]);

    for (i = 0; i < nf; i++)        /* mark every frame as empty */
        mm[i] = -1;

    /* FIFO algorithm: fi always indexes the frame holding the oldest page,
       so the frames fill in order and are then replaced in the same order. */
    for (i = 0, fi = 0; i < prss; i++)
    {
        loc = search(prs[i], mm, nf);
        if (loc == -1)              /* page not in memory -> page fault */
        {
            printf("-Page fault-");
            pf++;
            mm[fi] = prs[i];        /* load the page into the oldest frame */
            fi = (fi + 1) % nf;     /* advance the FIFO pointer circularly */
        }
        print(mm, nf);
    }
    printf("\nTotal No. of page faults:%d\n", pf);
    return 0;
}
OUTPUT
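For example, with 3 frames and the page reference stream 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1 (the classic textbook stream), the program prints the frame contents after every reference and reports a total of 15 page faults.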

Friday, September 25, 2015

Main Memory - ppt link & paging doc

Main Memory

·        NO External Fragmentation
·        But Internal Fragmentation exists
o   Page size : 4 bytes
o   Program size : 25 bytes
o   No. of pages : 7 = 6 full pages (24 bytes) + 1 page holding the last 1 byte
o   Internal fragmentation : 4-1=3 bytes (see the short calculation sketch after this list)
·        Expected Fragmentation : On average, one half page per process
·        When the page size is small, internal fragmentation is reduced but the per-page overhead (page-table entries, bookkeeping) grows
·        Multiple page sizes
o   Ex:- Solaris
·        Research : variable on-the-fly page size
·        The user program views memory as one single contiguous space, but in fact the program is scattered throughout physical memory.
·        Frame Table: a data structure that records which frames are allocated, which frames are available, and the total number of frames.
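The internal-fragmentation arithmetic above can be checked with a tiny C helper. The page size and program size below are just the example's numbers, and the helper itself is only an illustration:

#include <stdio.h>

int main(void)
{
    int page_size = 4;        /* bytes per page (example value)  */
    int program_size = 25;    /* program size in bytes (example) */

    /* ceiling division: number of pages needed by the program */
    int pages = (program_size + page_size - 1) / page_size;    /* 7 */
    /* unused space in the last page = internal fragmentation */
    int internal_frag = pages * page_size - program_size;      /* 3 */

    printf("pages = %d, internal fragmentation = %d bytes\n",
           pages, internal_frag);
    return 0;
}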


Hardware Support:
·        Most operating systems maintain a page table per process.
·        Pointer to page table is stored in process control block along with other register values.
·        Hardware Implementation of page table
o   Page Table is implemented as set of registers (problematic when page table is large)
o   Alternative, page table is kept in main memory and PTBR(Page Table Base Register) points to page table
§  Problem: 2 memory accesses are needed to access one byte (one for the page-table entry, one for the byte itself)
§  Memory access is slowed down by a factor of 2
o   Standard solution – TLB – Translation Look-aside Buffer
o   TLB is associative, high speed memory.
o   Each entry consists of 2 parts:
§  Key or tag
§  Value

·        TLB HIT and TLB MISS
·        Associative memory – parallel search

          Address translation (p, d)
§  If p is in associative register, get frame # out
§  Otherwise get frame # from page table in memory
·        Some TLBs store address-space identifiers (ASIDs) in each TLB entry – uniquely identifies each process to provide address-space protection for that process
·        If the TLB does not support separate ASIDs, then every time a new page table is selected (for context switch), TLB must be flushed or erased.
·        HIT RATIO
o   Percentage of times that a particular page No. is found in the TLB
o   Example
§  Hit ratio=80%
·        TLB HIT
o   To search in TLB =20 nanoseconds
o   To access memory = 100 nanoseconds
o   Memory mapped access = 120 nanoseconds
·        For TLB MISS
o   To search = 20ns
o   To get page no. & frame no = 100 ns
o   To access = 100 ns
o   Total = 20+100+100=220ns
·        Effective Access Time = 0.80 × 120 + 0.20 × 220 = 140 ns
·        We suffer a 40% slowdown in memory access time (from 100 ns to 140 ns)
§  Hit ratio=98%
·        Effective Access Time = 0.98 × 120 + 0.02 × 220 = 122 ns
o   The increased hit ratio produces only a 22% slowdown in access time (see the sketch below)
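A minimal sketch of the effective-access-time calculation above; the timing constants are the ones used in the example, and the code is only illustrative:

#include <stdio.h>

int main(void)
{
    double tlb_search = 20.0;    /* ns to search the TLB          */
    double mem_access = 100.0;   /* ns for one main-memory access */
    double hit_time   = tlb_search + mem_access;                /* 120 ns */
    double miss_time  = tlb_search + mem_access + mem_access;   /* 220 ns */
    double hit_ratio[] = { 0.80, 0.98 };
    int i;

    for (i = 0; i < 2; i++) {
        double h   = hit_ratio[i];
        double eat = h * hit_time + (1.0 - h) * miss_time;
        printf("hit ratio %.2f -> effective access time = %.1f ns\n", h, eat);
    }
    return 0;   /* prints 140.0 ns and 122.0 ns */
}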
Protection
·        In paging, memory is protected by protection bits associated with each frame.
·        Protection bits are kept in page table.
·        For every memory reference, during translation (logical to physical), the protection bits can be checked to verify that no writes are being made to a read-only page (if a write is attempted, the hardware traps to the OS).
·        Separate h/w can be provided for read-only, read-write or execute-only by providing separate protection bits for each kind of access.
·        In general -> one additional bit is attached to each entry in page table:
Valid – Invalid Bit
·        Invalid – the page is not in the process’s logical address space.
·        Example
o   14-bit address space => 0 to 16383
o   Program addresses => 0 to 10468
o   Page size = 2 KB
o   Any address reference to pages 6 or 7 is invalid and causes a trap.
o   According to the program => addresses after 10468 are invalid.
o   According to paging => addresses from 10469 to 12287 are still treated as valid, because they fall in page 5 whose valid bit is set; this unused tail of page 5 is internal fragmentation. (A small check sketch follows below.)
·        Some systems provide hardware, a page-table length register (PTLR), to indicate the size of the page table; it is checked to verify that the address is in the valid range for the process.
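The valid/invalid-bit example above can be sketched in a few lines of C; the page-table contents and the trap message are assumptions made only for this illustration:

#include <stdio.h>

#define PAGE_SIZE 2048          /* 2 KB pages; 14-bit addresses -> pages 0..7 */

/* Pages 0..5 are valid (the program ends at address 10468, inside page 5);
   pages 6 and 7 are invalid. Illustrative values only. */
int valid[8] = { 1, 1, 1, 1, 1, 1, 0, 0 };

void reference(int addr)
{
    int p = addr / PAGE_SIZE;   /* page number */
    int d = addr % PAGE_SIZE;   /* page offset */
    if (p >= 8 || !valid[p])
        printf("address %5d -> page %d: TRAP (invalid reference)\n", addr, p);
    else
        printf("address %5d -> page %d, offset %4d: valid\n", addr, p, d);
}

int main(void)
{
    reference(10468);   /* last program address: page 5, valid       */
    reference(11000);   /* past the program but still page 5: passes */
    reference(12288);   /* first address of page 6: invalid, traps   */
    return 0;
}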


Shared Pages:

·        Advantage of paging: possibility of sharing code.
·        Mainly in time sharing systems
o   Example : 40 users using same program with re-entrant code=150KB & data=50KB
o   Total requirement : 8000 KB (40 users × 200 KB each)
o   Re-entrant code or pure code : never changes during execution time
·        In paging,
o   The code is the same for all 40 users: 150 KB kept once in memory as shared pages (ed1, ed2, ed3)
o   Data is different – 50 KB X 40 =>2000KB
o   Total : 2150KB (instead of 8000KB)
·        Note :- Heavily used programs like editors, compilers, run-time libraries, and DB systems can be shared.



CONTIGUOUS MEMORY ALLOCATION

Contiguous Memory Allocation
·         Memory is divided into 2 partitions:
1.      OS
2.      User Process
·         The OS can be placed in either low or high memory; the choice usually depends on the location of the interrupt vector, which is commonly in low memory, so the OS is usually placed in low memory as well.
·         Memory allocation refers to the way memory is allotted to the multiple processes that are in the input queue waiting to be brought into memory.
·         In Contiguous memory allocation, each process is contained in a single contiguous section of memory.
Memory mapping and protection
·         Relocation Register & Limit Register are used for memory mapping and protection.
·         Relocation Register – Smallest Physical address (Base or starting address)
·         Limit Register – range of logical addresses ( offset)
·         The MMU maps each logical address dynamically by adding the value in the relocation register; the resulting physical address is sent to memory.
·         Protection – every address produced by the CPU is checked against the limit and relocation registers, so the OS and other users' code and data are protected from being modified by the user process (see the sketch after this list).
·         Dynamic change in OS size can be obtained by using relocation register
o   For example, if the OS contains code and buffer space for a device driver, and that driver is not in use, the space can be released for other purposes by reducing the size of the OS (transient OS code).
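A minimal sketch of the relocation/limit check described above; the register values are made-up numbers used only for illustration:

#include <stdio.h>

int relocation = 14000;   /* base (smallest) physical address of the process */
int limit      = 12000;   /* size of the process's logical address space     */

void mmu_access(int logical)
{
    if (logical < 0 || logical >= limit)
        printf("logical %6d: TRAP - addressing error\n", logical);
    else
        printf("logical %6d -> physical %6d\n", logical, relocation + logical);
}

int main(void)
{
    mmu_access(350);      /* valid: mapped to 14350             */
    mmu_access(12500);    /* beyond the limit register: trapped */
    return 0;
}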

Memory Allocation
·         Multiple partition Method:
o   Fixed Size partitions:
§  Each Partition has exactly one process.
§  Degree of multiprogramming is bound by the number of partitions.
§  When a partition is free, a process is selected from the input queue and is loaded into free partition.
§  When the process terminates, the partition becomes available for another process.
·         Originally called MFT in the IBM OS/360 operating system.
·         This restricts both the number of simultaneous processes and the maximum size of each process, and is no longer used.
o   Variable Size Partitions
·         The OS keeps a table of unused (free) memory blocks (holes) and finds a hole of a suitable size whenever a process needs to be loaded into memory.
·         Initially – all memory is available – large block of available memory – hole.
·         Later – memory contains a set of holes of variable sizes.
·         Processes wait in the input queue; the scheduler selects a process, and memory is allocated to it when a free hole of the required size is available – otherwise the process waits.
·         A large hole can be split into 2 parts – one part is allocated and the other is returned to the set of holes.
·         2 holes are merged when a newly freed hole is adjacent to an existing hole.
·          There are many different strategies for finding the "best" allocation of memory to processes, including the three most commonly discussed:
1.      First fit - Search the list of holes until one is found that is big enough to satisfy the request, and assign a portion of that hole to that process. Whatever fraction of the hole not needed by the request is left on the free list as a smaller hole. Subsequent requests may start looking either from the beginning of the list or from the point at which this search ended.
2.      Best fit - Allocate the smallest hole that is big enough to satisfy the request. This saves large holes for other process requests that may need them later, but the resulting unused portions of holes may be too small to be of any use, and will therefore be wasted. Keeping the free list sorted can speed up the process of finding the right hole.
3.      Worst fit - Allocate the largest hole available, thereby increasing the likelihood that the remaining portion will be usable for satisfying future requests.
·         Simulations show that both first fit and best fit are better than worst fit in terms of both time and storage utilization. First fit and best fit are about equal in storage utilization, but first fit is faster. (A first-fit sketch appears below.)
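A compact sketch of the first-fit strategy over a free list; the hole sizes, the request sizes, and the array-based list are assumptions made for this illustration (best fit and worst fit differ only in how the hole is chosen):

#include <stdio.h>

#define NHOLES 5

int hole[NHOLES] = { 100, 500, 200, 300, 600 };   /* free-hole sizes in KB (illustrative) */

/* First fit: take the first hole big enough for the request and leave the
   remainder on the free list. Returns the hole index used, or -1 if none fits. */
int first_fit(int request)
{
    int i;
    for (i = 0; i < NHOLES; i++) {
        if (hole[i] >= request) {
            hole[i] -= request;
            return i;
        }
    }
    return -1;
}

int main(void)
{
    int requests[] = { 212, 417, 112, 426 };
    int i;
    for (i = 0; i < 4; i++) {
        int h = first_fit(requests[i]);
        if (h >= 0)
            printf("request %3d KB -> placed in hole %d (remaining %3d KB)\n",
                   requests[i], h, hole[h]);
        else
            printf("request %3d KB -> must wait (no hole is large enough)\n",
                   requests[i]);
    }
    return 0;
}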
Fragmentation
  • External Fragmentation exists when there is enough total memory space to satisfy a request but the available spaces are not contiguous.
  • Worst case – there is a block of free memory between every 2 processes; if all these small pieces formed one block instead, we might be able to run several more processes.
  • Both best-fit and first-fit suffer with external fragmentation.
  • 50-percent Rule:
    • Statistical analysis of first fit shows that, even with some optimization, for N allocated blocks another 0.5N blocks will be lost to fragmentation – that is, up to one-third of memory may be unusable.
  • Internal fragmentation – unused memory that is internal to a partition.
    • When memory is partitioned into fixed-size blocks and allocated in units of block size, the memory allocated to a process may be slightly larger than the requested memory; the unused difference is internal fragmentation.
  • COMPACTION - Solution for External Fragmentation
    • Shuffle the memory contents so as to place all free memory together in one large block.
    • Possible only when relocation is dynamic, i.e., address binding is done at execution time.
  • Another solution (paging and segmentation) is to allow processes to use non-contiguous blocks of physical memory, with a separate relocation register for each block.