Week 4 - Paging and Swapping



This week I learned a different approach that solves the fragmentation issues of address translation for virtualized memory: divide memory into fixed-size chunks. The idea is called paging. Instead of approaching it like segmentation, where we divide an address space into variable-sized chunks for the code, heap, and stack, this time we divide it into fixed-size units called pages. The division happens to both the virtual address space and physical memory: the virtual address space chunks are called pages and the physical memory chunks are called frames. An address translation still needs to happen, but this time the bits of the virtual address are used to map to a physical address. Each virtual address is split into a virtual page number (the upper bits) and an offset (the lower bits); the number of VPN bits determines how many pages the address space has, and the number of offset bits determines the page size. The OS builds a page table for each process, and each page table entry also records other important information about the page, such as a valid bit and a reference bit. To translate, the MMU uses the virtual page number to index the page table and find the page frame number, and the physical address is formed by combining that page frame number with the offset from the virtual address (a small sketch of this is below). Paging solves the fragmentation issues but raises new problems: space, since a page table needs to be stored for every process, and speed, since every memory access now also requires looking up the page table to map the page number.
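
To make the bit manipulation concrete, here is a minimal sketch of a single-level translation, assuming 16-bit virtual addresses, 4 KB pages, and a made-up page table; the sizes and the page_table contents are illustrative assumptions, not taken from any real system.

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12                   /* 4 KB pages -> 12 offset bits */
#define PAGE_SIZE  (1u << PAGE_SHIFT)
#define NUM_PAGES  16                   /* 16-bit VA / 4 KB = 16 pages  */

/* Hypothetical page table: index = VPN, value = page frame number (PFN). */
static uint32_t page_table[NUM_PAGES] = { [0] = 7, [1] = 3, [2] = 12 };

uint32_t translate(uint32_t vaddr) {
    uint32_t vpn    = vaddr >> PAGE_SHIFT;      /* upper bits pick the page  */
    uint32_t offset = vaddr & (PAGE_SIZE - 1);  /* lower bits stay unchanged */
    uint32_t pfn    = page_table[vpn];          /* look up the frame number  */
    return (pfn << PAGE_SHIFT) | offset;        /* frame bits + same offset  */
}

int main(void) {
    uint32_t va = 0x1ABC;                       /* VPN = 1, offset = 0xABC   */
    printf("VA 0x%04x -> PA 0x%05x\n", (unsigned)va, (unsigned)translate(va));
    return 0;
}
```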

In addition, this week I learned how to approach the space and speed problems of paging, starting with the translation lookaside buffer, or TLB. The TLB speeds up address translation by acting as a cache that stores the most recently used mappings: on each access the hardware checks whether the virtual page number matches an entry in the TLB, and if it does, that entry already contains the physical frame number needed to form the physical address. If there is no match, the page table actually has to be accessed to find the physical frame number, a new mapping is installed in the TLB, and then the physical address is formed. This works because of locality, since addresses that were accessed recently, or that are close to each other, tend to be accessed again (a sketch of a TLB lookup is below).

This may solve the speed issue, but there is still a problem with space, since a linear page table can have millions of entries. To solve this, multi-level paging is considered. Multi-level paging introduces a hierarchical structure so that only the parts of the page table that are actually needed get allocated and stored. The large page table is broken into page-sized chunks, and a page directory is used to index them: the first part of the VPN bits indexes the page directory, whose entry holds the page frame number of the relevant page table chunk, and the remaining VPN bits index into that chunk to find the final page frame number (the index split is also sketched below).

Lastly, a problem that occurs is what happens when there is no space left in physical memory. To free up space, the OS temporarily moves some pages to secondary storage. A swap space is dedicated on disk, and the OS decides which pages need to stay in physical memory and swaps out the less important ones; if a swapped-out page is needed again, it is swapped back in. A present bit in each page table entry records whether the page is currently in physical memory. Which page gets swapped out is decided by a replacement policy, such as FIFO, which evicts the page that was brought in first; LRU, which evicts the page that has not been used for the longest time; random, which evicts a random page; and optimal, which evicts the page that will be used furthest in the future (a toy comparison of FIFO and LRU is sketched below).
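
As a rough illustration of the TLB idea, here is a small sketch: a tiny fixed-size cache of VPN-to-PFN mappings that gets checked first, falling back to the page table on a miss. The 4-entry size, the round-robin fill policy, and the page_table contents are all assumptions made just for this example.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define TLB_ENTRIES 4

struct tlb_entry {
    uint32_t vpn;
    uint32_t pfn;
    bool     valid;
};

static struct tlb_entry tlb[TLB_ENTRIES];
static uint32_t page_table[16];    /* hypothetical single-level page table */
static unsigned next_slot;         /* naive round-robin fill slot          */

uint32_t lookup_pfn(uint32_t vpn) {
    /* TLB hit: the mapping was used recently, no page-table access needed. */
    for (int i = 0; i < TLB_ENTRIES; i++)
        if (tlb[i].valid && tlb[i].vpn == vpn)
            return tlb[i].pfn;

    /* TLB miss: walk the page table, then cache the mapping for next time. */
    uint32_t pfn = page_table[vpn];
    tlb[next_slot] = (struct tlb_entry){ .vpn = vpn, .pfn = pfn, .valid = true };
    next_slot = (next_slot + 1) % TLB_ENTRIES;
    return pfn;
}

int main(void) {
    page_table[1] = 3;   /* pretend VPN 1 maps to frame 3 */
    printf("first lookup:  PFN %u\n", (unsigned)lookup_pfn(1));  /* miss, walks table */
    printf("second lookup: PFN %u\n", (unsigned)lookup_pfn(1));  /* hit in the TLB    */
    return 0;
}
```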
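
And for the multi-level idea, this sketch only shows how a 32-bit virtual address could be cut into a page-directory index, a page-table index, and an offset. The 10/10/12 split matches the classic 32-bit x86 layout with 4 KB pages, but the address value is made up.

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t vaddr = 0x00403ABC;                /* example virtual address */

    uint32_t pd_index = (vaddr >> 22) & 0x3FF;  /* top 10 bits -> page directory */
    uint32_t pt_index = (vaddr >> 12) & 0x3FF;  /* next 10 bits -> page table    */
    uint32_t offset   =  vaddr        & 0xFFF;  /* low 12 bits -> byte in page   */

    /* The directory entry points to one small page-table chunk, so only the
       chunks covering regions that are actually in use ever get allocated. */
    printf("PD index %u, PT index %u, offset 0x%03x\n",
           (unsigned)pd_index, (unsigned)pt_index, (unsigned)offset);
    return 0;
}
```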

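Finally, here is a toy comparison of FIFO and LRU over three frames: on a miss both policies pick a victim frame, FIFO by when the page was loaded and LRU by when it was last used. The reference string and frame count are invented just to show the difference in behaviour.

```c
#include <stdio.h>

#define FRAMES 3

static int frames[FRAMES];      /* which page each frame holds (-1 = free)  */
static int load_time[FRAMES];   /* when the page was brought in (for FIFO)  */
static int last_used[FRAMES];   /* when the page was last touched (for LRU) */

/* Pick the frame to evict under the chosen policy. */
static int pick_victim(int use_lru) {
    int *key = use_lru ? last_used : load_time;
    int victim = 0;
    for (int i = 1; i < FRAMES; i++)
        if (key[i] < key[victim])
            victim = i;
    return victim;
}

int main(void) {
    int refs[] = {1, 2, 3, 1, 4, 2};    /* made-up reference string */
    int use_lru = 1;                    /* flip to 0 to see FIFO    */
    int faults = 0;

    for (int i = 0; i < FRAMES; i++) frames[i] = -1;

    for (int t = 0; t < (int)(sizeof refs / sizeof refs[0]); t++) {
        int hit = -1;
        for (int i = 0; i < FRAMES; i++)
            if (frames[i] == refs[t]) hit = i;

        if (hit >= 0) {
            last_used[hit] = t;          /* hit: just refresh the LRU info */
        } else {
            int v = -1;
            for (int i = 0; i < FRAMES; i++)     /* prefer a free frame    */
                if (frames[i] == -1) { v = i; break; }
            if (v == -1)
                v = pick_victim(use_lru);        /* otherwise evict a page */
            frames[v]    = refs[t];              /* "swap in" the new page */
            load_time[v] = t;
            last_used[v] = t;
            faults++;
        }
    }
    printf("page faults: %d\n", faults);
    return 0;
}
```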