<P> A TLB has a fixed number of slots containing page table entries and segment table entries; page table entries map virtual addresses to physical addresses and intermediate table addresses, while segment table entries map virtual addresses to segment addresses, intermediate table addresses and page table addresses. The virtual memory is the memory space as seen from a process; this space is often split into pages of a fixed size (in paged memory), or less commonly into segments of variable sizes (in segmented memory). The page table, generally stored in main memory, keeps track of where the virtual pages are stored in physical memory. Accessing a byte this way requires two memory accesses: one for the page table entry and one for the byte itself. First, the page table is looked up for the frame number. Second, the frame number combined with the page offset gives the actual address. Thus any straightforward virtual memory scheme would have the effect of doubling the memory access time. Hence, the TLB is used to reduce the time taken to access memory locations in the page table method. The TLB is a cache of the page table, representing only a subset of the page table contents. </P>

<P> With respect to physical memory addresses, a TLB may reside between the CPU and the CPU cache, between the CPU cache and primary storage memory, or between levels of a multi-level cache. The placement determines whether the cache uses physical or virtual addressing. If the cache is virtually addressed, requests are sent directly from the CPU to the cache, and the TLB is accessed only on a cache miss. If the cache is physically addressed, the CPU does a TLB lookup on every memory operation and the resulting physical address is sent to the cache. </P>

<P> In a Harvard architecture or modified Harvard architecture, a separate virtual address space or memory access hardware may exist for instructions and data.
This can lead to distinct TLBs for each access type: an Instruction Translation Lookaside Buffer (ITLB) and a Data Translation Lookaside Buffer (DTLB). Various benefits have been demonstrated with separate data and instruction TLBs. </P>

<P> The TLB can be used as a fast lookup hardware cache. The figure shows the working of a TLB. Each entry in the TLB consists of two parts: a tag and a value. If the tag of the incoming virtual address matches the tag in the TLB, the corresponding value is returned. Since the TLB lookup is usually a part of the instruction pipeline, searches are fast and cause essentially no performance penalty. However, to be able to search within the instruction pipeline, the TLB has to be small. </P>
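<P> The page-table translation described above (split the virtual address into a page number and an offset, look up the frame number, then combine it with the offset) can be sketched as follows. The page size and page-table contents are illustrative assumptions; real hardware typically walks multi-level tables in fixed-width binary fields.

```python
# Minimal sketch of page-table address translation (assumed 4 KiB pages
# and an assumed single-level page table given as a dict).
PAGE_SIZE = 4096

# Page table: virtual page number -> physical frame number (illustrative).
page_table = {0: 7, 1: 3, 2: 5}

def translate(virtual_address):
    page_number = virtual_address // PAGE_SIZE   # first memory access: page table entry
    offset = virtual_address % PAGE_SIZE         # page offset is carried over unchanged
    frame_number = page_table[page_number]
    # Frame number combined with the page offset gives the physical address;
    # fetching the byte there is the second memory access.
    return frame_number * PAGE_SIZE + offset

# Virtual address 4100 is page 1, offset 4 -> frame 3 -> physical 12292.
print(translate(4100))
```

This makes the doubling of access time concrete: every call to `translate` implies one page-table read before the actual data read. </P>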
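<P> The tag/value lookup can be sketched in software, with the TLB sitting in front of the page table: a matching tag returns the cached frame number immediately, and only a miss falls back to the page-table walk. The capacity, page-table contents, and FIFO eviction policy here are illustrative assumptions, not a model of any particular CPU.

```python
# Minimal sketch of a TLB as a small tag/value cache over a page table.
from collections import OrderedDict

page_table = {0: 7, 1: 3, 2: 5}  # virtual page number -> frame number (assumed)

class TLB:
    def __init__(self, capacity=2):
        # tag (virtual page number) -> value (physical frame number)
        self.entries = OrderedDict()
        self.capacity = capacity
        self.hits = self.misses = 0

    def lookup(self, page_number):
        if page_number in self.entries:          # TLB hit: no page-table access
            self.hits += 1
            return self.entries[page_number]
        self.misses += 1                         # TLB miss: walk the page table
        frame_number = page_table[page_number]
        if len(self.entries) >= self.capacity:   # full: evict the oldest entry (FIFO)
            self.entries.popitem(last=False)
        self.entries[page_number] = frame_number
        return frame_number

tlb = TLB()
tlb.lookup(0)   # miss: loaded from the page table
tlb.lookup(0)   # hit: served directly from the TLB
```

The small fixed capacity mirrors the hardware constraint noted above: keeping the TLB small is what lets the search complete within the instruction pipeline. </P>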
