<P> External fragmentation tends to be less of a problem in file systems than in primary memory (RAM) allocation, because programs usually require their RAM allocation requests to be fulfilled with contiguous blocks, whereas file systems are typically designed to be able to use any collection of available blocks (fragments) to assemble a file that appears logically contiguous. Therefore, if a highly fragmented file or many small files are deleted from a full volume and a new file equal in size to the newly freed space is then created, the new file will simply reuse the fragments freed by the deletion. If a single file was deleted, the new file will be just as fragmented as that old file was, but in any case there is no barrier to using all of the (highly fragmented) free space to create the new file. In RAM, on the other hand, the allocator often cannot assemble a large block from small noncontiguous free blocks to meet a request, so the request cannot be fulfilled and the program cannot proceed to do whatever it needed that memory for (unless it can reissue the request as a number of smaller separate requests). </P>
<P> The most severe problem caused by fragmentation is making a process or system fail through premature resource exhaustion: if a contiguous allocation is needed and cannot be satisfied, failure occurs. Fragmentation causes this to occur even when there is enough of the resource in total, just not a contiguous amount. For example, if a computer has 4 GiB of memory of which 2 GiB are free, but the memory is fragmented into an alternating sequence of 1 MiB used, 1 MiB free, then a request for 1 GiB of contiguous memory cannot be satisfied even though 2 GiB in total are free.
</P>
<P> To avoid this, the allocator may, instead of failing, trigger a defragmentation (or memory compaction) cycle or other resource reclamation, such as a major garbage collection cycle, in the hope of then being able to satisfy the request. This allows the process to proceed, but can severely impact performance. </P>
<P> Fragmentation causes performance degradation for a number of reasons. Most basically, it increases the work required to allocate and access a resource. For example, on a hard drive or tape drive, sequential reads are very fast, but seeking to a different address is slow, so reading or writing a fragmented file requires numerous seeks and is thus much slower, in addition to causing greater wear on the device. Further, if a resource is not fragmented, allocation requests can be satisfied simply by returning a single block from the start of the free area. If it is fragmented, however, the request requires either searching for a large enough free block, which may take a long time, or fulfilling the request with several smaller blocks (if this is possible), which leaves the allocation itself fragmented and requires additional overhead to manage the several pieces. 
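<P> The 4 GiB example and the effect of compaction can be illustrated with a minimal sketch: a toy block map (not a real allocator; all names here are illustrative) where even-numbered 1 MiB blocks are used and odd-numbered ones are free. </P>

```python
# Toy model of the example above: 4 GiB of memory as 4096 blocks of 1 MiB,
# alternating used/free. Purely illustrative, not a real allocator.
BLOCKS = 4096
memory = ["used" if i % 2 == 0 else "free" for i in range(BLOCKS)]

def largest_free_run(mem):
    """Length of the longest contiguous run of free blocks."""
    best = run = 0
    for block in mem:
        run = run + 1 if block == "free" else 0
        best = max(best, run)
    return best

total_free = memory.count("free")                # 2048 blocks = 2 GiB free
can_allocate_1gib = largest_free_run(memory) >= 1024

# 2 GiB are free in total, yet no contiguous 1 GiB region exists:
print(total_free, can_allocate_1gib)             # prints: 2048 False

# Compaction slides the used blocks together, merging all free space
# into one contiguous run that can now satisfy the 1 GiB request:
compacted = sorted(memory, key=lambda b: b == "free")
print(largest_free_run(compacted) >= 1024)       # prints: True
```

<P> The model shows why the request fails before compaction: every free run is exactly one block long, so the longest contiguous free region is 1 MiB despite 2 GiB of total free space. </P>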
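<P> The allocation overhead described above can also be sketched with a hypothetical first-fit free-list allocator (the names and layout are assumptions for illustration, not any particular system's implementation): the more small free blocks fragmentation creates, the more blocks the search must examine, and an oversized block must be split, leaving the free list to manage more pieces. </P>

```python
# Hypothetical free list of (start, size) blocks, in block units.
free_list = [(0, 4), (10, 2), (20, 8), (40, 16)]

def first_fit(free, size):
    """First-fit search: walk the free list until a block of at least
    `size` is found. Returns (start, blocks_examined), or (None, n)
    if no single free block is large enough."""
    for i, (start, length) in enumerate(free):
        if length >= size:
            if length > size:
                # Split: hand out the front, keep the remainder free.
                free[i] = (start + size, length - size)
            else:
                free.pop(i)                      # exact fit: consume it
            return start, i + 1
    return None, len(free)

# Must skip two blocks that are too small before finding a fit:
addr, examined = first_fit(free_list, 8)         # addr=20, examined=3

# An inexact fit splits the block, adding bookkeeping for the remainder:
addr2, _ = first_fit(free_list, 6)               # takes (40, 16), leaves (46, 10)
```

<P> With an unfragmented free area the first block would always fit, so the search would examine exactly one entry; fragmentation turns each allocation into a linear scan over many small blocks. </P>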
