<Li> Sorted data has some inherent features: all the values for a given key appear together. When passing sequentially through the data, the point at which the key, or a subset of the key, changes is referred to in data-processing circles as a break, or a control break. This particularly facilitates aggregating data values over subsets of a key. </Li> <Ul> <Li> Until the advent of non-volatile computer memories such as USB sticks, persistent data storage was traditionally achieved by writing the data to external block devices such as magnetic tape and disk drives. These devices typically seek to a location on the magnetic media and then read or write blocks of data of a predetermined size. In this case, the seek location on the media is the data key and the blocks are the data values. Early file systems, or disk operating systems, reserved contiguous blocks on the disk drive for data files. In those systems a file could fill up, running out of space before all the data had been written to it, so much unused space was reserved unproductively to avoid that situation. This was known as raw disk. Later file systems introduced partitions. They reserved blocks of disk space for each partition and used the allocated blocks more economically, dynamically assigning blocks of a partition to a file as needed. To achieve this, the file system had to keep track of which blocks were used or unused by data files, in a catalog or file allocation table. Though this made better use of the disk space, it resulted in fragmentation of files across the disk, with a concomitant performance overhead due to seek latency. Modern file systems reorganize fragmented files dynamically to optimize file access times. Further developments in file systems resulted in virtualization of disk drives, i.e. a logical drive can be defined as partitions from a number of physical drives.
</Li> </Ul> <Ul> <Li> Retrieving a small subset of data from a much larger set implies searching through the data sequentially, which is uneconomical. Indexes copy out keys and location addresses from the data structures in files, tables and data sets, then organize them in inverted tree structures to reduce the time taken to retrieve a subset of the original data.
In order to do this, the key of the subset of data to be retrieved must be known before retrieval begins. The most popular indexes are the B-tree and dynamic hash-key indexing methods. Indexing is yet another costly overhead of filing and retrieving data. There are other ways of organizing indexes, e.g. sorting the keys (or even the keys and the data together) and using a binary search on them. </Li> </Ul>
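The control-break pattern described above can be sketched in a few lines: read records sorted on a key, accumulate while the key stays the same, and emit an aggregate when the key changes (the break). The record layout and field values here are illustrative, not taken from the text.

```python
# Control-break aggregation over records sorted on a key (illustrative data).

def control_break_totals(records):
    """Sum the amount per key in one sequential pass over sorted records.

    Relies on the input being sorted by key: all values for a key are
    adjacent, so a key change signals a break, at which point the running
    total for the previous group is emitted.
    """
    totals = []
    current_key = None
    running = 0
    for key, amount in records:
        if key != current_key:            # control break: the key changed
            if current_key is not None:
                totals.append((current_key, running))
            current_key = key
            running = 0
        running += amount
    if current_key is not None:           # emit the final group
        totals.append((current_key, running))
    return totals

# Example: records already sorted on the key.
sales = [("east", 10), ("east", 5), ("north", 7), ("west", 2), ("west", 3)]
print(control_break_totals(sales))
# [('east', 15), ('north', 7), ('west', 5)]
```

A single sequential pass suffices precisely because sorting guarantees each group is contiguous; on unsorted data the same aggregation would need a lookup table instead.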

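The simplest indexing scheme the text mentions, sorting copied-out keys with their location addresses and binary-searching them, might look like the sketch below. Here record positions stand in for block addresses on a device, and all names are illustrative.

```python
import bisect

# A minimal index: sorted (key, address) pairs copied out of a data set,
# searched by bisection instead of scanning the data sequentially.

def build_index(records):
    """Return a sorted list of (key, address) pairs for the given records."""
    return sorted((key, addr) for addr, key in enumerate(records))

def lookup(index, key):
    """Binary-search the index; return the record's address, or None."""
    keys = [k for k, _ in index]
    i = bisect.bisect_left(keys, key)
    if i < len(index) and index[i][0] == key:
        return index[i][1]
    return None

records = ["pear", "apple", "plum", "fig"]   # addresses 0..3
idx = build_index(records)
print(lookup(idx, "plum"))   # 2
print(lookup(idx, "kiwi"))   # None
```

Note the overheads the text alludes to: the index must be built (a sort) and maintained as data changes, in exchange for O(log n) lookups; B-tree and hash indexes make different versions of the same trade.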