A file system is essential to disk management. Without one, the information placed on a storage medium would be one large body of data, with no way to tell where one piece of information stops and the next begins. The history of PC file systems begins in 1981, when IBM introduced its first personal computer. That machine ran a new operating system designed by Microsoft, MS-DOS, and contained a 16-bit 8088 processor chip and two drives for low-density floppy disks. The MS-DOS file system, FAT (named for its file allocation table), provided more than enough power to format these small disk volumes and to manage hierarchical directory structures and files. The FAT file system continued to meet the needs of personal computer users even as hardware and software power increased year after year.
However, file searches and retrieval took significantly longer on large hard disks than on the original low-density floppy disks of the first IBM personal computer. By the end of the 1980s, the prediction of “a computer on every desk and in every home” was less a dream and more a reality. Personal computers now had 16-bit processors and 40 MB hard disks. Disks of that size were too big for the file system: because the file allocation table's limit was 32 MB per volume in that era, users had to partition their disks into two or more volumes (later versions of MS-DOS allowed larger disk volumes).
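The 32 MB ceiling follows directly from the on-disk arithmetic of early FAT: MS-DOS recorded the volume's sector count in a 16-bit field, and PC disks used 512-byte sectors. A quick sketch of that calculation (using those historical parameters):

```python
# Why early FAT volumes topped out at 32 MB:
# MS-DOS stored the total sector count in a 16-bit field,
# and PC disk sectors were 512 bytes each.
SECTOR_SIZE = 512          # bytes per sector
MAX_SECTORS = 2 ** 16      # largest count expressible in 16 bits

max_volume_bytes = SECTOR_SIZE * MAX_SECTORS
print(max_volume_bytes // (1024 * 1024))  # → 32 (MB)
```

Later MS-DOS versions raised the limit by widening the sector-count field and grouping sectors into larger allocation units (clusters).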
File Systems in the 1990s
In 1990, a high-performance file system (HPFS) was introduced as part of the OS/2 operating system version 1.x, specifically for large hard disks on 16-bit processor computers. On the heels of HPFS came HPFS386, which took advantage of the 32-bit 80386 processor chip. Today's personal computers include a variety of very fast processor chips and can accommodate multiple, huge hard disks. The new Windows NT file system, NTFS, is designed for optimal performance on these computers. Because of features such as speed and universality, FAT and HPFS remain popular and widely used file systems. NTFS offers consistency with these two file systems, plus the advanced functionality needed by corporations interested in greater flexibility and in data security.
Resilient File System (ReFS)
Codenamed “Protogon”, ReFS is a proprietary Microsoft file system introduced with Windows Server 2012, with the intent of becoming the next-generation file system after NTFS. The advantages of ReFS include:
- Automatic integrity checking and data scrubbing,
- Removal of the need for running chkdsk,
- Protection against data degradation,
- Built-in handling of hard disk drive failure and redundancy,
- Integration of RAID functionality,
- A switch to copy/allocate on write for data and metadata updates,
- Handling of very long paths and filenames, and
- Storage virtualization and pooling, including almost arbitrarily sized logical volumes (unrelated to the physical sizes of the drives used).
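The copy-on-write item above is the key mechanism behind crash consistency in file systems like ReFS: instead of overwriting a live block in place, the file system writes the updated data to a fresh location and only then repoints the metadata, so a crash mid-update leaves the old, consistent copy intact. A minimal illustrative sketch (not the actual ReFS implementation; the class and names are invented for this example):

```python
# Toy copy-on-write store: updates never overwrite live blocks in place.
# All names here are illustrative; this is not how ReFS is implemented.
class CowStore:
    def __init__(self):
        self.blocks = {}   # block_id -> bytes (the "disk")
        self.table = {}    # file name -> block_id (the "metadata")
        self.next_id = 0

    def write(self, name, data):
        # Allocate a brand-new block for the new data...
        block_id = self.next_id
        self.next_id += 1
        self.blocks[block_id] = data
        # ...and only then repoint the metadata in one step.
        # If a crash happened before this line, the old block and
        # the old table entry would both still be intact.
        self.table[name] = block_id

    def read(self, name):
        return self.blocks[self.table[name]]

store = CowStore()
store.write("a.txt", b"v1")
old_block = store.table["a.txt"]
store.write("a.txt", b"v2")      # new block written, then repointed
print(store.read("a.txt"))       # → b'v2'
print(store.blocks[old_block])   # → b'v1' (old copy untouched)
```

The same idea applied to metadata updates is what removes the need for `chkdsk`: there is never a half-overwritten structure on disk to repair.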
In early versions (2012–2013), ReFS was similar to or slightly faster than NTFS in most tests, but far slower when full integrity checking was enabled, a result attributed to the relative newness of ReFS. Pre-release concerns were also voiced by one blogger over Storage Spaces, the storage system designed to underpin ReFS, which reportedly could fail in a manner that prevented ReFS from recovering automatically. The ability to create ReFS volumes was removed in Windows 10's 2017 Fall Creators Update for all editions except Enterprise and Pro for Workstations, which suggests that Microsoft no longer intends ReFS as a general replacement for NTFS, at least in the near future.