Filesystem fragmentation at the level of a set of files
Posted by trismarck on Super User, 2012-03-10
A file is stored in blocks by the file system; a block is the smallest unit of space the file system can allocate to a file. The classical definition of a fragmented file is one whose blocks are 'scattered' (physically non-contiguous) around the hard drive.
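For instance, on Linux you can see how a file maps onto blocks from stat(2). A minimal sketch (the path passed in is just an example):

import os

def block_usage(path):
    # st_blksize is the filesystem's preferred I/O block size;
    # st_blocks counts 512-byte units actually allocated to the file (POSIX).
    st = os.stat(path)
    allocated = st.st_blocks * 512  # bytes allocated on disk
    print(f"{path}: size={st.st_size} bytes, "
          f"fs block size={st.st_blksize}, "
          f"allocated={allocated} bytes")

block_usage("/etc/hostname")  # any existing file works here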
What I want to ask about is a second type of fragmentation I've come up with. Let's suppose we install a program that consists of many files. Each time the program starts, it loads the contents of those files in the same sequence. Now, even if every individual file on the disk is defragmented, the files themselves (as opposed to the blocks making up each file) may still be scattered across the disk, and so the program's launch time will be longer. In fact, launch time could even increase after defragmenting the disk, because the defragmentation process not only makes each file contiguous but also moves files around to consolidate free space.
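One way to check for this inter-file scattering on Linux is to ask the kernel for each file's first physical block via the FIBMAP ioctl (which requires root) and see how far apart the files of one program actually sit. A rough sketch, assuming a filesystem that supports FIBMAP and a program directory passed on the command line:

import fcntl
import os
import struct
import sys

FIBMAP = 1  # ioctl request from <linux/fs.h>: maps a logical block to a physical block

def first_physical_block(path):
    """Return the physical block number of the file's first logical block."""
    with open(path, 'rb') as f:
        buf = struct.pack('i', 0)                  # ask about logical block 0
        res = fcntl.ioctl(f.fileno(), FIBMAP, buf) # kernel fills in the physical block
        return struct.unpack('i', res)[0]

def scatter_report(directory):
    blocks = []
    for name in sorted(os.listdir(directory)):
        path = os.path.join(directory, name)
        if os.path.isfile(path):
            blocks.append((first_physical_block(path), name))
    blocks.sort()
    for blk, name in blocks:
        print(f"block {blk:>10}  {name}")
    if blocks:
        print(f"span: {blocks[-1][0] - blocks[0][0]} blocks for {len(blocks)} files")

scatter_report(sys.argv[1])  # e.g. /opt/someprogram -- must be run as root

A large span relative to the total size of the files suggests exactly the kind of scattering described above, even when every individual file reports zero fragments.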
The questions:
- Is the type of fragmentation I mentioned relevant to file system performance?
- Is it possible to remedy this kind of fragmentation, and if so, how would you do it? (One possible approach is sketched below.)
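As a rough illustration of one possible remedy (not a guaranteed one, since it relies on the allocator placing freshly written files near each other): copy the files, in the order the program reads them, into a new directory on a volume with plenty of contiguous free space, then swap the directories. A hedged sketch, where access_order is an assumed list of paths in startup read order:

import os
import shutil

def relayout(files_in_access_order, dest_dir):
    """Rewrite files one after another so a filesystem with contiguous
    free space is likely to allocate them near each other on disk.
    Whether this actually helps depends entirely on the allocator."""
    os.makedirs(dest_dir, exist_ok=True)
    for src in files_in_access_order:
        shutil.copy2(src, os.path.join(dest_dir, os.path.basename(src)))

# access_order would come from tracing the program's startup reads,
# e.g. with strace; the list shown here is purely hypothetical.
access_order = ["/opt/app/core.dll", "/opt/app/data.pak", "/opt/app/ui.dll"]
relayout(access_order, "/opt/app.relaid")

Windows does something similar automatically: its prefetcher records launch access patterns and the built-in defragmenter uses them to lay out frequently used files, which is the same idea applied by the OS itself.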
Also, I'm not sure whether this question belongs on Super User or on Server Fault (I'd guess filesystem fragmentation matters more in a server environment).