Eero Ahonen wrote:
> It's not about Linux, it's about filesystems. How many times have you
> actually managed to measure any benefit from defragging an NTFS partition?
I have, in very rare cases. But that's when I've created a file in a way
that (for example) made 300,000 fragments of it, which happens very rarely.
(It was screwing up some other disk analysis tool which apparently decided
to allocate fixed memory blocks for stuff and not check they were big enough.)
The thing is, the files don't fragment if you don't write to them, and the
files that you want to read fast and sequentially are generally things like
programs or images or something like that, where you write them all in one
chunk to start with. Who cares if your mailbox file is fragmented? Plus,
Windows has long had an API to preallocate files, so if you're doing
something like copying files, the copy doesn't get fragmented anyway.
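For illustration, that preallocate-then-copy idea can be sketched like this (a hedged sketch, not any particular tool's code; on Windows, Python's truncate() ends up calling SetEndOfFile, so the filesystem knows the final size up front and can hand out one contiguous extent; the file names and buffer size are made up):

```python
import os
import shutil

def copy_preallocated(src: str, dst: str, bufsize: int = 1 << 20) -> None:
    """Copy src to dst, growing dst to its final size first so the
    filesystem can allocate the whole file in one go instead of
    growing it piecemeal (which is what causes fragmentation)."""
    total = os.path.getsize(src)
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        fout.truncate(total)   # preallocate: SetEndOfFile on Windows
        fout.seek(0)
        shutil.copyfileobj(fin, fout, bufsize)

# usage sketch:
# copy_preallocated("big_input.bin", "big_output.bin")
```

Whether the preallocation actually yields a contiguous run still depends on free-space layout, of course; it just gives the allocator the chance.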
One good way to get fragments is to compress a file. Since it's on-the-fly
compression, what happens is that you write a (I think) 64K segment of file,
and when you move away from it, it gets compressed down to however many 4K
segments it'll fit into, then rewritten (either in the same place or
elsewhere). So if you compress a big file after the fact, you'll find it
has bunches of fragments. They're all right near each other, but they're
fragments because there are gaps in between that something else can now use.
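Back-of-the-envelope, here's why each compressed unit leaves a gap (the 64K unit and 4K cluster sizes are the ones assumed above; the example compression ratio is made up):

```python
UNIT = 64 * 1024      # NTFS compression unit (64K, as assumed above)
CLUSTER = 4 * 1024    # cluster size (assumed 4K)

def clusters_used(compressed_bytes: int) -> int:
    """Clusters a compressed unit occupies, rounded up."""
    return -(-compressed_bytes // CLUSTER)  # ceiling division

# A 64K unit is 16 clusters uncompressed. If it compresses to,
# say, 23,000 bytes, it needs only 6 clusters, freeing 10 behind it:
used = clusters_used(23_000)        # 6 clusters written
gap = UNIT // CLUSTER - used        # 10 clusters of gap
```

Repeat that for every 64K unit of a big file and you get a long run of small extents with reusable holes between them, which is exactly the fragmentation pattern described above.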
One interesting thing Windows does is rearrange *where*
on the disk files are as it defrags, based on usage (and based on the defrag
tool you use, of course). So it's not just "defragging files", but "putting
an EXE close to its DLLs" and "putting programs you run at login all close
together" and stuff like that.
--
Darren New, San Diego CA, USA (PST)
Why is there a chainsaw in DOOM?
There aren't any trees on Mars.