You are of course right, but knowing that Mr. Tyler is using a Wintel
machine, I was just talking to him specifically, though I should have
said that in the email... Sorry for being Wintel-myopic.
Steve
"John M. Dlugosz" wrote:
>
> Stephen Lavedas wrote in message <36C### [at] virginia edu>...
> > of course you realize that there are no
> >appreciable HD savings in this version since the minimum sector size is
> >4k and so any file less than 4K in size takes up 4K anyhow...
>
> Maybe on =your= disk.
>
> In NTFS, the MFT record which stores the demographics of the file can store
> the contents too, if it is short. So there is =no= space allocated for the
> file besides the MFT record which holds the name(s), size, permissions,
> dates, etc. On some UNIX file systems, data of a few hundred bytes can be
> stuffed in the inode, which is the same principle.
>
> So on NTFS, all files take exactly 1K for the MFT (master file table)
> record, plus consume a couple of bytes in an index (which is how directories
> are kept). If the content is a few hundred bytes, it goes in the MFT too,
> and the file takes only 1K.
>
> After that, it allocates clusters, where the cluster size is fully
> configurable at format time. I use fairly small clusters to prevent just
> this kind of waste. Microsoft's documentation claims, in all its wisdom,
> that a smaller cluster size will increase fragmentation. But on a fast SCSI
> drive it's never been a problem, and I can always defragment if needed.
> Anyway, a data drive could easily have clusters of 1K or 2K, not the
> wasteful 4K. One could even use half a K per cluster.
>
> --John
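The allocation arithmetic John describes can be sketched in a few lines. This is a rough model, not NTFS itself: the 1K MFT record size comes from the post, while the resident-data threshold of 700 bytes is a hypothetical cutoff chosen for illustration (the real limit depends on what else the record holds).

```python
MFT_RECORD_BYTES = 1024      # size of one MFT record, per the post
RESIDENT_THRESHOLD = 700     # hypothetical max for data stored in the record

def bytes_on_disk(file_size: int, cluster_size: int) -> int:
    """Total space consumed: the MFT record, plus any allocated clusters."""
    if file_size <= RESIDENT_THRESHOLD:
        # Data is resident in the MFT record; no clusters allocated at all.
        return MFT_RECORD_BYTES
    # Non-resident data is rounded up to a whole number of clusters.
    clusters = -(-file_size // cluster_size)   # ceiling division
    return MFT_RECORD_BYTES + clusters * cluster_size

# A 300-byte file costs only the MFT record, regardless of cluster size:
print(bytes_on_disk(300, 4096))   # 1024
# A 5000-byte file wastes less with 1K clusters than with 4K clusters:
print(bytes_on_disk(5000, 1024))  # 1024 + 5*1024 = 6144
print(bytes_on_disk(5000, 4096))  # 1024 + 2*4096 = 9216
```

Under this model the smaller cluster size saves 3K on that one 5000-byte file, which is the waste John's format-time choice avoids.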