1) With all the brouhaha going on about overwriting files and finding them
lost after a crash, what Linux really needs is a call to say "atomically
swap the contents of file X with the contents of file Y." It would be messy
permissions-wise, I imagine. FAT could use the same sort of thing pretty
easily, although without journaling you're screwed anyway. With NTFS, you
could just put the write/delete/rename into a single transaction. On the old
mainframe I worked on, you could open a file for "output, save", and the
name wouldn't go into the file system until you closed it with "close and
save". If you bombed out, by default the files got closed with "close and
delete", which meant all your temp output files got cleaned up and the
previous version of the file was still around. That is, you could run the
UNIX equivalent of "process <xyz >xyz" and it would be quite happy: the old
xyz would survive if you interrupted it, and the new xyz would appear
atomically when it finished.
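
(For what it's worth, here's a rough sketch of the usual POSIX workaround,
not the swap call I'm wishing for: write the new contents to a temp file in
the same directory, fsync it, then rename() over the original. rename() is
atomic within a filesystem, so a crash leaves either the old file or the new
one, never a torn mix. The temp-file name and error handling below are just
illustrative.)

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* write-temp, fsync, rename: the standard "atomic replace" pattern */
static int replace_file(const char *path, const char *data, size_t len)
{
    char tmp[4096];
    snprintf(tmp, sizeof tmp, "%s.tmp", path);   /* illustrative temp name */

    int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return -1;

    /* Force the data to disk before the rename makes it visible. */
    if (write(fd, data, len) != (ssize_t)len || fsync(fd) != 0) {
        close(fd);
        unlink(tmp);
        return -1;
    }

    /* rename() atomically replaces the old file within one filesystem. */
    if (close(fd) != 0 || rename(tmp, path) != 0) {
        unlink(tmp);
        return -1;
    }
    return 0;
}

int main(void)
{
    const char *text = "new contents\n";
    return replace_file("xyz", text, strlen(text)) == 0 ? 0 : 1;
}
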
2) I wonder how many file writes consist of a process opening the file,
writing a known number of bytes, and closing it, such that the process
could tell the OS at open time how big the file is likely to be, how often
it's likely to be read before being rewritten, and so on. Something like
vim could calculate the resulting file size and let the system avoid
picking a place where the file is going to get fragmented. Of course, it
could only be a hint, since anything can guess wrong, get interrupted, or
have the file it's copying change while being copied, for example.
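
(Part of this hint exists already, as far as I know: on Linux/POSIX,
posix_fallocate() reserves the final size up front so the filesystem can try
to hand back one contiguous run, and posix_fadvise() hints at the expected
access pattern. A minimal sketch, with the size and flags just illustrative:)

#define _POSIX_C_SOURCE 200112L
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    off_t expected_size = 1 << 20;            /* e.g. the editor's buffer size */

    int fd = open("out.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Reserve the whole extent now so the allocator can keep it contiguous. */
    int err = posix_fallocate(fd, 0, expected_size);
    if (err != 0)
        fprintf(stderr, "posix_fallocate: %s\n", strerror(err));

    /* Hint that the file will be written and read sequentially. */
    posix_fadvise(fd, 0, expected_size, POSIX_FADV_SEQUENTIAL);

    /* ... write the actual data here ... */

    close(fd);
    return 0;
}
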
3) I wonder how these defragmenter programs know how many clusters they've
finished moving. I didn't think the API provided that information. Maybe
there's a newer API that's more powerful than the original, for just that
reason.
Ah, things to ponder while waiting for backups to finish. :-)
--
Darren New, San Diego CA, USA (PST)
Insanity is a small city on the western
border of the State of Mind.