Subject: Re: Oh dear...
From: Darren New
Date: 14 Nov 2008 12:04:44
Message: <491dafac$1@news.povray.org>
Invisible wrote:
> Seems like it's the same amount of work to me, whether the kernel does 
> it or the application does it.

First, you've eliminated all the overhead of two kernel calls per file, 
which is something like 30% of a typical process's execution cost.

Second, you can maintain a pointer to the current entry in the directory 
being deleted. (Well, on things like FAT or ext3 you could - on a file 
system like NTFS, which actually rearranges the directories as you delete 
files, it might be harder.) In other words, instead of
   look up a file, delete the file, look up a file, delete the file
you have
   delete, delete, delete, delete
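
The "look up a file, delete the file" side is roughly what an 
application has to do today. Here's a minimal C sketch (delete_all is a 
hypothetical name, error handling mostly omitted): every file costs its 
own unlink() system call, and each unlink() has to re-resolve the path 
and re-search the directory from scratch.

   #include <dirent.h>
   #include <stdio.h>
   #include <string.h>
   #include <unistd.h>

   /* delete_all is a hypothetical helper; error handling omitted. */
   int delete_all(const char *dir)
   {
       DIR *d = opendir(dir);
       if (d == NULL)
           return -1;

       struct dirent *e;
       char path[4096];
       while ((e = readdir(d)) != NULL) {
           if (strcmp(e->d_name, ".") == 0 || strcmp(e->d_name, "..") == 0)
               continue;
           snprintf(path, sizeof path, "%s/%s", dir, e->d_name);
           /* The kernel re-resolves the path and re-searches the
              directory on every single unlink() call. */
           unlink(path);
       }
       closedir(d);
       return 0;
   }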

So say you have a singly-linked, unsorted list of integer values, and you 
want to free up the memory.  Which is faster:
   Look up "1", and unlink it.
   Look up "2", and unlink it.
   Look up "3", and unlink it....
or
   Unlink the first. Unlink the first. Unlink the first...
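
To make that concrete, here's a minimal C sketch (the node type and 
function names are made up for illustration). The first version mirrors 
"look up, then unlink" and is O(n^2), because every lookup rescans the 
list from the head; the second just keeps unlinking the head node and 
is O(n).

   #include <stdlib.h>

   struct node {
       int value;
       struct node *next;
   };

   /* Slow: find each value by scanning from the head, then unlink it. */
   void free_by_lookup(struct node **head, int n)
   {
       for (int v = 1; v <= n; v++) {
           struct node **p = head;
           while (*p && (*p)->value != v)   /* re-scan for the value */
               p = &(*p)->next;
           if (*p) {
               struct node *doomed = *p;
               *p = doomed->next;           /* unlink it */
               free(doomed);
           }
       }
   }

   /* Fast: just unlink the first node, over and over. */
   void free_by_head(struct node **head)
   {
       while (*head) {
           struct node *doomed = *head;
           *head = doomed->next;
           free(doomed);
       }
   }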


>> To be fair, NTFS and other tree-based directory systems have to rework 
>> the tree when you delete the files, so this too will be disk I/O 
>> overhead.
> 
> Um... you don't cache directory blocks, no? (Especially given that 
> they're usually non-contiguous and so take a lot of thrashing to access, 
> and they're often heavily accessed.)

Sure. When your directory is bigger than your RAM, that doesn't help a 
whole lot.

Not that it matters - you still wind up scanning through the blocks. See 
the linked-list example above.

> $500 seems like a hell of a lot of money to me...

Not for a business.  It's a heck of a lot cheaper than paying me to 
figure out how to do without. The computer they plugged into was $3500.

-- 
Darren New / San Diego, CA, USA (PST)

