  Re: Trouble with large directory in Linux  
From: Le Forgeron
Date: 28 Feb 2012 14:38:27
Message: <4f4d2d33$1@news.povray.org>
On 28/02/2012 19:49, Kevin Wampler wrote:
> I have a directory which (I think) has many many small files in it.  I'd
> like to remove these files, however every attempt I've tried fails.  For
> example:
> 
> cd <dir>; rm *
> rm -r <dir>
> ls <dir>
> find <dir>
> 
> All quickly consume my 8GB of memory and grind to an apparent halt.  I
> haven't been able to even determine the name of a single file within
> that directory, let alone delete anything.  Does anyone have any ideas?
> 
> 

Do not have the shell expand the *; with that many entries, the glob
expansion alone pulls the whole file list into memory.
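For example, something like this streams the entries through a pipe instead
of expanding them all at once (just a sketch, assuming GNU find and xargs on
your client):

 $ find <dir> -maxdepth 1 -type f -print0 | xargs -0 -n 100 rm

The -print0/-0 pair keeps odd filenames safe, and -n 100 caps how many names
each rm invocation receives.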



> Notes:
> 
> 1) I'm assuming the problem is a large number of files, but as I can't
> get any info about the contents of the directory I don't know for sure
> if this is the case or it's some other problem.

What is the output of: ls -ld <dir>
(notice the d)? In particular, look at the size attribute:
many, many files in a directory lead to a huge directory size.
Also check the permissions on the directory.

Also possible: the drive is full; check the output of "df <dir>".
(Rule of thumb: 5% of the partition is reserved for root, unless tuned
otherwise explicitly with ext2/3/4.)
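For illustration only (the names and numbers below are invented), the size
of the directory entry itself is the thing to look at:

 $ ls -ld <dir>
 drwxr-xr-x  2 kevin users 268435456 Feb 28 10:00 <dir>
 $ df <dir>
 Filesystem           1K-blocks      Used Available Use% Mounted on
 server:/export/home  103212320  98051640    387400 100% /home

A directory of a few KB is normal; a size in the hundreds of MB means an
enormous number of entries, and a Use% near 100 means the partition itself
is the problem.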
> 
> 2) This is on NFS, in case that matters.

Yes, it means we cannot use the lovely unlink.

> 
> 3) I don't have root privileges.

Of course not: root privileges do not traverse NFS (unless the serving
system's configuration is changed, and you do not want that).

I wonder about:
 $ find <dir> -type f -exec rm {} \;
 $ rm -rf <dir>
(notice the f)
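If the -exec form turns out to be slow (one rm process per file), GNU find
also has these variants; just a suggestion, check the man page of your find
first:

 $ find <dir> -type f -exec rm {} +
 $ find <dir> -type f -delete

The "+" batches many files into each rm call, and -delete removes them
without spawning rm at all.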

How was the network load when you tried to access that directory?

