On 28/02/2012 19:49, Kevin Wampler wrote:
> I have a directory which (I think) has many many small files in it. I'd
> like to remove these files, however every attempt I've tried fails. For
> example:
>
> cd <dir>; rm *
> rm -r <dir>
> ls <dir>
> find <dir>
>
> All quickly consume my 8GB of memory and grind to an apparent halt. I
> haven't been able to even determine the name of a single file within
> that directory, let alone delete anything. Does anyone have any ideas?
>
>
Do not let the shell expand * : with that many entries, building the
expanded argument list alone can exhaust memory.
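A glob-free deletion can be sketched like this (demonstrated on a
throwaway directory created with mktemp, standing in for your <dir>):

```shell
# create a throwaway directory with a few files to demonstrate on
demo=$(mktemp -d)
touch "$demo/a" "$demo/b" "$demo/c"

# let find unlink the files itself: no shell glob, no argument list,
# and -delete avoids spawning one rm process per file
find "$demo" -type f -delete

# the directory itself survives, now empty
ls -A "$demo"
rmdir "$demo"
```

find walks the directory entry by entry, so memory use stays flat no
matter how many files are inside.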
> Notes:
>
> 1) I'm assuming the problem is a large number of files, but as I can't
> get any info about the contents of the directory I don't know for sure
> if this is the case or it's some other problem.
What is the output of: ls -ld <dir>
(notice the d), in particular the size attribute?
Many, many files in a directory lead to a huge directory size.
Also check the permissions on the directory.
Another possibility: the drive is full; check the output of "df <dir>".
(Rule of thumb: 5% of the partition is reserved for root on ext2/3/4,
unless explicitly tuned otherwise.)
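To get at least a few file names without ls buffering and sorting the
whole directory, disable sorting and stop after a handful of entries (a
sketch on a throwaway directory; substitute your <dir>):

```shell
# throwaway directory with a few files, standing in for <dir>
demo=$(mktemp -d)
touch "$demo/a" "$demo/b" "$demo/c"

# -f disables sorting (and implies -a), so ls can stream entries as it
# reads them instead of holding the whole listing in memory;
# head stops the pipeline after the first few names
ls -f "$demo" | head -n 5
```

Plain ls must read and sort every entry before printing anything, which
is exactly what blows up on a huge directory; ls -f | head does not.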
>
> 2) This is on NFS, in case that matters.
Yes, it means we cannot use the lovely unlink.
>
> 3) I don't have root privileges.
Of course not: root privileges do not traverse NFS anyway (root is
squashed on the server unless the serving system is reconfigured, and
you do not want that).
I wonder about:
$ find <dir> -type f -exec rm {} \;
$ rm -rf <dir>
(notice the f)
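If spawning one rm process per file turns out to be too slow over NFS,
batching the paths with xargs cuts the process count (a sketch on a
throwaway directory, standing in for your <dir>):

```shell
# throwaway directory with a few files to delete
demo=$(mktemp -d)
touch "$demo/f1" "$demo/f2" "$demo/f3"

# -print0 / -0 pass names safely even with spaces or newlines;
# xargs packs many paths into each rm invocation, and -- stops rm
# from treating odd names as options
find "$demo" -type f -print0 | xargs -0 rm --
rmdir "$demo"
```

One rm per batch of files instead of one per file makes a real
difference when each unlink is a network round trip.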
How was the network load when you tried to access that directory?