# High number of files in a directory, cannot be accessed or deleted



## chrcol (Feb 15, 2010)

I have a server using a maildir setup; it has a directory that I believe contains hundreds of thousands of files. I frequently have to kill a find process that gets stuck navigating the directory, so I want to delete the directory entirely.

If I try to delete the directory, the command runs for about 30 hours and eventually the server gives up and reboots itself.

If needed I can do this offline in a rescue system, so is there a way to fix the directory with the drive unmounted?


----------



## User23 (Feb 15, 2010)

Is `ls` working?


----------



## DutchDaemon (Feb 15, 2010)

Yeah, ls with a while/rm loop will usually work in those cases. It just deletes each file sequentially and frees resources after each delete.
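A minimal sketch of that loop, assuming the problem directory is called `bigdir` (a placeholder for the real maildir path); each file is removed individually, so no single command ever has to hold the full listing:

```shell
# Sketch of the ls + while/rm approach: delete one file per rm call,
# so neither the shell nor rm ever expands the whole directory at once.
# "bigdir" is a placeholder; the mkdir/touch lines are demo setup only.
mkdir -p bigdir && touch bigdir/msg1 bigdir/msg2 bigdir/msg3  # demo setup

cd bigdir
# -f skips sorting, which helps on directories with huge entry counts
ls -f | while read -r name; do
    case "$name" in .|..) continue ;; esac
    rm -f -- "$name"
done
cd ..
rmdir bigdir
```

Maildir filenames contain no whitespace, so the plain `while read` loop is safe here; for arbitrary filenames a `find -print0` pipeline would be more robust.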


----------



## SirDice (Feb 15, 2010)

DutchDaemon said:

> Yeah, ls with a while/rm loop will usually work in those cases. It just deletes each file sequentially and frees resources after each delete.



```
find . -type f -exec rm {} \;
```


----------



## DutchDaemon (Feb 15, 2010)

Sure. I could've _sworn_ I also read that find caused problems here. Must've been my evil twin. Then again, I haven't seen find's performance on this many files. Will it load the entire file list before it starts the rm process, instead of rm'ing each file as it finds it? That might lead to the same resource exhaustion as an `rm -rf`, but I may be wrong. Anyway, the `ls` route is available as a last resort; I've never seen it explode on a huge number of files, and the upside is that you only need to list the first-level directory and feed it to `rm -rf` (unless all of the sub-directories contain thousands of files as well) -- anyhoo.


----------



## SirDice (Feb 15, 2010)

AFAIK find will just scan the filesystem and -exec for each file found, so it won't have the same limitation an `rm -rf *` would have.

But it will take a lot of time, and there is probably going to be a lot of disk thrashing.
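The limitation in question: with `rm -rf *` the shell expands the glob into one giant argument list before rm ever runs, and `execve()` rejects argument lists larger than the kernel's ARG_MAX, so the command can fail outright with "Argument list too long". A quick way to see the limit (the value varies by system):

```shell
# The expanded glob must fit in ARG_MAX bytes of argv + environment;
# find sidesteps this by streaming one name at a time to -exec.
getconf ARG_MAX
```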


----------



## DutchDaemon (Feb 15, 2010)

Maybe try a `find . -type d -exec rm -rf {} \;` first to get rid of big directories (if there are any).


----------



## SirDice (Feb 15, 2010)

Tried that once; it might still fail if there are a lot of files in those subdirectories.
It's probably best to delete the files first, then do another run to delete the directories.


----------



## chrcol (Feb 15, 2010)

User23 said:

> Is `ls` working?



This just runs forever as well, and eventually the SSH connection bombs out.

I will try the commands suggested, guys, thanks, and I'll provide feedback if it works.


----------



## LateNiteTV (Feb 15, 2010)

Run it in the background.


----------



## cracauer@ (Feb 16, 2010)

If it reboots, it is probably a kernel panic.

It would be useful to gather the backtrace from that.

Failing that, to reproduce the problem, can you tell us exactly how many files you have there? And what filesystem?


----------



## z662 (Feb 19, 2010)

Wouldn't `xargs` be an appropriate approach?

http://www.sunmanagers.org/pipermail/summaries/2005-March/006255.html
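A sketch of the xargs variant, with `bigdir` as a placeholder for the real directory: xargs packs as many filenames as ARG_MAX allows into each rm invocation, so it forks far fewer processes than `-exec rm {} \;` while still never expanding the whole listing at once.

```shell
# Demo setup; "bigdir" stands in for the real problem directory.
mkdir -p bigdir/sub && touch bigdir/a bigdir/b bigdir/sub/c

# -print0 / -0 keep unusual filenames intact; xargs batches the
# names so each rm handles many files per fork/exec.
find bigdir -type f -print0 | xargs -0 rm -f
# Then remove the now-empty directory tree, deepest entries first.
find bigdir -depth -type d -exec rmdir {} \;
```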


----------

