sukumar maddela
Occasional Advisor

little doubt...

hi all,

when I execute "rm -rf *" in a folder that contains 390,345 files, it fails with an error saying the parameter list cannot be handled. I am thinking that * cannot handle that huge a number of files and can only accept up to some limit. Can somebody tell me what that limit is?
A. Clay Stephenson
Acclaimed Contributor

Re: little doubt...

It's not a number of files but rather a total size, so not only the number of entries but also the length of each entry contributes to the arg-max value. Without knowing the version of UNIX you are running, it's not possible to tell you more. Some versions of UNIX have a tunable like argmax or ncargs that allows you to adjust this value. In any event, having that many files in one directory is dumb. You can use the xargs command to process an essentially unlimited number of parameters by dividing the command into manageable chunks. Man xargs for details.
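A minimal sketch of the chunking idea Clay describes, in a throwaway directory (path, filenames, and batch size are all illustrative, not from the original thread). xargs reads names on stdin and invokes rm in batches small enough to fit under the argument-size limit:

```shell
# Sandbox demo of chunked deletion -- everything here is a stand-in.
mkdir -p /tmp/argmax_demo && cd /tmp/argmax_demo
touch file_a file_b file_c file_d file_e   # stand-ins for the 390,345 files

# xargs packs names into batches and runs rm once per batch;
# -n 2 caps the batch size just to make the chunking visible.
ls | xargs -n 2 rm -f

ls | wc -l    # the directory is now empty
```

On real data, prefer feeding xargs from find rather than ls, since ls output can be mangled by unusual filenames.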
If it ain't broke, I can fix that.
Steven E. Protter
Exalted Contributor

Re: little doubt...

Shalom sukumar.

Too many files for one delete. Too many arguments.

All programs, even rm, have limits on the number of arguments they can receive. * is actually expanded by the shell into roughly 390,000 arguments in this case.

Here is a workaround.

ls -1 > filelist
while read -r filename
do
    rm -f "$filename"     # quote the name in case it contains spaces
done < filelist

This will get it done.

Also: restructure your storage to prevent this many files from being in one folder. An ls command can take days to execute under these circumstances.

SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
Robert-Jan Goossens
Honored Contributor

Re: little doubt...

Hi,

Check Bill's answer in this thread.

http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=103792

You could use find and xargs.

# find /folder -xdev -type f | xargs rm

Best regards,
Robert-Jan
Sergejs Svitnevs
Honored Contributor

Re: little doubt...

Check "getconf ARG_MAX".
That is the limit -- note it is a size in bytes for the argument list, not a simple file count.

Try using find with '-exec'; a new process is spawned for every file to remove.

# find /your_dir -name '*' -exec rm {} \;
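Since ARG_MAX is a byte budget for the argument list (plus environment), the number of names that fit depends on their length. A rough back-of-the-envelope sketch, where the average filename length is purely an assumption:

```shell
# ARG_MAX is measured in bytes, not in filenames.
limit=$(getconf ARG_MAX)
avg_len=20    # assumed average filename length -- adjust for your data

# each argument costs its length plus a terminating NUL byte
echo "ARG_MAX=$limit bytes; roughly $((limit / (avg_len + 1))) names fit"
```

This is only an estimate; the environment and the command name itself also count against the same budget.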

Regards,
Sergejs
sukumar maddela
Occasional Advisor

Re: little doubt...

First I tried the script; while it ran it used only 39% of the CPU, and only 100 files were deleted in 2 minutes. But when I deleted directly using rm with a limited number of files (around 2,000), it used 99% of the CPU and all 2,000 files were removed in 5 minutes. Now I would like to know how to remove the files without such a time delay.
Bill Thorsteinson
Honored Contributor

Re: little doubt...

If you want to have an empty directory with minimal delay, try something like the following. You will still have a delay while the space from the old files is released. This will also release the space required by the directory itself.

# Edit the next three lines appropriately
DIRNAME=x
MODE=640
OWNER=owner:group
mkdir "newdir.$$"
chmod "${MODE}" "newdir.$$"
chown "${OWNER}" "newdir.$$"
mv "${DIRNAME}" "${DIRNAME}.$$"
mv "newdir.$$" "${DIRNAME}"
find "${DIRNAME}.$$" -type f -print0 | xargs -0 rm -f
rmdir "${DIRNAME}.$$"
Steve Steel
Honored Contributor

Re: little doubt...

Hi

It will never be instant but you could try

echo rm -r $(ls -1)


If it works remove the echo and do

rm -r $(ls -1)


Steve Steel
If you want truly to understand something, try to change it. (Kurt Lewin)
James R. Ferguson
Acclaimed Contributor

Re: little doubt...

HI:

Well, nothing is free. The most efficient removal is probably going to be achieved by leveraging 'xargs' to bundle groups of filenames for removal by 'rm'.

If you use '-exec' with a 'find' you are going to spawn a new task for every file to remove -- certainly very resource intensive.

Using 'rm' with one file at a time is probably going to be more costly than an 'xargs' solution too, since, again, a new process will need to be created for each file handled.
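A small sandbox contrast of the two styles discussed here (directory and filenames are made up for the demo): '-exec rm {} \;' forks one rm process per file, while piping to xargs bundles many names into each rm invocation.

```shell
mkdir -p /tmp/rm_styles && cd /tmp/rm_styles
touch a1 a2 b1 b2

# one rm process per matching file:
find . -type f -name 'a*' -exec rm {} \;

# one (or a few) rm processes for all remaining files:
find . -type f | xargs rm -f

ls | wc -l    # nothing left
```

Modern finds also accept '-exec rm {} +', which batches arguments the way xargs does.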

Regards!

...JRF...
A. Clay Stephenson
Acclaimed Contributor

Re: little doubt...

Each unlink takes a finite time so what you ask can't be done. Large directories are especially bad because the directory must be locked for each unlink and written to prevent other processes from updating the directory at the same time. You will find it far more efficient to rebuild the filesystem than to remove each file.
If it ain't broke, I can fix that.
Doug O'Leary
Honored Contributor

Re: little doubt...

Hey;

J. Ferguson had the correct answer:

ls | xargs rm

It'll take some time but will be much more efficient than looping through each file.

HTH;

Doug

------
Senior UNIX Admin
O'Leary Computers Inc
linkedin: http://www.linkedin.com/dkoleary
Resume: http://www.olearycomputers.com/resume.html