Operating System - HP-UX

/usr/bin/rm: arg list too long

 
Matthew Pegge_1
Frequent Advisor

/usr/bin/rm: arg list too long

I get the error above when trying to remove a huge number of files from a specific directory. We are running 10.20 and previously had a similar problem with the ls command, which was resolved by a patch and by enabling max_args in the kernel. I would have thought that would have resolved this issue too. Any ideas?
10 REPLIES
Mark Grant
Honored Contributor

Re: /usr/bin/rm: arg list too long

There are several ways to do this after you change into the directory containing the files

ls | xargs rm

find . -exec rm {} \;

rm [a-l]*;rm [m-z]* might work too.
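A runnable sketch of the "ls | xargs rm" route (the directory name and file count below are made up for the demo): the pipe hands filenames to xargs on stdin, so the shell never builds one oversized command line.

```shell
# /tmp/rmdemo_xargs is a hypothetical scratch directory standing in
# for the real one full of files.
demo=/tmp/rmdemo_xargs
mkdir -p "$demo"
i=0
while [ "$i" -lt 200 ]; do        # 200 files stand in for "a huge number"
    : > "$demo/file$i"
    i=$((i + 1))
done
# xargs batches the names into several small rm invocations, so no
# single command line exceeds the kernel's argument-size limit.
(cd "$demo" && ls | xargs -n 50 rm)
```

With plain "rm *" the shell would expand the glob into one exec() call, which is exactly what overflows; xargs sidesteps that by reading the names from stdin instead.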
Never precede any demonstration with anything more predictive than "watch this"
Vijaya Kumar_3
Respected Contributor

Re: /usr/bin/rm: arg list too long

Can you try this:

I use this command when I have a very large number of files created under the sendmail queue directory.

This will delete all the files in the current directory, so be CAREFUL.

These commands will delete all the files in /var/mail/queue:

cd /var/mail/queue

for i in *
do
    rm "$i"    # one file per rm invocation, so the argument list stays tiny
done

Hope this helps
-Vijay
Known is a drop, unknown is ocean - visit me at http://vijay.theunixplace.com
Sunil Sharma_1
Honored Contributor

Re: /usr/bin/rm: arg list too long

Hi,

This thread may help you:

http://forums1.itrc.hp.com/service/forums/questionanswer.do?admit=716493758+1073555363536+28353475&threadId=211841

Sunil
*** Dream as if you'll live forever. Live as if you'll die today ***
John Carr_2
Honored Contributor

Re: /usr/bin/rm: arg list too long

Matthew,

Be careful if you use the find command, as it will descend into subdirectories too. If that's not an issue then it's a good way to do it. Otherwise, try using some wildcards to reduce the number of files matched at a time:

rm *txt
rm backup*

:-) John.
John Carr_2
Honored Contributor

Re: /usr/bin/rm: arg list too long

If you are trying to empty the whole directory:

rm -r /mydir
mkdir /mydir

:-) John.
Shahul
Esteemed Contributor

Re: /usr/bin/rm: arg list too long


Hi,

I think this has to do with the argument list length. There is a kernel parameter, large_ncargs_enable: if it is set to 0, the maximum argument list is 20478 bytes; if it is enabled, the argument list to any command can be up to 20478000 bytes.

Otherwise you can use any of the options listed above, like xargs, find, etc.
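The limit described here is exposed portably as ARG_MAX; a quick way to see your system's current value (getconf is POSIX, and the exact number varies with the system and kernel settings — the 10.20 values above are as quoted in the post, not verified here):

```shell
# Query the current argument-list limit for exec().
# On a 10.20 box with the large-ncargs tunable enabled, this would
# report the larger of the two values quoted above.
argmax=$(getconf ARG_MAX)
echo "ARG_MAX is $argmax bytes"
```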

Hope this helps
Shahul
Matthew Pegge_1
Frequent Advisor

Re: /usr/bin/rm: arg list too long

Thanks all. I have used all of these methods before; the real problem is that we have lots of scripts that use this approach and work fine except on a few systems where the directory gets too large. I was hoping there would be a fix to stop this rather than a workaround. As for the ll/ls problem, that was indeed fixed by the last chap's suggestion, so it is not the cause here. Thanks anyway!
Heiner E. Lennackers
Respected Contributor

Re: /usr/bin/rm: arg list too long

Hi,

If you try "rm *" and the pattern expands to too long a list, you can use the very useful "xargs" command to split it:

echo * | xargs rm
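Why the echo trick works (a sketch; the directory below is made up): echo is a shell builtin, so the expanded glob never passes through exec() and its argument-size limit, and xargs then re-batches the names into short rm commands.

```shell
# Hypothetical demo directory: the glob expands inside the shell,
# the builtin echo prints the names, and xargs feeds them to rm in
# batches small enough for the exec limit.
d=/tmp/rmdemo_echo
mkdir -p "$d"
touch "$d/a1" "$d/a2" "$d/a3"
(cd "$d" && echo a* | xargs rm)
```

This relies on echo being a builtin (it is in the POSIX shell) and on filenames without embedded whitespace, since xargs splits its input on blanks.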


Heiner
if this makes any sense to you, you have a BIG problem
Dave Hutton
Honored Contributor

Re: /usr/bin/rm: arg list too long

Looking back at some old notes. We used to add this to the kernel:
large_ncargs_enabled 1
And recompile the kernel by hand. This was true for 10.20, I don't recall doing it for 11.x.
We used to have very long directories with thousands of files of archived stat data packets. Often, if our scripts didn't go through and whack these files, I would have to go in by hand and delete them. On the servers where we didn't set this, I had to loop through the files as described above, but on the servers where we added this line I never seemed to run into any limit on the number of files in a directory.
We never had issues after enabling it.
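For reference, a sketch of where that line went on 10.20: the kernel description file is /stand/system on that release (assumed here from memory; the exact rebuild-by-hand procedure is not spelled out in the post, so only the fragment is shown).

```
* fragment of /stand/system (assumed location for 10.20 tunables)
large_ncargs_enabled 1
```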

Dave
Bill Hassell
Honored Contributor

Re: /usr/bin/rm: arg list too long

No matter how many patches extend the argument list (actually, the command line), it will never be enough; the patch that extended the maximum command line might need to allow 500 megs in some (very bad) situations. Processes that create thousands to millions of files in a flat directory are creating a massive problem. If you can't eliminate the mechanism that creates the massive number of files, then every user must be taught how to manage these directories (and the associated performance impact due to directory-management overhead).

For instance, ls and ll should never be run on such a directory without a highly restrictive pattern match. If you need a file that starts with aaaa, don't type ls a*, since the * may match 10,000 filenames and you'll get the arg-list-too-long message; instead, the pattern must be explicit enough to match only a handful of files. Otherwise, users and script writers will have to become familiar with xargs.
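For script writers, the habit described above can be reduced to one idiom (paths and names here are hypothetical): let find stream the names and xargs batch them, so even a narrow pattern match never builds one giant argument list.

```shell
# Made-up demo directory; only the files matching the narrow
# pattern are removed, and xargs keeps each rm invocation small.
d=/tmp/rmdemo_find
mkdir -p "$d"
touch "$d/aaaa.1.log" "$d/aaaa.2.log" "$d/keep.txt"
find "$d" -type f -name 'aaaa*' | xargs rm
```

Note that find recurses into subdirectories by default, as pointed out earlier in the thread, so scope the pattern (or the starting directory) accordingly.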


Bill Hassell, sysadmin