
SOLVED
jerry1
Super Advisor

ls listing problem

How do you increase the buffer size (or whatever other limit applies) on HP-UX 10.20 for "ls" when listing a large number of files?
7 REPLIES
James R. Ferguson
Acclaimed Contributor
Solution

Re: ls listing problem

Hi Jerry:

Enable (set to one (1)) the kernel parameter 'large_ncargs_enabled' and regenerate your kernel.

Regards!

...JRF...
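A minimal sketch of that procedure on 10.20, assuming the tunable is set by editing the kernel system file and rebuilding with mk_kernel (the exact file locations and whether the tunable is available depend on your patch level, so verify before running):

```shell
# Hypothetical sketch for HP-UX 10.20 -- verify on your own system first.
# 1. Add or set the tunable in the kernel system file, e.g. the line:
#      large_ncargs_enabled 1
vi /stand/system

# 2. Rebuild the kernel from the edited system file:
mk_kernel -o /stand/vmunix

# 3. Boot the new kernel:
shutdown -r now
```
This is a kernel-configuration change, so it only takes effect after the reboot onto the rebuilt kernel.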
Peter Godron
Honored Contributor

Re: ls listing problem

Jerry,
do you mean you get "Arguments list too long" ?

See Dave's comment in:
http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=354506

"large_ncargs_enabled 1
And recompile the kernel by hand."

If you are listing lots and lots of files to the screen, and you do not get the "arguments list too long" error, redirect the listing into a file, which may prove quicker.
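An illustrative sketch of that redirection (note that a bare ls with no glob argument never goes through shell expansion at all, and writing to a file avoids slow terminal scrolling):

```shell
# List the directory itself rather than expanding * on the command line,
# and send the output to a file instead of the screen:
ls > /tmp/filelist.txt

# Inspect the result at leisure:
wc -l /tmp/filelist.txt
```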
A. Clay Stephenson
Acclaimed Contributor

Re: ls listing problem

... and you are really asking the fundamentally wrong question. The question is "Why are there so many files in this directory?" because no matter how large ncargs is, there will still be a problem at some point.
If it ain't broke, I can fix that.
Bill Hassell
Honored Contributor

Re: ls listing problem

ls does not have any limits. It simply takes the command line as supplied by the shell. But your shell is expanding filenames prior to passing them to ls. To see what ls (or any other program) will see, put echo in front:

echo ls *

You will see that * is a special character for the shell and is automatically replaced with a list of all the filenames in the current directory. If this list is more than several megabytes long, the error message is produced by the shell (not by ls). The maximum command-line length in bytes is:

getconf ARG_MAX

Now if you have tens of thousands of files in a single directory, you will ALWAYS run into this limit. Read the man page for xargs and look at ways to reduce the total line length when using shell globbing characters. Note that large_ncargs_enabled will increase the limit, but no matter how large you make it, there can always be a list longer than that.
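A sketch of the xargs approach described above: instead of letting the shell expand * into one huge argument list, feed the names through a pipe, and xargs batches them into command lines that stay under the system's limit:

```shell
# find emits one name per line with no shell expansion involved;
# xargs splits the stream into argument lists that fit within ARG_MAX
# and runs ls -l once per batch.
find . -name 'file*' -print | xargs ls -l

# The limit itself, in bytes:
getconf ARG_MAX
```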


Bill Hassell, sysadmin
jerry1
Super Advisor

Re: ls listing problem

Unfortunately, changing the legacy code to stop using "ls" and to use something like "find" is not an option now. It's coded everywhere.
A. Clay Stephenson
Acclaimed Contributor

Re: ls listing problem

Your problems go way beyond ls, because ls isn't the fundamental problem. Anytime you supply '*' as an argument to any command in this directory, the shell is going to explode. If memory serves, you can't simply build a kernel defining large_ncargs_enabled; there was a patch for 10.X that you had to install first -- so good luck finding it in the legacy patch database. Be glad you didn't live in the days of real UNIX when ARG_MAX was 5120 bytes and couldn't be changed -- and you could only have 20 file descriptors per process.
If it ain't broke, I can fix that.
Dennis Handly
Acclaimed Contributor

Re: ls listing problem

>Changing the legacy code to not use "ls"

Instead of using "ls pattern", you might be able to replace it with "ls | grep RE-pattern".

That might be easier than find.
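A small sketch of that replacement (illustrative filenames; note that a grep regular expression is anchored and spelled differently from a shell glob, so the pattern needs translating):

```shell
# Shell glob -- may fail with "Arg list too long":
#   ls -l data*.log
# Same selection with no expansion: ls reads the directory itself
# and grep filters the names. Glob data*.log becomes RE ^data.*\.log$
ls | grep '^data.*\.log$'
```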