
ls Command Buffer Limit

 
Kris Jugo
New Member

Does anyone know if there is a 'buffer limit' on an HP-UX server when running an
ls command with a wildcard, for example: ls *.IRCOMPLX ?

Some users claim that when this type of command is executed in an FTP session
and there are too many files in the directory, an error message appears to the
effect that a buffer limit has been reached, and not all of the files in the
directory are listed. We have been unable to duplicate this condition on any of
our non-HP UNIX servers, so we are having a difficult time troubleshooting it.

Any ideas whether this 'buffer limit' claim is true, or are there other explanations?
Paul Hite_2
Frequent Advisor

Re: ls Command Buffer Limit

There is certainly a limit. What that limit is depends on how ftpd operates
internally. To get a clue, I downloaded the source code for wu-ftpd and
examined it. That was very interesting: it invokes a special version of popen
that does not invoke a shell, so that a cracker cannot open a pipe to arbitrary
commands by creating screwy filenames. An interesting security problem.
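To see why a shell-based popen is risky, here is a hypothetical sketch of the problem (the directory and filename are made up for the demo): a "screwy" filename becomes live shell syntax the moment a server splices it into a command string and hands it to sh -c, which is exactly what the stock popen(3) does.

```shell
# A filename containing a ';' is perfectly legal in UNIX, but dangerous
# once pasted into a command line that a shell will reparse.
dir=$(mktemp -d) && cd "$dir"
touch '; echo pwned'           # legal filename with shell metacharacters
cmd="ls $(echo *)"             # naive server: splice filenames into a command
sh -c "$cmd"                   # the embedded ';' makes 'echo pwned' execute
```

A shell-free popen avoids this by passing the pattern straight to exec(2), where filenames are inert argument strings rather than shell input.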

I am going to guess that HP's ftpd uses the conventional popen(3), which does
invoke a shell. Every shell has a command-line buffer, at least LINE_MAX bytes
big. Commands larger than that will not work.

Even with the world's largest command-line buffer, there is a second limit. The
ls program must be started with exec(2), and exec(2) imposes a limit on the
total size of its argument list (arguments plus environment). That limit is at
least ARG_MAX bytes; exceeding it makes exec(2) fail with E2BIG ("arg list too
long").

My limits(5) manpage says that ARG_MAX is 5120 and LINE_MAX is 2048, but you
should check the manpage on your own system.
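Rather than trusting a possibly stale manpage, you can ask the running system directly with the POSIX getconf utility; the values below differ between HP-UX releases and other UNIXes, so no particular numbers are assumed:

```shell
# Query the actual limits on this machine.
getconf ARG_MAX     # byte limit on exec(2) arguments plus environment
getconf LINE_MAX    # byte limit on a utility's input line
```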

Maybe wu-ftpd would raise the apparent limit for you. But I really must add
that directories should not contain so many files that these limits become an
issue. There are performance problems with very large directories.
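From a login shell (not within the FTP session itself), you can sidestep the glob expansion entirely, so neither LINE_MAX nor ARG_MAX ever comes into play. Two common approaches, using the pattern from the original question:

```shell
# Neither command expands the wildcard into one huge argument list.
ls | grep '\.IRCOMPLX$'        # ls gets no arguments; grep filters the output
find . -name '*.IRCOMPLX'      # quoted pattern; find matches names internally
```

The second form also pairs naturally with xargs when the matching files need to be passed on to another command in safely sized batches.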