As Clay mentions, this has nothing to do with ls. When you use ls like this:
ls /dir_with_10k_files/*
ls *NEVER* sees the "*". That character is expanded by the shell before ls ever runs: the shell replaces it with every filename that matches. Let's suppose there are two files named AB1 and AB2 and you want to use ls:
ls /dir_with_10k_files/AB*
You'll see the two files listed even though there are 10,000 files in the directory. What ls actually received was the full pathnames of the two files. To see this more clearly, use echo instead:
echo /dir_with_10k_files/AB*
Now try the same echo with all the files:
echo /dir_with_10k_files/*
and you'll see "arg list too long"
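To prove the expansion happens in the shell, you can count the names it generates with the set builtin (a quick sketch, reusing the example directory and AB files from above):

set -- /dir_with_10k_files/AB*
echo $#

Because set is a shell builtin, no new process is started and the kernel's limit never comes into play; echo $# prints 2 because the shell handed over two pathnames, never the string AB*.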
The space for the expanded names is limited: it is the kernel's ARG_MAX limit on the total size of an exec'd command's argument list, not a limit in ls itself. On HP-UX it used to be about 20k bytes, and a patch increased it to a couple of megs. NOTE: the limit can never be made large enough! No matter how big the next patch might make it, someone will create a massively large directory that still exceeds the latest limit and causes the same problem.
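You can ask the system what the current limit is with getconf (a standard POSIX command, present on HP-UX as well):

getconf ARG_MAX

The number reported is the maximum bytes allowed for an exec'd command's argument list (including environment data) on that system.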
So there are two steps:
First, do whatever you can to stop allowing massively large flat directories (directories with thousands of files). You are just beginning to see some of the difficulties in managing hundreds or thousands of files in one directory.
Second, start looking at xargs to break up long lists. xargs is like a small truck -- it picks up a long list and delivers it in small bundles.
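For example, here's a minimal sketch that lists the same directory without blowing the limit (assuming the example path from above, and plain filenames only, since names containing whitespace would confuse this simple pipeline):

find /dir_with_10k_files -type f | xargs ls -l

find writes one pathname per line, and xargs collects them and runs ls with as many as will safely fit in each invocation. You can also cap the bundle size explicitly, e.g. xargs -n 100 ls -l.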
Bill Hassell, sysadmin