
Steve Sauve
Frequent Advisor

Anyone ever seen this?

I got a scripting question from a user. He wanted to get the name of the newest file in a directory. I tested and then sent him this script:
ls -t /tmp/xyz.* | head -1
This worked fine for me, but for the user it returned an "Argument list too long" error.
Has anyone seen this before? We're on the same box, and we're both regular users. When I su - to root, I can't run the command either (again, "Argument list too long"). If I just su over, then it works. Anyone have any thoughts? I'm not looking for a workaround (I already gave him one) but for an explanation of why it works for one of us and not the other (or at least a direction to start checking). Thanks a lot...
Steve
7 REPLIES 7
John Palmer
Honored Contributor

Re: Anyone ever seen this?

If there are a lot of files matching xyz.* (several thousand), the expanded argument list can exceed the system's limit on what the shell can pass to the 'ls' command.
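The limit John describes can be checked and reproduced. A minimal sketch, assuming a POSIX-ish system where `getconf ARG_MAX` reports the exec argument limit; the scratch directory and timestamps below are made up for illustration:

```shell
# The kernel limits the combined size of argv + environment passed to
# exec. A glob like /tmp/xyz.* is expanded by the shell BEFORE ls runs,
# so with enough matches the expansion blows past that limit.
getconf ARG_MAX                         # limit in bytes (varies by system)

# The grep workaround sidesteps expansion entirely: ls gets a single
# directory argument, and the filtering happens on its output.
dir=$(mktemp -d)                        # scratch dir (illustrative)
touch -t 202001010000 "$dir/xyz.old"    # oldest
touch -t 202001020000 "$dir/xyz.mid"
touch -t 202001030000 "$dir/xyz.new"    # newest
newest=$(ls -t "$dir" | grep '^xyz\.' | head -1)
echo "$newest"                          # xyz.new
rm -r "$dir"
```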

You could get around it with
ls -t /tmp|grep "^xyz."|head -1
Steve Sauve
Frequent Advisor

Re: Anyone ever seen this?

That's actually exactly what I did... and there are quite a few xyz files out there (I believe it's somewhere around 1,100).
However, what I'm wondering is why it works for one user and not another.

Thanks
Steve
Rick Garland
Honored Contributor

Re: Anyone ever seen this?

Using the 'xargs' command will make this go away. There are too many files to parse through and the buffer is not big enough. The first time you ran it, you probably had the buffer space; on subsequent runs, the buffer space ran out.
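A sketch of the xargs approach (directory and file names are hypothetical), with one caveat worth knowing: xargs packs arguments into batches that fit under the system limit, so if the file list is split across multiple batches, each `ls -t` invocation sorts only its own batch, and the first line is not guaranteed to be the globally newest file.

```shell
dir=$(mktemp -d)                        # scratch dir (illustrative)
touch -t 202001010000 "$dir/xyz.a"
touch -t 202001020000 "$dir/xyz.b"      # the newer of the two

# find prints names on stdout; xargs packs them into argv chunks that
# stay under the exec limit before invoking ls.
newest=$(find "$dir" -name 'xyz.*' | xargs ls -t | head -1)
echo "$newest"                          # full path ending in xyz.b
rm -r "$dir"
```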
John Palmer
Honored Contributor

Re: Anyone ever seen this?

That should be:-

ls -t /tmp|grep "^xyz\."|head -1

I forgot about the backslash.

You might also want to ensure that ls writes single-column output with 'ls -1t', though it appears to default to this when piped.
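That default is easy to confirm: POSIX ls writes one name per line whenever its stdout is not a terminal, so a pipe already gets `-1`-style output. A small sketch with a throwaway directory:

```shell
dir=$(mktemp -d)                 # scratch dir (illustrative)
touch "$dir/xyz.a" "$dir/xyz.b"
lines=$(ls -t "$dir" | wc -l)    # piped, so ls emits one name per line
echo "$lines"                    # 2
rm -r "$dir"
```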
Shannon Petry
Honored Contributor

Re: Anyone ever seen this?

Here is my guess....

There are too many files in /tmp, causing the error. root has unlimited space for the environment, etc. (modified by "ulimit"). The user is being limited by the soft, or perhaps hard, system resource limits. In this respect, the output from ls is too large to fit into the environment.
To fix this, you can set the user's limits to the hard limit; see "man ulimit" for more information. Or you can delete a whole lot of stuff in /tmp. If this is not possible, you could try something like
> find . -mtime +1|xargs ls -t|head -1
This should cut the amount of data being passed from one command to another.

Hope this helps!
Shannon
Microsoft. When do you want a virus today?
John Palmer
Honored Contributor

Re: Anyone ever seen this?

Well, it could be the type of shell that they use. root has /sbin/sh, which is similar to but not the same as /usr/bin/sh.

Are different shells involved?
Tom Danzig
Honored Contributor

Re: Anyone ever seen this?

It's almost certain that the shell being used is what's causing the different behavior. When you "su - root", you use /sbin/sh. If you "su root", you'll use whatever shell you were using before the su.
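One way to see the difference is the 7th field of the passwd entry, which is the login shell that `su -` starts. The sample line below is illustrative, in HP-UX style; on a live box you would read /etc/passwd directly:

```shell
# The 7th colon-separated field of a passwd entry is the login shell.
line='root:*:0:3::/:/sbin/sh'            # sample entry (illustrative)
shell=$(printf '%s\n' "$line" | awk -F: '{print $7}')
echo "$shell"                            # /sbin/sh

# On a live system:
#   awk -F: '$1 == "root" { print $7 }' /etc/passwd
# Plain `su root` keeps your current shell and environment, while
# `su - root` starts this login shell as a fresh login.
```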