
du runs forever

 
S.Rider
Regular Advisor

du runs forever

I was asked to check into some long-running backups on an unfamiliar system. The filesystem has approx 75 GB used and approx 4.5 million inodes in use.
A "du -srkx ./*" has spent about 1:20 so far on one of the subdirs, which has approx 14k directories within it. I'm watching the open files via Glance and it is moving along. Haven't seen any errors pop either.
Anyone got any ideas why du would be so slow?
Ride Boldly Ride, but watch out for El Dorado's
7 REPLIES 7
RAC_1
Honored Contributor

Re: du runs forever

du will go through each inode to determine the file size, and if you have 14k dirs, it of course will take time.

Anil
There is no substitute to HARDWORK
TwoProc
Honored Contributor

Re: du runs forever

Question: was this faster before?

It's probably because that subdir tree you've asked it to walk holds a sizeable chunk of those 4.5 million inodes. :-)

All kidding aside, I've seen directories exhibit this behavior after having lots and lots of little files and being nearly full.

Let it run, and finish so you can see what you're dealing with.

It could be that the problem is that you've got that many inodes out there, and it is what it is.

But it could also be a simple case of backing up everything out there, doing a newfs on it, and restoring everything back onto the mount point.
We are the people our parents warned us about --Jimmy Buffett
Dani Seely
Valued Contributor

Re: du runs forever

Hey Jay,
Seriously, the command you used is a valid command, the size of your filesystem (75 GB) is just HUGE and it will take time to traverse the files in order to total up the disk usage ... so, just let it run. If it's tying up your screen, open another xterm window or submit your command in the background and have it dump the output to a file. Otherwise, if the command running does not hinder your work, just let it run.
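A minimal sketch of that background-run suggestion, using a throwaway mktemp directory as a stand-in for the real 75 GB filesystem (paths and file names here are made up for illustration):

```shell
# Run du in the background with its output captured to a file, so the
# terminal stays free. The mktemp directory stands in for the real mount.
DIR=$(mktemp -d)
echo "some data" > "$DIR/file1"

nohup du -skx "$DIR" > "$DIR.report" 2>&1 &
wait $!                      # in real use, just let it run and check back later
cat "$DIR.report"            # one line: total KB, then the path
rm -rf "$DIR" "$DIR.report"
```

With `nohup` the job also survives the xterm being closed, which matters for a multi-hour run.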

Hang in there.
Together We Stand!
Bill Hassell
Honored Contributor

Re: du runs forever

And if this is a production server, the du command will severely impact the filesystems it is searching (and slow down du too). You may need to run the du after hours. There is no way to speed up the analysis of 4.5 million inodes.
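One small mitigation, not mentioned in the post above and only marginal since du is I/O-bound rather than CPU-bound, is to run it at a lowered scheduling priority. The POSIX `nice -n` syntax is assumed here; older shells may want the `nice -19` form instead:

```shell
# Sketch: run du at the lowest CPU priority so it yields to production
# work. This only throttles CPU scheduling -- the disk I/O that makes
# du slow across 4.5 million inodes is largely unaffected.
DIR=$(mktemp -d)
echo "payload" > "$DIR/f"
nice -n 19 du -sk "$DIR"
rm -rf "$DIR"
```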


Bill Hassell, sysadmin
S.Rider
Regular Advisor

Re: du runs forever

FYI - my "du" finished after 4.5 hours, showing that one of its subdirs had approx 70 GB worth of data. I suspected that subdir in advance, so the script I ran did a "du" of that guy next. That one took just over 6 hours (which I'm guessing was due to some backup load on the system). I'll be drilling down a couple more levels tonight.
By the way, the filesystem size, the naming conventions within it, and the retention periods were set up way before my time, and I've had multiple people tell me there ain't no way that part is going to get straightened out.
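That drill-down step can be sketched as a loop over the immediate subdirectories, sorted numerically so the heaviest consumer lands at the bottom (the directory names below are invented for illustration):

```shell
# du each immediate subdirectory and sort by size, so the
# biggest consumer appears last in the output.
DIR=$(mktemp -d)
mkdir "$DIR/small" "$DIR/big"
dd if=/dev/zero of="$DIR/big/blob" bs=1024 count=64 2>/dev/null
for d in "$DIR"/*/ ; do
    du -skx "$d"
done | sort -n
rm -rf "$DIR"
```

Repeating the same loop inside whichever subdirectory tops the list is exactly the "drill down a level" step, without re-walking the siblings.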
Ride Boldly Ride, but watch out for El Dorado's
Niraj Kumar Verma
Trusted Contributor

Re: du runs forever

Hi,

Is there any faster alternative to du ?
Niraj.Verma@philips.com
Bill Hassell
Honored Contributor

Re: du runs forever

It isn't the amount of data (70 GB), it is the number of files and directories to traverse. There is nothing you can do to 'fix' a massively large number of files and directories. Commands like find and du must traverse the directory tree to obtain information about the files. There is no alternative or faster way to do this. The fact that Unix has no practical limit on the number of files in a directory does not make such a design a good thing.
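That contrast can be seen directly: df answers from summary counters the filesystem keeps in its superblock and returns instantly, while du has to stat every file under the path, which is why there is no per-directory shortcut. A small sketch:

```shell
# df reads counters maintained by the filesystem itself -- instant,
# but only at whole-filesystem granularity.
df -k . | tail -1

# du must walk the tree and stat each file -- the only way to get a
# per-directory total, and the reason its runtime scales with inode count.
DIR=$(mktemp -d)
echo "data" > "$DIR/f"
du -sk "$DIR"
rm -rf "$DIR"
```

So for "how full is the filesystem" use df; du is only needed when you must know which directory the space went to.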

Now, you can speed up access and directory searches by replacing the 70 GB of disk space with a RAM disk appliance. Might be (OK, it is really) pricey, but response time is phenomenal.


Bill Hassell, sysadmin