<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: du runs forever in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/du-runs-forever/m-p/3496269#M215835</link>
    <description>du will go through each inode to determine the file size. And if you have 14k dirs, it will of course take time.&lt;BR /&gt;&lt;BR /&gt;Anil</description>
    <pubDate>Wed, 02 Mar 2005 13:49:22 GMT</pubDate>
    <dc:creator>RAC_1</dc:creator>
    <dc:date>2005-03-02T13:49:22Z</dc:date>
    <item>
      <title>du runs forever</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/du-runs-forever/m-p/3496268#M215834</link>
      <description>I was asked to check into some long-running backups on an unfamiliar system. The filesystem is approx 75 GB used and has approx 4.5 M inodes in use.&lt;BR /&gt;A "du -srkx ./*" has spent about 1:20 so far on one of the subdirs that has approx 14k directories within it. I'm watching the open files via Glance and it is moving along. Haven't seen any errors pop either.&lt;BR /&gt;Anyone got any ideas why du would be so slow?</description>
      <pubDate>Wed, 02 Mar 2005 13:45:39 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/du-runs-forever/m-p/3496268#M215834</guid>
      <dc:creator>S.Rider</dc:creator>
      <dc:date>2005-03-02T13:45:39Z</dc:date>
    </item>
    <item>
      <title>Re: du runs forever</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/du-runs-forever/m-p/3496269#M215835</link>
      <description>du will go through each inode to determine the file size. And if you have 14k dirs, it will of course take time.&lt;BR /&gt;&lt;BR /&gt;Anil</description>
      <pubDate>Wed, 02 Mar 2005 13:49:22 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/du-runs-forever/m-p/3496269#M215835</guid>
      <dc:creator>RAC_1</dc:creator>
      <dc:date>2005-03-02T13:49:22Z</dc:date>
    </item>
    <item>
      <title>Re: du runs forever</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/du-runs-forever/m-p/3496270#M215836</link>
      <description>Question: was this faster before?&lt;BR /&gt;&lt;BR /&gt;It's probably because that subdir tree that you've asked it to walk has a sizeable chunk of those 4.5 million inodes.  :-)&lt;BR /&gt;&lt;BR /&gt;All kidding aside, I've seen directories exhibit this behavior after accumulating lots and lots of little files and being nearly full.&lt;BR /&gt;&lt;BR /&gt;Let it run and finish so you can see what you're dealing with.&lt;BR /&gt;&lt;BR /&gt;It could be that the problem is simply that you've got that many inodes out there, and it is what it is.&lt;BR /&gt;&lt;BR /&gt;But it could also be a simple case of backing up everything out there, doing a newfs on it, and restoring everything back onto the mount point.&lt;BR /&gt;</description>
      <pubDate>Wed, 02 Mar 2005 13:56:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/du-runs-forever/m-p/3496270#M215836</guid>
      <dc:creator>TwoProc</dc:creator>
      <dc:date>2005-03-02T13:56:13Z</dc:date>
    </item>
    <item>
      <title>Re: du runs forever</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/du-runs-forever/m-p/3496271#M215837</link>
      <description>Hey Jay,&lt;BR /&gt;Seriously, the command you used is a valid command; the size of your filesystem (75 GB) is just HUGE, and it will take time to traverse the files in order to total up the disk usage ... so, just let it run.  If it's tying up your screen, open another xterm window or submit your command in the background and have it dump the output to a file.  Otherwise, if the running command does not hinder your work, just let it run.&lt;BR /&gt;&lt;BR /&gt;Hang in there.</description>
      <pubDate>Wed, 02 Mar 2005 15:59:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/du-runs-forever/m-p/3496271#M215837</guid>
      <dc:creator>Dani Seely</dc:creator>
      <dc:date>2005-03-02T15:59:14Z</dc:date>
    </item>
    <item>
      <title>Re: du runs forever</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/du-runs-forever/m-p/3496272#M215838</link>
      <description>And if this is a production server, the du command will severely impact the filesystems it is searching (and slow down du too). You may need to run the du after hours. There is no way to speed up the analysis of 4.5 million inodes.</description>
      <pubDate>Wed, 02 Mar 2005 23:13:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/du-runs-forever/m-p/3496272#M215838</guid>
      <dc:creator>Bill Hassell</dc:creator>
      <dc:date>2005-03-02T23:13:09Z</dc:date>
    </item>
    <item>
      <title>Re: du runs forever</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/du-runs-forever/m-p/3496273#M215839</link>
      <description>FYI - my "du" finished after 4.5 hours, showing one of its subdirs had approx 70 GB worth of data. I suspected that subdir in advance, so the script I ran did a "du" of that guy next. That one took just over 6 hours (which I'm guessing was due to some backup load on the system). I'll be drilling down a couple more levels tonight. &lt;BR /&gt;By the way, the filesystem size, the naming conventions within it, and the retention periods were set up way before my time, and I've had multiple people tell me there ain't no way that part is going to get straightened out.</description>
      <pubDate>Thu, 03 Mar 2005 09:44:47 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/du-runs-forever/m-p/3496273#M215839</guid>
      <dc:creator>S.Rider</dc:creator>
      <dc:date>2005-03-03T09:44:47Z</dc:date>
    </item>
    <item>
      <title>Re: du runs forever</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/du-runs-forever/m-p/3496274#M215840</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;Is there any faster alternative to du?</description>
      <pubDate>Thu, 06 Apr 2006 07:53:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/du-runs-forever/m-p/3496274#M215840</guid>
      <dc:creator>Niraj Kumar Verma</dc:creator>
      <dc:date>2006-04-06T07:53:35Z</dc:date>
    </item>
    <item>
      <title>Re: du runs forever</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/du-runs-forever/m-p/3496275#M215841</link>
      <description>It isn't the amount of data (70 GB), it is the number of files and directories to traverse. There is nothing you can do to 'fix' problems with a massively large number of files and directories. Commands like find and du must traverse the directory tree to obtain information about the files. There is no alternative or faster way to do this. The fact that Unix has no practical limit on the number of files in a directory does not make such a design a good thing.&lt;BR /&gt; &lt;BR /&gt;Now, you can speed up access and directory searches by replacing the 70 GB of disk space with a RAM disk appliance. Might be (OK, it is really) pricey, but response time is phenomenal.</description>
      <pubDate>Thu, 06 Apr 2006 10:33:04 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/du-runs-forever/m-p/3496275#M215841</guid>
      <dc:creator>Bill Hassell</dc:creator>
      <dc:date>2006-04-06T10:33:04Z</dc:date>
    </item>
  </channel>
</rss>