<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Filesystem Defragmentation Question in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/filesystem-defragmentation-question/m-p/3467921#M624482</link>
    <description>I have a 100 GB filesystem, which suffers terribly from a bad software implementation that we hope to fix soon.  Anyway, there are several million files of 3-30 KB apiece, averaging 200,000 files in a directory across several hundred directories.  I understand that performance suffers because of the number of files in a directory; however, could fragmentation of inodes and directories be an issue as well?  About 35 GB of files a month are put onto the system, and another 35 GB are archived off to a tar/zip on a cheaper storage system.  The fsadm output is below.  Can someone explain what these numbers mean, and at what point would you want to defragment a filesystem?&lt;BR /&gt;&lt;BR /&gt;TIA!&lt;BR /&gt;&lt;BR /&gt;"fsadm -F vxfs -D -E /interfaces"&lt;BR /&gt;&lt;BR /&gt;  Directory Fragmentation Report&lt;BR /&gt;             Dirs        Total      Immed    Immeds   Dirs to   Blocks to&lt;BR /&gt;             Searched    Blocks     Dirs     to Add   Reduce    Reduce&lt;BR /&gt;  total          3280    218842       518       196      1255       68161&lt;BR /&gt;&lt;BR /&gt;  Extent Fragmentation Report&lt;BR /&gt;        Total    Average      Average     Total&lt;BR /&gt;        Files    File Blks    # Extents   Free Blks&lt;BR /&gt;     11450869           1           1     6725426&lt;BR /&gt;    blocks used for indirects: 3626&lt;BR /&gt;    % Free blocks in extents smaller than 64 blks: 23.99&lt;BR /&gt;    % Free blocks in extents smaller than  8 blks: 3.32&lt;BR /&gt;    % blks allocated to extents 64 blks or larger: 1.83&lt;BR /&gt;    Free Extents By Size&lt;BR /&gt;        1:      17932      2:      12995      4:       2944      8:       7114&lt;BR /&gt;       16:       7979     32:       5079     64:       4187    128:       2692&lt;BR /&gt;      256:       1217    512:        390   1024:        105   2048:          8&lt;BR /&gt;     4096:          3   8192:          0  16384:          1  32768:          2&lt;BR /&gt;</description>
    <pubDate>Thu, 20 Jan 2005 15:36:29 GMT</pubDate>
    <dc:creator>David Poe_2</dc:creator>
    <dc:date>2005-01-20T15:36:29Z</dc:date>
    <item>
      <title>Filesystem Defragmentation Question</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/filesystem-defragmentation-question/m-p/3467921#M624482</link>
      <description>I have a 100 GB filesystem, which suffers terribly from a bad software implementation that we hope to fix soon.  Anyway, there are several million files of 3-30 KB apiece, averaging 200,000 files in a directory across several hundred directories.  I understand that performance suffers because of the number of files in a directory; however, could fragmentation of inodes and directories be an issue as well?  About 35 GB of files a month are put onto the system, and another 35 GB are archived off to a tar/zip on a cheaper storage system.  The fsadm output is below.  Can someone explain what these numbers mean, and at what point would you want to defragment a filesystem?&lt;BR /&gt;&lt;BR /&gt;TIA!&lt;BR /&gt;&lt;BR /&gt;"fsadm -F vxfs -D -E /interfaces"&lt;BR /&gt;&lt;BR /&gt;  Directory Fragmentation Report&lt;BR /&gt;             Dirs        Total      Immed    Immeds   Dirs to   Blocks to&lt;BR /&gt;             Searched    Blocks     Dirs     to Add   Reduce    Reduce&lt;BR /&gt;  total          3280    218842       518       196      1255       68161&lt;BR /&gt;&lt;BR /&gt;  Extent Fragmentation Report&lt;BR /&gt;        Total    Average      Average     Total&lt;BR /&gt;        Files    File Blks    # Extents   Free Blks&lt;BR /&gt;     11450869           1           1     6725426&lt;BR /&gt;    blocks used for indirects: 3626&lt;BR /&gt;    % Free blocks in extents smaller than 64 blks: 23.99&lt;BR /&gt;    % Free blocks in extents smaller than  8 blks: 3.32&lt;BR /&gt;    % blks allocated to extents 64 blks or larger: 1.83&lt;BR /&gt;    Free Extents By Size&lt;BR /&gt;        1:      17932      2:      12995      4:       2944      8:       7114&lt;BR /&gt;       16:       7979     32:       5079     64:       4187    128:       2692&lt;BR /&gt;      256:       1217    512:        390   1024:        105   2048:          8&lt;BR /&gt;     4096:          3   8192:          0  16384:          1  32768:          2&lt;BR /&gt;</description>
      <pubDate>Thu, 20 Jan 2005 15:36:29 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/filesystem-defragmentation-question/m-p/3467921#M624482</guid>
      <dc:creator>David Poe_2</dc:creator>
      <dc:date>2005-01-20T15:36:29Z</dc:date>
    </item>
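The "Free Extents By Size" histogram in the question can be sanity-checked by totalling its buckets. The sketch below is plain Python, not an fsadm feature; the counts are copied from the post, and it assumes each power-of-two label is the minimum size of the extents in that bucket, so the block total is only a lower bound.

```python
# Bucket counts copied from the "Free Extents By Size" section of the
# fsadm -D -E report above.  Key = extent size in blocks (power of two),
# value = number of free extents reported in that bucket.
buckets = {
    1: 17932, 2: 12995, 4: 2944, 8: 7114,
    16: 7979, 32: 5079, 64: 4187, 128: 2692,
    256: 1217, 512: 390, 1024: 105, 2048: 8,
    4096: 3, 8192: 0, 16384: 1, 32768: 2,
}

total_extents = sum(buckets.values())
# Treating each label as the minimum extent size in its bucket gives a
# lower bound on the free blocks held in these extents.
min_free_blocks = sum(size * count for size, count in buckets.items())
# Free extents in the small buckets (1..32 blocks) -- the fragmentation
# that the "% Free blocks in extents smaller than 64 blks" line warns about.
small_extents = sum(buckets[s] for s in (1, 2, 4, 8, 16, 32))

print(total_extents)
print(min_free_blocks)
print(round(small_extents / total_extents * 100, 1))
```

On the poster's numbers, the large majority of free *extents* are under 64 blocks; note this is a count of extents, while the report's 23.99% figure is a share of free *blocks*, so the two percentages measure related but different things.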
    <item>
      <title>Re: Filesystem Defragmentation Question</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/filesystem-defragmentation-question/m-p/3467922#M624483</link>
      <description>I'd definitely look at using fsadm to reorg that stuff.  Another thing to think of is - just after the weekend full backup, just before bringing your apps back up (big assumptions in place there) - newfs that thing and restore from tape.  I'd do this once in a while (every quarter, six months, or year or so) - just to give your files a filesystem which provides a (nods to the late Bob Ross) "happy place for your &lt;FILES&gt; to live."&lt;BR /&gt;&lt;BR /&gt;&lt;/FILES&gt;</description>
      <pubDate>Thu, 20 Jan 2005 17:58:15 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/filesystem-defragmentation-question/m-p/3467922#M624483</guid>
      <dc:creator>TwoProc</dc:creator>
      <dc:date>2005-01-20T17:58:15Z</dc:date>
    </item>
    <item>
      <title>Re: Filesystem Defragmentation Question</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/filesystem-defragmentation-question/m-p/3467923#M624484</link>
      <description>My experience with vxfs filesystems is that, even when they are heavily fragmented, the gains from frequent defrag operations are difficult to perceive and generally difficult to measure. I've never seen them exceed a 5% improvement, even on filesystems that have gone for months without a defrag. It's really your large directories that are killing you.&lt;BR /&gt;</description>
      <pubDate>Thu, 20 Jan 2005 18:03:52 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/filesystem-defragmentation-question/m-p/3467923#M624484</guid>
      <dc:creator>A. Clay Stephenson</dc:creator>
      <dc:date>2005-01-20T18:03:52Z</dc:date>
    </item>
    <item>
      <title>Re: Filesystem Defragmentation Question</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/filesystem-defragmentation-question/m-p/3467924#M624485</link>
      <description>Hi David,&lt;BR /&gt;&lt;BR /&gt;As rules go:&lt;BR /&gt;&lt;BR /&gt;1) Anytime you get into 6 digits or more in dir entries, performance will *always* suffer.&lt;BR /&gt;You just can't cache or hash that many!&lt;BR /&gt;This is a *very* poor design &amp;amp; you need to change it.&lt;BR /&gt;&lt;BR /&gt;2) Fragmentation is not really a problem until you've approached/exceeded 90% usage at some point in time.&lt;BR /&gt;&lt;BR /&gt;3) It can't hurt (except CPU/disk usage) to run the actual defrag command, &amp;amp; *always* use -d -D &amp;amp; -e -E when doing so, so you can see the before/after results. I generally run it *at least* twice, as it can be a step-by-step process to get to optimum. BUT with those dir entry numbers...hell, it could take you 4-5 times or more.&lt;BR /&gt;&lt;BR /&gt;You REALLY need to trim those directories or frankly you'll be battling this problem forever.&lt;BR /&gt;For details on the output, run&lt;BR /&gt;man fsadm_vxfs&lt;BR /&gt;It'll explain it all.&lt;BR /&gt;&lt;BR /&gt;My 2 cents,&lt;BR /&gt;Jeff</description>
      <pubDate>Thu, 20 Jan 2005 18:11:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/filesystem-defragmentation-question/m-p/3467924#M624485</guid>
      <dc:creator>Jeff Schussele</dc:creator>
      <dc:date>2005-01-20T18:11:09Z</dc:date>
    </item>
    <item>
      <title>Re: Filesystem Defragmentation Question</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/filesystem-defragmentation-question/m-p/3467925#M624486</link>
      <description>David,&lt;BR /&gt;I like to run the defrag as follows:&lt;BR /&gt;&lt;BR /&gt;fsadm -F vxfs -deDE /interfaces&lt;BR /&gt;-e  reorganizes &amp;amp; consolidates extents&lt;BR /&gt;-d  reorganizes &amp;amp; optimizes directories&lt;BR /&gt;-E  reports extent fragmentation&lt;BR /&gt;-D  reports directory fragmentation&lt;BR /&gt;&lt;BR /&gt;Under "Immeds to Add", "Dirs to Reduce" &amp;amp; "Blocks to Reduce", you want the numbers as close to zero as possible.&lt;BR /&gt;&lt;BR /&gt;Under "% Free blocks in extents smaller than 64 blks:" &amp;amp; "% Free blocks in extents smaller than 8 blks:", you want these numbers as low as possible.&lt;BR /&gt;&lt;BR /&gt;Under "% blks allocated to extents 64 blks or larger:", you want this number as large as possible.&lt;BR /&gt;&lt;BR /&gt;Hope that this helps.</description>
      <pubDate>Thu, 20 Jan 2005 19:16:50 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/filesystem-defragmentation-question/m-p/3467925#M624486</guid>
      <dc:creator>Sheriff Andy</dc:creator>
      <dc:date>2005-01-20T19:16:50Z</dc:date>
    </item>
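Those targets can be folded into a quick go/no-go check. This is only a sketch; the 10% and 50% cutoffs are made-up illustrations, not values from the thread or the VxFS documentation:

```python
def needs_reorg(dirs_to_reduce: int,
                pct_free_under_64: float,
                pct_alloc_64_plus: float) -> bool:
    """Rough reorg heuristic built from the three numbers discussed above.

    Flags the filesystem when directories could be reduced, too much free
    space sits in small extents, or too little data sits in large extents.
    The 10.0 / 50.0 thresholds are arbitrary examples -- tune for your load.
    """
    return (dirs_to_reduce > 0
            or pct_free_under_64 > 10.0
            or pct_alloc_64_plus < 50.0)

# The poster's report: 1255 dirs to reduce, 23.99% free blocks in small
# extents, 1.83% of blocks in large extents -- every indicator says reorg.
print(needs_reorg(1255, 23.99, 1.83))
```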
    <item>
      <title>Re: Filesystem Defragmentation Question</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/filesystem-defragmentation-question/m-p/3467926#M624487</link>
      <description>Thanks for the info; it definitely gives me something to look through.  I believe I will try the defrag, and run it at a lower priority so it doesn't kill disk/CPU.  All of our HP servers are running full throttle 80% of the time.  We are currently in the middle of putting in several new (and larger) HP servers to alleviate our load problem.  We also have a new project to change the way we store files, but that project won't be up and running for another year and a half, unfortunately.  The idea is to store several of these smaller files in one larger file.  In defense of the original architects, the intention was that we would not be anywhere near our current load.&lt;BR /&gt;&lt;BR /&gt;Thanks again for the information!</description>
      <pubDate>Fri, 21 Jan 2005 12:01:25 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/filesystem-defragmentation-question/m-p/3467926#M624487</guid>
      <dc:creator>David Poe_2</dc:creator>
      <dc:date>2005-01-21T12:01:25Z</dc:date>
    </item>
  </channel>
</rss>

