<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Out of inodes on GFS file System in Operating System - Linux</title>
    <link>https://community.hpe.com/t5/operating-system-linux/out-of-inodes-on-gfs-file-system/m-p/5265833#M52785</link>
    <description>We are running Red Hat 5.5, and our file systems are using GFS...&lt;BR /&gt;&lt;BR /&gt;We have a file system that has run out of inodes... How can this be increased for a GFS file system ??&lt;BR /&gt;&lt;BR /&gt;# gfs_tool df /home/smonitor&lt;BR /&gt;/home/smonitor:&lt;BR /&gt;  SB lock proto = "lock_dlm"&lt;BR /&gt;  SB lock table = "APSMON:gfs08"&lt;BR /&gt;  SB ondisk format = 1309&lt;BR /&gt;  SB multihost format = 1401&lt;BR /&gt;  Block size = 4096&lt;BR /&gt;  Journals = 3&lt;BR /&gt;  Resource Groups = 60&lt;BR /&gt;  Mounted lock proto = "lock_dlm"&lt;BR /&gt;  Mounted lock table = "APSMON:gfs08"&lt;BR /&gt;  Mounted host data = "jid=2:id=393219:first=0"&lt;BR /&gt;  Journal number = 2&lt;BR /&gt;  Lock module flags = 0&lt;BR /&gt;  Local flocks = FALSE&lt;BR /&gt;  Local caching = FALSE&lt;BR /&gt;  Oopses OK = FALSE&lt;BR /&gt;&lt;BR /&gt;  Type           Total Blocks   Used Blocks    Free Blocks    use%&lt;BR /&gt;  ------------------------------------------------------------------------&lt;BR /&gt;  inodes         2580593        2580593        0              100%&lt;BR /&gt;  metadata       1165229        113033         1052196        10%&lt;BR /&gt;  data           87714          86975          739            99%&lt;BR /&gt;#</description>
    <pubDate>Wed, 08 Dec 2010 14:39:15 GMT</pubDate>
    <dc:creator>MikeL_4</dc:creator>
    <dc:date>2010-12-08T14:39:15Z</dc:date>
    <item>
      <title>Out of inodes on GFS file System</title>
      <link>https://community.hpe.com/t5/operating-system-linux/out-of-inodes-on-gfs-file-system/m-p/5265833#M52785</link>
      <description>We are running Red Hat 5.5, and our file systems are using GFS...&lt;BR /&gt;&lt;BR /&gt;We have a file system that has run out of inodes... How can this be increased for a GFS file system ??&lt;BR /&gt;&lt;BR /&gt;# gfs_tool df /home/smonitor&lt;BR /&gt;/home/smonitor:&lt;BR /&gt;  SB lock proto = "lock_dlm"&lt;BR /&gt;  SB lock table = "APSMON:gfs08"&lt;BR /&gt;  SB ondisk format = 1309&lt;BR /&gt;  SB multihost format = 1401&lt;BR /&gt;  Block size = 4096&lt;BR /&gt;  Journals = 3&lt;BR /&gt;  Resource Groups = 60&lt;BR /&gt;  Mounted lock proto = "lock_dlm"&lt;BR /&gt;  Mounted lock table = "APSMON:gfs08"&lt;BR /&gt;  Mounted host data = "jid=2:id=393219:first=0"&lt;BR /&gt;  Journal number = 2&lt;BR /&gt;  Lock module flags = 0&lt;BR /&gt;  Local flocks = FALSE&lt;BR /&gt;  Local caching = FALSE&lt;BR /&gt;  Oopses OK = FALSE&lt;BR /&gt;&lt;BR /&gt;  Type           Total Blocks   Used Blocks    Free Blocks    use%&lt;BR /&gt;  ------------------------------------------------------------------------&lt;BR /&gt;  inodes         2580593        2580593        0              100%&lt;BR /&gt;  metadata       1165229        113033         1052196        10%&lt;BR /&gt;  data           87714          86975          739            99%&lt;BR /&gt;#</description>
      <pubDate>Wed, 08 Dec 2010 14:39:15 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/out-of-inodes-on-gfs-file-system/m-p/5265833#M52785</guid>
      <dc:creator>MikeL_4</dc:creator>
      <dc:date>2010-12-08T14:39:15Z</dc:date>
    </item>
    <item>
      <title>Re: Out of inodes on GFS file System</title>
      <link>https://community.hpe.com/t5/operating-system-linux/out-of-inodes-on-gfs-file-system/m-p/5265834#M52786</link>
      <description>Looking further, all of our GFS file systems show this 100% on inodes, so I don't think that is an issue...&lt;BR /&gt;&lt;BR /&gt;The df command shows inodes are OK...&lt;BR /&gt;# df -i /home/smonitor&lt;BR /&gt;Filesystem            Inodes   IUsed   IFree IUse% Mounted on&lt;BR /&gt;/dev/mapper/apsmonVG-lvol8&lt;BR /&gt;                     3633489 2582857 1050632   72% /home/smonitor&lt;BR /&gt;#&lt;BR /&gt;&lt;BR /&gt;df on the file system shows 73% full, but gfs_tool shows 99%... Not sure what to believe any more while trying to figure this out...&lt;BR /&gt;&lt;BR /&gt;# df -h /home/smonitor&lt;BR /&gt;Filesystem            Size  Used Avail Use% Mounted on&lt;BR /&gt;/dev/mapper/apsmonVG-lvol8&lt;BR /&gt;                       15G   11G  4.1G  73% /home/smonitor&lt;BR /&gt;#</description>
      <pubDate>Wed, 08 Dec 2010 15:03:55 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/out-of-inodes-on-gfs-file-system/m-p/5265834#M52786</guid>
      <dc:creator>MikeL_4</dc:creator>
      <dc:date>2010-12-08T15:03:55Z</dc:date>
    </item>
    <item>
      <title>Re: Out of inodes on GFS file System</title>
      <link>https://community.hpe.com/t5/operating-system-linux/out-of-inodes-on-gfs-file-system/m-p/5265835#M52787</link>
      <description>What option did you use to create your GFS filesystem? If you planned on using it for zillions of files -- you should have chosen mail storage or something...&lt;BR /&gt;</description>
      <pubDate>Wed, 08 Dec 2010 15:15:49 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/out-of-inodes-on-gfs-file-system/m-p/5265835#M52787</guid>
      <dc:creator>Zinky</dc:creator>
      <dc:date>2010-12-08T15:15:49Z</dc:date>
    </item>
    <item>
      <title>Re: Out of inodes on GFS file System</title>
      <link>https://community.hpe.com/t5/operating-system-linux/out-of-inodes-on-gfs-file-system/m-p/5265836#M52788</link>
      <description>Hi Mike,&lt;BR /&gt;&lt;BR /&gt;The Red Hat article below explains why GFS becomes slow when the filesystem is close to 100% full. The gfs_tool output actually gives the number of resource groups.&lt;BR /&gt;&lt;BR /&gt;&lt;A href="https://access.redhat.com/kb/docs/DOC-6479" target="_blank"&gt;https://access.redhat.com/kb/docs/DOC-6479&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;A href="https://access.redhat.com/kb/docs/DOC-6466" target="_blank"&gt;https://access.redhat.com/kb/docs/DOC-6466&lt;/A&gt; ==&amp;gt; this article explains why there is a mismatch between the df &amp;amp; du outputs.</description>
      <pubDate>Wed, 08 Dec 2010 15:18:19 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/out-of-inodes-on-gfs-file-system/m-p/5265836#M52788</guid>
      <dc:creator>Chhaya_Z</dc:creator>
      <dc:date>2010-12-08T15:18:19Z</dc:date>
    </item>
    <item>
      <title>Re: Out of inodes on GFS file System</title>
      <link>https://community.hpe.com/t5/operating-system-linux/out-of-inodes-on-gfs-file-system/m-p/5265837#M52789</link>
      <description>Found the solution using the command:&lt;BR /&gt;&lt;BR /&gt;gfs_tool reclaim &amp;lt;MOUNT_POINT&amp;gt;</description>
      <pubDate>Thu, 09 Dec 2010 13:56:30 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/out-of-inodes-on-gfs-file-system/m-p/5265837#M52789</guid>
      <dc:creator>MikeL_4</dc:creator>
      <dc:date>2010-12-09T13:56:30Z</dc:date>
    </item>
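    <!-- Summary sketch of the resolution in this thread. The commands are the ones the
         posters themselves ran; they assume a mounted GFS filesystem at /home/smonitor
         (the thread's mount point) with gfs-utils installed, and must run as root on a
         cluster node. Note that the 1,050,632 free inodes reported by df roughly match
         the 1,052,196 free metadata blocks in the gfs_tool output, since GFS can convert
         free metadata blocks into inodes.

    ```shell
    # Per-type block usage as GFS accounts for it (inodes / metadata / data):
    gfs_tool df /home/smonitor

    # VFS-level view for comparison; its free-inode count includes free metadata blocks:
    df -i /home/smonitor

    # Reclaim unused inode and metadata blocks back to the free pool (the fix found above):
    gfs_tool reclaim /home/smonitor
    ```
    -->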
  </channel>
</rss>

