<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Orphan Inode and Maximal mount count reached in Operating System - Linux</title>
    <link>https://community.hpe.com/t5/operating-system-linux/orphan-inode-and-maximal-mount-count-reached/m-p/3956985#M27307</link>
    <description>Forum thread: recurring "EXT2-fs warning: maximal mount count reached" messages and a non-zero First orphan inode on the service consoles of a VMware ESX 3.0.1 environment (modified Linux 2.4.21-37.0.2), with advice on e2fsck, tune2fs, and dumpe2fs.</description>
    <pubDate>Tue, 06 Mar 2007 23:24:40 GMT</pubDate>
    <dc:creator>Shayne Ludlow</dc:creator>
    <dc:date>2007-03-06T23:24:40Z</dc:date>
    <item>
      <title>Orphan Inode and Maximal mount count reached</title>
      <link>https://community.hpe.com/t5/operating-system-linux/orphan-inode-and-maximal-mount-count-reached/m-p/3956985#M27307</link>
      <description>All,&lt;BR /&gt;&lt;BR /&gt;We have a VMware ESX 3.0.1 environment with 3 hosts running the modified VMware version of Linux 2.4.21-37.0.2 on the service consoles.&lt;BR /&gt;&lt;BR /&gt;We recently noticed the following message appearing in /var/log/messages and on the console:&lt;BR /&gt;"EXT2-fs warning: maximal mount count reached, running e2fsck is recommended"&lt;BR /&gt;&lt;BR /&gt;To remedy this, we moved all the guest VMs away from the host in question, went to single user mode, and ran e2fsck on each of the mount points.&lt;BR /&gt;&lt;BR /&gt;Unfortunately the message did not go away. We were then advised to increase the maximum mount count to 100 with the tune2fs -c command. This has stopped the message, but we are still concerned that there may be some corruption within the filesystem.&lt;BR /&gt;&lt;BR /&gt;We have also since noticed that the tune2fs -l /dev/sda2 command is reporting a number in the "First orphan inode" field; see below.&lt;BR /&gt;[root@bgwpvmx1 log]# tune2fs -l /dev/sda2&lt;BR /&gt;tune2fs 1.32 (09-Nov-2002)&lt;BR /&gt;Filesystem volume name:   /&lt;BR /&gt;Last mounted on:          &lt;not available&gt;&lt;BR /&gt;Filesystem UUID:          46e1fb3a-545d-465c-9224-e2cf471affd9&lt;BR /&gt;Filesystem magic number:  0xEF53&lt;BR /&gt;Filesystem revision #:    1 (dynamic)&lt;BR /&gt;Filesystem features:      has_journal filetype needs_recovery sparse_super&lt;BR /&gt;Default mount options:    (none)&lt;BR /&gt;Filesystem state:         clean&lt;BR /&gt;Errors behavior:          Continue&lt;BR /&gt;Filesystem OS type:       Linux&lt;BR /&gt;Inode count:              1281696&lt;BR /&gt;Block count:              2560359&lt;BR /&gt;Reserved block count:     128017&lt;BR /&gt;Free blocks:              2091686&lt;BR /&gt;Free inodes:              1226420&lt;BR /&gt;First block:              0&lt;BR /&gt;Block size:               4096&lt;BR /&gt;Fragment size:            4096&lt;BR /&gt;Fragments per group:      32768&lt;BR /&gt;Inodes per group:         16224&lt;BR /&gt;Inode blocks per group:   507&lt;BR /&gt;Filesystem created:       Wed Nov  8 22:11:58 2006&lt;BR /&gt;Last mount time:          Mon Mar  5 17:55:38 2007&lt;BR /&gt;Last write time:          Mon Mar  5 17:55:38 2007&lt;BR /&gt;Mount count:              9&lt;BR /&gt;Maximum mount count:      -1&lt;BR /&gt;Last checked:             Wed Nov  8 22:11:58 2006&lt;BR /&gt;Check interval:           0 (&lt;none&gt;)&lt;BR /&gt;Reserved blocks uid:      0 (user root)&lt;BR /&gt;Reserved blocks gid:      0 (group root)&lt;BR /&gt;First inode:              11&lt;BR /&gt;Inode size:               128&lt;BR /&gt;Journal UUID:             &lt;none&gt;&lt;BR /&gt;Journal inode:            8&lt;BR /&gt;Journal device:           0x0000&lt;BR /&gt;First orphan inode:       1038577&lt;BR /&gt;[root@bgwpvmx1 log]#&lt;BR /&gt;&lt;BR /&gt;We would appreciate any advice.&lt;BR /&gt;&lt;BR /&gt;Regards&lt;BR /&gt;&lt;BR /&gt;Colin
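&lt;BR /&gt;&lt;BR /&gt;P.S. For completeness, the commands we ran were along these lines (from single user mode; device name as in the listing above):&lt;BR /&gt;&lt;BR /&gt;# check the filesystem; add -f to force a full pass even when it is marked clean&lt;BR /&gt;e2fsck /dev/sda2&lt;BR /&gt;# raise the maximum mount count to 100&lt;BR /&gt;tune2fs -c 100 /dev/sda2</description>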
      <pubDate>Tue, 06 Mar 2007 23:24:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/orphan-inode-and-maximal-mount-count-reached/m-p/3956985#M27307</guid>
      <dc:creator>Shayne Ludlow</dc:creator>
      <dc:date>2007-03-06T23:24:40Z</dc:date>
    </item>
    <item>
      <title>Re: Orphan Inode and Maximal mount count reached</title>
      <link>https://community.hpe.com/t5/operating-system-linux/orphan-inode-and-maximal-mount-count-reached/m-p/3956986#M27308</link>
      <description>Shalom Colin,&lt;BR /&gt;&lt;BR /&gt;If the system can pass the fsck that occurs when the VM host starts, and all applications that use the filesystem start without error, there is likely no data corruption.&lt;BR /&gt;&lt;BR /&gt;I believe the problem is solved and you can move on to something a bit more fun.&lt;BR /&gt;&lt;BR /&gt;SEP
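&lt;BR /&gt;&lt;BR /&gt;If you want extra reassurance, a read-only pass along these lines (from single user mode) will report any problems without changing anything on disk:&lt;BR /&gt;&lt;BR /&gt;# -f forces a check even though the filesystem is marked clean;&lt;BR /&gt;# -n opens the filesystem read-only and answers "no" to all prompts&lt;BR /&gt;e2fsck -fn /dev/sda2</description>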
      <pubDate>Wed, 07 Mar 2007 10:32:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/orphan-inode-and-maximal-mount-count-reached/m-p/3956986#M27308</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2007-03-07T10:32:35Z</dc:date>
    </item>
    <item>
      <title>Re: Orphan Inode and Maximal mount count reached</title>
      <link>https://community.hpe.com/t5/operating-system-linux/orphan-inode-and-maximal-mount-count-reached/m-p/3956987#M27309</link>
      <description>&gt; We recently noticed the following message appearing in /var/log/messages and on the console:&lt;BR /&gt;&gt; "EXT2-fs warning: maximal mount count reached, running e2fsck is recommended"&lt;BR /&gt;&gt; To remedy this, we moved all the guest VMs away from the host in question, went to single user mode, and ran e2fsck on each of the mount points.&lt;BR /&gt;&lt;BR /&gt;This seems strange to me, because:&lt;BR /&gt;1) by default, the Linux boot scripts *run* fsck if the maximal mount count is reached;&lt;BR /&gt;2) running fsck should reset the mount counter to 0.&lt;BR /&gt;&lt;BR /&gt;As far as I remember, dumpe2fs should show the mount counter.
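&lt;BR /&gt;&lt;BR /&gt;Something like this should do it (-h prints only the superblock summary, which includes both counters):&lt;BR /&gt;&lt;BR /&gt;dumpe2fs -h /dev/sda2 | grep -i 'mount count'</description>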
      <pubDate>Thu, 08 Mar 2007 03:14:37 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/orphan-inode-and-maximal-mount-count-reached/m-p/3956987#M27309</guid>
      <dc:creator>Vitaly Karasik_1</dc:creator>
      <dc:date>2007-03-08T03:14:37Z</dc:date>
    </item>
    <item>
      <title>Re: Orphan Inode and Maximal mount count reached</title>
      <link>https://community.hpe.com/t5/operating-system-linux/orphan-inode-and-maximal-mount-count-reached/m-p/3956988#M27310</link>
      <description>Note that the maximum mount count in the tune2fs listing in the original post is -1.&lt;BR /&gt;&lt;BR /&gt;To disable the mount-count check, you should set the value to 0 (according to "man tune2fs").&lt;BR /&gt;&lt;BR /&gt;It might be that the value -1 causes the filesystem to be checked fully at *every* boot, as it is smaller than any value the mount counter can reach.
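&lt;BR /&gt;&lt;BR /&gt;If you do want the automatic checks disabled entirely, something along these lines should work (verify afterwards with tune2fs -l):&lt;BR /&gt;&lt;BR /&gt;# -c 0 disables the mount-count check, -i 0 disables the time-based check interval&lt;BR /&gt;tune2fs -c 0 -i 0 /dev/sda2&lt;BR /&gt;tune2fs -l /dev/sda2 | grep -iE 'mount count|check interval'</description>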
      <pubDate>Thu, 08 Mar 2007 05:30:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/orphan-inode-and-maximal-mount-count-reached/m-p/3956988#M27310</guid>
      <dc:creator>Matti_Kurkela</dc:creator>
      <dc:date>2007-03-08T05:30:40Z</dc:date>
    </item>
  </channel>
</rss>

