HPE Community > Operating System - Linux
03-06-2007 03:24 PM
Orphan Inode and Maximal mount count reached
All,
We have a VMware ESX 3.0.1 environment with three hosts running the modified VMware version of Linux 2.4.21-37.0.2 on the service consoles.
We recently noticed the following message appearing in the /var/log/messages log and on the console:
"EXT2-fs warning: maximal mount count reached, running e2fsck is recommended".
To remedy this we moved all the guest VMs away from the host in question, went to single-user mode, and ran e2fsck on each of the mount points.
Unfortunately the message did not go away. We were then advised to increase the maximum mount count to 100 with the tune2fs -c command. This has stopped the message, but we are still concerned that there may be some corruption within the filesystem.
We have also since noticed that the tune2fs -l /dev/sda2 command is reporting a number in the "First orphan inode" field; see below.
[root@bgwpvmx1 log]# tune2fs -l /dev/sda2
tune2fs 1.32 (09-Nov-2002)
Filesystem volume name:   /
Last mounted on:          <not available>
Filesystem UUID:          46e1fb3a-545d-465c-9224-e2cf471affd9
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal filetype needs_recovery sparse_super
Default mount options:    (none)
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              1281696
Block count:              2560359
Reserved block count:     128017
Free blocks:              2091686
Free inodes:              1226420
First block:              0
Block size:               4096
Fragment size:            4096
Fragments per group:      32768
Inodes per group:         16224
Inode blocks per group:   507
Filesystem created:       Wed Nov  8 22:11:58 2006
Last mount time:          Mon Mar  5 17:55:38 2007
Last write time:          Mon Mar  5 17:55:38 2007
Mount count:              9
Maximum mount count:      -1
Last checked:             Wed Nov  8 22:11:58 2006
Check interval:           0 (<none>)
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               128
Journal UUID:             <none>
Journal inode:            8
Journal device:           0x0000
First orphan inode:       1038577
[root@bgwpvmx1 log]#
We would appreciate any advice.
Regards
Colin
3 REPLIES
03-07-2007 02:32 AM
Re: Orphan Inode and Maximal mount count reached
Shalom Colin,
If the system passes the fsck that runs when the machine boots, and all applications that use the filesystem start without error, there is likely no data corruption.
I believe the problem is solved and you can move on to something a bit more fun.
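To double-check without risking any changes, a read-only e2fsck pass can be rehearsed first. A minimal sketch using a scratch file-backed ext2 image, assuming e2fsprogs is installed (the image path is made up for illustration, not from this thread):

```shell
# Build a small file-backed ext2 image so the check can be rehearsed
# safely (no root needed for a plain file).
dd if=/dev/zero of=/tmp/orphan-demo.img bs=1M count=8 2>/dev/null
mkfs.ext2 -q -F /tmp/orphan-demo.img

# -f forces a full check even though the fs is marked clean;
# -n opens the fs read-only and answers "no" to every fix prompt.
e2fsck -f -n /tmp/orphan-demo.img
```

On the real system the same flags against /dev/sda2 (with the filesystem unmounted, or in single-user mode) report any corruption without modifying anything; an exit status of 0 means the filesystem is clean.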
SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
03-07-2007 07:14 PM
Re: Orphan Inode and Maximal mount count reached
"We recently noticed the following message appearing in the /var/log/messages log and on the console.
"EXT2-fs warning: maximal mount count reached, running e2fsck is recommended".
To remedy this we moved all the guest VMs away from the host in question, went to single-user mode, and ran e2fsck on each of the mount points."
That seems strange to me, because:
1) by default the Linux boot scripts *run* fsck when the maximal mount count is reached, and
2) running fsck should reset the mount counter to 0.
As far as I remember, dumpe2fs also shows the mount counter.
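That reset behaviour can be demonstrated on a scratch file-backed image, assuming e2fsprogs is available (the image path is illustrative): tune2fs -C sets the current mount count, dumpe2fs -h shows it, and a forced full check brings it back to 0.

```shell
# Scratch ext2 image (made-up path, no root required)
dd if=/dev/zero of=/tmp/mntcount-demo.img bs=1M count=8 2>/dev/null
mkfs.ext2 -q -F /tmp/mntcount-demo.img

# Pretend the filesystem has been mounted 30 times (-C sets the counter)
tune2fs -C 30 /tmp/mntcount-demo.img
dumpe2fs -h /tmp/mntcount-demo.img 2>/dev/null | grep -E '^Mount count'

# A successful forced full check resets the counter to 0
e2fsck -f -y /tmp/mntcount-demo.img >/dev/null 2>&1
dumpe2fs -h /tmp/mntcount-demo.img 2>/dev/null | grep -E '^Mount count'
```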
03-07-2007 09:30 PM
Re: Orphan Inode and Maximal mount count reached
Note that the maximum mount count in the tune2fs listing in the original post appears to be -1.
To disable the checks, you should set the value to 0 (according to "man tune2fs").
It might be that the value -1 causes the filesystem to be checked fully at *every* boot, as it is smaller than any value the mount counter can have.
MK