Friday - last edited Tuesday by support_s
Any known issues with GFS2 Pools and Reservation Conflicts on all 3 Nodes of the Cluster?
I have been having issues with a GFS2 cluster across 3 HPE-VM nodes, where the GFS2 pool was set up with the Morpheus web UI.
At first it worked seamlessly, but then reservation conflicts started appearing. Restarting various services and disabling and re-enabling the clone (see the commands just below) got the GFS2 pool mounted again, and everything seemed fine until a VM was powered on, at which point access was dropped.
I then simply migrated the VM to the next node that could still read the GFS2 pool, and as soon as I tried to power the VM on there I got I/O errors and lost access again.
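For context, by "the clone" above I mean the Pacemaker clone resource that manages the GFS2 mount. Assuming crmsh is the management tool on the HVM hosts, the disable/re-enable amounts to something like this (the resource name here is just a placeholder):
root@ih-hpe24node1:/# crm resource stop gfs2pool-clone
root@ih-hpe24node1:/# crm resource start gfs2pool-clone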
Every node that failed had the same messages in journalctl:
Sep 19 13:19:26 ih-hpe24node3 kernel: gfs2: fsid=21k17fiak9rbbo2:gfs2pool.2: Error 6 writing to journal, jid=2
Sep 19 13:19:26 ih-hpe24node3 kernel: gfs2: fsid=21k17fiak9rbbo2:gfs2pool.2: about to withdraw this file system
Sep 19 13:19:31 ih-hpe24node3 kernel: gfs2: fsid=21k17fiak9rbbo2:gfs2pool.2: Requesting recovery of jid 2.
Sep 19 13:19:31 ih-hpe24node3 kernel: gfs2: fsid=21k17fiak9rbbo2:gfs2pool.2: Journal recovery complete for jid 2.
Sep 19 13:19:31 ih-hpe24node3 kernel: gfs2: fsid=21k17fiak9rbbo2:gfs2pool.2: Glock dequeues delayed: 0
Sep 19 13:19:31 ih-hpe24node3 kernel: gfs2: fsid=21k17fiak9rbbo2:gfs2pool.2: telling LM to unmount
Sep 19 13:19:31 ih-hpe24node3 kernel: dlm: gfs2pool: leaving the lockspace group...
Sep 19 13:19:31 ih-hpe24node3 kernel: gfs2: fsid=21k17fiak9rbbo2:gfs2pool.2: recover_prep ignored due to withdraw.
Sep 19 13:19:31 ih-hpe24node3 kernel: dlm: gfs2pool: group event done 0
Sep 19 13:19:31 ih-hpe24node3 kernel: dlm: gfs2pool: release_lockspace final free
Sep 19 13:19:31 ih-hpe24node3 kernel: gfs2: fsid=21k17fiak9rbbo2:gfs2pool.2: File system withdrawn
Sep 19 13:19:31 ih-hpe24node3 kernel: CPU: 77 PID: 56168 Comm: gfs2_logd/21k17 Not tainted 6.8.0-83-generic #83-Ubuntu
Sep 19 13:19:31 ih-hpe24node3 kernel: gfs2_withdraw+0xd7/0x160 [gfs2]
Sep 19 13:19:31 ih-hpe24node3 kernel: gfs2_log_flush+0x66d/0xb00 [gfs2]
Sep 19 13:19:31 ih-hpe24node3 kernel: gfs2_logd+0x90/0x330 [gfs2]
Sep 19 13:19:31 ih-hpe24node3 kernel: ? __pfx_gfs2_logd+0x10/0x10 [gfs2]
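I do not know exactly what "Error 6" maps to, but the pattern (journal write fails, GFS2 withdraws, and the mount is unusable until remounted) is what I would expect if the node suddenly lost write access to the LUN, for example because its SCSI registration was removed by fencing. A quick way to check for that (just my own idea, not an official procedure) is to grep the kernel log on the affected node for reservation conflicts:
root@ih-hpe24node3:/# journalctl -k -b | grep -iE "reservation conflict|nexus"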
All the while, all 3 nodes still have iSCSI sessions to the storage, and multipath is showing 4 paths on each node.
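For anyone wanting to compare, the session and path counts above come from something like the following (output trimmed):
root@ih-hpe24node1:/# iscsiadm -m session
root@ih-hpe24node1:/# multipath -ll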
I then seemed to get this recurring error on one node:
root@ih-hpe24node1:/# journalctl -u pacemaker -u corosync -f
Sep 19 13:47:18 ih-hpe24node1 fence_scsi[160930]: Please use '-h' for usage
Sep 19 13:47:18 ih-hpe24node1 pacemaker-fenced[158298]: error: Operation 'reboot' [160929] targeting ih-hpe24node1 using hpevm_gfs2_scsi returned 1
Sep 19 13:47:18 ih-hpe24node1 pacemaker-fenced[158298]: warning: hpevm_gfs2_scsi[160929] [ /usr/sbin/fence_scsi:268: SyntaxWarning: invalid escape sequence '\s' ]
Sep 19 13:47:18 ih-hpe24node1 pacemaker-fenced[158298]: warning: hpevm_gfs2_scsi[160929] [ if not re.search(r"^" + dev + "\s+", out, flags=re.MULTILINE): ]
Sep 19 13:47:18 ih-hpe24node1 pacemaker-fenced[158298]: warning: hpevm_gfs2_scsi[160929] [ 2025-09-19 13:47:18,051 ERROR: Failed: keys cannot be same. You can not fence yourself. ]
Sep 19 13:47:18 ih-hpe24node1 pacemaker-fenced[158298]: warning: hpevm_gfs2_scsi[160929] [ ]
Sep 19 13:47:18 ih-hpe24node1 pacemaker-fenced[158298]: warning: hpevm_gfs2_scsi[160929] [ 2025-09-19 13:47:18,051 ERROR: Please use '-h' for usage ]
Sep 19 13:47:18 ih-hpe24node1 pacemaker-fenced[158298]: warning: hpevm_gfs2_scsi[160929] [ ]
Sep 19 13:47:18 ih-hpe24node1 pacemaker-fenced[158298]: notice: Operation 'reboot' targeting ih-hpe24node1 by ih-hpe24node3 for pacemaker-controld.3266@ih-hpe24node3: Error occurred (complete)
Sep 19 13:47:18 ih-hpe24node1 pacemaker-controld[158302]: notice: Peer ih-hpe24node1 was not terminated (reboot) by ih-hpe24node3 on behalf of pacemaker-controld.3266@ih-hpe24node3: Error
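My reading of the "keys cannot be same / you can not fence yourself" error (happy to be corrected) is that the reboot action ended up being executed on the same node it was targeting, and fence_scsi refuses to remove its own key, so the fence operation fails. If it helps anyone, the registrations and current reservation on the shared LUN can be dumped with sg_persist (the device path below is just an example, use the actual multipath device):
root@ih-hpe24node1:/# sg_persist --in --read-keys --device=/dev/mapper/mpatha
root@ih-hpe24node1:/# sg_persist --in --read-reservation --device=/dev/mapper/mpatha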
I have torn down the GFS2 pool and started again, and will see if I encounter the same instability.
The current version I am using is:
v. 8.0.7-2
Description: Ubuntu 24.04.3 LTS
Release: 24.04
Codename: noble
I will see how I get on with a new deployment of a GFS2 pool. I did have hardware issues on one host; however, given a 3-node cluster, I would expect it to be able to tolerate the loss of one node.
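Once the new pool is up I plan to keep an eye on quorum and fencing state before putting VMs back on it, along the lines of:
root@ih-hpe24node1:/# corosync-quorumtool -s
root@ih-hpe24node1:/# crm_mon -1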
- Tags:
- Operating System