4 weeks ago - last edited 3 weeks ago by support_s
GFS2 Datastore mount problem
Hello,
when trying to mount a GFS2 datastore from an Alletra MP I'm getting this error:
Unable to create / update pcs stonith: Error: Unable to perform restartless update of scsi devices: resource 'hpevm_gfs2_scsi' is not running on any node, please use command 'pcs stonith update' instead
How can this be solved?
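For context, the fencing resource named in the error can be inspected directly with pcs. A rough sketch, assuming the standard Pacemaker/pcs stack on the VME hosts and a recent pcs version; the resource name hpevm_gfs2_scsi is taken from the error message itself:
# Overall cluster and resource state, run on any host in the cluster
sudo pcs status
# Configuration of the fencing resource referenced by the error
sudo pcs stonith config hpevm_gfs2_scsi
# Recent fencing attempts and their results
sudo pcs stonith history show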
- Tags:
- storage controller
3 weeks ago
Re: GFS2 Datastore mount problem
Hello,
We are having the same issue, but with an HPE MSA 2072. Ubuntu discovers the iSCSI LUNs properly across all hosts, but VME won't allow us to create GFS2 filesystems.
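A minimal sketch of the kind of checks that can confirm the iSCSI LUNs really do look identical on every host before creating the GFS2 filesystem (device paths and columns are only examples):
# Active iSCSI sessions and the LUNs attached to them
sudo iscsiadm -m session -P 3
# Block devices as seen by this host; sizes and WWNs should match across all hosts
lsblk -o NAME,SIZE,SERIAL,WWN
# If device-mapper multipath is in use, every host should list the same paths
sudo multipath -ll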
2 weeks ago
Re: GFS2 Datastore mount problem
Hi PLalonde,
As far as I can see from the screenshot, there are only 2 hosts in that cluster. For a GFS2 datastore you will need at least 3 Ubuntu hosts in the cluster. With 2 hosts, the only datastore option is NFS, as its minimum requirement is 1 host.
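A quick sketch of how the host count and quorum can be verified from any node, assuming the pcs/Pacemaker/Corosync stack referenced in the error message:
# Nodes currently joined to the cluster
sudo pcs status nodes
# Quorum state; a GFS2 datastore needs a quorate cluster, hence the 3-host minimum
sudo corosync-quorumtool -s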
With regards
I work at HPE
HPE Support Center offers support for your HPE services and products when and how you need it. Get started with HPE Support Center today.
[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]
2 weeks ago
Re: GFS2 Datastore mount problem
Thank you for the follow-up.
There are in fact three hosts, but only two of them were shown in that screenshot. The source of this error was an attempt to create a second GFS2 datastore on a second iSCSI LUN while the first GFS2 datastore creation operation was still in progress. The end result was that VME trashed the Pacemaker fencing configuration for the second GFS2 datastore and nothing I did could make it usable. I had to completely re-install all three hosts. Here are the logs from systemctl status pacemaker on each host:
● pacemaker.service - Pacemaker High Availability Cluster Manager
Loaded: loaded (/usr/lib/systemd/system/pacemaker.service; enabled; preset: enabled)
Active: active (running) since Thu 2025-11-20 07:47:47 EST; 5min ago
Docs: man:pacemakerd
https://clusterlabs.org/pacemaker/doc/
Main PID: 2752 (pacemakerd)
Tasks: 7
Memory: 41.8M (peak: 60.2M)
CPU: 3.590s
CGroup: /system.slice/pacemaker.service
├─2752 /usr/sbin/pacemakerd
├─2753 /usr/lib/pacemaker/pacemaker-based
├─2754 /usr/lib/pacemaker/pacemaker-fenced
├─2755 /usr/lib/pacemaker/pacemaker-execd
├─2756 /usr/lib/pacemaker/pacemaker-attrd
├─2757 /usr/lib/pacemaker/pacemaker-schedulerd
└─2758 /usr/lib/pacemaker/pacemaker-controld
Nov 20 07:51:07 vmehost1 pacemaker-controld[2758]: error: Unfencing of vmehost3 by vmehost2 failed (Error) with exit status 1
Nov 20 07:51:07 vmehost1 pacemaker-fenced[2754]: error: Operation 'on' [3439] targeting vmehost1 using hpevm_gfs2_scsi returned 1
Nov 20 07:51:07 vmehost1 pacemaker-fenced[2754]: warning: hpevm_gfs2_scsi[3439] [ /usr/sbin/fence_scsi:268: SyntaxWarning: invalid escape sequence '\s' ]
Nov 20 07:51:07 vmehost1 pacemaker-fenced[2754]: warning: hpevm_gfs2_scsi[3439] [ if not re.search(r"^" + dev + "\s+", out, flags=re.MULTILINE): ]
Nov 20 07:51:07 vmehost1 pacemaker-fenced[2754]: warning: hpevm_gfs2_scsi[3439] [ 2025-11-20 07:51:07,842 ERROR: Failed: device "/dev/disk/by-uuid/e366418a-e0c>
Nov 20 07:51:07 vmehost1 pacemaker-fenced[2754]: warning: hpevm_gfs2_scsi[3439] [ ]
Nov 20 07:51:07 vmehost1 pacemaker-fenced[2754]: warning: hpevm_gfs2_scsi[3439] [ 2025-11-20 07:51:07,842 ERROR: Please use '-h' for usage ]
Nov 20 07:51:07 vmehost1 pacemaker-fenced[2754]: warning: hpevm_gfs2_scsi[3439] [ ]
Nov 20 07:51:07 vmehost1 pacemaker-fenced[2754]: notice: Operation 'on' targeting vmehost1 by vmehost2 for pacemaker-controld.2779@vmehost2: Error occurred (co>
Nov 20 07:51:07 vmehost1 pacemaker-controld[2758]: error: Unfencing of vmehost1 by vmehost2 failed (Error) with exit status 1
● pacemaker.service - Pacemaker High Availability Cluster Manager
Loaded: loaded (/usr/lib/systemd/system/pacemaker.service; enabled; preset: enabled)
Active: active (running) since Thu 2025-11-20 07:47:34 EST; 7min ago
Docs: man:pacemakerd
https://clusterlabs.org/pacemaker/doc/
Main PID: 2773 (pacemakerd)
Tasks: 7
Memory: 58.6M (peak: 76.8M)
CPU: 4.803s
CGroup: /system.slice/pacemaker.service
├─2773 /usr/sbin/pacemakerd
├─2774 /usr/lib/pacemaker/pacemaker-based
├─2775 /usr/lib/pacemaker/pacemaker-fenced
├─2776 /usr/lib/pacemaker/pacemaker-execd
├─2777 /usr/lib/pacemaker/pacemaker-attrd
├─2778 /usr/lib/pacemaker/pacemaker-schedulerd
└─2779 /usr/lib/pacemaker/pacemaker-controld
Nov 20 07:51:07 vmehost2 pacemaker-controld[2779]: notice: Fence operation 42 for vmehost3 failed: Agent returned error (aborting transition)
Nov 20 07:51:07 vmehost2 pacemaker-controld[2779]: warning: Too many failures (13) to fence vmehost3, giving up
Nov 20 07:51:07 vmehost2 pacemaker-controld[2779]: error: Unfencing of vmehost3 by vmehost2 failed (Error) with exit status 1
Nov 20 07:51:07 vmehost2 pacemaker-fenced[2775]: notice: Couldn't find anyone to fence (on) vmehost1 using any device
Nov 20 07:51:07 vmehost2 pacemaker-fenced[2775]: error: Operation 'on' targeting vmehost1 by vmehost2 for pacemaker-controld.2779@vmehost2: Error occurred (complete)
Nov 20 07:51:07 vmehost2 pacemaker-controld[2779]: notice: Fence operation 41 for vmehost1 failed: Agent returned error (aborting transition)
Nov 20 07:51:07 vmehost2 pacemaker-controld[2779]: warning: Too many failures (13) to fence vmehost1, giving up
Nov 20 07:51:07 vmehost2 pacemaker-controld[2779]: error: Unfencing of vmehost1 by vmehost2 failed (Error) with exit status 1
Nov 20 07:51:07 vmehost2 pacemaker-controld[2779]: notice: Transition 12 (Complete=3, Pending=0, Fired=0, Skipped=0, Incomplete=24, Source=/var/lib/pacemaker/pengine/p>
Nov 20 07:51:07 vmehost2 pacemaker-controld[2779]: notice: State transition S_TRANSITION_ENGINE -> S_IDLE
● pacemaker.service - Pacemaker High Availability Cluster Manager
Loaded: loaded (/usr/lib/systemd/system/pacemaker.service; enabled; preset: enabled)
Active: active (running) since Thu 2025-11-20 07:47:25 EST; 7min ago
Docs: man:pacemakerd
https://clusterlabs.org/pacemaker/doc/
Main PID: 2743 (pacemakerd)
Tasks: 7
Memory: 42.0M (peak: 60.1M)
CPU: 4.581s
CGroup: /system.slice/pacemaker.service
├─2743 /usr/sbin/pacemakerd
├─2745 /usr/lib/pacemaker/pacemaker-based
├─2746 /usr/lib/pacemaker/pacemaker-fenced
├─2747 /usr/lib/pacemaker/pacemaker-execd
├─2748 /usr/lib/pacemaker/pacemaker-attrd
├─2749 /usr/lib/pacemaker/pacemaker-schedulerd
└─2750 /usr/lib/pacemaker/pacemaker-controld
Nov 20 07:51:07 vmehost3 pacemaker-fenced[2746]: warning: hpevm_gfs2_scsi[3176] [ /usr/sbin/fence_scsi:268: SyntaxWarning: invalid escape sequence '\s' ]
Nov 20 07:51:07 vmehost3 pacemaker-fenced[2746]: warning: hpevm_gfs2_scsi[3176] [ if not re.search(r"^" + dev + "\s+", out, flags=re.MULTILINE): ]
Nov 20 07:51:07 vmehost3 pacemaker-fenced[2746]: warning: hpevm_gfs2_scsi[3176] [ 2025-11-20 07:51:07,817 ERROR: Failed: device "/dev/disk/by-uuid/e366418a-e0c6-410b-83c7-6>
Nov 20 07:51:07 vmehost3 pacemaker-fenced[2746]: warning: hpevm_gfs2_scsi[3176] [ ]
Nov 20 07:51:07 vmehost3 pacemaker-fenced[2746]: warning: hpevm_gfs2_scsi[3176] [ 2025-11-20 07:51:07,817 ERROR: Please use '-h' for usage ]
Nov 20 07:51:07 vmehost3 pacemaker-fenced[2746]: warning: hpevm_gfs2_scsi[3176] [ ]
Nov 20 07:51:07 vmehost3 pacemaker-fenced[2746]: notice: Operation 'on' targeting vmehost3 by vmehost2 for pacemaker-controld.2779@vmehost2: Error occurred (complete)
Nov 20 07:51:07 vmehost3 pacemaker-controld[2750]: error: Unfencing of vmehost3 by vmehost2 failed (Error) with exit status 1
Nov 20 07:51:07 vmehost3 pacemaker-fenced[2746]: notice: Operation 'on' targeting vmehost1 by vmehost2 for pacemaker-controld.2779@vmehost2: Error occurred (complete)
Nov 20 07:51:07 vmehost3 pacemaker-controld[2750]: error: Unfencing of vmehost1 by vmehost2 failed (Error) with exit status 1
Either it should be possible to create a second GFS2 datastore while the first one is still being created, or VME should show a warning and make you wait until the first operation completes.
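The fence_scsi errors in the logs above suggest the agent could not register its SCSI-3 persistent reservation key on the shared LUN. A rough sketch of how that can be checked by hand; the device path is a placeholder, and sg_persist comes from the sg3-utils package:
# Persistent-reservation keys currently registered on the shared LUN
sudo sg_persist --in --read-keys --device=/dev/disk/by-id/<shared-lun>
# Current reservation holder, if any
sudo sg_persist --in --read-reservation --device=/dev/disk/by-id/<shared-lun>
# The fence agent can also be run in status mode to reproduce the failure outside Pacemaker
sudo fence_scsi --action=status --devices=/dev/disk/by-id/<shared-lun> --plug=vmehost1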
Paul
2 weeks ago
Re: GFS2 Datastore mount problem
Would a remote session now be possible? If yes, please send a mail to hpe-sw-trial-vmessentials@hpe.com and we will have a look.
I work at HPE
HPE Support Center offers support for your HPE services and products when and how you need it. Get started with HPE Support Center today.
[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]
2 weeks ago
Re: GFS2 Datastore mount problem
Thank you for the offer of assistance, but we went ahead and re-installed Ubuntu / VME on all three affected hosts. Once we created the first GFS2 datastore, we allowed it to fully complete before attempting to create the second one, and that was successful.
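For anyone else hitting this, a small sketch of what can be checked before starting a second datastore creation, assuming the same pcs stack; the resource name comes from the error message earlier in the thread:
# Confirm the resources created for the first datastore are all Started on every host
sudo pcs status resources
# Confirm the fencing resource for the first GFS2 datastore is running before creating the next one
sudo pcs status | grep hpevm_gfs2_scsi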
Paul