HPE Morpheus VM Essentials

Unable to Write Heartbeat

 
dya
Valued Contributor


※Apologies if this is difficult to understand due to machine translation.

A log message indicating that the heartbeat file could not be updated was output on a node that was not itself experiencing an outage. Furthermore, as described below, the virtual machines actually running on that node were shut down.
It seems strange that the outage of one node out of three would affect the remaining two. Does anyone know the cause?

■Configuration
 ・iSCSI multipathing; the datastore is GFS2 (multipath state check below)
 ・3 nodes (vme-iscsi1, vme-iscsi2, vme-iscsi3)
 ・Version 8.0.11
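
(For reference: the multipath state can be checked on each node with something like the following. The WWID here is the one from the sg_persist output further down; substitute your own device.)

# multipath -ll 3600140543ab5de26fd845efafcad8932
※Every node should report both iSCSI paths as "active ready running"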

■Observed Event
When one node (vme-iscsi2) was powered off, the heartbeat file within the datastore could no longer be updated.
※Sometimes this occurred on all nodes, at other times on only one node
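
(For anyone reproducing this: the cluster state as seen from a surviving node can be captured at the moment of the power-off with a plain status query, e.g.:)

# pcs status --full
※Shows node membership, resource state, and the failed fencing actions quoted below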

■Verified During the Occurrence

・pcs status
Failed Fencing Actions:
* reboot of vme-iscsi2 for stonith-api.8376@vme-iscsi1 last failed at 2025-12-13 13:59:54.555967 +09:00
* reboot of vme-iscsi2 for stonith-api.7125@vme-iscsi3 last failed at 2025-12-13 13:59:53.498952 +09:00

※Logs indicating fencing failures continued to appear
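
(The fence device definition and the fencing attempt history can be inspected with pcs as well; assuming a reasonably recent pcs version, e.g.:)

# pcs stonith config
# pcs stonith history show vme-iscsi2
※Useful for seeing how the stonith device is defined and why the reboot of vme-iscsi2 kept failing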


・tail -F /var/log/morpheus-node/morphd/current | grep -i heart
2025-12-13_05:06:32.36849 22:06:32.368 [pool-3-thread-1] WARN c.m.agent.stats.MvmHeartbeatFailover - Unable to Write Heartbeat for host vme-iscsi3 -
2025-12-13_05:06:32.39100 22:06:32.390 [pool-3-thread-4] ERROR c.m.agent.stats.MvmHeartbeatFailover - Error writing heartbeat file for host vme-iscsi3 -
2025-12-13_05:06:32.39108 22:06:32.390 [pool-3-thread-4] ERROR c.m.agent.stats.MvmHeartbeatFailover - Failed to write heartbeat file for host vme-iscsi3, on all heartbeat targets.
2025-12-13_05:06:32.39113 22:06:32.390 [pool-3-thread-4] ERROR c.m.agent.stats.MvmHeartbeatFailover - All heartbeat datastore paths have been unhealthy for host vme-iscsi3 for 6 checks. Shutting down all VMs to protect data integrity. 

★Output indicating “Shutting down all VMs” is present
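
(One thing worth checking at this point: DLM suspends GFS2 activity cluster-wide until a failed node has been successfully fenced, so a fence that never completes could explain why the surviving nodes also failed their heartbeat writes. Whether the lockspace is blocked waiting on fencing can be checked with dlm_tool from the dlm package:)

# dlm_tool ls
※A lockspace stuck in recovery shows a status line such as "wait_messages 0 wait_condition 1 fencing"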

# sg_persist --in --read-keys --device /dev/mapper/3600140543ab5de26fd845efafcad8932
LIO-ORG block_backend_v 4.0
Peripheral device type: disk
PR generation=0xd, 6 registered reservation keys follow:
0x570ae961
0x570ae961
0x570a0045
0x570a0045
0x570a5aa2
0x570a5aa2

★Keys remained registered for all three nodes (two entries each, presumably one per iSCSI path)
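
(The active reservation itself, as opposed to the registrations, can be dumped the same way:)

# sg_persist --in --read-reservation --device /dev/mapper/3600140543ab5de26fd845efafcad8932
※Shows which key currently holds the reservation and the reservation type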

■After executing “pcs stonith fence vme-iscsi2” due to persistent fencing failures
2025-12-13_05:11:02.49398 22:11:02.493 [pool-3-thread-5] INFO c.m.agent.stats.MvmHeartbeatFailover - Heartbeat last updated: Fri Dec 12 22:10:42 MST 2025
2025-12-13_05:11:02.57332 22:11:02.573 [pool-3-thread-5] INFO c.m.agent.stats.MvmHeartbeatFailover - Wrote heartbeat file for host vme-iscsi3 at Fri Dec 12 22:11:02 MST 2025

 

★The heartbeat file could be updated again
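
(If the stonith device is SCSI-reservation based, e.g. fence_scsi or fence_mpath, a successful fence should also have removed the victim's registration key; re-running the earlier query is a quick confirmation:)

# sg_persist --in --read-keys --device /dev/mapper/3600140543ab5de26fd845efafcad8932
※After a successful SCSI fence, only the keys of the two surviving nodes should remain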