HPE Morpheus VM Essentials: When a host goes down, the VM does not start on another host.
06-10-2025 10:32 PM
In the cluster settings, "Automatically power on VMs" is enabled. In "Management Placement" for each VM, "Auto" or "Failover" is set. The storage for each VM is NFS storage.
For the NFS storage, "Heartbeat target" is enabled. I thought that with these settings, if the host a VM was running on stopped, the VM would start on another host.
However, although the VM seemed to move to another host, it did not start. What is the cause?
By the way, I ran "virsh dominfo <vm name>" and it said "Autostart" was disabled. Is this correct?
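(For reference, a minimal way to inspect the flag with standard virsh commands; <vm name> is a placeholder. Note that this flag only controls host-local autostart, not cluster HA.)
# show the Autostart flag for a domain
virsh dominfo <vm name> | grep -i autostart
# enable host-local autostart (for the QEMU driver, libvirt creates a symlink under /etc/libvirt/qemu/autostart)
virsh autostart <vm name>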
06-12-2025 01:52 AM
Re: When a host goes down, the VM does not start on another host.
Hello Kurton,
I understand that you resolved this issue via the thread below. Or is it a different one?
https://community.hpe.com/t5/hpe-morpheus-vm-essentials/after-host-goes-down-vm-does-not-start-automatically/m-p/7244501
I work at HPE
[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]

06-12-2025 05:33 PM
Re: When a host goes down, the VM does not start on another host.
No, that is not the case.
The thread you refer to describes a problem that occurred after the state described in this thread had been reached.
Note that if I reboot the hypervisor host (Ubuntu) with the reboot command, the VMs that were running on that host do start on another host.
The problem in this thread occurred when the hypervisor host was forcibly stopped by pressing and holding the power button.
Since 8.0.6 now seems to have been released, I would like to rebuild my environment and try again, assuming my settings are correct.
I would appreciate any comments on whether my settings are correct.
06-12-2025 07:45 PM
Re: When a host goes down, the VM does not start on another host.
When I try to start the VM manually I get the following message:
virsh # start CentOS9-XCP01
error: Failed to start domain 'CentOS9-XCP01'
error: internal error: process exited while connecting to monitor: 2025-06-13T02:39:56.816448Z qemu-system-x86_64: -blockdev {"node-name":"libvirt-1-format","read-only":false,"discard":"unmap","cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":"libvirt-1-storage","backing":null}: Failed to get "write" lock
Is another process using the image [/mnt/f9a16a99-546e-4697-aba2-dfa57be1674a/CentOS9-XCP01/hpevm_14-disk-0]?
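(As an aside, qemu-img contends for the same image lock, so it can be used as a probe; a minimal sketch, assuming the image path from the error above:)
# plain "qemu-img info" fails with the same lock error while the image is locked;
# -U / --force-share reads the metadata without taking the lock
qemu-img info -U /mnt/f9a16a99-546e-4697-aba2-dfa57be1674a/CentOS9-XCP01/hpevm_14-disk-0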
06-12-2025 11:13 PM
Re: When a host goes down, the VM does not start on another host.
Hello Kurton,
This usually happens due to one of the following:
1. The VM is already running, but in a hung or invisible state
2. A stale QEMU process or libvirt lock is holding onto the image
3. A file descriptor was not released properly due to an unclean shutdown
4. Filesystem or NFS storage backend issues (less common in GFS2 setups)
Resolution Steps
1. Check if the VM is Already Running
virsh list --all
ps aux | grep qemu
If found, kill the related QEMU process:
kill -9 <PID>
2. Check for Active Locks on the Disk File
lsof /mnt/f9a16a99-546e-4697-aba2-dfa57be1674a/CentOS9-XCP01/hpevm_14-disk-0
or
fuser /mnt/f9a16a99-546e-4697-aba2-dfa57be1674a/CentOS9-XCP01/hpevm_14-disk-0
If a process is holding the file, confirm what it is and terminate it if appropriate.
3. Check for Libvirt Locks
Libvirt may use a lock manager. Check:
ls -l /var/lock/libvirt
You can safely delete stale locks if you are sure the VM is not running.
4. Force Cleanup of the VM (libvirt)
If everything else fails:
virsh destroy CentOS9-XCP01
virsh undefine CentOS9-XCP01
# Recreate the VM definition if needed using the Morpheus GUI
(A combined sketch of steps 1 and 2 follows below.)
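(A minimal sketch combining the checks in steps 1 and 2, assuming the image path from the error message; substitute your own path:)
#!/usr/bin/env bash
# Check what is holding a disk image before forcing a cleanup.
IMG=/mnt/f9a16a99-546e-4697-aba2-dfa57be1674a/CentOS9-XCP01/hpevm_14-disk-0

virsh list --all          # is the domain defined, and in what state?
ps aux | grep '[q]emu'    # stale QEMU processes? ([q] keeps grep from matching itself)
lsof "$IMG"               # open file descriptors on the image
fuser -v "$IMG"           # processes using the image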
If the issue still exists or this solution did not help, please email "hpe-sw-trial-vmessentials@hpe.com" with detailed information and let us know your availability for a remote session via MS Teams so we can assist you.
I work at HPE
[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]

06-15-2025 06:31 PM
Re: When a host goes down, the VM does not start on another host.
Thank you for your reply.
I will be able to check the details the day after tomorrow, so I will report the results later.
06-19-2025 09:49 PM
Re: When a host goes down, the VM does not start on another host.
I will share the operation log.
There was no "/var/lock/libvirt", but there was something called "/var/lock/libvirt-guests".
Also, "virsh destroy" resulted in an error because the VM was not started.
I was able to "undefine", but how do I define it from the GUI?
I ran "Reconfigure" and it reappeared, but I could not start it.
# Recreate the VM definition if needed using Morpheus GU
I will contact the email address you provided.
**************************************************
25/06/20 13:16:21 virsh # list --all
25/06/20 13:16:23
Id Name State
25/06/20 13:16:23 -----------------------------------------
25/06/20 13:16:23 1 vmcontrol running
25/06/20 13:16:23 45 101-Vyos245.8 running
25/06/20 13:16:23 177 102-Ubuntu22.04 running
25/06/20 13:16:23 179 105-CentOS9 running
25/06/20 13:16:23 180 100-vm-VM245.X running
25/06/20 13:16:23 - 102-Ubuntu22.04-clone shut off
25/06/20 13:16:23 - 103-CentOS9 shut off
25/06/20 13:16:23 - 104-CentOS9 shut off
25/06/20 13:16:23 virsh # start 104-CentOS9
error: Failed to start domain '104-CentOS9'
25/06/20 13:16:31 error: internal error: process exited while connecting to monitor: 2025-06-20T04:16:31.312012Z qemu-system-x86_64: -blockdev {"node-name":"libvirt-2-format","read-only":false,"discard":"unmap","cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":"libvirt-2-storage","backing":null}: Failed to get "write" lock
25/06/20 13:16:31 Is another process using the image [/mnt/f7c99439-ab4b-4c3b-88b9-6c6df8926748/104-CentOS9/hvm_18-disk-0]?
root@server2:~# ps aux|grep qemu|grep guest=104
25/06/20 13:18:06
root@server2:/mnt/f7c99439-ab4b-4c3b-88b9-6c6df8926748/104-CentOS9# lsof hvm_18-disk-0
25/06/20 13:22:15
root@server2:/mnt/f7c99439-ab4b-4c3b-88b9-6c6df8926748/104-CentOS9# fuser hvm_18-disk-0
25/06/20 13:22:21
25/06/20 13:23:13
ls: cannot access '/var/lock/libvirt': No such file or directory
25/06/20 13:23:04 root@server2:~# ls -l /var/lock/libvirt-guests    <- this path exists
25/06/20 13:24:10 virsh # destroy 104-CentOS9
25/06/20 13:24:13 error: Failed to destroy domain '104-CentOS9'
25/06/20 13:24:13 error: Requested operation is not valid: domain is not running
25/06/20 13:36:15 virsh # undefine 104-CentOS9
25/06/20 13:36:45
Domain '104-CentOS9' has been undefined
25/06/20 13:36:45 virsh # list --all
25/06/20 13:36:52
Id Name State
25/06/20 13:36:52 -----------------------------------------
25/06/20 13:36:52 1 vmcontrol running
25/06/20 13:36:52 45 101-Vyos245.8 running
25/06/20 13:36:52 179 105-CentOS9 running
25/06/20 13:36:52 180 100-vm-VM245.X running
25/06/20 13:36:52 689 102-Ubuntu22.04 running
25/06/20 13:36:52 - 102-Ubuntu22.04-clone shut off
25/06/20 13:36:52 - 103-CentOS9 shut off
25/06/20 13:36:52
25/06/20 13:36:52 virsh # list --all
25/06/20 13:40:58
Id Name State
25/06/20 13:40:59 -----------------------------------------
25/06/20 13:40:59 1 vmcontrol running
25/06/20 13:40:59 45 101-Vyos245.8 running
25/06/20 13:40:59 179 105-CentOS9 running
25/06/20 13:40:59 180 100-vm-VM245.X running
25/06/20 13:40:59 689 102-Ubuntu22.04 running
25/06/20 13:40:59 871 104-CentOS9 paused
25/06/20 13:40:59 - 102-Ubuntu22.04-clone shut off
25/06/20 13:40:59 - 103-CentOS9 shut off
25/06/20 13:40:59
25/06/20 13:41:46 virsh # start 104-CentOS9
25/06/20 13:41:49
error: Failed to start domain '104-CentOS9'
25/06/20 13:41:51 error: internal error: process exited while connecting to monitor: 2025-06-20T04:41:51.020857Z qemu-system-x86_64: -blockdev {"node-name":"libvirt-2-format","read-only":false,"discard":"unmap","cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":"libvirt-2-storage","backing":null}: Failed to get "write" lock
25/06/20 13:41:51 Is another process using the image [/mnt/f7c99439-ab4b-4c3b-88b9-6c6df8926748/104-CentOS9/hvm_18-disk-0]?
25/06/20 13:41:51
**************************************************
This is my testing environment.
*********************
<Environment>
Hosts:
Server1: Ubuntu 24.04 / VM Essentials 8.0.6
Server2: Ubuntu 24.04 / VM Essentials 8.0.6
Network:
(Physical/Router/Network)
bond0/Management(Management/Automatically configured when adding a cluster)
bond1/Backend(Manual configuration/NFS Storage Network)
bond2/Backend1g1(Set from Manager)/Backend1g1-100(VLAN100)
bond3/Backend1g2(Set from Manager)/Backend1g2-500(VLAN500)
Storage:
NFS share1
NFS share2
NFS share3
Switch:
VLAN 1 mode access(Management)
VLAN 2 mode access(Backend)
VLAN 100 mode trunk(Backend1g1-100)
VLAN 500 mode trunk(Backend1g2-500)
Instances:
(Name/Host)
DHCP-Server VM/Server1
Linux VM1/Server1
Linux VM2/Server2
Object Storage:
Minio
*********************
06-19-2025 10:04 PM
Re: When a host goes down, the VM does not start on another host.
Additional information:
You may be able to start the VM by powering the stopped host back on.
06-19-2025 10:28 PM
Re: When a host goes down, the VM does not start on another host.
The problem is that HA does not work even though the Manager settings are correct; the NFS shared storage seems to be the bottleneck. I have emailed support and would like them to take a look.
I also checked the issue of autostart in another thread.
I was concerned that autostart was disabled on a per-VM basis, even though "Automatically Power On VMs" is enabled in the manager.
However, the thread you mentioned did not recommend enabling autostart on a per-VM basis.
Solved: Starting and stopping VMs - Hewlett Packard Enterprise Community
*******************
As for the startup behavior, I would not rely on the virsh settings to set an autostart flag. That only really works in a single host cluster. In VME Manager, configure a shared FileSystem datastore (GFS, NFS, CephFS) as a heartbeat LUN. When the host comes back online the HA actions will occur and power VMs back online.
*******************
Heartbeat is enabled on all NFS shares.
>Let me know if you’d like a script to automate autostart for a group of VMs across hosts.
I'd like to see the script.
06-23-2025 10:24 PM
Solution
After exchanging emails with support, the problem was resolved.
I will share the cause and solution of this issue.
The problem in this case was that the NFS storage used as the datastore was mounted as NFS version 3.
This was because that storage only supported version 3.
When I reconfigured a storage that supports version 4.2 as the datastore from the Manager, it was mounted as NFSv4.2.
When I forced the hypervisor host to shut down in this state, the VM automatically started on another host.
(Forced stop by pressing and holding the power button.)
(This fits the "write" lock error above: NFSv3 locks taken by a crashed client can remain held until that client recovers, whereas NFSv4 locks are lease-based and expire, releasing the image.)
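(For verification, the NFS version actually negotiated for each mount can be checked on the hypervisor host; a minimal sketch using standard tools:)
# show mount options, including the negotiated vers= value, for each NFS mount
nfsstat -m
# or, equivalently:
findmnt -t nfs,nfs4 -o TARGET,SOURCE,OPTIONS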
Here is a point to note.
If you run "nfsidmap -d" on the hypervisor host, "localdomain" is returned.
If the NFSv4 ID mapping domain on the storage side is not set to the same value, access to the storage from the hypervisor host will be mapped to "nobody" privileges.
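(A minimal sketch of checking and aligning the domain on the Linux side, assuming the default /etc/idmapd.conf location; "localdomain" is just the value observed above:)
# show the effective NFSv4 ID mapping domain
nfsidmap -d
# the domain is set in /etc/idmapd.conf under [General], e.g.:
#   Domain = localdomain
# after changing it, clear the cached ID mapping keyring
nfsidmap -c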
Also, regarding the autostart setting of the manager VM, in my environment it started automatically without any problems even without setting it on the hypervisor host.
After forcibly stopping the hypervisor host, I started the host again, but the virtual port settings remained.
***************
# ovs-vsctl show
    Bridge mgmt
        fail_mode: standalone
        Port vnet2248
            Interface vnet2248
                error: "could not open network device vnet2248 (No such device)"
        Port mgmt
            Interface mgmt
                type: internal
        Port bond0
            Interface bond0
***************
You can delete unnecessary settings with the following command.
#ovs-vsctl del-port <bridge> <port>
(ovs-vsctl del-port mgmt vnet2248)
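(To clean up several leftover ports at once, a minimal sketch, assuming the bridge is named mgmt and stale ports follow the vnet* naming; verify each port is truly orphaned before deleting it:)
# remove OVS ports whose backing network device no longer exists
for p in $(ovs-vsctl list-ports mgmt); do
    case "$p" in
    vnet*)
        ip link show "$p" >/dev/null 2>&1 || ovs-vsctl del-port mgmt "$p"
        ;;
    esac
done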