VM Failover in Ceph Cluster
2 weeks ago - last edited 2 weeks ago by support_s
Hello,
in my 3-node Ceph cluster (HVM 1.2 HCI Ceph cluster on HVM/Ubuntu 24.04, created with version 8.0.8, patched to 8.0.9) I've created some HVM instances with Windows virtual machines (W2022 & W2019). The virtual machines are placed on mvm-volumes and the placement strategy is set to "Auto". The VMs have the VirtIO drivers installed and all got a DHCP IP.
For testing, I cut the power of one of the cluster hosts (host3), on which one of the VMs was running, via an iLO "cold boot". I expected this VM to fail over to one of the other hosts (host1 or host2) and restart automatically, but that did not happen. After host3 has rebooted, the VM remains in a "power off" state on the same host3.
Am I missing something?
2 weeks ago
Solution
Did you enable "Automatic power on VMs" in the Cloud and Cluster settings?
--> This is needed for HA (which is what you are testing).
Did you configure heartbeating on one of the cluster datastores?
--> This is needed for fast HA.
Without heartbeating, HA will only be triggered on the next cluster sync (default: every 5 min).
With heartbeating, HA will be triggered almost instantly (within seconds).
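For intuition only, here is a minimal Python sketch of that timing difference. It is not the HVM/Morpheus implementation: the heartbeat directory path is taken from a later reply in this thread, and the file naming, stale threshold, and detection logic are assumptions made up for illustration; only the 5-minute cluster sync default comes from the answer above.

```python
# Illustrative sketch only -- NOT the actual HVM/Morpheus HA implementation.
# Assumptions: each host periodically touches a heartbeat file named after itself
# in a shared directory on a cluster datastore, and a peer treats a stale file as
# a failed host. The directory path is the one mentioned later in this thread.
import os
import time

HEARTBEAT_DIR = "/var/morpheus/kvm/images/mvm-hb"   # shared heartbeat folder (from a later reply)
STALE_AFTER = 30             # seconds without an update before a host is presumed dead (assumed)
CLUSTER_SYNC_INTERVAL = 300  # fallback detection via cluster sync, default 5 min per the answer above


def write_heartbeat(hostname):
    """What a healthy host would do periodically: touch its own heartbeat file."""
    path = os.path.join(HEARTBEAT_DIR, hostname)
    with open(path, "a"):
        pass
    os.utime(path, None)


def host_looks_dead(hostname, now=None):
    """A peer's view: a missing or stale heartbeat file marks the host as an HA candidate."""
    now = now if now is not None else time.time()
    path = os.path.join(HEARTBEAT_DIR, hostname)
    try:
        return now - os.path.getmtime(path) > STALE_AFTER
    except FileNotFoundError:
        return True


# With heartbeating, a failed host is noticed within roughly STALE_AFTER seconds and its
# VMs can be restarted elsewhere; without it, nothing happens until the next cluster
# sync, up to CLUSTER_SYNC_INTERVAL seconds later.
```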
I work at HPE
[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]

2 weeks ago
Re: VM Failover in Ceph Cluster
- "Automatic power on VMs" was already activated
- Datastore heartbeating was not enabled
I've enabled heartbeating on both of the cluster's default "directory pool" datastores, "morpheus-cloud-init" and "morpheus-images", but the outcome is still the same: if I "cold boot" host3, all VMs on that host stay on host3 with status "powered off", even after host3 has finished rebooting.
If I manually place a VM on host1 or host2, the migration works without problems. If I put host3 in maintenance mode, all VMs are moved to the other hosts.
Is there a log where I can troubleshoot HA issues?
Thank you!
2 weeks ago
Re: VM Failover in Ceph Cluster
Hello,
I noticed that enabling "datastore heartbeating" creates the folder "mvm-hb" on that datastore. I checked all hosts for this folder in /var/morpheus/kvm/images: it exists on host1 and host3, but the path does not exist on host2. If I put host2 in maintenance mode and retry the failover (cold boot of host3), it works and all VMs are restarted on host1. So I guess the disk is not correctly mounted on host2.
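For anyone hitting the same symptom, a quick per-host check along these lines could confirm the suspicion. This is a sketch, not an official troubleshooting tool: the paths come from this thread, while the expectation that the datastore should show up as a mount under /var/morpheus/kvm/images is an assumption on my part.

```python
# Quick per-host check -- run on host1, host2 and host3 and compare the output.
# Paths come from this thread; the "should be a mountpoint" check is an assumption.
import os
import socket
import time

IMAGES_DIR = "/var/morpheus/kvm/images"
HB_DIR = os.path.join(IMAGES_DIR, "mvm-hb")

host = socket.gethostname()
print(f"[{host}] {IMAGES_DIR} exists:   {os.path.isdir(IMAGES_DIR)}")
print(f"[{host}] {IMAGES_DIR} mounted:  {os.path.ismount(IMAGES_DIR)}")
print(f"[{host}] {HB_DIR} exists:       {os.path.isdir(HB_DIR)}")

if os.path.isdir(HB_DIR):
    # Fresh modification times here suggest heartbeats are actually being written.
    now = time.time()
    for name in sorted(os.listdir(HB_DIR)):
        age = now - os.path.getmtime(os.path.join(HB_DIR, name))
        print(f"[{host}]   {name}: last updated {age:.0f}s ago")
```

If host2 reports the folder missing while host1 and host3 show it with fresh timestamps, that would back up the mount theory above.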