HPE Morpheus VM Essentials


 
jwantland
Occasional Contributor

VME v8.0.8 - Cluster using CEPH with local storage

I'm deploying a 3-node cluster for a POC. I have the VME Appliance v8.0.8 installed and the VME Manager installed and initialized.

I'm using locally installed disks for the storage. I will be configuring Ceph on those disks.

I've reviewed Infrastructure > Storage in the following URL for info, but do not find any instructions for configuring Ceph:
https://support.hpe.com/hpesc/public/docDisplay?docId=sd00006560en_us&page=GUID-4885ECA7-94F5-481E-A2FE-A433FAA899C8.html

Also, when creating a cluster, the docs say to select the hyperconverged infrastructure (HCI) layout for Ceph:
https://hpevm-docs.morpheusdata.com/en/latest/infrastructure/clusters/mvm.html?highlight=ceph#provisioning-the-cluster


Where can I find instructions on configuring Ceph for each node and the cluster?

For the cluster type, I'll use the HVM 1.2 HCI Ceph Cluster on HVM/Ubuntu 24.04.

dya
Regular Advisor

Re: VME v8.0.8 - Cluster using CEPH with local storage

Hello.
This is as far as I know, but for reference:

1. Example Cluster Deployment
https://support.hpe.com/hpesc/public/docDisplay?docId=sd00006560en_us&page=GUID-CDB73B95-18B8-4A10-A09C-4176EE11497D.html#ariaid-title1
→ It seems to include information on how to specify the "DATA DEVICE."

2. Provisioning the Cluster
https://support.hpe.com/hpesc/public/docDisplay?docId=sd00006560en_us&page=GUID-9886AD4A-5C64-4E09-A106-B14419362757.html#ariaid-title1
→ It seems to include an example of an HCI layout.
It says "only one device" here, but the page in (1) describes a way to specify multiple devices, so perhaps the documentation hasn't kept up with the updates. (I guess.)

This is a machine translation, so I apologize if it's difficult to understand.

Arnout_Verbeken
HPE Pro

Re: VME v8.0.8 - Cluster using CEPH with local storage

You do not have to set up the Ceph cluster yourself. This will all be done when you create an HVM cluster with the HCI layout.
3 things are required:
A dedicated network interface/bond for storage traffic. It needs to exist on all your hosts and needs to have an IP address. It can be some non-routable network, as long as all hosts can ping each other on that storage network.

You need at least 3 hosts to create a Ceph cluster.

All hosts need to have some raw, unformatted disks. All hosts need to have the same number of disks, and those disks need to have the same /dev/... names on all hosts. Check with 'lsblk' (see the quick checks below).
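
A minimal sketch of how those prerequisites could be verified from a shell on each host (the /dev/sdb device, bond1 interface and peer IPs are placeholder examples, not values from this thread):

# candidate Ceph disks should show no filesystem or partition signatures
lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT
sudo wipefs /dev/sdb          # prints nothing if the disk is truly raw

# the storage interface should have an IP, and the other hosts should answer on it
ip -br addr show dev bond1
ping -c 3 10.10.10.12         # storage IP of another host (example value)
ping -c 3 10.10.10.13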


During HCI cluster setup, you provide those /dev/... names, comma separated.
You also provide the interface for your storage network.
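
As a hypothetical example of what that input could look like (device and interface names are placeholders, not taken from this thread, and field labels may differ slightly in your VME version):

DATA DEVICE:        /dev/sdb,/dev/sdc
STORAGE INTERFACE:  bond1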

All the rest will be done during cluster creation.

Once done, check status on hosts with 'ceph status' or similar ceph commands.
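
For example, run on any of the hosts (standard Ceph CLI commands, not specific to VME):

sudo ceph status      # overall health, monitor quorum, OSD count
sudo ceph osd tree    # which OSDs ended up on which host/device
sudo ceph df          # pool and raw capacity usage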

Last note: make sure NTP is configured on all hosts upfront.
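
A quick way to check time sync on Ubuntu 24.04 (assuming either chrony or systemd-timesyncd is the time service in use):

timedatectl                            # look for "System clock synchronized: yes"
chronyc sources -v                     # if chrony is installed
systemctl status systemd-timesyncd     # if timesyncd is used instead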

The docs need an update. We support more than one data disk in a Ceph cluster.


I work at HPE
[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]

ospanbekov
Senior Member

Re: VME v8.0.8 - Cluster using CEPH with local storage

Hello!

I am trying to deploy an HCI test cluster on 3 Apollo 4200 Gen10 servers. The eno1 1 Gbit/s interface (192.168.15.0/24) is used for management and compute, and the ens1f0np0 10 Gbit/s interface (10.17.0.0/24) is used as the storage network. I keep getting stuck at the stage:
hvm1-Run Script: Initialize OSD / RBD Pools

The system is installed on a RAID 1 of 2 SSD drives; for Ceph I use /dev/sdb (SATA 5 TB).
The /etc/hosts file:
127.0.0.1 localhost
127.0.1.1 hvm3

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
10.17.0.51 hvm1 hvm1.kpt.local
10.17.0.52 hvm2 hvm2.kpt.local
10.17.0.53 hvm3 hvm3.kpt.local

The names of the servers and their IP addresses from the 192.168.15.0 subnet are registered on the DNS server.

The final error is:

2025-10-09T17:03:05.159+0500 733b72c1b740 0 monclient(hunting): authenticate timed out after 300
rbd: couldn't connect to the cluster!
2025-10-09T17:08:05.183+0500 792a7031f740 0 monclient(hunting): authenticate timed out after 300
rbd: couldn't connect to the cluster!
2025-10-09T16:53:05.034+0500 783aaaaa66c0 0 monclient(hunting): authenticate timed out after 300
[errno 110] RADOS timed out (error connecting to the cluster)
2025-10-09T16:58:05.137+0500 7507509fc6c0 0 monclient(hunting): authenticate timed out after 300
[errno 110] RADOS timed out (error connecting to the cluster)
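
For context, "monclient(hunting): authenticate timed out" generally means the client cannot reach any Ceph monitor. A few generic checks from each host that might help narrow this down (standard Ceph/Linux commands; the hvm1-hvm3 names are from the /etc/hosts above, and 3300/6789 are Ceph's standard monitor ports):

# is a monitor process listening locally?
sudo ss -lntp | grep -E ':3300|:6789'

# are the monitor hosts reachable over the storage network?
ping -c 3 hvm1
ping -c 3 hvm2
ping -c 3 hvm3

# which monitor addresses is the client configured to use?
grep mon_host /etc/ceph/ceph.conf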