I'm deploying a 3-node cluster for a POC. I have the VME Appliance v8.0.8 installed, and the VME Manager is installed and initialized.
I'm using local disks for storage and will be configuring Ceph on those disks.
I've reviewed Infrastructure > Storage at the following URL for information, but I don't find any instructions for configuring Ceph:
https://support.hpe.com/hpesc/public/docDisplay?docId=sd00006560en_us&page=GUID-4885ECA7-94F5-481E-A2FE-A433FAA899C8.html
The documentation also says that when creating a cluster you should select the hyperconverged infrastructure (HCI) layout for Ceph:
https://hpevm-docs.morpheusdata.com/en/latest/infrastructure/clusters/mvm.html?highlight=ceph#provisioning-the-cluster
Where can I find instructions on configuring Ceph for each node and for the cluster?
For the cluster type, I'll use the HVM 1.2 HCI Ceph Cluster on HVM/Ubuntu 24.04.
Hello.
This is only what I know, but I'm sharing it for reference.
1. Example Cluster Deployment
https://support.hpe.com/hpesc/public/docDisplay?docId=sd00006560en_us&page=GUID-CDB73B95-18B8-4A10-A09C-4176EE11497D.html#ariaid-title1
→It seems to include information on how to specify "DATA DEVICE."
2. Provisioning the Cluster
https://support.hpe.com/hpesc/public/docDisplay?docId=sd00006560en_us&page=GUID-9886AD4A-5C64-4E09-A106-B14419362757.html#ariaid-title1
→It seems to include an example of an HCI layout.
The second page says "only one device," but the first page describes a way to specify multiple devices, so perhaps the documentation hasn't kept up with the updates (just my guess). A quick way to verify which local device is actually free for Ceph is sketched after this post.
This is a machine translation, so I apologize if it's difficult to understand.
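One practical addendum to the DATA DEVICE discussion above: before provisioning, it can help to confirm on each host which local block device is actually free for Ceph. These are standard Linux commands run from a node's shell, not part of the VME workflow, and /dev/sdb is only an example device name:

  lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT   # a Ceph data device should show no partitions, filesystem, or mountpoint
  sudo wipefs -n /dev/sdb                     # dry run: report any leftover filesystem or RAID signatures without erasing anything

If wipefs reports old signatures, the OSD bootstrap may refuse the disk; wipefs -a clears them, but only run that on a device you are certain holds no data.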
Hello,
Let us know if you were able to resolve the issue.
If you are satisfied with the answers then kindly click the "Accept As Solution" button for the most helpful response so that it is beneficial to all community members.
Please click on the "Thumbs Up/Kudo" icon to give a "Kudo".

Hello!
I am trying to deploy an HCI test cluster on 3 Apollo 4200 Gen10 servers. The eno1 1 Gbit/s interface (192.168.15.0/24) is used for management and compute, and the ens1f0np0 10 Gbit/s interface (10.17.0.0/24) is used as the storage network. I keep getting hung up at this stage:
hvm1-Run Script: Initialize OSD / RBD Pools
The operating system is installed on a RAID 1 of two SSD drives; for Ceph I use /dev/sdb (5 TB SATA).
file /etc/hosts:
127.0.0.1 localhost
127.0.1.1 hvm3
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
10.17.0.51 hvm1 hvm1.kpt.local
10.17.0.52 hvm2 hvm2.kpt.local
10.17.0.53 hvm3 hvm3.kpt.local
The server names and their IP addresses from the 192.168.15.0 subnet are registered on the DNS server.
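Because DNS resolves those names to the 192.168.15.x management addresses while /etc/hosts maps them to the 10.17.0.x storage addresses, it is worth confirming which address each node actually uses before the Ceph stage runs. A minimal check with standard tools (hostnames taken from the hosts file above):

  getent hosts hvm1 hvm2 hvm3        # shows what each name resolves to on this node; /etc/hosts normally takes precedence over DNS
  ping -c 3 -I ens1f0np0 10.17.0.51  # confirm the 10G storage interface reaches the other nodes directly

If the monitors end up bound to one subnet while the names resolve to the other, the monclient timeouts below would be the expected symptom.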
The final error is:
2025-10-09T17:03:05.159+0500 733b72c1b740 0 monclient(hunting): authenticate timed out after 300
rbd: couldn't connect to the cluster!
2025-10-09T17:08:05.183+0500 792a7031f740 0 monclient(hunting): authenticate timed out after 300
rbd: couldn't connect to the cluster!
2025-10-09T16:53:05.034+0500 783aaaaa66c0 0 monclient(hunting): authenticate timed out after 300
[errno 110] RADOS timed out (error connecting to the cluster)
2025-10-09T16:58:05.137+0500 7507509fc6c0 0 monclient(hunting): authenticate timed out after 300
[errno 110] RADOS timed out (error connecting to the cluster)
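That monclient(hunting) authenticate timeout generally means the client cannot reach (or authenticate to) any Ceph monitor, which points at a network, firewall, or mon_host mismatch rather than an OSD problem. A few checks from one of the nodes, assuming the cluster uses the standard Ceph config location (adjust paths if VME places it elsewhere):

  grep -E 'mon_host|public_network|cluster_network' /etc/ceph/ceph.conf   # are the monitor addresses on the subnet you expect?
  sudo ss -lnt | grep -E ':3300|:6789'                                    # on a monitor node: are the mon ports listening?
  sudo ceph -s --connect-timeout 10                                       # does the cluster respond at all?

If mon_host points at 192.168.15.x while the monitors listen on 10.17.0.x (or the reverse), or if ports 3300/6789 are blocked between the nodes, these 300-second authenticate timeouts are exactly what you would see.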
@ospanbekov I would highly recommend you post this as a new thread. Most aren't looking through old threads to answer new questions.
OK Calvin.
Thanks.