HPE Morpheus VM Essentials

AbhijitPatil
Visitor

Unable to Create Cluster on the HPE VM Essentials

Dear HPE Team,


I would like to inform you that we have successfully deployed the HPE VME Manager as per the required installation steps. Below is a detailed summary of the actions taken during the deployment:


1. Hardware Requirements

  • 3 x HPE DL380 Gen10 Plus servers

  • Each server includes:

    • 2 x NICs for Management & Compute (bonded using LACP)

    • 2 x NICs for iSCSI Storage

    • Adequate CPU, RAM, and SSD/NVMe storage


2. Operating System Installation

  • Installed Ubuntu 24.04.2 LTS Live Server on all 3 nodes

  • Selected HWE-generic kernel during installation

  • Configured unique hostnames and static IPs for each node

  • Added entries to /etc/hosts and confirmed hostname resolution (a sketch of these steps follows this list)
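
As a minimal sketch of those steps (node names and IPs taken from later in this thread; run the hostname command on each node with its own name):

sudo hostnamectl set-hostname vme1   # vme2 / vme3 on the other nodes
printf '%s\n' '172.22.1.181 vme1' '172.22.1.182 vme2' '172.22.1.183 vme3' | sudo tee -a /etc/hosts
getent hosts vme2                    # confirm resolution works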


3. Network Configuration

  • Created LACP bond0 using the 2 mgmt/compute NICs

  • Tagged VLAN 22 on bond interface (bond0.22)

  • Left iSCSI NICs untouched during OS setup (a netplan sketch of this layout follows below)
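
For reference, a minimal netplan sketch of this layout, assuming hypothetical NIC names eno1 and eno2 (the thread never names the interfaces); hpe-vm later generates its own /etc/netplan/60-mvm.yaml, so treat this purely as an illustration of the intended topology:

sudo tee /etc/netplan/01-bond.yaml >/dev/null <<'EOF'
network:
  version: 2
  ethernets:
    eno1: {}
    eno2: {}
  bonds:
    bond0:
      interfaces: [eno1, eno2]
      parameters:
        mode: 802.3ad   # LACP
  vlans:
    bond0.22:
      id: 22
      link: bond0
EOF
sudo netplan try   # applies with an automatic rollback window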


4. HPE VME Essentials Installation

  • Copied HPE_VM_Essentials_SW_image_8.0.5_1_S5Q83-11009.iso to one node

  • Mounted the ISO to /mnt and copied its contents to /VME (see the mount/copy sketch after this list)

  • Set permissions and installed the package:

    chmod 777 /VME/hpe-vm*
    gunzip *.qcow2.gz
    sudo apt install ./<package>.deb -f

  • Ran sudo hpe-vm for the initial setup

  • Repeated the same .deb installation on the other 2 nodes
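
A minimal sketch of the mount-and-copy step referenced above (the ISO filename is from this thread; the loop-mount options are standard Linux, not HPE-specific):

sudo mount -o loop,ro HPE_VM_Essentials_SW_image_8.0.5_1_S5Q83-11009.iso /mnt
sudo mkdir -p /VME
sudo cp -r /mnt/* /VME/
sudo umount /mnt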


5. Network Configuration via the hpe-vm Tool

  • Configured:

    • bond0.22 for the Management VLAN

    • Two storage NICs with separate subnets and MTU 9000

  • Saved the network config, which generated /etc/netplan/60-mvm.yaml (quick verification sketch below)
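
A few quick post-save checks, as a sketch (the storage NIC name is a placeholder, since those interfaces are never named in this thread):

cat /etc/netplan/60-mvm.yaml                   # file generated by hpe-vm, per this thread
ip -br link | grep bond0                       # bond and VLAN interfaces present and up?
ip link show <storage-nic> | grep 'mtu 9000'   # jumbo frames applied?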


6. Morpheus Appliance Installation

  • Launched hpe-vm on the first node and selected Install Morpheus

  • Provided the following configuration:

    • Appliance IP: 172.25.1.190

    • Subnet: 255.255.254.0

    • Gateway: 172.25.1.254

    • DNS: 8.8.8.8

    • URL: https://hpevmemanager.poc.local

    • Hostname: hpevmemanager

    • Admin Credentials: username / Password@1234

    • Image Path: file:///VME/<filename.qcow2>

    • VM Size: Large

    • Management Interface: bond0.22

    • Compute Interface: bond0

    • VLAN: 25

I followed the documented steps to deploy the HPE VME Manager and initiate cluster creation. However, I encountered an issue during the Pacemaker configuration stage in Cluster Provisioning.

The following errors were observed:

Error: unable to get crm_config
Could not connect to the CIB: Transport endpoint is not connected
Init failed, could not perform requested operations

Error: error running crm_mon, is pacemaker running?
...
Synchronizing state of dlm.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install disable dlm
Error: Unable to communicate with vme2
Error: Unable to communicate with vme3
Password: Error: Operation timed out
vme1: Authorized

Troubleshooting Performed:

• I verified that all three servers can successfully SSH into each other.

• However, SSH connections hang when initiated from one cluster node to another after starting the cluster creation.

• Interestingly, SSH access works fine when connecting from an external machine (e.g., my laptop).

• This issue only occurs after cluster provisioning begins. Prior to that, all inter-node communication works as expected.

Could you please help analyze the issue and suggest further steps?

Best regards,
Abhijit Patil

DiegoDelgado
HPE Pro

Re: Unable to Create Cluster on the HPE VM Essentials

Hello Abhijit,

I see you entered a public DNS server in the manager configuration, so I infer that you don't have an internal DNS service where name resolution can be configured. That's fine: you can still install VM Essentials; just keep the IP in the appliance URL field (https://<your ip>). This is needed not only because it is the address you'll point your browser at after deployment, but also because it is the IP the hosts use to communicate with the manager.

On the other hand, .local domains can cause problems on Linux. Some distros solve them with workarounds, but the VME manager doesn't have a workaround for this. You could run the following on your manager before creating the cluster to solve this part:

Steps to solve (run as root):
1. Remove the original symlink: rm -f /etc/resolv.conf
2. Create a new symlink: ln -s /run/system/resolve/resolv.conf /etc/resolv.conf
3. Check that the link worked: ls -l /etc/resolv.conf
   a. You should see something similar to: lrwxrwxrwx 1 root root 31 Feb 13 08:15 /etc/resolv.conf -> /run/system/resolve/resolv.conf

Note: Don't run random commands as root just because a random person on the internet suggests them; review what each command does first.



I work at HPE
[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]
AbhijitPatil
Visitor

Re: Unable to Create Cluster on the HPE VM Essentials

Hello Diego,

Thanks for your reply.

I’ve successfully installed the VME Manager using the appliance URL:
https://172.25.1.190

I also completed the DNS configuration part. I noticed a small typo in the earlier suggestion — the correct path should be /run/systemd/resolve/resolv.conf (not /run/system/resolve/resolv.conf). Here's what I executed:

rm -f /etc/resolv.conf
ln -s /run/systemd/resolve/resolv.conf /etc/resolv.conf

 

Current symlink looks like this:

lrwxrwxrwx 1 root root 32 May 13 11:32 /etc/resolv.conf -> /run/systemd/resolve/resolv.conf

The resolvectl status shows the DNS is set to 8.8.8.8 on the correct interface (vlan22).

One small clarification:
Do I still need to add all 3 nodes' IP and hostname entries into the /etc/hosts file on each node? For example:

172.22.1.181 vme1
172.22.1.182 vme2
172.22.1.183 vme3

Please confirm if this step is required or recommended for proper name resolution during the cluster setup.

Thanks again for your support!

Best regards,
Abhijit Patil

DiegoDelgado
HPE Pro

Re: Unable to Create Cluster on the HPE VM Essentials

Glad to know you got it working.

As for DNS resolution, it's not a strict requirement, so you can skip it; but for completeness, you could add the hosts' and the manager's hostnames and FQDNs to the hosts files on all the components, as sketched below.
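
For illustration, a minimal sketch of such entries (node IPs and names from earlier in this thread; the FQDNs assume the poc.local domain used in the appliance URL). Run as root on each component:

cat >> /etc/hosts <<'EOF'
172.22.1.181 vme1.poc.local vme1
172.22.1.182 vme2.poc.local vme2
172.22.1.183 vme3.poc.local vme3
172.25.1.190 hpevmemanager.poc.local hpevmemanager
EOF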



I work at HPE
[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]
AbhijitPatil
Visitor

Re: Unable to Create Cluster on the HPE VM Essentials

Server-vme1 - Run Script: Pacemaker Configuration

Warning: Some nodes are missing names in corosync.conf, those nodes were omitted. Edit corosync.conf and make sure all nodes have their name set.
Synchronizing state of dlm.service with SysV service script with /usr/lib/systemd/systemd-sysv-install.
Executing: /usr/lib/systemd/systemd-sysv-install disable dlm
Removed "/etc/systemd/system/multi-user.target.wants/dlm.service".
Error: Operation timed out
Server-vme1: Authorized
Error: Unable to communicate with Server-vme2
Error: Unable to communicate with Server-vme3
Password:

Still showing the same issue while creating the cluster.
Kindly suggest.
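
For reference, the "missing names" warning above points at the nodelist section of /etc/corosync/corosync.conf; every node entry there needs a name set. A way to inspect it, with a generic sketch of a complete entry (using this thread's node names as an assumption, not output from this environment):

grep -A4 'node {' /etc/corosync/corosync.conf
# Each entry should look roughly like:
#   node {
#       ring0_addr: 172.22.1.181
#       name: vme1
#       nodeid: 1
#   }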

AbhijitPatil
Visitor

Re: Unable to Create Cluster on the HPE VM Essentials

root@vme1:~# ssh -vvv root@vme2
OpenSSH_9.6p1 Ubuntu-3ubuntu13.11, OpenSSL 3.0.13 30 Jan 2024
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files
debug1: /etc/ssh/ssh_config line 21: Applying options for *
debug3: expanded UserKnownHostsFile '~/.ssh/known_hosts' -> '/root/.ssh/known_hosts'
debug3: expanded UserKnownHostsFile '~/.ssh/known_hosts2' -> '/root/.ssh/known_hosts2'
debug2: resolving "vme2" port 22
debug3: resolve_host: lookup vme2:22
debug3: channel_clear_timeouts: clearing
debug3: ssh_connect_direct: entering
debug1: Connecting to vme2 [172.22.1.182] port 22.
debug3: set_sock_tos: set socket 3 IP_TOS 0x10
debug1: Connection established.
debug1: identity file /root/.ssh/id_rsa type 3
debug1: identity file /root/.ssh/id_rsa-cert type -1
debug1: identity file /root/.ssh/id_ecdsa type -1
debug1: identity file /root/.ssh/id_ecdsa-cert type -1
debug1: identity file /root/.ssh/id_ecdsa_sk type -1
debug1: identity file /root/.ssh/id_ecdsa_sk-cert type -1
debug1: identity file /root/.ssh/id_ed25519 type -1
debug1: identity file /root/.ssh/id_ed25519-cert type -1
debug1: identity file /root/.ssh/id_ed25519_sk type -1
debug1: identity file /root/.ssh/id_ed25519_sk-cert type -1
debug1: identity file /root/.ssh/id_xmss type -1
debug1: identity file /root/.ssh/id_xmss-cert type -1
debug1: identity file /root/.ssh/id_dsa type -1
debug1: identity file /root/.ssh/id_dsa-cert type -1
debug1: Local version string SSH-2.0-OpenSSH_9.6p1 Ubuntu-3ubuntu13.11
debug1: Remote protocol version 2.0, remote software version OpenSSH_9.6p1 Ubuntu-3ubuntu13.11
debug1: compat_banner: match: OpenSSH_9.6p1 Ubuntu-3ubuntu13.11 pat OpenSSH* compat 0x04000000
debug2: fd 3 setting O_NONBLOCK
debug1: Authenticating to vme2:22 as 'root'
debug1: load_hostkeys: fopen /root/.ssh/known_hosts2: No such file or directory
debug1: load_hostkeys: fopen /etc/ssh/ssh_known_hosts: No such file or directory
debug1: load_hostkeys: fopen /etc/ssh/ssh_known_hosts2: No such file or directory
debug3: order_hostkeyalgs: no algorithms matched; accept original
debug3: send packet: type 20
debug1: SSH2_MSG_KEXINIT sent
debug3: receive packet: type 20
debug1: SSH2_MSG_KEXINIT received
debug2: local client KEXINIT proposal
debug2: KEX algorithms: sntrup761x25519-sha512@openssh.com,curve25519-sha256,curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group14-sha256,ext-info-c,kex-strict-c-v00@openssh.com
debug2: host key algorithms: ssh-ed25519-cert-v01@openssh.com,ecdsa-sha2-nistp256-cert-v01@openssh.com,ecdsa-sha2-nistp384-cert-v01@openssh.com,ecdsa-sha2-nistp521-cert-v01@openssh.com,sk-ssh-ed25519-cert-v01@openssh.com,sk-ecdsa-sha2-nistp256-cert-v01@openssh.com,rsa-sha2-512-cert-v01@openssh.com,rsa-sha2-256-cert-v01@openssh.com,ssh-ed25519,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,sk-ssh-ed25519@openssh.com,sk-ecdsa-sha2-nistp256@openssh.com,rsa-sha2-512,rsa-sha2-256
debug2: ciphers ctos: aes128-ctr,aes192-ctr,aes256-ctr,aes128-cbc,3des-cbc
debug2: ciphers stoc: aes128-ctr,aes192-ctr,aes256-ctr,aes128-cbc,3des-cbc
debug2: MACs ctos: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1
debug2: MACs stoc: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1
debug2: compression ctos: none,zlib@openssh.com,zlib
debug2: compression stoc: none,zlib@openssh.com,zlib
debug2: languages ctos:
debug2: languages stoc:
debug2: first_kex_follows 0
debug2: reserved 0
debug2: peer server KEXINIT proposal
debug2: KEX algorithms: sntrup761x25519-sha512@openssh.com,curve25519-sha256,curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group14-sha256,ext-info-s,kex-strict-s-v00@openssh.com
debug2: host key algorithms: rsa-sha2-512,rsa-sha2-256,ecdsa-sha2-nistp256,ssh-ed25519
debug2: ciphers ctos: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com
debug2: ciphers stoc: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com
debug2: MACs ctos: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1
debug2: MACs stoc: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1
debug2: compression ctos: none,zlib@openssh.com
debug2: compression stoc: none,zlib@openssh.com
debug2: languages ctos:
debug2: languages stoc:
debug2: first_kex_follows 0
debug2: reserved 0
debug3: kex_choose_conf: will use strict KEX ordering
debug1: kex: algorithm: sntrup761x25519-sha512@openssh.com
debug1: kex: host key algorithm: ssh-ed25519
debug1: kex: server->client cipher: aes128-ctr MAC: umac-64-etm@openssh.com compression: none
debug1: kex: client->server cipher: aes128-ctr MAC: umac-64-etm@openssh.com compression: none
debug3: send packet: type 30
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY

root@vme2:~# systemctl status ssh
● ssh.service - OpenBSD Secure Shell server
Loaded: loaded (/usr/lib/systemd/system/ssh.service; disabled; preset: enabled)
Active: active (running) since Wed 2025-05-14 07:40:43 UTC; 14min ago
TriggeredBy: ● ssh.socket
Docs: man:sshd(8)
man:sshd_config(5)
Main PID: 5800 (sshd)
Tasks: 3 (limit: 308651)
Memory: 3.1M (peak: 3.8M)
CPU: 53ms
CGroup: /system.slice/ssh.service
├─ 5800 "sshd: /usr/sbin/sshd -D [listener] 1 of 10-100 startups"
├─10028 "sshd: [accepted]"
└─10029 "sshd: [net]"

May 14 07:40:43 vme2 systemd[1]: Starting ssh.service - OpenBSD Secure Shell server...
May 14 07:40:43 vme2 sshd[5800]: Server listening on :: port 22.
May 14 07:40:43 vme2 systemd[1]: Started ssh.service - OpenBSD Secure Shell server.
May 14 07:46:23 vme2 sshd[8143]: Connection closed by 172.22.1.181 port 32856 [preauth]

This preauth error shows up when I try to open the SSH session: the client hangs at "expecting SSH2_MSG_KEX_ECDH_REPLY" and the server side logs "Connection closed ... [preauth]".

Because of this, I faced that error during cluster provisioning.
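
One way to narrow down a hang at exactly this stage, as a hedged sketch (an SSH session that completes the TCP connect and banner exchange but stalls right after the KEXINIT packets is a classic symptom of large packets being dropped, e.g. an MTU mismatch on the bond or switch; this diagnosis is an assumption, not something confirmed in this thread):

ping -M do -s 1472 -c 3 vme2   # 1472 bytes + 28 bytes of headers = full 1500-byte frames
ping -M do -s 8972 -c 3 vme2   # only if jumbo frames are expected on this path
ip -br link                    # compare configured MTUs on bond0 / bond0.22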