HPE Ezmeral Software platform

HPE Ezmeral Data Fabric (MapR) Installation fail with ipv6 disabled since Core Version 7.2

 
SOLVED
SunnyC
Occasional Advisor


Background:

We are planning a server upgrade from MapR 6.2.0 to 7.7.0. Because we are moving from a non-secure cluster to a secure cluster, we are doing a fresh installation. We are using RHEL 8.6 with hardening applied based on the CIS Red Hat Enterprise Linux 8 Benchmark - Level 1.

Problem:

IPv6 is configured to be disabled during server boot.

[screenshot: grep of the GRUB configuration showing IPv6 disabled at boot]
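For reference, the boot-time setting is roughly the following (a sketch of the usual RHEL 8 GRUB approach; the exact line in our config may differ slightly):

# grep GRUB_CMDLINE_LINUX /etc/default/grub
GRUB_CMDLINE_LINUX="... ipv6.disable=1"

# grub2-mkconfig -o /boot/grub2/grub.cfg
# reboot

(On UEFI systems the generated grub.cfg path differs, e.g. /boot/efi/EFI/redhat/grub.cfg.)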

 

 

The installation fails when run through the installer:

[screenshot: installer failure message]

The error points to MFS failing to start:

[screenshots: disksetup error and MFS startup error]

After some testing, we found that this only happens with Core version 7.2.0 or higher when IPv6 is disabled.

I cannot find any KB article or release note related to this issue that would justify enabling IPv6 after the version upgrade.

Is there any way I can complete the installation without enabling IPv6?

5 REPLIES
support_s
System Recommended

Query: HPE Ezmeral Data Fabric (MapR) Installation fail with ipv6 disabled since Core Version 7.2

System recommended content:

1. HPE Ezmeral Data Fabric – Customer-Managed 7.6.1 Documentation | Spark 3.1.2.0 - 2110 (EEP 8.0.0) Release Notes

2. HPE Ezmeral Data Fabric – Customer-Managed 7.6.1 Documentation | https://hpe.to/6605d2IUR

 


Dave Olker
Neighborhood Moderator

Re: HPE Ezmeral Data Fabric (MapR) Installation fail with ipv6 disabled since Core Version

How exactly are you disabling IPv6 at boot time? You included a screenshot of grepping for ipv6 in the GRUB conf file, but I don't see any mention of IPv6 in the output. What EDF/EEP components are you selecting during installation? In other words, is this a simple data fabric cluster, or are you attempting to install a number of ecosystem components (i.e. Hive, Spark, etc.) during the installation?



I work at HPE
HPE Support Center offers support for your HPE services and products when and how you need it. Get started with HPE Support Center today.
[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]
SunnyC
Occasional Advisor

Re: HPE Ezmeral Data Fabric (MapR) Installation fail with ipv6 disabled since Core Version

@Dave Olker I have updated the screenshot for IPv6.

I am performing a 4-node cluster installation with the installer, and I have selected the services below during installation:

[screenshot: selected services and components]

Dave Olker
Neighborhood Moderator
Solution

Re: HPE Ezmeral Data Fabric (MapR) Installation fail with ipv6 disabled since Core Version

I was able to reproduce this in my lab (jeez, I sound like a broken record...). Once again I'm doing a single-node, EDF 7.7, data fabric only (i.e. no ecosystem components) installation. With IPv6 disabled in GRUB, the installation fails because the MFS instance won't start and disksetup is trying to reach MFS to format the disks. This is reproducible even outside of the installer:

 

 

# cat /opt/mapr/conf/disks.txt
/dev/sdb
/dev/sdc
/dev/sdd

# /opt/mapr/server/disksetup -W 3 -F /opt/mapr/conf/disks.txt
Error 3, No such process. Unable to reach mfs. Check for errors in mfs.log.

 

 

 

Looking at the /opt/mapr/logs/mfs.log-3:

 

 

******* Starting mfs server *******
*** mfs mapr-version: $Id: mapr-version: 7.7.0.0.20240422022544.GA fd59aad661155aa3ea errFile: /opt/mapr/logs/mfs.err PROD_BUILD:Yes ***
2024-05-27 09:48:26,8934 INFO Global instancemfs.cc:2356 FS : Using hostname node1, port: 5660, kafkaport: 9092, rdmaport: 5660 hostid 0x296dc2602d496420 (2985255846348940320)
2024-05-27 09:48:26,8934 INFO Global instancemfs.cc:2358 Starting fileserver with pid 79359 on :
2024-05-27 09:48:26,8935 INFO Global instancemfs.cc:2363        [XX.XX.XX.XX]:5660
2024-05-27 09:48:26,8935 INFO Global instancemfs.cc:2365 HighLatencyTraceThresh : 3000 ms
2024-05-27 09:48:26,9007 FATAL Global instancemfs.cc:2372 Listen on port 5660 failed, err -97

 

 

 

/opt/mapr/logs/mfs.err also shows:

 

 

# cat mfs.err
2024-05-27 09:39:23,8828 :2863 socket: error 97
2024-05-27 09:44:39,5602 :2863 socket: error 97
2024-05-27 09:48:26,9006 :2863 socket: error 97

 

 

 

Looks like the error is:

 

$ errno 97
EAFNOSUPPORT 97 Address family not supported by protocol
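
For context, error 97 (EAFNOSUPPORT) is what the kernel returns when a process asks for an address family that is not available. With ipv6.disable=1 on the kernel command line the IPv6 stack is not loaded at all, so any attempt to create an AF_INET6 socket fails this way, which suggests MFS is opening an IPv6 (or dual-stack) listening socket. A quick way to see the same error outside of MFS (a rough check, assuming python3 is present on the node):

# python3 -c "import socket; socket.socket(socket.AF_INET6, socket.SOCK_STREAM)"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
OSError: [Errno 97] Address family not supported by protocol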

 

 

I will open an engineering ticket with this information.  Since this issue is reproducible outside of the installer, this may be a blocker until we can get a fix from engineering.  



Dave Olker
Neighborhood Moderator

Re: HPE Ezmeral Data Fabric (MapR) Installation fail with ipv6 disabled since Core Version

While researching how to disable IPv6 on Red Hat 8 systems, I came across this article: https://access.redhat.com/solutions/8709#rhel789disable. It explains how to disable IPv6 via the GRUB method you described. It also explains a second way to globally disable IPv6 using sysctl and dracut. I implemented that second method and confirmed my network interfaces are not advertising an IPv6 address:

# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:b6:0b:05 brd ff:ff:ff:ff:ff:ff
    altname enp11s0
    inet 111.111.111.111/22 brd 111.111.111.255 scope global noprefixroute ens192
       valid_lft forever preferred_lft forever

 

Notice there are no inet6 entries.  I then tried deploying the EDF 7.7 cluster again and it completed successfully:

# maprcli service list -node `hostname`
logpath                                              displayname         name         memallocated  state
/opt/mapr/logs/moss.log                              s3server            s3server     6389.0        2
/opt/mapr/logs/mfs.log                               FileServer          fileserver   22361.0       2
/opt/mapr/grafana/grafana-7.5.10/var/log/grafana     Grafana             grafana      50.0          2
/opt/mapr/logs/cldb.log                              CLDB                cldb         4000.0        2
/opt/mapr/logs/nfsserver.log                         NFS Gateway         nfs          1000.0        2
/opt/mapr/logs/mastgateway.log                       MASTGatewayService  mastgateway  6389.0        2
/opt/mapr/opentsdb/opentsdb-2.4.1/var/log/opentsdb   OpenTsdb            opentsdb                   2
/opt/mapr/logs/hoststats.log                         HostStats           hoststats    Auto          2
/opt/mapr/collectd/collectd-5.12.0/var/log/collectd  CollectD            collectd                   2
/opt/mapr/logs/gateway.log                           GatewayService      gateway      639.0         2
/opt/mapr/apiserver/logs/apiserver.log               APIServer           apiserver    1000.0        2

 

Hopefully you find this second method of globally disabling IPv6 to be an acceptable workaround.
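
For reference, that second method looks roughly like this (a sketch based on the Red Hat article; the file name is just an example, and the article has the exact steps):

# cat /etc/sysctl.d/ipv6.conf
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1

# sysctl -p /etc/sysctl.d/ipv6.conf
# dracut -f
# reboot

The dracut -f step rebuilds the initramfs so the setting also applies early in boot. Unlike ipv6.disable=1 on the kernel command line, this approach leaves the IPv6 address family available to the kernel (so MFS can still create its listening socket) while no IPv6 addresses are configured on the interfaces, which is likely why MFS starts successfully with this method.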


