HPE Ezmeral Software platform

Re: HPE-Ezmeral data fabric Installation is not proceeding at 72%

 
SOLVED
Deepak_OT
Frequent Visitor

HPE-Ezmeral data fabric Installation stuck at 72%

I am trying to set up HPE Ezmeral Data Fabric 7.3 on-premises and in the AWS cloud with 8 nodes, but the installation is stuck at 72%.

 
 

The logs say it is waiting for the CLDB(s) to come online.

 

2024-08-19 21:40:15.774: Wait for CLDB(s) to come online(wait_for_cldb.sh {
    "MAPR_HOME": "{{ mapr_home }}",
    "MAPR_USER": "{{ cluster_admin_id }}",
    "TIMEOUT_MAPRCLI": "{{ timeout.standard | int + 60 }}"
}) -> changed:  {
    "attempts": 1,
    "changed": true,
    "mapr_logs": [
        "2024-08-19 21:40:10 IST  INFO Running AnsiballZ_wait_for_cldb.sh 
	2024-08-19 21:40:15 IST  INFO Command: sudo -E -n -u mapr timeout -s HUP 62m /opt/mapr/bin/maprcli node cldbmaster -noheader, Status: 0, Result: ServerID: 2011457914988887104 HostName: node3.hpevolt.com "
    ],
    "msg": "AnsiballZ_wait_for_cldb.sh passed"
}
2024-08-19 21:40:17.037: debug( {
    "msg": "CLDB service will come on-line after Zookeeper quorum is achieved which requires the other control nodes to be installed. Please proceed with installation on remaining control nodes"
}) -> ok:  {
    "msg": "CLDB service will come on-line after Zookeeper quorum is achieved which requires the other control nodes to be installed. Please proceed with installation on remaining control nodes"
}
2024-08-19 21:40:19.800: finalize_cluster.sh( {
    "MAPR_HOME": "{{ mapr_home }}",
    "MAPR_USER": "{{ cluster_admin_id }}",
    "MAX_WAIT": "300",
    "TIMEOUT_MAPRCLI": "{{timeout.standard}}"
}) -> ok:  {
    "changed": false,
    "mapr_logs": [
        "2024-08-19 21:40:19 IST  INFO Running AnsiballZ_finalize_cluster.sh 
	2024-08-19 21:40:19 IST  INFO MAPR_HOME=/opt/mapr MAPR_USER=mapr "
    ],
    "msg": "Finalize steps are only run on a CLDB node" 

 

 

Here is the log from another node:

2024-08-19 21:40:13.988: Wait for CLDB(s) to come online(wait_for_cldb.sh {
    "MAPR_HOME": "{{ mapr_home }}",
    "MAPR_USER": "{{ cluster_admin_id }}",
    "TIMEOUT_MAPRCLI": "{{ timeout.standard | int + 60 }}"
}) -> changed:  {
    "attempts": 1,
    "changed": true,
    "mapr_logs": [
        "2024-08-19 21:40:10 IST  INFO Running AnsiballZ_wait_for_cldb.sh 
	2024-08-19 21:40:13 IST  INFO Command: sudo -E -n -u mapr timeout -s HUP 62m /opt/mapr/bin/maprcli node cldbmaster -noheader, Status: 0, Result: ServerID: 2011457914988887104 HostName: node3.hpevolt.com "
    ],
    "msg": "AnsiballZ_wait_for_cldb.sh passed"
}
2024-08-19 21:40:16.784: debug( {
    "msg": "CLDB service will come on-line after Zookeeper quorum is achieved which requires the other control nodes to be installed. Please proceed with installation on remaining control nodes"
}) -> ok:  {
    "msg": "CLDB service will come on-line after Zookeeper quorum is achieved which requires the other control nodes to be installed. Please proceed with installation on remaining control nodes"
}
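
For reference, the ZooKeeper quorum and the CLDB master that the installer is waiting on can be checked roughly like this (a minimal sketch, assuming the default /opt/mapr layout and "mapr" as the cluster admin user; run on the control nodes):

# ZooKeeper quorum status on this node
$ sudo service mapr-zookeeper qstatus

# which node currently holds the CLDB master role
$ sudo -u mapr /opt/mapr/bin/maprcli node cldbmaster

# CLDB and ZooKeeper nodes the cluster is configured with
$ sudo -u mapr /opt/mapr/bin/maprcli node listcldbzks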

 

 

I tried to start the CLDB service manually, but it fails:

 

[mapr@node1 ~]$ sudo systemctl restart mapr-cldb.service
Job for mapr-cldb.service failed because the control process exited with error code.
See "systemctl status mapr-cldb.service" and "journalctl -xe" for details.

 

[mapr@node1 ~]$ sudo systemctl status mapr-cldb.service
● mapr-cldb.service - LSB: Start MapR Control Node services
Loaded: loaded (/etc/rc.d/init.d/mapr-cldb; generated)
Active: failed (Result: exit-code) since Tue 2024-08-20 12:18:11 IST; 37s ago
Docs: man:systemd-sysv-generator(8)
Process: 1496797 ExecStart=/etc/rc.d/init.d/mapr-cldb start (code=exited, status=1/FAILURE)

Aug 20 12:18:10 node1.hpevolt.com systemd[1]: Starting LSB: Start MapR Control Node services...
Aug 20 12:18:11 node1.hpevolt.com mapr-cldb[1496797]: CLDB running as process 388095. Stop it
Aug 20 12:18:11 node1.hpevolt.com systemd[1]: mapr-cldb.service: Control process exited, code=exited status=1
Aug 20 12:18:11 node1.hpevolt.com systemd[1]: mapr-cldb.service: Failed with result 'exit-code'.
Aug 20 12:18:11 node1.hpevolt.com systemd[1]: Failed to start LSB: Start MapR Control Node services.
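
The init script refuses to start because it already sees a CLDB process (PID 388095). A rough way to check and recover (a sketch only; CLDB is normally managed by Warden, and the maprcli restart works only once a CLDB master is reachable):

# confirm the CLDB process the init script is complaining about
$ ps -fp 388095

# CLDB is normally started and supervised by Warden, not by systemd directly
$ sudo systemctl status mapr-warden

# restart CLDB through maprcli once the cluster is reachable
$ sudo -u mapr /opt/mapr/bin/maprcli node services -name cldb -action restart -nodes node1.hpevolt.com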

Satish_Dhuli
HPE Pro

Re: HPE-Ezmeral data fabric Installation is not proceeding at 72%

Hi

Could you please share the output of the command below:

$ ls -l /etc/init.d

Thanks,

HPE Support

Deepak_OT
Frequent Visitor

Re: HPE-Ezmeral data fabric Installation is not proceeding at 72%

Hi Satish,

Thank you for your reply.

Here is the output of the command:

[mapr@node1 ~]$ ls -l /etc/init.d
lrwxrwxrwx. 1 root root 11 May 15 2023 /etc/init.d -> rc.d/init.d

Satish_Dhuli
HPE Pro

Re: HPE-Ezmeral data fabric Installation is not proceeding at 72%

Hi,

Please follow the steps below:

1. On all nodes except the installer node:
a. Remove all MapR packages:
$ yum remove $(rpm -qa | grep mapr)

b. Remove the MapR directory
$ rm -rf /opt/mapr

2. On the installer node:
a. Remove all MapR packages except those related to the installer:
$ yum remove $(rpm -qa | grep mapr | grep -v installer)

b. Do not remove the /opt/mapr directory.

3. Retry the installation.
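
If it helps, the per-node steps above can be scripted roughly like this (a sketch only, assuming passwordless SSH as root and that node2 through node8 are the non-installer nodes; the hostnames are placeholders, adjust them to your cluster):

# run from the installer node; clean every non-installer node
$ for h in node2 node3 node4 node5 node6 node7 node8; do
>   ssh root@$h 'yum -y remove $(rpm -qa | grep mapr); rm -rf /opt/mapr'
> done

# on the installer node itself, keep the installer packages and /opt/mapr
$ yum -y remove $(rpm -qa | grep mapr | grep -v installer)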

Thanks,

HPE Ezmeral Support.

Deepak_OT
Frequent Visitor

Re: HPE-Ezmeral data fabric Installation is not proceeding at 72%

Thank you so much for helping.

Now one node has failed at 81% with the error below:

2024-08-21 12:13:00.653: Task: mapr_conf.py( {
    "data": "{{ mapr.node|to_json }}",
    "is_update": "{{ is_update | default(false) }}",
    "is_upgrade": "{{ is_upgrade | default(false) }}",
    "mapr_home": "{{ mapr_home }}",
    "template_dir": "/tmp/installer/services/templates",
    "timeout": "{{timeout.standard}}"
}) -> failed  =>  {
    "_ansible_no_log": false,
    "changed": false,
    "mapr_logs": "2024-08-21 12:10:55 IST INFO svcs_intersection: ['mapr-apiserver-7.3.0', 'mapr-cldb-7.3.0', 'mapr-gateway-7.3.0', 'mapr-grafana-7.5.10', 'mapr-spark-master-3.3.2', 'mapr-opentsdb-2.4.1', 'mapr-webserver-7.3.0', 'mapr-zookeeper-7.3.0', 'mapr-resourcemanager-3.3.4']\
	2024-08-21 12:10:55 IST INFO all_svcs_set: {'mapr-fileserver-7.3.0', 'mapr-mastgateway-7.3.0', 'mapr-librdkafka-2.6.1', 'mapr-apiserver-7.3.0', 'mapr-asynchbase-1.8.2', 'mapr-cldb-7.3.0', 'mapr-mysql', 'mapr-gateway-7.3.0', 'mapr-historyserver-3.3.4', 'mapr-spark-master-3.3.2', 'mapr-kafka-2.6.1', 'mapr-kafka-schema-registry-6.0.0', 'mapr-opentsdb-2.4.1', 'mapr-webserver-7.3.0', 'mapr-nifi-1.19.1', 'mapr-hive-client-3.1.3', 'mapr-spark-client-3.3.2', 'mapr-resourcemanager-3.3.4', 'mapr-hbase-1.4.14', 'mapr-core-7.3.0', 'mapr-s3server-7.3.0', 'mapr-spark-historyserver-3.3.2', 'mapr-nodemanager-3.3.4', 'mapr-kafka-rest-6.0.0', 'mapr-grafana-7.5.10', 'mapr-spark-thriftserver-3.3.2', 'mapr-hiveserver2-3.1.3', 'mapr-kafka-connect-hdfs-6.0.0', 'mapr-spark-slave-3.3.2', 'mapr-hivewebhcat-3.1.3', 'mapr-collectd-5.12.0', 'mapr-zookeeper-7.3.0', 'mapr-kafka-connect-jdbc-6.0.0', 'mapr-hivemetastore-3.1.3'}\
	2024-08-21 12:10:55 IST INFO host_svcs_set: {'mapr-fileserver-7.3.0', 'mapr-mastgateway-7.3.0', 'mapr-librdkafka-2.6.1', 'mapr-asynchbase-1.8.2', 'mapr-mysql', 'mapr-historyserver-3.3.4', 'mapr-kafka-2.6.1', 'mapr-kafka-schema-registry-6.0.0', 'mapr-nifi-1.19.1', 'mapr-hive-client-3.1.3', 'mapr-spark-client-3.3.2', 'mapr-hbase-1.4.14', 'mapr-core-7.3.0', 'mapr-s3server-7.3.0', 'mapr-spark-historyserver-3.3.2', 'mapr-nodemanager-3.3.4', 'mapr-kafka-rest-6.0.0', 'mapr-spark-thriftserver-3.3.2', 'mapr-hiveserver2-3.1.3', 'mapr-kafka-connect-hdfs-6.0.0', 'mapr-spark-slave-3.3.2', 'mapr-hivewebhcat-3.1.3', 'mapr-collectd-5.12.0', 'mapr-kafka-connect-jdbc-6.0.0', 'mapr-hivemetastore-3.1.3'}\
	2024-08-21 12:10:55 IST WARN These services ['mapr-apiserver-7.3.0', 'mapr-cldb-7.3.0', 'mapr-gateway-7.3.0', 'mapr-grafana-7.5.10', 'mapr-spark-master-3.3.2', 'mapr-opentsdb-2.4.1', 'mapr-webserver-7.3.0', 'mapr-zookeeper-7.3.0', 'mapr-resourcemanager-3.3.4'] are inconsistent between /hosts and /config\
	2024-08-21 12:10:55 IST DEBUG **logline hidden due to sensitive data**\
	2024-08-21 12:10:58 IST DEBUG Command: 'timeout -s HUP 2m hadoop fs -mkdir -p /installer/hive-3.1.3/', Status: '0', Result: '2024-08-21 12:10:57,3888 :1831 peerid 1a8d96c6d16e4740 in binding 7f2d816ed560, conn 7f2d816ed6b0, ip 192.168.122.1:5692 doesn't match peerid 6207044c2f5c0960 in rpc hdr from 192.168.122.1:5692
	2024-08-21 12:10:58,3908 :1831 peerid 1a8d96c6d16e4740 in binding 7f2d816ed560, conn 7f2d816ed6b0, ip 192.168.122.1:5692 doesn't match peerid 6207044c2f5c0960 in rpc hdr from 192.168.122.1:5692'\
	2024-08-21 12:10:58 IST INFO Putting /opt/mapr/hive/hive-3.1.3/lib/hive-accumulo-handler-3.1.3.200-eep-911.jar via hadoop fs -put to hive-3.1.3/\
	2024-08-21 12:10:58 IST DEBUG **logline hidden due to sensitive data**\
	2024-08-21 12:12:58 IST WARN Command 'b'timeout -s HUP 2m hadoop fs -put -f /opt/mapr/hive/hive-3.1.3/lib/hive-accumulo-handler-3.1.3.200-eep-911.jar /installer/hive-3.1.3/'' timed out\
	2024-08-21 12:12:58 IST ERROR Command: 'timeout -s HUP 2m hadoop fs -put -f /opt/mapr/hive/hive-3.1.3/lib/hive-accumulo-handler-3.1.3.200-eep-911.jar /installer/hive-3.1.3/', Status: '124', Result: '2024-08-21 12:11:00,6292 :1831 peerid 1a8d96c6d16e4740 in binding 7fc00d8352d0, conn 7fc00d835420, ip 192.168.122.1:5692 doesn't match peerid 6207044c2f5c0960 in rpc hdr from 192.168.122.1:5692
	2024-08-21 12:11:01,6315 :1831 peerid 1a8d96c6d16e4740 in binding 7fc00d8352d0, conn 7fc00d835420, ip 192.168.122.1:5692 doesn't match peerid 6207044c2f5c0960 in rpc hdr from 192.168.122.1:5692
	2024-08-21 12:11:03,6334 :1831 peerid 1a8d96c6d16e4740 in binding 7fc00d8352d0, conn 7fc00d835420, ip 192.168.122.1:5692 doesn't match peerid 6207044c2f5c0960 in rpc hdr from 192.168.122.1:5692
	2024-08-21 12:11:06,6355 :1831 peerid 1a8d96c6d16e4740 in binding 7fc00d8352d0, conn 7fc00d835420, ip 192.168.122.1:5692 doesn't match peerid 6207044c2f5c0960 in rpc hdr from 192.168.122.1:5692
	2024-08-21 12:11:10,6379 :1831 peerid 1a8d96c6d16e4740 in binding 7fc00d8352d0, conn 7fc00d835420, ip 192.168.122.1:5692 doesn't match peerid 6207044c2f5c0960 in rpc hdr from 192.168.122.1:5692
	2024-08-21 12:11:15,6397 :1831 peerid 1a8d96c6d16e4740 in binding 7fc00d8352d0, conn 7fc00d835420, ip 192.168.122.1:5692 doesn't match peerid 6207044c2f5c0960 in rpc hdr from 192.168.122.1:5692
	2024-08-21 12:11:21,6414 :1831 peerid 1a8d96c6d16e4740 in binding 7fc00d8352d0, conn 7fc00d835420, ip 192.168.122.1:5692 doesn't match peerid 6207044c2f5c0960 in rpc hdr from 192.168.122.1:5692
	2024-08-21 12:11:28,6431 :1831 peerid 1a8d96c6d16e4740 in binding 7fc00d8352d0, conn 7fc00d835420, ip 192.168.122.1:5692 doesn't match peerid 6207044c2f5c0960 in rpc hdr from 192.168.122.1:5692
	2024-08-21 12:11:36,6456 :1831 peerid 1a8d96c6d16e4740 in binding 7fc00d8352d0, conn 7fc00d835420, ip 192.168.122.1:5692 doesn't match peerid 6207044c2f5c0960 in rpc hdr from 192.168.122.1:5692
	2024-08-21 12:11:45,6479 :1831 peerid 1a8d96c6d16e4740 in binding 7fc00d8352d0, conn 7fc00d835420, ip 192.168.122.1:5692 doesn't match peerid 6207044c2f5c0960 in rpc hdr from 192.168.122.1:5692
	2024-08-21 12:11:54,6499 :1831 peerid 1a8d96c6d16e4740 in binding 7fc00d8352d0, conn 7fc00d835420, ip 192.168.122.1:5692 doesn't match peerid 6207044c2f5c0960 in rpc hdr from 192.168.122.1:5692
	2024-08-21 12:11:00,6294 ERROR Client fc/client.cc:12922 Thread: 1528656 rpc err Connection reset by peer(104) 28.21 to 172.16.41.222:5692, fid 2049.16.2, upd 0
	2024-08-21 12:12:03,6529 :1831 peerid 1a8d96c6d16e4740 in binding 7fc00d8352d0, conn 7fc00d835420, ip 192.168.122.1:5692 doesn't match peerid 6207044c2f5c0960 in rpc hdr from 192.168.122.1:5692
	2024-08-21 12:12:12,6554 :1831 peerid 1a8d96c6d16e4740 in binding 7fc00d8352d0, conn 7fc00d835420, ip 192.168.122.1:5692 doesn't match peerid 6207044c2f5c0960 in rpc hdr from 192.168.122.1:5692
	2024-08-21 12:12:21,6577 :1831 peerid 1a8d96c6d16e4740 in binding 7fc00d8352d0, conn 7fc00d835420, ip 192.168.122.1:5692 doesn't match peerid 6207044c2f5c0960 in rpc hdr from 192.168.122.1:5692
	2024-08-21 12:12:30,6604 :1831 peerid 1a8d96c6d16e4740 in binding 7fc00d8352d0, conn 7fc00d835420, ip 192.168.122.1:5692 doesn't match peerid 6207044c2f5c0960 in rpc hdr from 192.168.122.1:5692
	2024-08-21 12:12:39,6625 :1831 peerid 1a8d96c6d16e4740 in binding 7fc00d8352d0, conn 7fc00d835420, ip 192.168.122.1:5692 doesn't match peerid 6207044c2f5c0960 in rpc hdr from 192.168.122.1:5692
	2024-08-21 12:12:48,6646 :1831 peerid 1a8d96c6d16e4740 in binding 7fc00d8352d0, conn 7fc00d835420, ip 192.168.122.1:5692 doesn't match peerid 6207044c2f5c0960 in rpc hdr from 192.168.122.1:5692
	2024-08-21 12:12:57,6664 :1831 peerid 1a8d96c6d16e4740 in binding 7fc00d8352d0, conn 7fc00d835420, ip 192.168.122.1:5692 doesn't match peerid 6207044c2f5c0960 in rpc hdr from 192.168.122.1:5692'\
	2024-08-21 12:12:58 IST WARN Command timed out : timeout -s HUP 2m hadoop fs -put -f /opt/mapr/hive/hive-3.1.3/lib/hive-accumulo-handler-3.1.3.200-eep-911.jar /installer/hive-3.1.3/"
}
Unable to copy /opt/mapr/hive/hive-3.1.3/lib/hive*.jar to hive-3.1.3/ 'in <string>' requires string as left operand, not bytes, stack: Traceback (most recent call last):
  File "/tmp/ansible_mapr_conf.py_payload_z1k1vfmc/ansible_mapr_conf.py_payload.zip/ansible/modules/mapr_conf/py.py", line 231, in run
  File "/tmp/ansible_mapr_conf.py_payload_z1k1vfmc/ansible_mapr_conf.py_payload.zip/ansible/modules/mapr_conf/py.py", line 152, in copy_from_local
  File "/tmp/ansible_mapr_conf.py_payload_z1k1vfmc/ansible_mapr_conf.py_payload.zip/ansible/modules/mapr_conf/py.py", line 169, in cmd_retry
TypeError: 'in <string>' requires string as left operand, not bytes
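
The TypeError at the end looks like the installer tripping over the bytes output of the timed-out command; the underlying failure is the hadoop fs -put timing out. The same commands from the log can be re-run by hand to check whether copies into the cluster filesystem work at all (run as the cluster admin user; paths taken from the log above):

$ sudo -u mapr timeout -s HUP 2m /opt/mapr/bin/hadoop fs -mkdir -p /installer/hive-3.1.3/
$ sudo -u mapr timeout -s HUP 2m /opt/mapr/bin/hadoop fs -put -f /opt/mapr/hive/hive-3.1.3/lib/hive-accumulo-handler-3.1.3.200-eep-911.jar /installer/hive-3.1.3/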

 

 

Other nodes failed at 95%, without any specific error, at this step:

Custom Post Eco Configure playbook hook

Please suggest a solution for this.

AwezS
HPE Pro

Re: HPE-Ezmeral data fabric Installation is not proceeding at 72%

The IP below belongs to the hypervisor (virt-manager's default virtual bridge):

192.168.122.1

You will need to set the SUBNET during installation.

ip route should show the correct subnet for your environment.
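
A rough sketch of pinning this down after the fact (assuming the real cluster subnet is 172.16.41.0/24, going by the 172.16.41.222 address in the earlier log; adjust to whatever ip route reports. MAPR_SUBNETS in /opt/mapr/conf/env_override.sh is the usual place to restrict Data Fabric traffic to that subnet):

# confirm which subnet the cluster NICs actually use
$ ip route

# restrict Data Fabric to the real subnet so the 192.168.122.0/24 bridge is ignored
$ echo 'export MAPR_SUBNETS=172.16.41.0/24' | sudo tee -a /opt/mapr/conf/env_override.sh

# restart Warden on each affected node so the setting takes effect
$ sudo systemctl restart mapr-warden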

Thanks,
HPE Ezmeral Support



Deepak_OT
Frequent Visitor

Re: HPE-Ezmeral data fabric Installation is not proceeding at 72%

Yes, I am using the subnet only, not this virtual bridge IP.

Satish_Dhuli
HPE Pro

Re: HPE-Ezmeral data fabric Installation is not proceeding at 72%

Hi,

Please try the following:

$ systemctl stop libvirtd.service

If the bridge network is not needed, please remove it from the node:

$ ip link delete <Interface>
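
A fuller sketch, assuming the bridge in question is libvirt's default virbr0 (the usual owner of 192.168.122.1); confirm the interface name with ip addr first:

# confirm 192.168.122.1 lives on this bridge
$ ip addr show virbr0

# stop libvirtd and keep it from starting again at boot
$ sudo systemctl disable --now libvirtd

# take the bridge down and remove it
$ sudo ip link set virbr0 down
$ sudo ip link delete virbr0 type bridge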

Thanks,

HPE Ezmeral Support

support_s
System Recommended
Solution

Query: HPE-Ezmeral Installation is not proceeding after 72%

Hello,

Let us know if you were able to resolve the issue.

If you have no further queries and you are satisfied with the answer, kindly mark the topic as Solved so that it is helpful for all community members.

Thank you for being a valued HPE community member.

Deepak_OT
Frequent Visitor

Re: HPE-Ezmeral data fabric Installation is not proceeding at 72%

Hi Satish, thank you so much for helping. The installation is successful now.

Service verification status (one column per node):

Apiserver: NOT_IMPLEMENTED NOT_IMPLEMENTED NOT_IMPLEMENTED NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED
CLDB: NOT_IMPLEMENTED NOT_IMPLEMENTED NOT_IMPLEMENTED NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED
Collectd: RUNNING_NOT_RESPONDING RUNNING_NOT_RESPONDING RUNNING_NOT_RESPONDING VERIFIED VERIFIED VERIFIED VERIFIED
File Server: NOT_IMPLEMENTED NOT_IMPLEMENTED NOT_IMPLEMENTED NOT_IMPLEMENTED NOT_IMPLEMENTED NOT_IMPLEMENTED NOT_IMPLEMENTED
Gateway: NOT_IMPLEMENTED NOT_IMPLEMENTED NOT_IMPLEMENTED NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED
Grafana: VERIFIED VERIFIED VERIFIED NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED
History Server: NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED VERIFIED NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED
Hive Metastore: NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED NOT_IMPLEMENTED NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED
Hive Server 2: NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED NOT_IMPLEMENTED NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED
Hive WebHCat: NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED RUNNING_NOT_RESPONDING NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED
Apache Kafka Connect: NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED VERIFIED VERIFIED VERIFIED VERIFIED
Apache Kafka REST API: NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED VERIFIED VERIFIED VERIFIED VERIFIED
Apache Kafka Schema Registry: NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED NOT_RUNNING NOT_RUNNING NOT_RUNNING VERIFIED
Mastgateway: NOT_IMPLEMENTED NOT_IMPLEMENTED NOT_IMPLEMENTED NOT_IMPLEMENTED NOT_IMPLEMENTED NOT_IMPLEMENTED NOT_IMPLEMENTED
NiFi: NOT_IMPLEMENTED NOT_IMPLEMENTED NOT_IMPLEMENTED NOT_IMPLEMENTED NOT_IMPLEMENTED NOT_IMPLEMENTED NOT_IMPLEMENTED
YARN Node Manager: NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED VERIFIED VERIFIED VERIFIED VERIFIED
OpenTSDB: NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED VERIFIED VERIFIED VERIFIED
YARN Resource Manager: VERIFIED VERIFIED VERIFIED NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED
Objectstore: NOT_IMPLEMENTED NOT_IMPLEMENTED NOT_IMPLEMENTED NOT_IMPLEMENTED NOT_IMPLEMENTED NOT_IMPLEMENTED NOT_IMPLEMENTED
Spark History Server: NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED NOT_RUNNING NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED
Spark Master: NOT_RUNNING NOT_RUNNING NOT_RUNNING NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED
Spark Thrift Server: NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED NOT_RUNNING NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED
Zookeeper: VERIFIED VERIFIED VERIFIED NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED

 

But in the service verification I am seeing the status above. Is this expected?

ParvYadav
HPE Pro

Re: HPE-Ezmeral data fabric Installation is not proceeding at 72%

Hi,

Can you share the output of

#maprcli node list -columns svc,csvc
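
For example (a sketch, assuming "mapr" is the cluster admin user):

$ sudo -u mapr /opt/mapr/bin/maprcli node list -columns svc,csvc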
