HPE-Ezmeral data fabric Installation is not proceeding at 72%
08-19-2024 11:51 PM - edited 08-20-2024 10:23 PM
I am trying to set up HPE Ezmeral Data Fabric 7.3 on-premises and in the AWS cloud with 8 nodes, but the installation is stuck at 72%.
The logs say it is waiting for the CLDB to come online.
2024-08-19 21:40:15.774: Wait for CLDB(s) to come online(wait_for_cldb.sh {
"MAPR_HOME": "{{ mapr_home }}",
"MAPR_USER": "{{ cluster_admin_id }}",
"TIMEOUT_MAPRCLI": "{{ timeout.standard | int + 60 }}"
}) -> changed: {
"attempts": 1,
"changed": true,
"mapr_logs": [
"2024-08-19 21:40:10 IST INFO Running AnsiballZ_wait_for_cldb.sh
2024-08-19 21:40:15 IST INFO Command: sudo -E -n -u mapr timeout -s HUP 62m /opt/mapr/bin/maprcli node cldbmaster -noheader, Status: 0, Result: ServerID: 2011457914988887104 HostName: node3.hpevolt.com "
],
"msg": "AnsiballZ_wait_for_cldb.sh passed"
}
2024-08-19 21:40:17.037: debug( {
"msg": "CLDB service will come on-line after Zookeeper quorum is achieved which requires the other control nodes to be installed. Please proceed with installation on remaining control nodes"
}) -> ok: {
"msg": "CLDB service will come on-line after Zookeeper quorum is achieved which requires the other control nodes to be installed. Please proceed with installation on remaining control nodes"
}
2024-08-19 21:40:19.800: finalize_cluster.sh( {
"MAPR_HOME": "{{ mapr_home }}",
"MAPR_USER": "{{ cluster_admin_id }}",
"MAX_WAIT": "300",
"TIMEOUT_MAPRCLI": "{{timeout.standard}}"
}) -> ok: {
"changed": false,
"mapr_logs": [
"2024-08-19 21:40:19 IST INFO Running AnsiballZ_finalize_cluster.sh
2024-08-19 21:40:19 IST INFO MAPR_HOME=/opt/mapr MAPR_USER=mapr "
],
"msg": "Finalize steps are only run on a CLDB node"
another log of other node :
2024-08-19 21:40:13.988: Wait for CLDB(s) to come online(wait_for_cldb.sh {
"MAPR_HOME": "{{ mapr_home }}",
"MAPR_USER": "{{ cluster_admin_id }}",
"TIMEOUT_MAPRCLI": "{{ timeout.standard | int + 60 }}"
}) -> changed: {
"attempts": 1,
"changed": true,
"mapr_logs": [
"2024-08-19 21:40:10 IST INFO Running AnsiballZ_wait_for_cldb.sh
2024-08-19 21:40:13 IST INFO Command: sudo -E -n -u mapr timeout -s HUP 62m /opt/mapr/bin/maprcli node cldbmaster -noheader, Status: 0, Result: ServerID: 2011457914988887104 HostName: node3.hpevolt.com "
],
"msg": "AnsiballZ_wait_for_cldb.sh passed"
}
2024-08-19 21:40:16.784: debug( {
"msg": "CLDB service will come on-line after Zookeeper quorum is achieved which requires the other control nodes to be installed. Please proceed with installation on remaining control nodes"
}) -> ok: {
"msg": "CLDB service will come on-line after Zookeeper quorum is achieved which requires the other control nodes to be installed. Please proceed with installation on remaining control nodes"
}
I tried to start the CLDB service manually, but it fails:
[mapr@node1 ~]$ sudo systemctl restart mapr-cldb.service
Job for mapr-cldb.service failed because the control process exited with error code.
See "systemctl status mapr-cldb.service" and "journalctl -xe" for details.
[mapr@node1 ~]$ sudo systemctl status mapr-cldb.service
● mapr-cldb.service - LSB: Start MapR Control Node services
Loaded: loaded (/etc/rc.d/init.d/mapr-cldb; generated)
Active: failed (Result: exit-code) since Tue 2024-08-20 12:18:11 IST; 37s ago
Docs: man:systemd-sysv-generator(8)
Process: 1496797 ExecStart=/etc/rc.d/init.d/mapr-cldb start (code=exited, status=1/FAILURE)
Aug 20 12:18:10 node1.hpevolt.com systemd[1]: Starting LSB: Start MapR Control Node services...
Aug 20 12:18:11 node1.hpevolt.com mapr-cldb[1496797]: CLDB running as process 388095. Stop it
Aug 20 12:18:11 node1.hpevolt.com systemd[1]: mapr-cldb.service: Control process exited, code=exited status=1
Aug 20 12:18:11 node1.hpevolt.com systemd[1]: mapr-cldb.service: Failed with result 'exit-code'.
Aug 20 12:18:11 node1.hpevolt.com systemd[1]: Failed to start LSB: Start MapR Control Node services.
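For reference, the "CLDB running as process 388095. Stop it" message above usually means an older CLDB process is still alive, so the restart refuses to start a second one. A minimal check-and-retry sequence could look like this (a sketch only; passing "stop" to the LSB init script is an assumption based on the unit file shown above, not a documented procedure):
$ ps -fp 388095                            # is the CLDB process named in the error still running?
$ sudo /etc/rc.d/init.d/mapr-cldb stop     # assumed: the same LSB script systemd wraps also accepts "stop"
$ sudo systemctl restart mapr-cldb.service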
08-19-2024 11:58 PM
Re: HPE-Ezmeral data fabric Installation is not proceeding at 72%
Hi,
Could you please share the output of the command below?
$ ls -l /etc/init.d
Thanks,
HPE Support
[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]

08-20-2024 01:13 AM - edited 08-20-2024 01:34 AM
Re: HPE-Ezmeral data fabric Installation is not proceeding at 72%
Hi Satish,
Thank you for your reply.
Here is the output of the command:
[mapr@node1 ~]$ ls -l /etc/init.d
lrwxrwxrwx. 1 root root 11 May 15 2023 /etc/init.d -> rc.d/init.d
08-20-2024 01:40 AM - last edited on 09-16-2024 02:13 AM by support_s
Re: HPE-Ezmeral data fabric Installation is not proceeding at 72%
Hi,
Please follow the steps below:
1. On all nodes except the installer node:
a. Remove all MapR packages:
$ yum remove $(rpm -qa | grep mapr)
b. Remove the MapR directory
$ rm -rf /opt/mapr
2. On the installer node:
a. Remove all MapR packages except those related to the installer:
$ yum remove $(rpm -qa | grep mapr | grep -v installer)
b. Do not remove the /opt/mapr directory.
3. Retry the installation. (A scripted sketch of steps 1 and 2 follows below.)
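For convenience, steps 1 and 2 can be run from the installer node in one pass, roughly like this (a sketch only: the node list is a placeholder, and passwordless ssh plus sudo rights for the admin user are assumed):
$ NODES="nodeX nodeY nodeZ"   # placeholder: every cluster node except the installer node
$ for n in $NODES; do
>   ssh "$n" 'pkgs=$(rpm -qa | grep mapr); [ -n "$pkgs" ] && sudo yum remove -y $pkgs; sudo rm -rf /opt/mapr'
> done
$ sudo yum remove -y $(rpm -qa | grep mapr | grep -v installer)   # installer node only; keep /opt/mapr in place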
Thanks,
HPE Ezmeral Support.
[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]

08-21-2024 01:21 AM - last edited on 09-16-2024 02:13 AM by support_s
Re: HPE-Ezmeral data fabric Installation is not proceeding at 72%
Thank you so much for helping.
Now one node has failed at 81% with the error below:
2024-08-21 12:13:00.653: Task: mapr_conf.py( {
"data": "{{ mapr.node|to_json }}",
"is_update": "{{ is_update | default(false) }}",
"is_upgrade": "{{ is_upgrade | default(false) }}",
"mapr_home": "{{ mapr_home }}",
"template_dir": "/tmp/installer/services/templates",
"timeout": "{{timeout.standard}}"
}) -> failed => {
"_ansible_no_log": false,
"changed": false,
"mapr_logs": "2024-08-21 12:10:55 IST INFO svcs_intersection: ['mapr-apiserver-7.3.0', 'mapr-cldb-7.3.0', 'mapr-gateway-7.3.0', 'mapr-grafana-7.5.10', 'mapr-spark-master-3.3.2', 'mapr-opentsdb-2.4.1', 'mapr-webserver-7.3.0', 'mapr-zookeeper-7.3.0', 'mapr-resourcemanager-3.3.4']\
2024-08-21 12:10:55 IST INFO all_svcs_set: {'mapr-fileserver-7.3.0', 'mapr-mastgateway-7.3.0', 'mapr-librdkafka-2.6.1', 'mapr-apiserver-7.3.0', 'mapr-asynchbase-1.8.2', 'mapr-cldb-7.3.0', 'mapr-mysql', 'mapr-gateway-7.3.0', 'mapr-historyserver-3.3.4', 'mapr-spark-master-3.3.2', 'mapr-kafka-2.6.1', 'mapr-kafka-schema-registry-6.0.0', 'mapr-opentsdb-2.4.1', 'mapr-webserver-7.3.0', 'mapr-nifi-1.19.1', 'mapr-hive-client-3.1.3', 'mapr-spark-client-3.3.2', 'mapr-resourcemanager-3.3.4', 'mapr-hbase-1.4.14', 'mapr-core-7.3.0', 'mapr-s3server-7.3.0', 'mapr-spark-historyserver-3.3.2', 'mapr-nodemanager-3.3.4', 'mapr-kafka-rest-6.0.0', 'mapr-grafana-7.5.10', 'mapr-spark-thriftserver-3.3.2', 'mapr-hiveserver2-3.1.3', 'mapr-kafka-connect-hdfs-6.0.0', 'mapr-spark-slave-3.3.2', 'mapr-hivewebhcat-3.1.3', 'mapr-collectd-5.12.0', 'mapr-zookeeper-7.3.0', 'mapr-kafka-connect-jdbc-6.0.0', 'mapr-hivemetastore-3.1.3'}\
2024-08-21 12:10:55 IST INFO host_svcs_set: {'mapr-fileserver-7.3.0', 'mapr-mastgateway-7.3.0', 'mapr-librdkafka-2.6.1', 'mapr-asynchbase-1.8.2', 'mapr-mysql', 'mapr-historyserver-3.3.4', 'mapr-kafka-2.6.1', 'mapr-kafka-schema-registry-6.0.0', 'mapr-nifi-1.19.1', 'mapr-hive-client-3.1.3', 'mapr-spark-client-3.3.2', 'mapr-hbase-1.4.14', 'mapr-core-7.3.0', 'mapr-s3server-7.3.0', 'mapr-spark-historyserver-3.3.2', 'mapr-nodemanager-3.3.4', 'mapr-kafka-rest-6.0.0', 'mapr-spark-thriftserver-3.3.2', 'mapr-hiveserver2-3.1.3', 'mapr-kafka-connect-hdfs-6.0.0', 'mapr-spark-slave-3.3.2', 'mapr-hivewebhcat-3.1.3', 'mapr-collectd-5.12.0', 'mapr-kafka-connect-jdbc-6.0.0', 'mapr-hivemetastore-3.1.3'}\
2024-08-21 12:10:55 IST WARN These services ['mapr-apiserver-7.3.0', 'mapr-cldb-7.3.0', 'mapr-gateway-7.3.0', 'mapr-grafana-7.5.10', 'mapr-spark-master-3.3.2', 'mapr-opentsdb-2.4.1', 'mapr-webserver-7.3.0', 'mapr-zookeeper-7.3.0', 'mapr-resourcemanager-3.3.4'] are inconsistent between /hosts and /config\
2024-08-21 12:10:55 IST DEBUG **logline hidden due to sensitive data**\
2024-08-21 12:10:58 IST DEBUG Command: 'timeout -s HUP 2m hadoop fs -mkdir -p /installer/hive-3.1.3/', Status: '0', Result: '2024-08-21 12:10:57,3888 :1831 peerid 1a8d96c6d16e4740 in binding 7f2d816ed560, conn 7f2d816ed6b0, ip 192.168.122.1:5692 doesn't match peerid 6207044c2f5c0960 in rpc hdr from 192.168.122.1:5692
2024-08-21 12:10:58,3908 :1831 peerid 1a8d96c6d16e4740 in binding 7f2d816ed560, conn 7f2d816ed6b0, ip 192.168.122.1:5692 doesn't match peerid 6207044c2f5c0960 in rpc hdr from 192.168.122.1:5692'\
2024-08-21 12:10:58 IST INFO Putting /opt/mapr/hive/hive-3.1.3/lib/hive-accumulo-handler-3.1.3.200-eep-911.jar via hadoop fs -put to hive-3.1.3/\
2024-08-21 12:10:58 IST DEBUG **logline hidden due to sensitive data**\
2024-08-21 12:12:58 IST WARN Command 'b'timeout -s HUP 2m hadoop fs -put -f /opt/mapr/hive/hive-3.1.3/lib/hive-accumulo-handler-3.1.3.200-eep-911.jar /installer/hive-3.1.3/'' timed out\
2024-08-21 12:12:58 IST ERROR Command: 'timeout -s HUP 2m hadoop fs -put -f /opt/mapr/hive/hive-3.1.3/lib/hive-accumulo-handler-3.1.3.200-eep-911.jar /installer/hive-3.1.3/', Status: '124', Result: '2024-08-21 12:11:00,6292 :1831 peerid 1a8d96c6d16e4740 in binding 7fc00d8352d0, conn 7fc00d835420, ip 192.168.122.1:5692 doesn't match peerid 6207044c2f5c0960 in rpc hdr from 192.168.122.1:5692
2024-08-21 12:11:01,6315 :1831 peerid 1a8d96c6d16e4740 in binding 7fc00d8352d0, conn 7fc00d835420, ip 192.168.122.1:5692 doesn't match peerid 6207044c2f5c0960 in rpc hdr from 192.168.122.1:5692
2024-08-21 12:11:03,6334 :1831 peerid 1a8d96c6d16e4740 in binding 7fc00d8352d0, conn 7fc00d835420, ip 192.168.122.1:5692 doesn't match peerid 6207044c2f5c0960 in rpc hdr from 192.168.122.1:5692
2024-08-21 12:11:06,6355 :1831 peerid 1a8d96c6d16e4740 in binding 7fc00d8352d0, conn 7fc00d835420, ip 192.168.122.1:5692 doesn't match peerid 6207044c2f5c0960 in rpc hdr from 192.168.122.1:5692
2024-08-21 12:11:10,6379 :1831 peerid 1a8d96c6d16e4740 in binding 7fc00d8352d0, conn 7fc00d835420, ip 192.168.122.1:5692 doesn't match peerid 6207044c2f5c0960 in rpc hdr from 192.168.122.1:5692
2024-08-21 12:11:15,6397 :1831 peerid 1a8d96c6d16e4740 in binding 7fc00d8352d0, conn 7fc00d835420, ip 192.168.122.1:5692 doesn't match peerid 6207044c2f5c0960 in rpc hdr from 192.168.122.1:5692
2024-08-21 12:11:21,6414 :1831 peerid 1a8d96c6d16e4740 in binding 7fc00d8352d0, conn 7fc00d835420, ip 192.168.122.1:5692 doesn't match peerid 6207044c2f5c0960 in rpc hdr from 192.168.122.1:5692
2024-08-21 12:11:28,6431 :1831 peerid 1a8d96c6d16e4740 in binding 7fc00d8352d0, conn 7fc00d835420, ip 192.168.122.1:5692 doesn't match peerid 6207044c2f5c0960 in rpc hdr from 192.168.122.1:5692
2024-08-21 12:11:36,6456 :1831 peerid 1a8d96c6d16e4740 in binding 7fc00d8352d0, conn 7fc00d835420, ip 192.168.122.1:5692 doesn't match peerid 6207044c2f5c0960 in rpc hdr from 192.168.122.1:5692
2024-08-21 12:11:45,6479 :1831 peerid 1a8d96c6d16e4740 in binding 7fc00d8352d0, conn 7fc00d835420, ip 192.168.122.1:5692 doesn't match peerid 6207044c2f5c0960 in rpc hdr from 192.168.122.1:5692
2024-08-21 12:11:54,6499 :1831 peerid 1a8d96c6d16e4740 in binding 7fc00d8352d0, conn 7fc00d835420, ip 192.168.122.1:5692 doesn't match peerid 6207044c2f5c0960 in rpc hdr from 192.168.122.1:5692
2024-08-21 12:11:00,6294 ERROR Client fc/client.cc:12922 Thread: 1528656 rpc err Connection reset by peer(104) 28.21 to 172.16.41.222:5692, fid 2049.16.2, upd 0
2024-08-21 12:12:03,6529 :1831 peerid 1a8d96c6d16e4740 in binding 7fc00d8352d0, conn 7fc00d835420, ip 192.168.122.1:5692 doesn't match peerid 6207044c2f5c0960 in rpc hdr from 192.168.122.1:5692
2024-08-21 12:12:12,6554 :1831 peerid 1a8d96c6d16e4740 in binding 7fc00d8352d0, conn 7fc00d835420, ip 192.168.122.1:5692 doesn't match peerid 6207044c2f5c0960 in rpc hdr from 192.168.122.1:5692
2024-08-21 12:12:21,6577 :1831 peerid 1a8d96c6d16e4740 in binding 7fc00d8352d0, conn 7fc00d835420, ip 192.168.122.1:5692 doesn't match peerid 6207044c2f5c0960 in rpc hdr from 192.168.122.1:5692
2024-08-21 12:12:30,6604 :1831 peerid 1a8d96c6d16e4740 in binding 7fc00d8352d0, conn 7fc00d835420, ip 192.168.122.1:5692 doesn't match peerid 6207044c2f5c0960 in rpc hdr from 192.168.122.1:5692
2024-08-21 12:12:39,6625 :1831 peerid 1a8d96c6d16e4740 in binding 7fc00d8352d0, conn 7fc00d835420, ip 192.168.122.1:5692 doesn't match peerid 6207044c2f5c0960 in rpc hdr from 192.168.122.1:5692
2024-08-21 12:12:48,6646 :1831 peerid 1a8d96c6d16e4740 in binding 7fc00d8352d0, conn 7fc00d835420, ip 192.168.122.1:5692 doesn't match peerid 6207044c2f5c0960 in rpc hdr from 192.168.122.1:5692
2024-08-21 12:12:57,6664 :1831 peerid 1a8d96c6d16e4740 in binding 7fc00d8352d0, conn 7fc00d835420, ip 192.168.122.1:5692 doesn't match peerid 6207044c2f5c0960 in rpc hdr from 192.168.122.1:5692'\
2024-08-21 12:12:58 IST WARN Command timed out : timeout -s HUP 2m hadoop fs -put -f /opt/mapr/hive/hive-3.1.3/lib/hive-accumulo-handler-3.1.3.200-eep-911.jar /installer/hive-3.1.3/"
}
Unable to copy /opt/mapr/hive/hive-3.1.3/lib/hive*.jar to hive-3.1.3/ 'in <string>' requires string as left operand, not bytes, stack: Traceback (most recent call last):
File "/tmp/ansible_mapr_conf.py_payload_z1k1vfmc/ansible_mapr_conf.py_payload.zip/ansible/modules/mapr_conf/py.py", line 231, in run
File "/tmp/ansible_mapr_conf.py_payload_z1k1vfmc/ansible_mapr_conf.py_payload.zip/ansible/modules/mapr_conf/py.py", line 152, in copy_from_local
File "/tmp/ansible_mapr_conf.py_payload_z1k1vfmc/ansible_mapr_conf.py_payload.zip/ansible/modules/mapr_conf/py.py", line 169, in cmd_retry
TypeError: 'in <string>' requires string as left operand, not bytes
Other nodes failed at 95% without any error, at this step:
Custom Post Eco Configure playbook hook
Please suggest a solution for this.
08-21-2024 02:26 AM
Re: HPE-Ezmeral data fabric Installation is not proceeding at 72%
The IP below belongs to the hypervisor (virt-manager):
192.168.122.1
You will need to set the SUBNET during installation.
ip route should show the subnet for your environment.
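For example, a quick way to tell the bridge route apart from the cluster subnet (the 172.16.41.0/24 value below is only illustrative, inferred from the 172.16.41.x address in the earlier log):
$ ip route          # the 192.168.122.0/24 route belongs to the libvirt default bridge (usually virbr0)
$ ip -4 addr show   # confirm which interface actually carries the cluster subnet, e.g. 172.16.41.0/24
# The SUBNET value mentioned above (MAPR_SUBNETS in Data Fabric terms) would then point at that CIDR, not at 192.168.122.0/24.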
Thanks,
HPE Ezmeral Support
I work at HPE
HPE Support Center offers support for your HPE services and products when and how you need it. Get started with HPE Support Center today.
[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]

08-21-2024 09:40 AM
Re: HPE-Ezmeral data fabric Installation is not proceeding at 72%
Yes, I am using only the cluster subnet, not this virtual bridge IP.
08-21-2024 10:09 PM - last edited on 09-16-2024 02:13 AM by support_s
Re: HPE-Ezmeral data fabric Installation is not proceeding at 72%
Hi,
Please follow the steps below and give it a try.
$ systemctl stop libvirtd.service
If the bridge network is not needed, please remove it from the node:
$ ip link delete <Interface>
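Putting that together for libvirt's default bridge (virbr0 is libvirt's usual default bridge name; confirm the interface that owns 192.168.122.1 on your hosts before deleting anything):
$ sudo systemctl stop libvirtd.service
$ ip -4 addr show virbr0        # verify this is the interface holding 192.168.122.1
$ sudo ip link set virbr0 down
$ sudo ip link delete virbr0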
Thanks,
HPE Ezmeral Support
[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]

08-23-2024 05:27 AM
Solution
Hello,
Let us know if you were able to resolve the issue.
If you have no further queries and you are satisfied with the answer, kindly mark the topic as Solved so that it is helpful for all community members.
Please click the "Thumbs Up/Kudo" icon to give a Kudo.
Thank you for being a valuable HPE community member.
08-27-2024 03:13 AM - last edited on 09-16-2024 02:13 AM by support_s
Re: HPE-Ezmeral data fabric Installation is not proceeding at 72%
Hi Satish, thank you so much for helping. The installation is successful now.
Service verification status (one line per service; statuses are per node, left to right):
Apiserver: NOT_IMPLEMENTED NOT_IMPLEMENTED NOT_IMPLEMENTED NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED
CLDB: NOT_IMPLEMENTED NOT_IMPLEMENTED NOT_IMPLEMENTED NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED
Collectd: RUNNING_NOT_RESPONDING RUNNING_NOT_RESPONDING RUNNING_NOT_RESPONDING VERIFIED VERIFIED VERIFIED VERIFIED
File Server: NOT_IMPLEMENTED NOT_IMPLEMENTED NOT_IMPLEMENTED NOT_IMPLEMENTED NOT_IMPLEMENTED NOT_IMPLEMENTED NOT_IMPLEMENTED
Gateway: NOT_IMPLEMENTED NOT_IMPLEMENTED NOT_IMPLEMENTED NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED
Grafana: VERIFIED VERIFIED VERIFIED NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED
History Server: NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED VERIFIED NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED
Hive Metastore: NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED NOT_IMPLEMENTED NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED
Hive Server 2: NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED NOT_IMPLEMENTED NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED
Hive WebHCat: NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED RUNNING_NOT_RESPONDING NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED
Apache Kafka Connect: NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED VERIFIED VERIFIED VERIFIED VERIFIED
Apache Kafka REST API: NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED VERIFIED VERIFIED VERIFIED VERIFIED
Apache Kafka Schema Registry: NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED NOT_RUNNING NOT_RUNNING NOT_RUNNING VERIFIED
Mastgateway: NOT_IMPLEMENTED NOT_IMPLEMENTED NOT_IMPLEMENTED NOT_IMPLEMENTED NOT_IMPLEMENTED NOT_IMPLEMENTED NOT_IMPLEMENTED
NiFi: NOT_IMPLEMENTED NOT_IMPLEMENTED NOT_IMPLEMENTED NOT_IMPLEMENTED NOT_IMPLEMENTED NOT_IMPLEMENTED NOT_IMPLEMENTED
YARN Node Manager: NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED VERIFIED VERIFIED VERIFIED VERIFIED
OpenTSDB: NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED VERIFIED VERIFIED VERIFIED
YARN Resource Manager: VERIFIED VERIFIED VERIFIED NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED
Objectstore: NOT_IMPLEMENTED NOT_IMPLEMENTED NOT_IMPLEMENTED NOT_IMPLEMENTED NOT_IMPLEMENTED NOT_IMPLEMENTED NOT_IMPLEMENTED
Spark History Server: NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED NOT_RUNNING NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED
Spark Master: NOT_RUNNING NOT_RUNNING NOT_RUNNING NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED
Spark Thrift Server: NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED NOT_RUNNING NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED
Zookeeper: VERIFIED VERIFIED VERIFIED NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED NOT_INSTALLED
But in the service verification I am seeing the status above; is it expected?
08-28-2024 08:42 AM - last edited on 09-16-2024 02:13 AM by support_s
Re: HPE-Ezmeral data fabric Installation is not proceeding at 72%
Hi,
Can you share the output of the following command?
# maprcli node list -columns svc,csvc
[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]
