Data Fabric Development Environment not starting
10-11-2023 02:17 AM - last edited on 10-12-2023 09:21 PM by support_s
Hi,
I tried to run the Data Fabric Development Environment on Mac.
Running the Development Environment Script (hpe.com)
Following the steps, the Docker container starts. But when I go into the container with ssh root@localhost -p 2222 and look in the /opt/mapr/logs/warden.log file, it says:
Warden started
In sysVol
head: cannot open '/opt/mapr/conf/mapr-clusters.conf' for reading: No such file or directory
It seems that the mapr-clusters.conf file doesn't exist, even though it is needed.
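For reference, a minimal way to reproduce the check inside the container (assuming the default SSH port 2222 from the guide) is:
ssh root@localhost -p 2222
# inside the container: inspect the Warden log and look for the cluster config
tail -n 50 /opt/mapr/logs/warden.log
ls -l /opt/mapr/conf/mapr-clusters.conf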
Regards
Glenn
- Tags:
- Ezmeral
10-12-2023 12:00 PM
Re: Data Fabric Development Environment not starting
I tried reproducing this but the Docker image appears to be working for me.
# wget https://raw.githubusercontent.com/mapr-demos/mapr-db-720-getting-started/main/mapr_devsandbox_container_setup.sh
# chmod +x mapr_devsandbox_container_setup.sh
# ./mapr_devsandbox_container_setup.sh -nwiterface ens192
Wait 10 minutes
# ssh root@localhost -p 2222
root@localhost's password:
Welcome to Ubuntu 20.04.5 LTS (GNU/Linux 3.10.0-1160.99.1.el7.x86_64 x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
This system has been minimized by removing packages and content that are
not required on a system that users do not log into.
To restore this content, you can run the 'unminimize' command.
Last login: Thu Oct 12 18:54:13 2023 from 172.17.0.1
root@maprdemo:~# hadoop fs -ls /
Found 4 items
drwxr-xr-x - mapr mapr 5 2023-10-12 18:50 /apps
drwxrwxrwx - mapr mapr 0 2023-10-12 18:49 /tmp
drwxr-xr-x - mapr mapr 1 2023-10-12 18:48 /user
drwxr-xr-x - mapr mapr 1 2023-10-12 18:51 /var
root@maprdemo:~# cat /opt/mapr/conf/mapr-clusters.conf
maprdemo.mapr.io secure=true 172.17.0.2:7222
root@maprdemo:~# cat /opt/mapr/MapRBuildVersion
7.4.0.0.20230728133744.GA
root@maprdemo:~# jps
21509 FsShell
19381 Drillbit
20902 AdminApplication
26470 Gateway
22390 GetJavaProperty
25323 WardenMain
22653 Jps
14541 QuorumPeerMain
27726 CLDB
1566 DataAccessGatewayApplication
I'm wondering if you were checking the state of the cluster while the configure.sh script was still running? That script is what configures the cluster services, the storage pool, etc.
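If it helps, a quick way to check from inside the container whether configuration is still in progress (a rough sketch; the process and file names are the ones seen in this thread) is:
ssh root@localhost -p 2222
# is configure.sh (or anything it spawned) still running?
ps -ef | grep -i '[c]onfigure.sh'
# mapr-clusters.conf only appears once configuration has written it
ls -l /opt/mapr/conf/mapr-clusters.conf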
I work at HPE
[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]

10-13-2023 01:14 AM
Solution
I still get this, but I found the cause. In the script mapr_devsandbox_container_setup.sh, on line 96, it runs:
ERROR: install homebrew - run brew install bash and add export PATH=........
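Based on that message, the fix is to install a newer bash from Homebrew on the Mac host, put it on PATH, and re-run the setup script. A minimal sketch (the exact PATH value was elided in the error above, so the Homebrew prefix here is an assumption):
# on the macOS host
brew install bash
export PATH="$(brew --prefix)/bin:$PATH"   # assumed location of the Homebrew bash
bash ./mapr_devsandbox_container_setup.sh  # re-run the script with the newer bash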
❯ ssh root@localhost -p 2222
root@localhost's password:
Welcome to Ubuntu 20.04.5 LTS (GNU/Linux 6.3.13-linuxkit x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
This system has been minimized by removing packages and content that are
not required on a system that users do not log into.
To restore this content, you can run the 'unminimize' command.
The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.
root@maprdemo:~# hadoop fs -ls /
2023-10-13 08:08:30,557 ERROR util.MapRCommonSecurityUtil: Failed to parse mapr-clusters.conf: /opt/mapr/conf/mapr-clusters.conf (No such file or directory)
2023-10-13 08:08:33,224 WARN fs.MapRFileSystem: Could not find any cluster, defaulting to localhost
ls: failure to authenticate to cluster 127.0.0.1:7222
root@maprdemo:~#
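After the bash fix and a fresh run of the setup script, the same checks from the earlier reply should pass (a sketch; the exact listing will differ by build):
ssh root@localhost -p 2222
cat /opt/mapr/conf/mapr-clusters.conf   # should now exist and list the cluster, e.g. maprdemo.mapr.io
hadoop fs -ls /                         # should list /apps, /tmp, /user and /var instead of failing to authenticate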