HPE Ezmeral Software platform

 
anilkumar9
Occasional Contributor

CLDB is not starting on two nodes in a three-node cluster.

I have installed EDF 7.5 on a three-node cluster. CLDB is not starting on two of the nodes, and the following errors appear in the CLDB log.

 

2023-11-21 08:50:37,980 INFO  LicenseManager [Becoming Slave Thread]: old features list:[]

2023-11-21 08:50:37,981 INFO  LicenseManager [Becoming Slave Thread]: updated features list:[NFS, MAXNODES, MAPR_TABLES, MAPR_TABLES_FULL, NFS_CLIENT_BASE, POSIX_CLIENT_GOLD, MAPR_STREAMS]

2023-11-21 08:50:38,007 INFO  LicenseManager [Becoming Slave Thread]: 0x16: unique id: 1402ec05e8d8-30393137-3436-4753-4836-313358565035-0000000516-0469762560-0469762560

2023-11-21 08:50:38,007 FATAL BecomeSlaveThread [Becoming Slave Thread]: license not found for CLDB HA: shutting down

2023-11-21 08:50:38,007 FATAL CLDB [Becoming Slave Thread]: CLDBShutdown: license not found for CLDB HA: shutting down

2023-11-21 08:50:38,013 ERROR CLDB [Becoming Slave Thread]: Thread: Thread-35 ID: 82

2023-11-21 08:50:38,013 ERROR CLDB [Becoming Slave Thread]: Thread: main-EventThread ID: 31

2023-11-21 08:50:38,013 ERROR CLDB [Becoming Slave Thread]: java.base@11.0.4/jdk.internal.misc.Unsafe.park(Native Method)

2023-11-21 08:50:38,013 ERROR CLDB [Becoming Slave Thread]: java.base@11.0.4/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)

2023-11-21 08:50:38,013 ERROR CLDB [Becoming Slave Thread]: java.base@11.0.4/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2081)

2023-11-21 08:50:38,013 ERROR CLDB [Becoming Slave Thread]: java.base@11.0.4/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:433)

2023-11-21 08:50:38,013 ERROR CLDB [Becoming Slave Thread]: app//org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:520)

2023-11-21 08:50:38,013 ERROR CLDB [Becoming Slave Thread]: Thread: qtp1689458432-58 ID: 58

2023-11-21 08:50:38,013 ERROR CLDB [Becoming Slave Thread]: java.base@11.0.4/sun.nio.ch.EPoll.wait(Native Method)

2023-11-21 08:50:38,013 ERROR CLDB [Becoming Slave Thread]: java.base@11.0.4/sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:120)

2023-11-21 08:50:38,014 ERROR CLDB [Becoming Slave Thread]: java.base@11.0.4/sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:124)

2023-11-21 08:50:38,014 ERROR CLDB [Becoming Slave Thread]: java.base@11.0.4/sun.nio.ch.SelectorImpl.select(SelectorImpl.java:141)

2023-11-21 08:50:38,014 ERROR CLDB [Becoming Slave Thread]: app//org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183)

2023-11-21 08:50:38,014 ERROR CLDB [Becoming Slave Thread]: app//org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190)

2023-11-21 08:50:38,014 ERROR CLDB [Becoming Slave Thread]: app//org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606)

2023-11-21 08:50:38,014 ERROR CLDB [Becoming Slave Thread]: app//org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543)

2023-11-21 08:50:38,014 ERROR CLDB [Becoming Slave Thread]: app//org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362)

2023-11-21 08:50:38,014 ERROR CLDB [Becoming Slave Thread]: app//org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186)

2023-11-21 08:50:38,014 ERROR CLDB [Becoming Slave Thread]: app//org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173)

2023-11-21 08:50:38,014 ERROR CLDB [Becoming Slave Thread]: app//org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137)

2023-11-21 08:50:38,014 ERROR CLDB [Becoming Slave Thread]: app//org.eclipse.jetty.io.ManagedSelector$$Lambda$232/0x0000000840468c40.run(Unknown Source)

2023-11-21 08:50:38,014 ERROR CLDB [Becoming Slave Thread]: app//org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883)

2023-11-21 08:50:38,014 ERROR CLDB [Becoming Slave Thread]: app//org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034)

2023-11-21 08:50:38,014 ERROR CLDB [Becoming Slave Thread]: java.base@11.0.4/java.lang.Thread.run(Thread.java:834)

2023-11-21 08:50:38,014 ERROR CLDB [Becoming Slave Thread]: Thread: Thread-13 ID: 44

2023-11-21 08:50:38,015 ERROR CLDB [Becoming Slave Thread]: Thread: Thread-52 ID: 99

2023-11-21 08:50:38,015 ERROR CLDB [Becoming Slave Thread]: Thread: cAdmin-2 ID: 69

2023-11-21 08:50:38,015 ERROR CLDB [Becoming Slave Thread]: java.base@11.0.4/jdk.internal.misc.Unsafe.park(Native Method)

2023-11-21 08:50:38,015 ERROR CLDB [Becoming Slave Thread]: java.base@11.0.4/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)

2023-11-21 08:50:38,015 ERROR CLDB [Becoming Slave Thread]: java.base@11.0.4/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2081)

2023-11-21 08:50:38,015 ERROR CLDB [Becoming Slave Thread]: java.base@11.0.4/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:433)

2023-11-21 08:50:38,015 ERROR CLDB [Becoming Slave Thread]: java.base@11.0.4/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1054)

2023-11-21 08:50:38,015 ERROR CLDB [Becoming Slave Thread]: java.base@11.0.4/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1114)

2023-11-21 08:50:38,015 ERROR CLDB [Becoming Slave Thread]: java.base@11.0.4/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor

________________________________________________________________________________

 

Please let me know how to fix this issue.

 

Thanks

Anil

4 REPLIES
Skc_Grd
Neighborhood Moderator

Re: CLDB is not starting on two nodes in a three-node cluster.

Hello,

Could you please share the output of the commands below?

#maprcli license list -json 
#maprcli license showid 
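
If it helps, you can also list the features your installed license actually enables; this is just an additional check that should help confirm whether an HA-capable feature is present:

#maprcli license apps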

Thanks,



I work at HPE
[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]
Skc_Grd
Neighborhood Moderator

Re: CLDB is not starting on two nodes in a three-node cluster.

Hello,

It's because you do not have a valid license for CLDB HA; running more than one CLDB requires a license that includes that feature.

2023-11-21 08:50:38,007 FATAL BecomeSlaveThread [Becoming Slave Thread]: license not found for CLDB HA: shutting down
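
If you plan to stay on the current license, one possible workaround (a rough sketch only; node1/node2/node3 are placeholder hostnames, and please verify against the EDF documentation for your release) is to run a single CLDB and stop it on the other two nodes. Stop CLDB on the two extra nodes and remove its role so Warden does not restart it:

#maprcli node services -name cldb -action stop -nodes node2 node3
#rm /opt/mapr/roles/cldb

Then re-run configure.sh on every node so the cluster points at the single remaining CLDB and your ZooKeeper quorum (you may also need to remove the mapr-cldb package on the two nodes):

#/opt/mapr/server/configure.sh -C node1 -Z node1,node2,node3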

Thanks,



I work at HPE
[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]
anilkumar9
Occasional Contributor

Re: CLDB is not starting on two nodes in a three-node cluster.

$ maprcli license list -json
{
"timestamp":1700544390823,
"timeofday":"2023-11-21 10:56:30.823 GMT+0530 AM",
"status":"OK",
"total":3,
"data":[
{
"id":"7gVDn6lT9ao3RYgdX5mPGt5inTJQsMaM+rhWkVfFVO4=",
"description":"HPE Ezmeral Data Fabric Community",
"issue":"Nov 21, 2023",
"usageBasedLicense":"false",
"maxnodes":"9999",
"isAdditionalFeature":false,
"deletable":true,
"grace":true,
"license":"clusterid: \"5992638758152584922\"\nversion: \"4.0\"\ncustomerid: \"YHAA5DGYYH7Y\"\nissuer: \"Hewlett Packard Enterprise (HPE)\"\nlicType: Registered\ndescription: \"HPE Ezmeral Data Fabric Community\"\nenforcement: HARD\ngracePeriod: 0\nissuedate: 1700536389\nissuedateStr: \"Tue Nov 21 03:13:09 GMT 2023\"\ncapabilities {\n feature: NFS\n name: \"NFS\"\n permission: ALLOW\n featureData {\n maxNfsNodes: \"1\"\n }\n}\ncapabilities {\n feature: MAXNODES\n name: \"Max Nodes in Cluster\"\n permission: ALLOW\n featureData {\n maxNodes: \"9999\"\n }\n}\ncapabilities {\n feature: MAPR_TABLES\n name: \"MapR Tables\"\n permission: ALLOW\n}\ncapabilities {\n feature: MAPR_STREAMS\n name: \"MapR Streams\"\n permission: ALLOW\n}\nhash: \"7gVDn6lT9ao3RYgdX5mPGt5inTJQsMaM+rhWkVfFVO4=\"\n"
},
{
"id":"6GvPuQpoUF5p0YrM8/Y1GPqc0U+Iupi4Hq2w3L422f0=",
"description":"Base MapR POSIX Client for fast secure file access",
"usageBasedLicense":"false",
"posixnodes":"10",
"isAdditionalFeature":true,
"deletable":false,
"grace":true,
"license":"version: \"4.0\"\ncustomerid: \"BaseLicenseUser\"\nissuer: \"MapR Technologies,
Inc.\"\nlicType: AdditionalFeaturesBase\ndescription: \"Base MapR POSIX Client for fast secure file access\"\nenforcement: HARD\ncapabilities {\n feature: NFS_CLIENT_BASE\n name: \"MapR POSIX CLIENT\"\n permission: ALLOW\n featureData {\n maxNfsClientNodes: \"10\"\n }\n}\nhash: \"6GvPuQpoUF5p0YrM8/Y1GPqc0U+Iupi4Hq2w3L422f0=\"\n"
},
{
"id":"qJq+jZfykO+oPLkNlb33oE4NiZbgurgoQe2XT52jZsw=",
"description":"MapR Base Edition",
"usageBasedLicense":"false",
"goldposixnodes":"10",
"isAdditionalFeature":false,
"deletable":false,
"grace":true,
"license":"version: \"4.0\"\ncustomerid: \"BaseLicenseUser\"\nissuer: \"MapR Technologies,
Inc.\"\nlicType: Base\ndescription: \"MapR Base Edition\"\nenforcement: HARD\ncapabilities {\n feature: MAXNODES\n name: \"Max Nodes in Cluster\"\n permission: ALLOW\n featureData {\n maxNodes: \"unlimited\"\n }\n}\ncapabilities {\n feature: MAPR_TABLES\n name: \"MapR Tables\"\n permission: ALLOW\n}\ncapabilities {\n feature: MAPR_TABLES_FULL\n name: \"MapR Tables Full\"\n permission: ALLOW\n}\ncapabilities {\n feature: MAPR_STREAMS\n name: \"MapR Streams\"\n permission: ALLOW\n}\ncapabilities {\n feature: POSIX_CLIENT_GOLD\n name: \"MapR POSIX Client for Containerized Apps\"\n permission: ALLOW\n featureData {\n maxNfsClientNodes: \"10\"\n }\n}\nhash: \"qJq+jZfykO+oPLkNlb33oE4NiZbgurgoQe2XT52jZsw=\"\n"
}
]
}
[mapr@dragonpsqt1 logs]$ maprcli license showid
id
5992638758152584922
[mapr@dragonpsqt1 logs]$

hiteshingole
HPE Pro

Re: CLDB is not starting on two nodes in a three-node cluster.

Hello Team,

The cluster appears to be running a Community edition license, which does not include the HA feature for CLDB.
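
If CLDB HA is required, the usual path would be to obtain and apply a license that includes it, then restart CLDB on the failing nodes. A rough sketch, assuming the new license file is saved at /tmp/license.txt and node2/node3 are the nodes where CLDB shuts down (both are placeholders):

#maprcli license add -license /tmp/license.txt -is_file true
#maprcli node services -name cldb -action restart -nodes node2 node3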

 

 

I'm an HPE employee.
[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]