Community Home > Servers and Operating Systems > Operating Systems > Operating System - Linux > Problem regarding mysql cluster
10-27-2010 05:49 AM
Problem regarding mysql cluster
Hi All,
I have implemented a MySQL Cluster with this configuration: 1 management node, 4 data nodes, and 2 SQL nodes. I am stuck on a problem where my SQL node is running but shows as not connected in the management client.
I have implemented this before with the same configuration, but now it is not working. Please help me resolve this. Thanks in advance.
==== ndb_mgm -e show ====
Connected to Management Server at: 192.168.38.87:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 4 node(s)
id=2 @192.168.38.84 (mysql-5.1.35 ndb-7.0.7, starting, Nodegroup: 0, Master)
id=3 @192.168.38.85 (mysql-5.1.35 ndb-7.0.7, starting, Nodegroup: 0)
id=4 @192.168.38.86 (mysql-5.1.35 ndb-7.0.7, starting, Nodegroup: 0)
id=5 @192.168.144.114 (mysql-5.1.35 ndb-7.0.7, starting, Nodegroup: 0)
[ndb_mgmd(MGM)] 1 node(s)
id=1 @192.168.38.87 (mysql-5.1.35 ndb-7.0.7)
[mysqld(API)] 2 node(s)
id=6 (not connected, accepting connect from 192.168.38.87)
id=7 (not connected, accepting connect from 192.168.38.70)
===== My configuration is as follows: =====
Management node: 192.168.38.87
DataNode1: 192.168.38.84
DataNode2: 192.168.38.85
DataNode3: 192.168.38.86
DataNode4: 192.168.144.114
SQLNode1: 192.168.38.87
SQLNode2: 192.168.38.70
====== config.ini =====
#options affecting ndbd processes on all data nodes:
[ndbd default]
NoOfReplicas=2 # Number of replicas
DataMemory=1332M # ~1.3GB # How much memory to allocate for data storage
IndexMemory=300M # How much memory to allocate for index storage
# Note: the DataMemory and IndexMemory values above
# override the defaults.
TimeBetweenLocalCheckpoints=20
# TCP/IP options:
[tcp default]
portnumber=2202 # This is the default; however, you can use any port that is free
# for all the hosts in the cluster
# Note: It is recommended that you do not specify the port
# number at all and allow the default value to be used instead
# Management process options SQL1:
[ndb_mgmd]
hostname=192.168.38.87 # Hostname or IP address of management node
datadir=/var/lib/mysql-cluster # Directory for management node log files
# Options for data node DN1:
[ndbd]
# (one [ndbd] section per data node)
hostname=192.168.38.84 # Hostname or IP address
datadir=/var/lib/mysql-cluster # Directory for this data node's data files
id=2
# Options for data node DN2:
[ndbd]
hostname=192.168.38.85 # Hostname or IP address
datadir=/var/lib/mysql-cluster # Directory for this data node's data files
id=3
# Options for data node DN3:
[ndbd]
hostname=192.168.38.86 # Hostname or IP address
datadir=/var/lib/mysql-cluster # Directory for this data node's data files
id=4
# Options for data node DN4:
[ndbd]
hostname=192.168.144.114 # Hostname or IP address
datadir=/var/lib/mysql-cluster # Directory for this data node's data files
id=5
# SQL node options:
#Option for SQL node SQLNode1:
[mysqld]
hostname = 192.168.38.87
id=6
#Option for SQL node SQLNode2:
[mysqld]
hostname = 192.168.38.70
id=7
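With NoOfReplicas=2, NDB assigns data nodes to node groups of NoOfReplicas members each, so the four [ndbd] sections above should form two node groups; the cluster stays up as long as every group keeps at least one live node. A minimal sketch of that arithmetic (illustrative only, `node_groups` is a hypothetical helper, not a MySQL tool), using the data-node addresses from config.ini:

```python
def node_groups(data_nodes, no_of_replicas):
    """Partition data nodes into node groups of NoOfReplicas members each,
    grouped consecutively by node id, as NDB does by default."""
    if len(data_nodes) % no_of_replicas != 0:
        raise ValueError("data node count must be a multiple of NoOfReplicas")
    return [data_nodes[i:i + no_of_replicas]
            for i in range(0, len(data_nodes), no_of_replicas)]

# The four data nodes from config.ini (ids 2-5, in id order):
nodes = ["192.168.38.84", "192.168.38.85", "192.168.38.86", "192.168.144.114"]
groups = node_groups(nodes, no_of_replicas=2)
print(len(groups))  # 2 node groups; each group stores one replica of every partition
```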
========= my.cnf =============
[mysqld]
datadir=/usr/local/mysql-cluster-gpl-7.0.7-linux-i686-glibc23/data
basedir=/usr/local/mysql-cluster-gpl-7.0.7-linux-i686-glibc23/
socket=/var/lib/mysql/mysql.sock
user=mysql
# Default to using old password format for compatibility with mysql 3.x
# clients (those using the mysqlclient10 compatibility package).
old_passwords=1
# To allow mysqld to connect to a MySQL Cluster management daemon, uncomment
# these lines and adjust the connectstring as needed.
ndbcluster
ndb-connectstring="nodeid=1;host=192.168.38.87:1186"
#ndb-connectstring="nodeid=1;host=localhost:1186"
server-id=6
[client]
socket=/var/lib/mysql/mysql.sock
[mysql_cluster]
ndb-connectstring=192.168.38.87
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
[ndbd]
# If you are running a MySQL Cluster storage daemon (ndbd) on this machine,
# adjust its connection to the management daemon here.
# Note: ndbd init script requires this to include nodeid!
#connect-string="nodeid=2;host=192.168.38.87:1186"
#connect-string=192.168.41.17
[ndb_mgm]
# connection string for MySQL Cluster management tool
#connect-string="host=localhost:1186"
connect-string="nodeid=6;host=192.168.38.87:1186"
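The connectstring values above follow the `[nodeid=N;]host[:port]` format, where 1186 is the default management-server port. Note that the `[mysqld]` section of my.cnf carries `nodeid=1`, while config.ini reserves ids 6 and 7 for the SQL nodes. A minimal sketch of how such a string breaks apart (`parse_connectstring` is a hypothetical helper for illustration, not part of the MySQL tools):

```python
def parse_connectstring(cs):
    """Split an NDB connectstring into (nodeid, [(host, port), ...])."""
    nodeid = None
    hosts = []
    for part in filter(None, cs.strip('"').split(';')):
        if part.startswith('nodeid='):
            nodeid = int(part.split('=', 1)[1])
        else:
            spec = part[len('host='):] if part.startswith('host=') else part
            for h in spec.split(','):
                host, _, port = h.partition(':')
                # 1186 is the default ndb_mgmd port when none is given
                hosts.append((host, int(port) if port else 1186))
    return nodeid, hosts

# The SQL node's connectstring from the [mysqld] section of my.cnf:
print(parse_connectstring('nodeid=1;host=192.168.38.87:1186'))
# -> (1, [('192.168.38.87', 1186)])
```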