SGLX A.12.80.06 - cmcheckconf return error with NTP checks
06-17-2024 12:17 PM - last edited on 06-17-2024 08:48 PM by support_s
Hello,
We encountered the following error when executing the "cmcheckconf -v" command:
Begin miscellaneous cluster level checks
/var/tmp/master_cluster_script.env: line 3: SG_test-sg-rvc_NTP_SERVER=1: command not found
/var/tmp/master_cluster_script.env: line 4: SG_test-sg-vc_NTP_SERVER=1: command not found
WARNING: NTP Server does not seem to be configured on node(s): test-sg-rvc test-sg-vc
Applications that require time synchronization among nodes may not work properly
Miscellaneous cluster level checks completed [OK]
Having dug into the scripts a little, we concluded that the problem lies in the hyphens in the cluster node names.
To verify, we created an environment file with the variables and tried to source it in the shell.
For node test-sg-rvc we replaced the hyphens with underscores; for node test-sg-vc we left the name unchanged.
# cat /tmp/test.env
SG_NODES[0]=test-sg-rvc
SG_NODES[1]=test-sg-vc
SG_test_sg_rvc_NTP_SERVER=1
SG_test-sg-vc_NTP_SERVER=1
Sourcing the file gave a positive result for node test-sg-rvc and a negative result for node test-sg-vc:
# . /tmp/test.env
If 'SG_test-sg-vc_NTP_SERVER=1' is not a typo you can use command-not-found to lookup the package that contains it, like this:
cnf SG_test-sg-vc_NTP_SERVER=1
# echo $SG_test_sg_rvc_NTP_SERVER
1
# echo $SG_test-sg-vc_NTP_SERVER
-sg-vc_NTP_SERVER
There is a clear discrepancy in the standards for naming variables in the shell and naming cluster nodes.
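A possible workaround on the scripting side (a minimal sketch, not Serviceguard's actual implementation) is to sanitize the node name before embedding it in a variable name, replacing every character outside the letters/digits/underscore set with an underscore:

```shell
#!/bin/sh
# Sketch: derive a shell-safe variable name from a node name that may
# contain hyphens. 'tr -c' replaces every character NOT in the set.
node="test-sg-vc"
safe=$(printf '%s' "$node" | tr -c 'A-Za-z0-9_' '_')
echo "$safe"                          # test_sg_vc

# The sanitized name can then be used to build the variable dynamically.
eval "SG_${safe}_NTP_SERVER=1"
eval "echo \$SG_${safe}_NTP_SERVER"   # 1
```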
Has anyone encountered anything like this?
# cat /etc/os-release
NAME="SLES"
VERSION="15-SP5"
VERSION_ID="15.5"
PRETTY_NAME="SUSE Linux Enterprise Server 15 SP5"
ID="sles"
ID_LIKE="suse"
ANSI_COLOR="0;32"
CPE_NAME="cpe:/o:suse:sles:15:sp5"
DOCUMENTATION_URL="https://documentation.suse.com/"
# cmversion
A.12.80.06
Regards, YA.
06-17-2024 02:45 PM
YA, that is a very interesting problem you are seeing, but it appears to be a shell thing rather than a Serviceguard thing. You can actually reproduce it with a very simple file.
[root@rhel8614 ~]# cat test.env
silly_name=2
silly-name=2
[root@rhel8614 ~]# . ./test.env
-bash: silly-name=2: command not found
[root@rhel8614 ~]# sh ./test.env
./test.env: line 2: silly-name=2: command not found
[root@rhel8614 ~]#
I did some digging in the bash, builtin, and env man pages and could not find an explicit statement that a hyphen (dash) is disallowed in a variable name, but it clearly does not work. POSIX in fact defines a shell name as an underscore or letter followed by underscores, letters, or digits, which rules out hyphens. I also found this Stack Overflow thread that confirms you can't do it: https://stackoverflow.com/questions/61073688/how-to-use-in-the-name-of-a-bash-variable
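The shell's naming rule can be made concrete with a small check (an illustrative sketch; is_valid_name is a made-up helper, not part of any Serviceguard tooling):

```shell
#!/bin/sh
# Check a candidate against the POSIX "name" grammar:
# an underscore or letter, followed by underscores, letters, or digits.
is_valid_name() {
    printf '%s\n' "$1" | grep -Eq '^[A-Za-z_][A-Za-z0-9_]*$'
}

for n in silly_name silly-name SG_test_sg_rvc_NTP_SERVER 2bad; do
    if is_valid_name "$n"; then
        echo "$n: valid shell name"
    else
        echo "$n: NOT a valid shell name"
    fi
done
# silly_name: valid shell name
# silly-name: NOT a valid shell name
# SG_test_sg_rvc_NTP_SERVER: valid shell name
# 2bad: NOT a valid shell name
```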
If you can change your names without causing problems, I would suggest you do that, because I can see this kind of thing causing more problems down the line.
Mike
06-18-2024 12:06 AM
Re: SGLX A.12.80.06 - cmcheckconf return error with NTP checks
@Mike_Chisholm Mike, thank you very much.
Unfortunately, most of our cluster nodes have hyphens in their names, and renaming them would be very difficult. For now, we will simply ignore this error when executing the cmcheckconf command.
What would be the best way for us to proceed: file a request to update the product documentation with the cluster node naming rules, or open a support case and wait for a decision from the development team?
Could you bring this issue to the development team's attention?
Regards, Yilmaz.
06-18-2024 03:51 PM
Re: SGLX A.12.80.06 - cmcheckconf return error with NTP checks
Yilmaz,
Hi. My advice is to open a case with the support center (https://support.hpe.com) and explain the problem to them. Assuming the process works, I will most likely be notified, because I see most of the Serviceguard cases that require a product defect to be filed, and I am one of the people responsible for filing customer-encountered defects. To do that, though, I need a support case ID documenting that the issue was actually encountered by a customer.
If you can do that, it will be helpful; feel free to private-message me the case ID once it is assigned, and I will try to shepherd it through the process a bit more closely.
Of course, I cannot guarantee a fix, because that is a product development decision rather than a support decision, but I can get the defect logged and give you the defect ID so you can follow up on progress in the future if you would like. Fixed defects are documented in the Cumulative Update Release Changes for the Serviceguard release streams.
Mike