- Problem with the federation of nodes.
11-30-2023 03:54 AM - last edited on 12-07-2023 09:02 PM by support_s
I have two nodes running SimpliVity on Lenovo servers, so I have no manufacturer support. I only use these nodes for a development environment. After a reboot of the VMware hosts, we are faced with the following SimpliVity status:
# svt-federation-show
.
node01 | OmniStackVC | Alive | xxxxx | xxxxx | xxxxxx | Release 3.7.10.200 | vSphere | System x3650 M5 | Connected |
node02 | OmniStackVC | Faulty | xxxxxx | xxxxxx | xxxxxx | Release 3.7.10.200 | vSphere | System x3650 M5 | Disconnected |
.
Launching the same command from the problem node:
# svt-federation-show
Error: Thrift::SSLSocket: Could not connect to xxxxx:9190 (Connection refused)
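The "Connection refused" on port 9190 means nothing is listening on the Thrift port that svt-federation-show talks to. A minimal check with generic Linux tooling (a sketch assuming `ss` from iproute2 is available on the OVC; the port number is taken from the error above):

```shell
# Quick check: is anything listening on the SimpliVity Thrift port (9190)?
# On a healthy OVC the svtfs service should hold this port open.
if ss -tln 2>/dev/null | grep -q ':9190'; then
  msg="port 9190 is listening"
else
  msg="port 9190 is not listening (svtfs is likely down)"
fi
echo "$msg"
```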
The svtfs service is not up:
# systemctl status svtfs@0
● svtfs@0.service - SimpliVity OmniCube Instance 0
Loaded: loaded (/lib/systemd/system/svtfs@.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/svtfs@.service.d
└─01-svtfs-sva-additional.conf
Active: failed (Result: exit-code) since Thu 2023-11-30 08:13:31 UTC; 3h 22min ago
Process: 3611 ExecStopPost=/usr/share/simplivity/systemd/svtfs.sh poststop (code=exited, status=0/SUCCESS)
Process: 3457 ExecStart=/usr/share/simplivity/systemd/svtfs.sh start (code=exited, status=244)
Process: 2941 ExecStartPre=/usr/share/simplivity/systemd/svtfs.sh prestart (code=exited, status=0/SUCCESS)
Main PID: 3457 (code=exited, status=244)
Nov 30 08:13:31 omnicube systemd[1]: svtfs@0.service: Service hold-off time over, scheduling restart.
Nov 30 08:13:31 omnicube systemd[1]: svtfs@0.service: Scheduled restart job, restart counter is at 2.
Nov 30 08:13:31 omnicube systemd[1]: Stopped SimpliVity OmniCube Instance 0.
Nov 30 08:13:38 omnicube systemd[1]: svtfs@0.service: Start request repeated too quickly.
Nov 30 08:13:38 omnicube systemd[1]: svtfs@0.service: Failed with result 'exit-code'.
Nov 30 08:13:38 omnicube systemd[1]: Failed to start SimpliVity OmniCube Instance 0.
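"Start request repeated too quickly" means systemd's restart rate limiter tripped after the repeated failures; even once the root cause is fixed, the failed state usually has to be cleared before another start attempt. A hedged sketch using standard systemd commands (shown commented out, since they should only be run on the affected OVC), plus a trivial extraction of the exit status systemd reported:

```shell
# After the underlying fault is fixed, clear systemd's rate-limiter state
# and retry (commented out here; run interactively on the affected OVC):
#   systemctl reset-failed svtfs@0
#   systemctl restart svtfs@0
#   journalctl -u svtfs@0 -f

# The exit status systemd reported can be read out of the status output
# (sample line copied from the `systemctl status svtfs@0` output above):
status='Main PID: 3457 (code=exited, status=244)'
printf '%s\n' "$status" | sed -n 's/.*status=\([0-9][0-9]*\).*/\1/p'   # prints 244
```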
# tail -f $SVTLOG
2023-11-30T08:13:21.646Z INFO 0x7f6480af1880 [:] [storage.superblockmanager] superblockmanager.cpp:343 Message: Trying to read SB from device: /dev/sdc ... (from function readSuperBlocks)
2023-11-30T08:13:21.646Z INFO 0x7f6480af1880 [:] [storage.superblockmanager] superblockmanager.cpp:368 Message: VALID STATE: /dev/sdc (3DD57F0C035B2E3F5239BC40A39360991DB57370) State: 2, Usage: 2. Building SB. (from function readSuperBlocks)
2023-11-30T08:13:21.646Z ERROR 0x7f6480af1880 [:] [storage.superblockmanager] superblockmanager.cpp:423 Message: Valid superblock found, but could not validate signature! ... dropping device: /dev/sdc (Current: SIMPLIVT-564D7C69-08F6-E6BC-35E0-1F0CE66834FB /dev/sdc 600605b00c8c1e301ff7d62a207350e6 0x600605b00c8c1e301ff7d62a207350e6 ), (SBdata: SIMPLIVT-6EA56EA7-350A-4907-AABC-57323EC0588C /dev/sdc 600605b00c8c1e301ff7d62a207350e6 0x600605b00c8c1e301ff7d62a207350e6)...Continuing... (from function _buildSuperBlock)
2023-11-30T08:13:21.646Z INFO 0x7f6480af1880 [:] [storage.superblockmanager] superblockmanager.cpp:343 Message: Trying to read SB from device: /dev/sdd ... (from function readSuperBlocks)
2023-11-30T08:13:21.646Z INFO 0x7f6480af1880 [:] [storage.superblockmanager] superblockmanager.cpp:368 Message: VALID STATE: /dev/sdd (4447CA899CE55E23232E3E67B6E30D0BC071A7ED) State: 2, Usage: 1. Building SB. (from function readSuperBlocks)
2023-11-30T08:13:21.646Z ERROR 0x7f6480af1880 [:] [storage.superblockmanager] superblockmanager.cpp:423 Message: Valid superblock found, but could not validate signature! ... dropping device: /dev/sdd (Current: SIMPLIVT-564D7C69-08F6-E6BC-35E0-1F0CE66834FB /dev/sdd 600605b00c8c1e301ff7d8c74851aad1 0x600605b00c8c1e301ff7d8c74851aad1 ), (SBdata: SIMPLIVT-6EA56EA7-350A-4907-AABC-57323EC0588C /dev/sdd 600605b00c8c1e301ff7d8c74851aad1 0x600605b00c8c1e301ff7d8c74851aad1)...Continuing... (from function _buildSuperBlock)
2023-11-30T08:13:21.646Z ERROR 0x7f6480af1880 [:] [storage.devicemanager] devicemanager.cpp:441 Message: No Superblocks could be found (from function _getSuperBlockInfo)
2023-11-30T08:13:21.646Z ERROR 0x7f6480af1880 [:] [storage.devicemanager] devicemanager.cpp:311 Failed to get all devices for instance 0
2023-11-30T08:13:21.646Z FATAL 0x7f6480af1880 [:] [storage.devicemanager] devicemanager.cpp:163 Message: Build devices failed! (from function start)
2023-11-30T08:13:21.646Z FATAL 0x7f6480af1880 [:] [datapath.datapath] datapath.cpp:286 Error -1 starting Storage Manager
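The ERROR lines above carry two different cluster signatures: the one this node expects ("Current: SIMPLIVT-564D…") and the one actually found on the disks ("SBdata: SIMPLIVT-6EA5…"), which is why both devices are dropped and no superblock survives. A small sketch, fed with an abridged line copied from the log tail, that extracts and compares the two:

```shell
# Compare the signature svtfs expects with the one found on disk.
# The sample line is copied (abridged) from the log tail above.
line='dropping device: /dev/sdc (Current: SIMPLIVT-564D7C69-08F6-E6BC-35E0-1F0CE66834FB /dev/sdc ...), (SBdata: SIMPLIVT-6EA56EA7-350A-4907-AABC-57323EC0588C /dev/sdc ...)'
cur=$(printf '%s\n' "$line" | grep -o 'Current: SIMPLIVT-[0-9A-F-]*' | cut -d' ' -f2)
sb=$(printf '%s\n' "$line" | grep -o 'SBdata: SIMPLIVT-[0-9A-F-]*' | cut -d' ' -f2)
if [ "$cur" = "$sb" ]; then
  echo "signatures match"
else
  echo "signature mismatch: node expects $cur, disk carries $sb"
fi
```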
This is what I have tried so far:
# dsv-digitalvault-init --hmsuser 'administrator@vsphere.local' --hostip xxxx --hostuser root
This will delete any existing digital vault records and reinitiate new ones, are you sure you want to proceed.
Proceed? (y/n): y
Password for HMS user administrator@vsphere.local :
Password for OmniStack host user root@xxxxxx :
2023-11-30 10:42:28Z Updating the postgres user mgmt_usr with new password
2023-11-30 10:42:28Z Updating the postgres user svtaggregator with new password
2023-11-30 10:42:28Z Initializing Digital Vault with postgres user mgmt_usr svtaggregator
Successfully reinitialized digitlaVault.
# dsv-update-vcenter --server xxxxxx --user 'administrator@vsphere.local' --password 'xxxxxxxxx'
Updating certificates ...
Successfully read certificates
Successfully parsed certificate
Thumbprint : xxxxxxxxxxxxxxxxxxxxxxxx
issuer=DC = xx, DC = xxxxx, CN = xxxxx
subject=DC = xx, DC = xxxxx, CN = xxxxxx
Valid From : Jul 29 08:57:05 2010 GMT
Valid To : Sep 7 13:03:05 2028 GMT
Serial Number : xxxxxxxxxxxxxxxxxxx
Accept the certificate: Y/[N]: Y
Error: java.net.ConnectException: Connection refused (Connection refused)
# svt-session-start --debug
Unable to contact host (xxxxxxxx) to get vCenter address.
Enter vCenter Server: xxxxxxxxx
Enter username: administrator@vsphere.local
Enter password for administrator@vsphere.local:
Authenticating user: identityadministrator@vsphere.localcredential
Releasing ticket: {SVT-T-TAG}xxxxxxxxxxx
Creating clear session environment settings command file
Received ticket: {SVT-T-TAG}xxxxxxxxxxx
Received HMS IP Addresses: xxxxxxxx
HMS: xxxxxxx
Creating update session environment settings command file
Authentication completed
Successful login of administrator@vsphere.local to xxxxx
There is an error in the output of the dsv-update-vcenter command, but the session against vCenter seems to be correct... I have also restarted the OVCs and the VMware hosts...
Any ideas? Thanks in advance.
- Tags:
- storage controller
12-05-2023 01:58 AM
Re: Problem with the federation of nodes.
Hi,
There is a chance that a nostart file was created on the OVC if the OVC was not shut down gracefully.
You can try locating the file in /var/svtfs/0/ or /var/svtfs/svt-hal/0.
If you find the nostart file, please remove it with the rm command and restart the OVC.
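The check suggested above can be sketched as a small script (the two paths are the ones given in this post; run it on the OVC and restart the OVC afterwards if the marker was found):

```shell
# Look for a 'nostart' marker file in the two locations mentioned above
# and remove it if present; svtfs will refuse to start while it exists.
found=0
for d in /var/svtfs/0 /var/svtfs/svt-hal/0; do
  if [ -f "$d/nostart" ]; then
    found=1
    echo "removing $d/nostart"
    rm -f "$d/nostart"
  fi
done
[ "$found" -eq 0 ] && echo "no nostart file found in either path"
```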
[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]

12-06-2023 04:08 AM
Re: Problem with the federation of nodes.
Hello,
Let me know if you were able to resolve the issue.
If you have no further queries and are satisfied with the answer, kindly mark the topic as Solved so that it is helpful for all community members.
[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]

12-10-2023 11:38 PM - last edited on 12-11-2023 09:53 PM by Sunitha_Mod
Re: Problem with the federation of nodes.
@Aditya_A Hi,
Thank you very much for the response. This file does not exist in either of the indicated paths.
12-12-2023 06:52 AM
Solution
Hello Paco3,
The problem is that the svtfs service cannot be started. Based on this error, I presume there is a hard drive problem:
Message: Valid superblock found, but could not validate signature!
This shows that there is a problem mounting the partition. Could you check whether the iLO shows any errors regarding the drives?
Cheers,
Nick
12-14-2023 07:18 AM
Re: Problem with the federation of nodes.
Hi Nick,
Indeed, the problem is with the RAID controller. Thanks for the help; I will try to resolve the hardware problem.