HPE VSA Node Storage System Not Ready
09-21-2018 02:02 AM
We have an installation of a simple dual-node VSA 2014 (12.6) cluster. Last week we had a slight network issue with real latency on the iSCSI interfaces.
Since this event the second node has shown the error "Storage System not ready", but the iSCSI latencies are gone. The physical disks and the RAID controller underneath (Hyper-V installation) aren't showing any errors or warnings.
In the logs store.error and store.info I can't find any significant errors. We only see the following lines:
2018-09-20T11:00:19.186 VSA2 dbd_store[5042]: <notice>: : store_0::DBD_STORE:creating volume=125 with ltime=INVALID sync=unknown
2018-09-20T11:00:19.187 VSA2 dbd_store[5042]: <notice>: : store_0::DBD_STORE:creating volume=126 with ltime=INVALID sync=unknown
2018-09-20T11:00:26.524 VSA2 dbd_store[5042]: <notice>: : store_0::DBD_STORE:-4945145778429638339_VSA2_00:15:5D:9E:0C:00:status inactive->starting (start devices)
2018-09-20T11:00:26.524 VSA2 dbd_store[5042]: <notice>: : store_0::DBD_STORE:devices_start:begin
2018-09-20T11:00:26.524 VSA2 dbd_store[5042]: <notice>: : store_0::DBD_STORE:allocating 802729972 bytes of memory for device meta data, meta_t 44 bytes)
2018-09-20T11:00:26.763 VSA2 dbd_store[5042]: <notice>: : store_0::DBD_STORE:allocating 802729972 bytes of memory for device meta data, meta_t 44 bytes)
2018-09-20T11:00:27.004 VSA2 dbd_store[5042]: <notice>: : store_0::DBD_STORE:device_status device_point='/dev/disk/by-id/scsi-36002248042b68327b1b073c2379b9f4c-part2'(1) inactive->loading
2018-09-20T11:00:27.004 VSA2 dbd_store[5042]: <notice>: : store_0::DBD_STORE:load device device_point='/dev/disk/by-id/scsi-36002248042b68327b1b073c2379b9f4c-part2'(1) device='/dev/disk/by-id/scsi-36002248042b68327b1b073c2379b9f4c-part2'(71) replicate_header=f use_log=T use_log2=f journal={ofs=0x40000,len=0x0} meta={ofs=0x40000,len=0x0} log={ofs=0x40000,len=0xbb980000} log2={ofs=0xbb9c0000,len=0x0} data={ofs=0xbb9c0000,len=0x459845c0000} h2={ofs=0x45a3ffbbe00,len=0x0}
2018-09-20T11:00:27.004 VSA2 dbd_store[5042]: <notice>: : store_0::DBD_STORE:Metadata load type: load_v2
2018-09-20T11:00:30.399 VSA2 dbd_store[5042]: <notice>: : store_0::DBD_STORE_JOURNAL_V2:/dev/disk/by-id/scsi-36002248042b68327b1b073c2379b9f4c-part2 end of journal found at ofs=0x00006e2f2400 seqno=8387231250170325100
2018-09-20T11:00:30.399 VSA2 dbd_store[5042]: <notice>: : store_0::DBD_STORE_JOURNAL_V2:/dev/disk/by-id/scsi-36002248042b68327b1b073c2379b9f4c-part2 load metadata complete
2018-09-20T11:00:30.399 VSA2 dbd_store[5042]: <notice>: : store_0::DBD_STORE:load '/dev/disk/by-id/scsi-36002248042b68327b1b073c2379b9f4c-part2'(1) complete time=3.877
2018-09-20T11:00:30.399 VSA2 dbd_store[5042]: <notice>: : store_0::DBD_STORE:device_status device_point='/dev/disk/by-id/scsi-36002248042b68327b1b073c2379b9f4c-part2'(1) loading->loaded
2018-09-20T11:00:30.497 VSA2 dbd_store[5042]: <notice>: : store_0::DBD_STORE:scan '/dev/disk/by-id/scsi-36002248042b68327b1b073c2379b9f4c-part2'(1) npage=18243863 starting
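The excerpt above contains only `<notice>`-level entries. A quick way to confirm that nothing more serious is hiding in store.info or store.error is to filter the lines by severity. This is a minimal sketch, not an HPE tool: the line format is assumed from the excerpt above, and the sample lines (including the `<err>` one) are illustrative only.

```python
import re

# Assumed StoreVirtual log line format: "<timestamp> <host> dbd_store[pid]: <level>: ..."
# Severities at or below "notice" are treated as routine noise.
LEVEL_RE = re.compile(r"<(\w+)>")
NOISE_LEVELS = {"debug", "info", "notice"}

def find_problems(lines):
    """Return the lines whose severity tag is above notice (warning, err, crit, ...)."""
    problems = []
    for line in lines:
        m = LEVEL_RE.search(line)
        if m and m.group(1).lower() not in NOISE_LEVELS:
            problems.append(line)
    return problems

# Illustrative sample; the <err> line is fabricated to show a match.
sample = [
    "2018-09-20T11:00:26.524 VSA2 dbd_store[5042]: <notice>: : store_0::DBD_STORE:devices_start:begin",
    "2018-09-20T11:00:31.000 VSA2 dbd_store[5042]: <err>: : store_0::DBD_STORE:example error line",
]
print(find_problems(sample))
```

Running this over the real store.info/store.error files (e.g. `find_problems(open("store.info"))`) would surface any entries worth escalating.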
I need to know whether the networking issues could produce such errors in the storage manager on the VSA, and whether I can repair them by simply stopping the manager and running "Repair Storage System".
Solved! Go to Solution.
09-24-2018 07:32 AM
Solution
To begin, if this is a new setup, ensure the data disks attached to the VSA do not exceed the 10 TB threshold.
A VSA going into the "not ready" state can have many causes; a few of them are:
1. A network issue
2. A hardware issue on the host the VSA is deployed on
3. An issue within the VSA itself
Suggestions:
1) Check that you can successfully ping the VSA in question from the CMC system and from the other VSAs, if there are any in the environment.
2) Verify that there is no network latency or any other network-related issue between the devices.
3) Check for hardware issues on the nodes the VSAs are hosted on.
4) Reboot the VSA to see if that helps.
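Suggestions 1 and 2 can be scripted as a first pass. A minimal sketch, assuming a Linux-style `ping` binary with `-c`/`-W` flags; the host list is a placeholder to be replaced with your own CMC and VSA management IPs:

```python
import subprocess

def reachable(host, count=1, timeout=2):
    """Return True if `host` answers an ICMP echo (Linux iputils ping flags assumed)."""
    result = subprocess.run(
        ["ping", "-c", str(count), "-W", str(timeout), host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

# Placeholder host list; substitute e.g. ["10.0.0.11", "10.0.0.12"] for your VSAs.
for host in ["127.0.0.1"]:
    print(host, "OK" if reachable(host) else "UNREACHABLE")
```

For latency, dropping the `DEVNULL` redirects and inspecting ping's round-trip times (or running a sustained `ping -c 100`) would show whether the iSCSI network is still degraded.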
NOTE: Prerequisites for the suggestions:
A) Check the quorum and ensure it is met (the minimum number of managers required to keep the volumes online), excluding the manager running on the VSA in question, since rebooting the VSA will take that manager offline.
B) Any Network RAID-0 (NR0) volumes may already be inaccessible, as they cannot tolerate even one system going down or entering the "not ready" state, so this check may not be applicable to them.
The above checks are mandatory before you reboot the VSA.
Additionally, regarding your question about repair: it is not an advisable step right now, because putting the node into repair mode depends on the circumstances.
If none of the suggestions provided help, please raise a case with HPE for assistance.