Re: Strange NFS quorum witness connectivity/permis...
02-01-2019 01:32 AM
Hi there,
We have used an NFS quorum witness for our 2-node VSA (12.6.00.0155.0) on the VMware vSphere 6.0 platform without issue for many years. Recently, however, the CMC reported an NFS quorum witness failure as follows, even though we verified that the NFS share was online and healthy:
Event: E0000040A EID_QUORUM_CONFIG_STATUS_LOCK_FILE_UNREACHABLE
Severity: Critical
Component: SAN/iQ
Object Type: Management Group
IP/Hostname: VSA2
Message: Quorum Witness shared lock file is not accessible. Possible causes include loss of network connectivity or missing lock file. Further errors could cause IO to stop.
As a troubleshooting measure, we tried removing the NFS quorum witness configuration from the CMC and re-adding it. During re-adding, the CMC reported that the NFS share was unavailable and asked us to check connectivity or permissions (the full error message follows).
Failed to connect to the host for the Quorum Witness while storing configuration.
Please re-configure the Quorum Witness again.
Configuration failed, please check server connectivity or permissions.
Due to the lack of an NFS quorum witness or FOM, the following error message is also shown in the CMC:
Event: E00000409 EID_QUORUM_CONFIG_STATUS_MISSING_FOM_QW
Severity: Critical
Component: SAN/iQ
Object Type: Management Group
IP/Hostname: VSA2
Message: The management group 'Group_Name_Hidden' requires a Failover Manager (FOM) or Quorum Witness (QW). A management group with only 2 storage systems requires 3 regular managers. Add another storage system to management group 'Group_Name_Hidden' and ensure '3' regular managers are started, or add a FOM or QW to management group 'Group_Name_Hidden'.
This condition makes no sense to us, as we are sure the NFS share is fine: we can mount it from another Linux or Windows workstation on the same subnet (i.e. a firewall is not the cause).
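For illustration, the mount test from a Linux workstation looks roughly like the following (the hostname nfs-server and export path /export/tnqw are placeholders, not our actual values):

showmount -e nfs-server    # list the exports the server offers
sudo mkdir -p /mnt/nfstest
sudo mount -t nfs nfs-server:/export/tnqw /mnt/nfstest
sudo touch /mnt/nfstest/write-test && sudo rm /mnt/nfstest/write-test    # confirm write access
sudo umount /mnt/nfstest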
Additional things we verified:
- We tried rebooting the NFS server (CentOS 6.4) but have not tried rebooting the VSA nodes.
- NFS permissions are correct (the share is readable and writable by VSA1 and VSA2) and the NFS server already uses the no_root_squash option in /etc/exports (an example export entry is sketched after this list).
- We tried mapping the NFS share from the command line (CLIQ) on VSA1 and VSA2, and the error message is exactly the same as from the CMC GUI.
- We tried the 'ping' CLIQ command utility against the NFS server on both VSA1 and VSA2, and it succeeds (generic Linux forms of these connectivity checks are also sketched after this list).
- We tried the 'nmap' CLIQ command utility against the NFS server on both VSA1 and VSA2, and the NFS ports (111 and 2049) are correctly reported as open.
- We tried the 'netstat -tulpen' CLIQ command utility on VSA1 and VSA2, and the connection to the NFS server, despite the witness already being unconfigured, still shows an ESTABLISHED state (even after the NFS server was rebooted).
- We tried the 'service --status-all' CLIQ command utility on VSA1 and VSA2, and it reported that the NFS mountpoint /mnt/tnqw is still active (even after the NFS server was rebooted).
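For reference, the export entry on the CentOS server follows this pattern (the export path /export/tnqw and the subnet 192.168.1.0/24 are placeholders for our real values):

/export/tnqw 192.168.1.0/24(rw,sync,no_root_squash)

After editing /etc/exports, we re-exported with 'exportfs -ra' and confirmed the active options with 'exportfs -v'.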
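And the generic Linux equivalents of the connectivity checks above (CLIQ wraps these on the VSA nodes, so the exact syntax there differs; 192.168.1.10 stands in for the NFS server's IP):

ping -c 4 192.168.1.10           # basic reachability
nmap -p 111,2049 192.168.1.10    # rpcbind (111) and NFS (2049) should report open
netstat -tulpen                  # listening sockets with owner and PID details
netstat -tn | grep ':2049'       # any lingering ESTABLISHED sessions to the NFS port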
We can no longer add the NFS quorum witness. Is temporarily adding a FOM (Failover Manager) and rebooting the VSA nodes the only way to get connected to the NFS witness again?
Any suggestion would be much appreciated.
02-03-2019 02:55 PM
Solution
We had the same issues with the NFS witness shares. In many cases we went back to a full FOM, which isn't always possible. Check the latest LHOS patches; there was a recent patch related to NFS witnesses. Worst case, you might need to stand up a temporary FOM in order to patch before you can use the NFS witness again.
Good luck!