vSphere HA and "split brain"?
10-08-2010 08:19 AM
Site A
Server A
FOM
Storage Cluster Nodes (group IP 1.2.3.4)
|
Site B
Server B
Storage Cluster Nodes (group IP 1.2.3.4)
If the link dies, all the kit is still up.
The FOM gives quorum to Site A, and in Site B the storage goes offline.
What will vSphere HA do, though? With only two servers, can it be configured so that Site A takes quorum and starts the VMs that were running in Site B?
10-09-2010 06:28 AM
Re: vSphere HA and "split brain"?
Site A
Server A
FOM
Storage Cluster Nodes (group IP 192.168.0.1)
|
Site B
Server B
Storage Cluster Nodes (group IP 192.168.1.1)
Now it gets really confusing, as vSphere apparently can't do iSCSI multipathing to anything other than its local subnet. So while I believe I can set the vSphere iSCSI initiator discovery list to both 192.168.0.1 and 192.168.1.1, I can't use multipathing?
I'd really appreciate some clarification here. Two locations, each with some P4000 and each with one or more vSphere hosts, seems like the simplest thing to want to do, so I'm obviously missing something simple.
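For what it's worth, listing both VIPs as discovery targets can be scripted from the CLI. A minimal sketch, assuming a later ESXi release with the `esxcli iscsi` namespace (ESX 4.x used `vicfg-iscsi` instead) and a placeholder adapter name:

```shell
# "vmhba33" is a placeholder -- find your software iSCSI adapter with:
#   esxcli iscsi adapter list
# Add both site VIPs as dynamic discovery (Send Targets) addresses:
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.0.1:3260
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.1.1:3260
# Rescan so the host logs in to whichever VIP answers:
esxcli storage core adapter rescan --adapter=vmhba33
```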
10-09-2010 08:10 AM
vSphere HA should detect the VM failure and start them on Site A.
And yes, best practice for a multi-site SAN is to create two subnets so you maintain two VIPs (then add BOTH VIPs to ALL VMware servers in the cluster). But you are correct that there is a trade-off: you give up vSphere multipathing if you use the HP LeftHand Multi-Site SAN configuration.
I configured this for a hospital, but we actually created two multi-site clusters: one where the FOM was in Site A, and a second where the FOM was in Site B. This way EACH site had its own storage. Servers that should stay up in Site A lived on Cluster A, which (in a link failure) should stay up in Site A, and vice versa for Site B.
10-09-2010 08:22 AM
Re: vSphere HA and "split brain"?
So if we're "only" looking at 2-3 nodes per site (probably a mix of 15k and 7k SAS nodes) what is the best way to go about getting performance with a reasonable level of automation?
The P4000 seems the simple bit, vSphere is where I'm struggling.
How feasible is it to just stick a 10gbps link between the switches in each site and use a single subnet?
10-09-2010 09:08 AM
Re: vSphere HA and "split brain"?
10-09-2010 04:31 PM
Re: vSphere HA and "split brain"?
1) The volume is striped such that there are redundant blocks, but they are always across the WAN link at the second site. If you lost one storage node at Site A, you could have a short wait while the VIP detected the failure and moved from one node to another. This is most noticeable with SQL/Exchange and virtual disks: the time for the VIP to detect the failure and migrate may exceed the timeout of the iSCSI connection, thus losing a storage connection.
If you have two VIPs listed in the VMware iSCSI setup, it has a second path to connect to storage. If VMware detects a loss of storage path, it will immediately retry on the second path (which is over the WAN); it should find the Site B VIP and be redirected to the copy of the data on a module at Site B.
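As a rough illustration (not from this post): the window before ESXi gives up on a dead path is governed by the software iSCSI adapter's tunable parameters. On later ESXi releases they can be inspected and adjusted from the CLI; the adapter name and the example value below are assumptions:

```shell
# List the software iSCSI adapter's tunables (RecoveryTimeout, LoginTimeout,
# NoopOut settings, ...). "vmhba33" is a placeholder for your adapter.
esxcli iscsi adapter param get --adapter=vmhba33
# Example only: shorten the recovery timeout so a failed path is declared
# dead sooner and the retry on the second VIP happens earlier.
esxcli iscsi adapter param set --adapter=vmhba33 --key=RecoveryTimeout --value=10
```

Whether shortening the timeout is safe depends on how long the VIP actually takes to fail over, so test it against a real node failure before relying on it.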
10-10-2010 01:10 AM
Re: vSphere HA and "split brain"?
Site A
Server A
Switch A (VIP 192.168.0.x)
VLAN1
VLAN2
|
|Link with VLAN tagging for VLAN1 and VLAN2
|
Site B
Server B
Switch B (VIP 192.168.1.x)
VLAN1
VLAN2
And to have both VLANs span both switches (so ideally 10Gbps, but maybe a 2x1Gbps minimum link).
That way each server can connect to storage in either site, and (hopefully) there is enough bandwidth between sites for both replication and iSCSI traffic.
I'm not familiar enough with storage failures on vSphere to know how quickly any switchover would happen?
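The host-side half of that layout can be sketched with the classic `esxcfg-vswitch` tool. All names and VLAN IDs below are placeholders matching the diagram, not a tested config:

```shell
# Create one VMkernel-facing port group per iSCSI subnet on the storage
# vSwitch, tagged with the VLAN from the diagram ("vSwitch1" is assumed):
esxcfg-vswitch --add-pg=iSCSI-SiteA vSwitch1
esxcfg-vswitch --pg=iSCSI-SiteA --vlan=1 vSwitch1
esxcfg-vswitch --add-pg=iSCSI-SiteB vSwitch1
esxcfg-vswitch --pg=iSCSI-SiteB --vlan=2 vSwitch1
# The inter-site switch link then only needs to trunk VLAN 1 and VLAN 2.
```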
For things like Exchange/SQL and file server data, I'm thinking I'm most likely to use the P4000 MPIO within the Windows VMs so I can take application-aware snapshots of Exchange/SQL.
Sound like a sensible plan?