02-06-2012 08:50 AM - edited 02-06-2012 08:51 AM
VIP's in Multisite cluster, explanation
Guys,
can someone summarize for me what will happen to a file server in a multisite configuration that is configured with two VIPs (one per subnet)?
So for instance we have:
Site A in subnet 192.168.1.0/24, VIP-A address 192.168.1.10 with cluster storage units S1 192.168.1.20 and S3 192.168.1.21
Site B in subnet 192.168.2.0/24, VIP-B address 192.168.2.10 with cluster storage units S2 192.168.2.20 and S4 192.168.2.21
We have the FA agent installed, so it will detect if one of the sites fails.
So we have a server in the 192.168.1.0 (site A) subnet with IP address .100. The iSCSI target for the server is configured to 192.168.1.10, and the file share volume (VOL_SHARE) is discovered. The iSCSI gateway is chosen by load balancing when the server starts (it will be either .1.20 or .1.21). VOL_SHARE is in Network RAID 10. Now SAN site A fails completely due to a hardware accident. The file server was online and connected to the gateway when this happened. Site B continues to function without a problem.
QUESTION:
What will happen to the file server that expects an iSCSI response from 192.168.1.10 (actually from the iSCSI gateway), which is now down? How should the server be configured to continue to run normally if a site fails?
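As a sketch (my assumption, not something stated in the thread): with the Microsoft iSCSI initiator you can register both site VIPs as discovery portals, so the server still has a discovery path through site B if site A's VIP goes down. The IP addresses below are the ones from the example above.

```shell
# Register both site VIPs as discovery portals with the Microsoft
# iSCSI initiator (iscsicli ships with Windows Server):
iscsicli AddTargetPortal 192.168.1.10 3260
iscsicli AddTargetPortal 192.168.2.10 3260

# Verify that the target behind VOL_SHARE is visible through both portals:
iscsicli ListTargets
```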
02-06-2012 07:19 PM
Re: VIP's in Multisite cluster, explanation
The key to this question is the proper setup of the sites in the multi-site SAN: the sites plus VIP-LB (virtual IP load balancing), i.e. I/O path preferencing based on site/subnet, so the application you describe will automatically load balance and select a new gateway. VIP-LB simply terminates the iSCSI session on the local storage systems. With the proper setup you should have the data replicated on the B side. Make sure you use the wizard for setup.
02-07-2012 12:28 AM - edited 02-07-2012 12:34 AM
Re: VIP's in Multisite cluster, explanation
Emilio, thank you. Where should I/O path preferencing be configured? In the Windows OS in this case, or on the SAN?
This question is hypothetical; the situation does not exist. We are thinking of buying multisite, but this scenario is never explained. Can you maybe point me to a PDF where this is explained? I understand what will happen if I have only one subnet, but I do not understand the two-subnet scenario, because you can't move a VIP out of subnet A and just "host" it on B. The router will not route traffic to it, it's as simple as that.
Can you possibly explain, if everything is properly configured, what will happen in this scenario if one unit fails on site A, and if both units fail on site A?
02-07-2012 12:41 AM
Re: VIP's in Multisite cluster, explanation
I found it: "Failover: No load balancing is performed. The application specifies a primary path and a set of standby paths. The primary path is used for processing device requests. If the primary path fails, one of the standby paths is used. Standby paths must be listed in decreasing order of preference (the most preferred path first)."
So actually, if a Windows client connected on site A detects a path failure, it will just switch to a standby path. So the clients are the ones that do the failover, and the SAN is responsible only for volume synchronization.
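The quoted failover policy can be set from the command line on Windows Server 2008 R2 and later with the built-in MPIO CLI, `mpclaim`; this is a sketch assuming the MPIO feature is installed, and the disk number is hypothetical:

```shell
# List MPIO-managed disks and their current load-balance policies:
mpclaim -s -d

# Set disk 0 (hypothetical device number) to "Fail Over Only"
# (policy 1): the primary path carries all I/O, and a standby
# path takes over only if the primary path fails.
mpclaim -l -d 0 1
```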
02-07-2012 09:47 AM
Re: VIP's in Multisite cluster, explanation
If you are on the same subnet (as I am), you can use the HP DSM provider to handle the local/remote node connections.
My setup:
five VSA nodes, three in one server room and two in another on the other side of the building. The rooms were connected through a single trunked pair of switches. My FOM is in the same room as the three nodes in the main server room. I have a single subnet for all iSCSI traffic, and all five VSAs are in the same cluster.
I assigned the FOM to site "sdfsduhkf" (the name doesn't matter), then I assigned the three VSAs in the main room to site "server room". I also assigned all my servers in that room to the same site. I then assigned the two remaining VSAs to site "backup site".
I made the iSCSI connections to each LUN as instructed in the HP DSM guide, and the result is that the servers only connect to the VSAs on the same site as they are assigned (in a failover situation they automatically connect to the remote-site VSAs).
I don't know if there is an equivalent option for VMware, but in Windows with the HP DSM this is pretty smooth and easy. Just search for and read the documents on the HP DSM.
02-09-2012 02:59 AM
Re: VIP's in Multisite cluster, explanation
As far as I know there is no multipathing plugin for vSphere. Per volume there is only one gateway, unlike in Windows, where MPIO connects to all nodes in the cluster.
02-09-2012 07:04 AM
Re: VIP's in Multisite cluster, explanation
I think you are correct about the VMware limitation. I really only deal with Windows iSCSI, and its MPIO/DSM features work great.
I can't comment on what to do for VMware, but I'm sure there has to be something like MPIO functionality.
02-09-2012 01:38 PM
Re: VIP's in Multisite cluster, explanation
Hi,
There is currently only a DSM available for Windows, not for VMware. It has long been demanded and awaited, but until further notice it isn't there yet...
However, what you can do to obtain the best performance with VMware and LeftHand is to enable load balancing on all paths.
All details are in the following document:
http://h20195.www2.hp.com/V2/GetPDF.aspx/4AA3-6918ENW.pdf
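As a sketch of the kind of setup that document covers for ESXi 5 (esxcli 5.x syntax assumed from my own reading; the device ID below is hypothetical):

```shell
# Make Round Robin the default path selection policy for devices
# claimed by the default active/active SATP:
esxcli storage nmp satp set --satp VMW_SATP_DEFAULT_AA --default-psp VMW_PSP_RR

# Or switch a single existing LUN (hypothetical naa ID) to Round Robin:
esxcli storage nmp device set --device naa.600eb37000000000 --psp VMW_PSP_RR
```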
I can tell you that it works fine, is easy to set up (in v5; v4 requires some command-line actions), and gets the most out of your storage nodes...
Kr,
Bart
If my post was useful, click on my KUDOS! "White Star"!