Community Home > Storage > Midrange and Enterprise Storage > StoreVirtual Storage > Re: Multiple VIPs on VSA cluster
05-25-2009 02:25 PM
Multiple VIPs on VSA cluster
Is it supported to have multiple VIPs on a VSA cluster, and if so, is there any potential performance or resilience gain? I've got a 2-VSA cluster with a FOM on a third ESX host. When I drop the VSA holding the VIP, VMware HA doesn't restart the VMs running on the iSCSI datastore: the remaining VSA in the cluster takes 15 or 20 seconds to claim the VIP, and the volume goes offline while HA is trying to bring up the VMs.
So I'm wondering: if I create multiple VIPs, will this help with resilience while also providing an element of multipathing?
Cheers
DB
05-26-2009 04:09 AM
Re: Multiple VIPs on VSA cluster
But what you describe suggests that something is misconfigured. The VSA is designed, tested, and certified with VMware to do exactly what you are trying. Do you have support? I'd suggest calling to see why your failover isn't working correctly.
05-26-2009 06:38 AM
Re: Multiple VIPs on VSA cluster
Regarding the VIP timeout: this is an evaluation environment at the moment, so I don't have support. I'm going to test this again, though, as the VSA I dropped has since developed a problem. It showed "storage server not ready" for a while, and when I tried to repair it, it had a bit of a fit because, it seems, I never set the 2-way volume it was providing redundancy for to 0-way. It's also possible that the network here is causing a problem. I'll post back when I've retested if I find anything interesting.
Cheers
DB
05-27-2009 04:17 AM
Re: Multiple VIPs on VSA cluster
06-18-2009 11:08 PM
Re: Multiple VIPs on VSA cluster
So, to restate: I propose to have two paths for my iSCSI vmhba:
1. a. 10.0.0.1 - vmk_iscsi_1 - first ESX iSCSI vmkernel port
b. 10.0.0.10 - vsa_nic1 - first VSA NIC
c. 10.0.0.20 - SC_net_1 - first ESX COS port
d. 10.0.0.30 - VIP_1 - first VIP
2. a. 10.1.0.1 - vmk_iscsi_2 - second ESX iSCSI vmkernel port
b. 10.1.0.10 - vsa_nic2 - second VSA NIC
c. 10.1.0.20 - SC_net_2 - second ESX COS port
d. 10.1.0.30 - VIP_2 - second VIP
I then add the two VIPs as the target discovery addresses on the ESX host, which hopefully gives me two paths to each volume. My interest here is in pushing redundancy and performance as far as possible: with two paths I can also be certain that multiple 1 Gbps uplinks will be used by my ESX iSCSI initiator.
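The addressing plan above can be sanity-checked with a short script: each path must keep all of its components (vmkernel port, VSA NIC, COS port, VIP) in one subnet, and the two paths must sit in disjoint subnets so the initiator sees them as independent routes. This is only a sketch of that check; the /24 prefix length is an assumption, not something stated in the thread.

```python
import ipaddress

# Proposed two-path plan from the post above (a /24 mask is assumed).
paths = {
    "path1": ["10.0.0.1", "10.0.0.10", "10.0.0.20", "10.0.0.30"],
    "path2": ["10.1.0.1", "10.1.0.10", "10.1.0.20", "10.1.0.30"],
}

def subnet_of(ip, prefix=24):
    """Return the network containing ip for the given prefix length."""
    return ipaddress.ip_network(f"{ip}/{prefix}", strict=False)

# Every address within a path must fall in the same subnet, and the two
# paths must use disjoint subnets to count as independent paths.
nets = {}
for name, ips in paths.items():
    subnets = {subnet_of(ip) for ip in ips}
    assert len(subnets) == 1, f"{name} spans multiple subnets"
    nets[name] = subnets.pop()

assert nets["path1"] != nets["path2"], "paths must not share a subnet"
print(nets["path1"], nets["path2"])  # 10.0.0.0/24 10.1.0.0/24
```

The same check scales to more paths by adding entries to `paths`; any typo that puts a component on the wrong subnet trips the assertion immediately.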
Cheers
DB
06-19-2009 10:28 AM
Re: Multiple VIPs on VSA cluster
Cheers
DB
10-12-2012 02:16 PM
Re: Multiple VIPs on VSA cluster
Hello Dodib,
Yes, dual-subnet/dual-VIP multi-site clusters are supported on LeftHand OS; however, per the P4000 Multi-Site Configuration Guide:
"When using VMware ESX 4 or higher and its native MPIO in a Multi-Site SAN, you cannot configure more than one subnet and VIP. Multiple paths cannot be routed across subnets by the ESX/ESXi 4 initiator. With VMware ESX, the preferred configuration is a single subnet and with ESX servers assigned to sites in SAN/iQ."
So, assuming you're not using ESX 3, this configuration is unsupported for ESX.