Community Home > Storage > Midrange and Enterprise Storage > StoreVirtual Storage > Re: Multi VIP whitepaper
02-07-2011 04:06 AM
Multi VIP whitepaper
Does anybody know if there is a whitepaper on how to set up storage nodes and the Multi-Site Cluster using multiple subnets?
If so, can you provide me a link to this whitepaper?
Thnx ahead, Richard.
02-07-2011 04:36 AM
Re: Multi VIP whitepaper
Here is a link:
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c02063195/c02063195.pdf
Tell us if it is valid for you.
Regards, Jorge
02-07-2011 05:25 PM
Re: Multi VIP whitepaper
Please feel free to post your specific configuration/layout if you would like us to review some topologies for you.
One thing to keep in mind if you are in a vSphere 4.x environment:
With P4000 Multi-Site SAN today you cannot use VMware iSCSI multipathing, because it doesn't support VIP routing.
Hopefully HP will make use of the vStorage APIs and allow proper multipathing another way.
02-08-2011 12:56 AM
Re: Multi VIP whitepaper
Thnx again, Richard.
03-01-2011 05:40 AM
Re: Multi VIP whitepaper
We have 2 datacenters with a 10Gb interconnect and a third site; the 2 DCs have 3 P4500 nodes and 5 ESXi 4.1U1 nodes each, and in the third site we have a FOM (Failover Manager) server.
We have 4 pNICs per ESXi host in a vSwitch for SW iSCSI and would like to use multipathing. On the SAN we configured three sites matching the DC layout, with 2 VIPs, both in a stretched VLAN. We are also running into the VIP routing problem in combination with the gateway P4500 node.
As I found it very difficult to work out the right configuration for us, and given the problem with VIP routing, is there any clear documentation on the best architecture for this kind of configuration?
Regards,
martijn
03-01-2011 06:23 AM
Re: Multi VIP whitepaper
Multipath iSCSI or Multi-Site SAN for fastest failover/recovery.
We have witnessed instances where a customer installed SANs into multiple sites, but not in a true multi-site configuration (no multiple VIPs). The problem was that during a site link failure, sensitive apps like databases/Exchange occasionally failed (timed out) before the VIP failed over to another node.
With the multi-VIP configuration the vSphere server would fail over to the 2nd subnet/VIP almost instantly.
I would lean towards a true Multi-Site SAN, as in most of my implementations the customers' iSCSI data usage is so low that a single GbE is fine, and multipathing wouldn't give any significant performance benefit (over a 2-VIP Multi-Site SAN config).
03-02-2011 01:02 AM
Re: Multi VIP whitepaper
So to summarize: with the P4500 and ESX you have the option to do a Multi-Site SAN without multipathing, which is the preferred architecture, or a single-VIP, single-site SAN with ESX multipathing.
The limitation behind the first option is the multiple-VIP routing problem within ESX.
Is there any idea when this will be possible? I have the feeling that a Multi-Site SAN with ESX multipathing would be the ideal configuration in the end.
We also tried to configure a 4-pNIC iSCSI vSwitch with 2 NICs in each subnet of the Multi-Site SAN, with multipathing enabled. We added both VIPs to the SW initiator.
This also seems to work; I see 2 paths for each LUN, only some LUNs are connecting to a gateway SAN node in the other site.
What would be the disadvantage of this configuration?
In general I think all the possible options, considerations and best practices are poorly documented for the LeftHand SAN, so this could be improved.
03-02-2011 02:42 AM
Re: Multi VIP whitepaper
We have split the 10.0.2.x network into two /25 subnets (128 addresses each) and use VLAN102 stretched across both datacenters.
We configured the SAN as multi-site with 2 VIPs, one in each subnet, configured the sites, and added a third logical site with a FOM.
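For reference, the subnet split described above can be sketched with Python's `ipaddress` module. This is only an illustration: the /24 parent network and the sample VIP addresses are assumptions based on the post, not the actual site configuration.

```python
import ipaddress

# Splitting the 10.0.2.0/24 storage network into two /25 subnets,
# one per datacenter site, as described in the post above.
storage_net = ipaddress.ip_network("10.0.2.0/24")
site_a, site_b = storage_net.subnets(new_prefix=25)

print(site_a)  # 10.0.2.0/25
print(site_b)  # 10.0.2.128/25

# Each site's VIP must live inside that site's subnet
# (the addresses here are hypothetical examples):
vip_a = ipaddress.ip_address("10.0.2.1")
vip_b = ipaddress.ip_address("10.0.2.129")
assert vip_a in site_a and vip_b in site_b
```

With a stretched VLAN both /25 ranges remain layer-2 adjacent, which is why the two VIPs can fail over between sites without routing.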
In each site there are 5 ESXi servers with a vSwitch with 2 NICs which are not configured for multipathing. I've created a vmk portgroup with an IP address in the subnet for that site and added the 2 VIPs to the SW iSCSI HBA.
The ESX servers also have a vmk portgroup for management in another VLAN and network.
After a rescan I can see all the LUNs with 2 paths each.
When I check the iSCSI connections from the ESXi perspective (esxcli swiscsi session list) I can see the initial, current and remote addresses, as in the example below:
#
iqn.2003-10.com.lefthandnetworks:mgt01:407:esx-file01
target: iqn.2003-10.com.lefthandnetworks:mgt01:407:esx-file01
-session_isid: [00:02:3d:00:00:09]
-authMethod: NONE
-dataPduInOrder: true
-dataSequenceInOrder: true
-defaultTime2Retain: 0
-errorRecoveryLevel: 0
-firstBurstLength: 262144
-immediateData: true
-initialR2T: false
-maxBurstLength: 262144
-maxConnections: 1
-maxOutstandingR2T: 1
-targetPortalGroupTag: 1
--connectionId: 0
--dataDigest: NONE
--headerDigest: NONE
--ifMarker: false
--ifMarkInt: 0
--maxRecvDataSegmentLength: 131072
--maxTransmitDataSegmentLength: 131072
--ofMarker: false
--ofMarkInt: 0
--Initial_Remote_Address: 10.0.2.149
--Current_Remote_Address: 10.0.2.131
--Current_Local_Address: 10.0.2.170
--State: LOGGED_IN
#
This looks okay: the VIP local to the ESX server is used, the same goes for the SAN node, and the IP address configured for iSCSI is used as well.
In the CMC the gateway SAN node is also local to this ESX server.
However, another session looks like this:
#
iqn.2003-10.com.lefthandnetworks:mgt01:145:esx-lun04
target: iqn.2003-10.com.lefthandnetworks:mgt01:145:esx-lun04
-session_isid: [00:02:3d:00:00:09]
-authMethod: NONE
-dataPduInOrder: true
-dataSequenceInOrder: true
-defaultTime2Retain: 0
-errorRecoveryLevel: 0
-firstBurstLength: 262144
-immediateData: true
-initialR2T: false
-maxBurstLength: 262144
-maxConnections: 1
-maxOutstandingR2T: 1
-targetPortalGroupTag: 1
--connectionId: 0
--dataDigest: NONE
--headerDigest: NONE
--ifMarker: false
--ifMarkInt: 0
--maxRecvDataSegmentLength: 131072
--maxTransmitDataSegmentLength: 131072
--ofMarker: false
--ofMarkInt: 0
--Initial_Remote_Address: 10.0.2.149
--Current_Remote_Address: 10.0.2.1
--Current_Local_Address: 10.0.1.10
--State: LOGGED_IN
#
This session uses the VIP local to the ESX server but eventually connects to a SAN node in the remote subnet, and it uses the management IP address of the ESX server.
We also have LUNs that use the remote VIP as the initial address and the same current addresses as above.
As we understand it, the benefit of multi-site with 2 VIPs and sites is local I/O from the ESX to the SAN; how can we accomplish this with this configuration?
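The locality check being done by eye above can be sketched in a few lines of Python. This is an informal helper, not an official tool: it assumes the two /25 site subnets from this thread and simply tests whether a session's current remote address (the gateway SAN node) sits in the same subnet as the local vmknic address reported by esxcli.

```python
import ipaddress

# Assumed site subnets from the post above (hypothetical values).
SITE_SUBNETS = [ipaddress.ip_network("10.0.2.0/25"),
                ipaddress.ip_network("10.0.2.128/25")]

def session_is_local(local_addr: str, remote_addr: str) -> bool:
    """True when both session endpoints fall inside the same site subnet."""
    local = ipaddress.ip_address(local_addr)
    remote = ipaddress.ip_address(remote_addr)
    for net in SITE_SUBNETS:
        if local in net:
            return remote in net
    # Local address is in no known site subnet, e.g. the management VLAN.
    return False

# The two sessions from the esxcli output above:
print(session_is_local("10.0.2.170", "10.0.2.131"))  # esx-file01: True
print(session_is_local("10.0.1.10", "10.0.2.1"))     # esx-lun04: False
```

The second session fails the check twice over: the gateway node is in the remote subnet, and the local endpoint (10.0.1.10) is the management address rather than an iSCSI vmknic, which matches the behaviour described in the post.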
03-02-2011 08:20 AM
Re: Multi VIP whitepaper
My best guess, confirmed informally with LeftHand support, is that this has to do with SCSI-2 reservations, lock contention, and making sure the writes happen in the correct order.
Windows, if you're using the Lefthand multipathing solution, is the only platform that works as you described above, since it doesn't have a single gateway for any given iSCSI target. Note in the multisite SAN docs they refer to "Application Servers" in certain places. That's Windows.
You can get yourself in a trap. Given that the VMware iSCSI initiator doesn't support routing, if you only have one adapter in each iSCSI subnet, a link failure (cable, NIC, switch reboot) will cause that server to lose the storage in one subnet or the other. You'd need to configure failover adapters on each vmk iscsi portgroup - if you've got two subnets in a single vlan, you could easily use the primary vmknic for one portgroup as a standby adapter for the other. Or you can double your vmknic count and use two per subnet, each connected to a different switch.
Or you can manually define gateways in the service console OS on an ESX server so the iSCSI traffic can be routed. That may not work in ESX 5, and if they don't support having multiple gateways for vmkernel interfaces, you'd be sunk. Also, there's a lot of advice out there about not routing iSCSI.
I just went through a series of calls with HP support working this all out over the past few weeks when I realized it wasn't quite working the way I expected it to. They've actually recommended that I use a single subnet/single VIP configuration, since almost every other failure mode (servers, links, switches, etc) is more likely than an entire site failure. And a site failure is only a problem if the VIP happens to be on that site at any given point. I've noticed in practice that if you change a storage node's IP config (I had to change the gateways at one point), the VIP seems to move very quickly.
Since I already have the storage nodes configured and they're in A/B/A/B/A/B/A/B order in the cluster, my replication pairs are already established. I should be able to just edit the IP address and gateway on each storage node I want to change, and delete the second VIP. I just need to shut everything down to do it.
03-02-2011 11:36 AM
Re: Multi VIP whitepaper
I was wondering: could you use multipathing in this scenario?
Multi-site (A and B) with 2 VIPs. Add ONLY the site A VIP to discovery for site A ESX hosts, and vice versa for site B. Then use VMware HA+FT to keep VMs alive.
This way you only use iSCSI targets that are in the same subnet, and you could enable round-robin. I can't see why this would not work.
Yes, the hosts in a site lose access to storage completely if ALL storage nodes in that site go down, but that is a rare case.
-Olvi