HPE StoreVirtual Storage / LeftHand

Linux (Citrix XenServer) Multipathing to P4000 / P4500

Clayton Dillard
Occasional Visitor

Linux (Citrix XenServer) Multipathing to P4000 / P4500

Does anyone have any experience setting up proper DM Multipathing from a Linux server or XenServer host to volumes on a P4000 cluster?

HP support says they only support the DSM for Windows. With Linux DM Multipathing, though, it seems logical that there are customers who want and need to connect their Linux servers to P4000 storage volumes, with connections made to each node in the cluster for performance and fault-tolerance reasons.

Any help on this would be very much appreciated.
9 REPLIES
Steve McGee
Occasional Advisor

Re: Linux (Citrix XenServer) Multipathing to P4000 / P4500

I can't see how this would differ much from the methodology used by VMware ESX servers.

You'll have to refer to your Linux documentation for bonding your NICs and choosing round-robin or link aggregation to achieve your performance and fault-tolerance requirements.
Clayton Dillard
Occasional Visitor

Re: Linux (Citrix XenServer) Multipathing to P4000 / P4500

Not sure you understand the issue. Windows hosts with the DSM establish connections to each node in the HP/Lefthand cluster whereas Linux hosts, and XenServer, establish connections to only one host in the SAN cluster. This is what I meant by MPIO. We have all of the network redundancies we need. What we need now is MPIO so that if a storage node goes down (i.e. we're doing a rolling SAN/iQ upgrade) we don't kernel panic our Linux servers and the VMs that are running on XenServer due to interruptions in storage I/O.
Mark Wibaux
Trusted Contributor

Re: Linux (Citrix XenServer) Multipathing to P4000 / P4500

My understanding is that you should be pointing your iSCSI initiator at the Virtual IP of the Lefthand cluster and let that handle balancing the connections between the nodes.
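For a plain Linux host, pointing the open-iscsi initiator at the cluster VIP looks roughly like this. This is a sketch: the VIP address and the target IQN below are placeholders for illustration, not values from this environment.

```shell
# Discover targets via the LeftHand cluster virtual IP (address is a placeholder)
iscsiadm -m discovery -t sendtargets -p 10.0.0.50:3260

# Log in to the discovered target (IQN below is illustrative)
iscsiadm -m node -T iqn.2003-10.com.lefthandnetworks:mgmt-group:247:vol1 \
         -p 10.0.0.50:3260 --login

# Make the session persist across reboots
iscsiadm -m node -T iqn.2003-10.com.lefthandnetworks:mgmt-group:247:vol1 \
         -p 10.0.0.50:3260 --op update -n node.startup -v automatic
```

The cluster then decides which physical node actually services the session, which is what the VIP-based load balancing in the CMC relies on.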
kghammond
Frequent Advisor

Re: Linux (Citrix XenServer) Multipathing to P4000 / P4500

We just set up our first XenServer recently as well, with a LeftHand. It appears that multipathing in XenServer only works if the two XenServer management interfaces are on separate subnets. I don't have any confirmation of this, but when we configured two management interfaces on the same subnet and enabled multipathing, XenServer only established one path, on the first management interface.

I suspect XenServer would only attempt to establish a second path if it was on a different subnet.

Unfortunately, vSphere only supports Round-Robin MPIO if both interfaces are on the same subnet. So it seems as of right now, it is impossible to have a XenServer farm and a vSphere farm both configured for MPIO to the same LeftHand cluster.
Clayton Dillard
Occasional Visitor

Re: Linux (Citrix XenServer) Multipathing to P4000 / P4500

I've attached a diagram of our setup that has been running in production for nearly 2 1/2 years. We get good I/O performance on our VMs and our network paths to the SAN cluster are redundant so we can withstand switch failures, cable failures, etc.

What I'm looking to protect against are issues with our VMs resulting from either a single SAN node failure or a rolling SAN/iQ upgrade, so that we don't have to shut down all of our Linux servers and VMs to avoid the I/O errors and subsequent kernel panics that seem to occur in either scenario.

I know that HP does not offer a DSM for Linux [read XenServer] but for fault tolerance it seems necessary to have connections from each XenServer to each node in the HP SAN cluster as is the case with a Windows box running the HP DSM. This should also improve I/O performance for our VMs.

We always create our XenServer SRs (Storage Repositories, or disks for VMs) by pointing at the P4000 SAN virtual IP and the XenServer "Server" objects in the HP CMC are configured for "Load Balancing".

Our XenServer hosts use the built-in software iSCSI initiator, not an iSCSI HBA.
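For plain Linux hosts (outside XenServer's managed storage stack), dm-multipath can at least group multiple iSCSI sessions to the same volume into one device and queue I/O during a path outage. A minimal sketch follows; the `LEFTHAND`/`iSCSIDisk` inquiry strings and the retry values are assumptions — verify them with `multipath -ll` against your own nodes before relying on them.

```shell
# Minimal /etc/multipath.conf sketch -- vendor/product strings are assumptions,
# confirm with `multipath -ll` on your own hosts.
cat > /etc/multipath.conf <<'EOF'
defaults {
    user_friendly_names yes
    polling_interval    10
}

devices {
    device {
        vendor               "LEFTHAND"
        product              "iSCSIDisk"
        path_grouping_policy multibus
        path_checker         tur
        no_path_retry        12         # queue I/O briefly instead of failing fast
        failback             immediate
    }
}
EOF

# Pick up the new configuration (init system dependent)
systemctl reload multipathd 2>/dev/null || service multipathd reload
```

The `no_path_retry` queueing is the piece most relevant to the kernel-panic concern: it gives a failed node time to be taken over before errors reach the filesystem.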

In a world where most data centers have critical workloads that run on Linux, whether virtualized by VMware or XenServer or KVM, or running on traditional bare metal, it seems like HP would (and maybe this takes working with OS and hypervisor vendors) create a DSM for Linux so that the same performance and fault tolerance levels can be achieved when connecting to the P4000 storage arrays.
José M Pérez Bethencour_1
Occasional Visitor

Re: Linux (Citrix XenServer) Multipathing to P4000 / P4500

I have the same problem. I have the XenServer pool pointing at the Virtual IP and expect the cluster to handle node failures and planned downtime (patching, reboots), but I have experienced several disruptions to storage I/O, with the XenServer hosts losing connectivity to a subset of the disks (those hosted on the offending node, I presume).


I have read all the documentation I can find on this matter and still can't diagnose the problem. Right now I'm triple-checking for network problems related to one node: we monitor connectivity with Nagios, and ping responses from that node sometimes fail with no explanation. Entering the console and re-entering the same network settings repairs it; often it just flaps for a few minutes and then recovers on its own. There's nothing obvious in the switch port counters, and there are no logs in the CMC or on the nodes related to network problems. It seems like an odd issue related to routing on the node (the monitoring comes from a separate subnet).


While trying to solve the ping problem on that node, I reimaged it, and on rejoining the cluster it caused loss of connectivity to a subset of the virtual disks, causing absolute chaos.


I'm also disappointed with the performance of the 10Gb CX4 option on the P4300 G2; I see no benefit, at least with XenServer, compared to 1Gb connectivity.


My setup has a redundant network layer: the XenServer hosts are in a c7000 enclosure with Virtual Connect Flex-10 connectivity, and there are three P4300 G2 nodes with the 10Gb CX4 option. Everything is patched, and I'm still worried...


If someone can confirm high availability of a XenServer pool on a P4000 cluster I beg for the config...


Jitun
HPE Pro

Re: Linux (Citrix XenServer) Multipathing to P4000 / P4500

The P4000 SAN does not support a XenServer MPIO configuration.

Please confirm this in the OS compatibility listing:
http://spock.corp.hp.com/Pages.internal/spock2Html.aspx?htmlFile=hw_iscsi.internal.html

Yes, the difference between 10Gb and 1Gb is not great.

Please check the iSCSI timeout value; it is configured in /etc/iscsi/iscsid.conf.
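The relevant setting is `node.session.timeo.replacement_timeout` — how long open-iscsi queues I/O after a session drops before returning errors to the block layer. A sketch of adjusting it, with the target IQN and portal as placeholders; the value shown is illustrative, not an HP recommendation (with dm-multipath in play a much shorter value is often preferred so path failover is quick, while single-path hosts usually keep it long to ride out VIP failover):

```shell
# Set the replacement timeout for future logins (120s is open-iscsi's usual default)
sed -i 's/^node\.session\.timeo\.replacement_timeout.*/node.session.timeo.replacement_timeout = 120/' \
    /etc/iscsi/iscsid.conf

# Apply it to an existing node record as well (IQN and portal are placeholders):
iscsiadm -m node -T iqn.2003-10.com.lefthandnetworks:grp:1:vol1 -p 10.0.0.50:3260 \
         --op update -n node.session.timeo.replacement_timeout -v 120
```

The change takes effect for a given session on its next login.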

Please see the best practices guide for deploying Citrix XenServer on HP StorageWorks P4000 SAN:

http://h20195.www2.hp.com/v2/GetPDF.aspx/4AA0-3524ENW.pdf

and the performance and characterization report for Citrix XenServer on HP BladeSystem:

http://h20195.www2.hp.com/v2/GetPDF.aspx/4AA0-1909ENW.pdf
robyaps
Advisor

Re: Linux (Citrix XenServer) Multipathing to P4000 / P4500

I read http://h20195.www2.hp.com/v2/GetPDF.aspx/4AA0-3524ENW.pdf regarding the network iSCSI connection, but in general, for any link aggregation (bond) configuration, HP says to use link aggregation with the NICs connected to the same switch. We have two ProCurve 3500yl switches that support Distributed Trunking (DTIP).

 

We connected the XenServer NICs in a bond and the HP LeftHand in link aggregation dynamic mode with two NICs, each NIC on one 3500yl configured with Distributed Trunking (ISC, DT, peer-keepalive). So the NICs are on different switches, but with Distributed Trunking configured, the pair should be considered one entity coordinating the link aggregation. Connections seem to be stable. Is this a supported configuration? Can anyone tell us whether any problems will arise?

 

Best Regards

oikjn
Honored Contributor

Re: Linux (Citrix XenServer) Multipathing to P4000 / P4500


kghammond wrote:
We just setup our first XenServer recently as well with a Lefthand. It appears that the multi-pathing in XenServer only works if the two XenServer management interfaces are on seperate subnets. I don't have any confirmation of this but when we configured two management interfaces on the same subnet and enabled multi-pathing, XenServer only established one path on the first management interface.

I suspect XenServer would only attempt to establish a second path if it was on a different subnet.

Unfortunately, vSphere only supports Round-Robin MPIO if both interfaces are on the same subnet. So it seems as of right now, it is impossible to have a XenServer farm and a vSphere farm both configured for MPIO to the same LeftHand cluster.

I'm a 100% M$ shop so I can't say whether this will work, but if ESX needs one subnet and Xen needs two, you could fake the two subnets by using one physical/virtual network with a larger subnet (say a /23), configure ESX and the HP network for that, and then configure the Xen network for something smaller (say a /24 — but make sure to add a second gateway for the non-covered /24 if you need it). It would be a mess to keep track of, but it is a way to have one set of machines think you have different subnets while the others don't, and still let all the NICs communicate.
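The subnet arithmetic behind that trick can be sketched as follows. The addresses are made up for illustration: Xen sees two /24s (10.0.0.0/24 and 10.0.1.0/24), while ESX and the SAN are configured for the single /23 that covers both, so every NIC still lands in the same L2 network.

```shell
#!/bin/sh
# Illustration of the /23-vs-/24 trick; all addresses are placeholders.
# Xen's two interfaces think they are in different /24s, but under the
# wider /23 they share one network with the ESX hosts and SAN nodes.

ip_to_int() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

NIC1=10.0.0.11    # "subnet A" from Xen's point of view (10.0.0.0/24)
NIC2=10.0.1.11    # "subnet B" from Xen's point of view (10.0.1.0/24)
MASK23=$(( 0xFFFFFFFF << 9 & 0xFFFFFFFF ))   # /23 netmask, 255.255.254.0

# Apply the /23 mask: both NICs resolve to the same network address,
# so no router is needed between them.
net1=$(( $(ip_to_int $NIC1) & MASK23 ))
net2=$(( $(ip_to_int $NIC2) & MASK23 ))
[ "$net1" -eq "$net2" ] && echo "same /23 network" || echo "different networks"
```

Under the /23 both addresses mask down to 10.0.0.0, which is exactly why ESX (wanting one subnet) and Xen (wanting two) can coexist on the same wire.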