Multi-Pathing help - SAN with EVA and MSA, UNIX/Windows

 
Darren Burke
Occasional Visitor

I just need some advice or recommendations.
I have a SAN with two Brocade Silkworm 5000s (Fabric OS v6.0.0b), each with 32 4Gb ports, roughly 90% of them populated.
All hosts and storage arrays are connected in a redundant manner.

Connected to these switches I have:
1 x EVA 8100
2 x MSA1000s
1 x MSA1500
1 x Tape library (4 FC connections)
1 x Microsoft three-node cluster (Active/Active/Passive)
15-20 x Microsoft Windows 2003 servers
2 x VMware ESX 3.5 hosts
1 x Solaris 10 x86

All systems have two QLogic HBAs of various models (2Gb and 4Gb).

No systems connect to both EVA and MSA storage.

All Windows systems run the HP MPIO Full Featured DSM.

My question relates to how I should set up path management on all systems. We have all Windows systems set to SQST (Shortest Queue Service Time), but VMware and Solaris are set up with a very simple active/passive failover. I have also read (from HP) that in active/passive scenarios the lower port number on the Brocade switch determines the path priority.
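
In case it helps frame the question, these are the sort of commands I would use to check the current setup on the non-Windows hosts (just a sketch; the device names below are placeholders, not my actual LUNs):

On the Solaris 10 x86 host (MPxIO/STMS):
    # stmsboot -e                               <- enable MPxIO (prompts for a reboot)
    # mpathadm list lu                          <- list multipathed LUNs and their path counts
    # mpathadm show lu /dev/rdsk/c2t<WWN>d0s2   <- path and target-port state for one LUN
The load-balance behaviour comes from /kernel/drv/scsi_vhci.conf (load-balance="round-robin"; or "none";).

On the ESX 3.5 hosts:
    # esxcfg-mpath -l                           <- list LUNs, their paths and the current policy (Fixed/MRU)
The per-LUN policy itself I would change through the VI Client (Manage Paths).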

The reason I ask is that we see unexpected behavior from either the Windows or *nix systems while testing switch failure. Sometimes all Windows systems recover just fine and the *nix systems lose paths, and sometimes the opposite occurs. In many scenarios the EVA “locks” access to the LUNs and only a controller reset will release them. I am thinking that I need a standard methodology for setting up multi-pathing, and for how it fails over, across the board. That is, in my scenario I would want to make all systems active/passive with the same failback and path-health-check settings.
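
To make the switch-failure tests comparable, the rough checklist I have in mind (again just a sketch, not a polished procedure) is to record path state everywhere before pulling a switch and again once it is back:

On the Brocade switches:
    switchshow          <- confirm which ports are online before and after the test
    porterrshow         <- rule out CRC/link errors masquerading as failover problems

On the hosts once the failed switch is back:
    Solaris:  mpathadm list lu    <- every LUN should show its full path count again
    ESX 3.5:  esxcfg-mpath -l     <- every path should report as available again
    Windows:  the HP DSM's management console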

Can anyone weigh in on this and whether it matters? I will gladly give up I/O and bandwidth for availability, reliability and stability. Pointers to any good white papers on setting up a heterogeneous SAN environment for the best availability would also be appreciated.

Thank you
DMB