
EVA lun path question

David P Lavoie
Frequent Advisor

EVA lun path question

Hello everyone,

I'm still learning my way around HP-UX and I have a question for you. We're setting up our environment right now: an rx6600 Itanium running 11.31 and an EVA 6100. All my LUNs show 8 paths, 4 active and 4 standby. All active paths go through the same controller but use different fabrics.

Why are there 4 paths on standby? Is there a way to use all 8 of them (if that's recommended)? If not, can I change which active paths are used so the load is balanced across both controllers?

I should specify that no preferred path is configured on the EVA or on the LUN. Attached to this post is the output of scsimgr get_info all_lpt -D /dev/rdisk/disk118 (one of the SAN LUNs).
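For anyone following along, the per-path state and policy can be inspected from the 11.31 agile view with scsimgr and ioscan; a sketch, assuming disk118 is the LUN in question:

```
# List all LUN paths and their state (active/standby) for one LUN
scsimgr get_info all_lpt -D /dev/rdisk/disk118

# Show the current load-balancing policy on the LUN
scsimgr get_attr -D /dev/rdisk/disk118 -a load_bal_policy

# Map the persistent (agile) DSF back to its legacy device files
ioscan -m dsf /dev/rdisk/disk118
```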

Thank you to everyone...
TTr
Honored Contributor

Re: EVA lun path question

Typically with dual-controller arrays and dual-fabric setups, the number of active and standby paths is the same. As to why you have 4 of each, it comes down to the zoning and your fabric setup.
You should use all 8 paths for each device. In most cases, though, if a failure occurs, multiple paths will disappear at the same time.
> All active paths are getting on the same controller but using different fabric.

I'm not sure what you are saying here, but you should ensure that I/O is balanced across both (all) server HBAs by splitting the active paths between them. Or use the new disk management that is available with HP-UX 11.31.
Paul Maglinger
Regular Advisor

Re: EVA lun path question

I don't have the details of your fabric, but I believe you should see at most 4 paths, not 8. You might want to check how your virtual disks are presented from the EVA to your server.
David P Lavoie
Frequent Advisor

Re: EVA lun path question

OK, let me clarify what we have. The SAN is connected to a director switch, which is divided into 2 virtual switches. The server has 2 dual-port HBAs; each HBA has one port connected to fabric A and one to fabric B.

On the SAN side, each controller has 2 ports, one on fabric A and one on fabric B. If I count correctly, that makes 8 paths.

The load_bal_policy is least_cmd_load. When transferring data on the SAN, all 4 ports are used, but everything goes through controller A. I want to balance the load across the 2 controllers.
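For reference, the policy can be checked and changed per LUN with scsimgr. Note that on an ALUA array like the EVA, the native 11.31 stack only issues I/O to the active (owning-controller) paths, so changing the policy alone won't push I/O to the other controller; a sketch, with disk118 as an example device:

```
# Check the current policy
scsimgr get_attr -D /dev/rdisk/disk118 -a load_bal_policy

# Change it on the running system (cl_round_robin round-robins
# over the closest, i.e. active/optimized, paths)
scsimgr set_attr -D /dev/rdisk/disk118 -a load_bal_policy=cl_round_robin

# Make the setting persist across reboots
scsimgr save_attr -D /dev/rdisk/disk118 -a load_bal_policy=cl_round_robin
```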
TTr
Honored Contributor

Re: EVA lun path question

Based on what you say, you have 4 FC cables from the server, connecting 2 straight and 2 crossed to the directors, and the same cabling from the array to the directors, so the 8 total paths are justified. In fact, if you zone each server port to both array ports, you can have more than 8 paths, or the same 8 paths with less cabling.

> all 4 ports are used but all getting on controller A

Is that controller A of the EVA? Unless you have a low-end EVA, check the LUN presentation and controller ownership on the EVA.
David P Lavoie
Frequent Advisor

Re: EVA lun path question

Yes, controller A of the EVA. This particular LUN is presented with no preferred path, and the managing controller is A. The funny thing is that the Windows hosts' LUNs are presented the same way but use both controllers over both fabrics.
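If the goal is simply to move some LUNs to controller B, the ownership can be changed on the EVA itself, either in Command View EVA or with the SSSU scripting utility. A hedged sketch only; the manager host, credentials, system name, and vdisk path below are all placeholders:

```
SELECT MANAGER cv-server USERNAME=admin PASSWORD=secret
SELECT SYSTEM "EVA6100"
SET VDISK "\Virtual Disks\vd_disk118" PREFERRED_PATH=PATH_B_FAILOVER
```

PATH_B_FAILOVER prefers controller B for that vdisk but allows failover to A if B is lost.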
Solution

Re: EVA lun path question

David,

The way the EVA works, read operations sent to the non-owning controller are proxied over to the owning controller anyway (that's why it's called asymmetric logical unit access, or ALUA), so you actually gain very little by being able to use the other controller for a LUN. You gain slightly on writes, but not that much (writes have to be mirrored into the cache on both controllers anyway, unless you specifically turn that feature off). IIRC the EVA DSM MPIO module for Windows is actually able to send writes to the non-owning controller, but all reads are still sent to the owning controller. As this DSM was out before the 11i v3 stack was released, I can only assume that the I/O folks on the HP-UX team looked at the gain you get from sending writes to non-owning controllers and decided the minor gain in performance wasn't worth the trouble of coding in similar features.

So if you are *really* concerned about missing out on a bit of performance, make sure that you use a volume manager to aggregate your I/O across 2 LUNs which are owned by different controllers - this is nearly as true for Windows as it is for HP-UX.
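A minimal HP-UX LVM sketch of that approach, assuming disk118 is owned by controller A and disk119 by controller B (device names and sizes are examples only):

```
# Initialize one LUN from each controller as a physical volume
pvcreate /dev/rdisk/disk118
pvcreate /dev/rdisk/disk119

# Create the volume group device file and the volume group
mkdir /dev/vgdata
mknod /dev/vgdata/group c 64 0x010000
vgcreate vgdata /dev/disk/disk118 /dev/disk/disk119

# Stripe a logical volume across both PVs (2 stripes, 64 KB stripe size)
# so I/O is spread over both controllers
lvcreate -i 2 -I 64 -L 10240 -n lvdata /dev/vgdata
```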

HTH

Duncan

I am an HPE Employee
David P Lavoie
Frequent Advisor

Re: EVA lun path question

Thanks Duncan.

That's the kind of information I was looking for. It does make sense.

Thank you all for your help. This is surely a nice forum.