HPE EVA Storage

SAKET_5
Honored Contributor

Best practices on load balancing Vdisks across both EVA Controllers on ESX 3.5

Large ESX 3.5U4 farm with a number of EVA8400 Vdisks presented. All Vdisks presented to the ESX farm are also replicated via CA.

My thoughts on how to load balance the Vdisks across controllers are as below:
1. Put the Vdisks in DR Groups and determine the controller designation for each DR group.
2. Since all Vdisks in a DR group are assigned to the same controller, set each Vdisk's preferred ownership to match the CA-designated controller for the group (Path A/B Failover/Failback). Set the paths within VMware to correspond to the host ports on the CA-designated controller and use "MRU" as the path policy.

My expectation is that with this scheme the Vdisks can be "relatively" load balanced across both controllers even with CA in place. Can anyone confirm/deny?
SAKET_5
Honored Contributor

Re: Best practices on load balancing Vdisks across both EVA Controllers on ESX 3.5

I intend to use "round robin" with vSphere, with a similar EVA Vdisk config as on ESX 3.5. Confirm/deny?
Uwe Zessin
Honored Contributor

Re: Best practices on load balancing Vdisks across both EVA Controllers on ESX 3.5

ESX 3.5 is not ALUA-aware, and last time I checked, round-robin was an 'experimental' feature - use the fixed path policy instead to hit the correct controller/port.
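For reference, on ESX 3.5 the path policy and preferred path are set per LUN with `esxcfg-mpath` from the service console. A rough sketch only - the vmhba/target/LUN identifiers are placeholders, and you should double-check the exact flag spelling against `esxcfg-mpath --help` on your build:

```shell
# Set the fixed path policy on a LUN (vmhba1:0:12 is a placeholder ID)
esxcfg-mpath --policy=fixed --lun=vmhba1:0:12

# Mark the path through the CA-designated controller's host port as preferred
esxcfg-mpath --preferred --path=vmhba1:1:12 --lun=vmhba1:0:12

# Verify the resulting policy and preferred path
esxcfg-mpath --list
```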
.
ASIC SAN Admins
New Member

Re: Best practices on load balancing Vdisks across both EVA Controllers on ESX 3.5

Hi Uwe,

As per my posts, what I meant was that I intend to use "round robin" with vSphere 4, not with ESX 3.5, given the latter doesn't support it.

Would you concur with the path assignment on 3.5 though in light of CA replication, etc.?
Uwe Zessin
Honored Contributor

Re: Best practices on load balancing Vdisks across both EVA Controllers on ESX 3.5

Oh, sorry. I missed the "vSphere" when you were talking about round-robin.
Yes, RR is the way to go on ESX 4. Set the path preferences in CV-EVA according to your needs and turn on RR for all datastores on all ESX servers.
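To turn on RR across the board on one host, a loop along these lines can help. This is a sketch only: the `naa.600508b4` prefix is what I'd expect for EVA Vdisks, but confirm it against your own device list before running anything:

```shell
# Sketch: switch every EVA LUN to Round Robin on this ESX 4 host.
# The naa.600508b4 prefix is an assumption -- verify it first with:
#   esxcli nmp device list
for dev in $(esxcli nmp device list | grep -o '^naa\.600508b4[0-9a-f]*'); do
    esxcli nmp device setpolicy --device "$dev" --psp VMW_PSP_RR
done
```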

You could also establish that as the default for all future LUNs (or, after the next reboot, for existing ones), but it might not be a good idea if you run different arrays with different path policy needs:
# esxcli nmp satp setdefaultpsp --satp VMW_SATP_ALUA --psp VMW_PSP_RR

There is also a whitepaper that suggests using RR with an IOPS value of one.

# esxcli nmp roundrobin setconfig --type "iops" --iops 1 --device naa.xxxxxxxxx

I haven't used it myself, as I've read there is (was?) a bug that caused the value to go to a _very_ high number after a reboot, so you had to execute the command again.
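Given that reported bug, a quick check after each reboot is cheap insurance (the device name is the same placeholder as above):

```shell
# Show the current round-robin settings for a device and check the IOPS value
esxcli nmp roundrobin getconfig --device naa.xxxxxxxxx
```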


As I don't run any EVAs in production myself, I avoid doing too many customizations in customer environments unless it is really necessary or clearly beneficial.
.
SAKET_5
Honored Contributor

Re: Best practices on load balancing Vdisks across both EVA Controllers on ESX 3.5

Uwe, interesting observations from our last vSphere upgrade:

1. All odd-numbered LUNs are assigned Controller A Path Failover/Failback.

2. All even-numbered LUNs are assigned Controller B Path Failover/Failback.

3. All datastores on vSphere have RR configured - the expected 4 paths are available for each LUN, 2 Active (I/O) and 2 Active (which I suspect are effectively standby).

4. EVAperf reports that the vast majority (over 99%) of I/Os hit FP1 & FP2 of Controller A while the other FPs sit idle.

5. Further investigation of the paths for the datastores led to an interesting observation: for each LUN, the 2 Active (I/O) paths always have the target WWPN of FP1 & FP2 of Controller A.

6. Back in CV-EVA, I found that all LUNs assigned "Path B Failover/Failback" still have that setting intact; however, XCS has overridden it and the actual managing controller is "Controller A", in line with the EVAperf output.

It seems that VMware, even with RR, is hitting all LUNs via a single controller (thankfully via both of its host ports), in which case EVA XCS performs an implicit ownership transfer of the LUNs designated to be owned by Controller B.

Any comments/experiences are most welcome.
SAKET_5
Honored Contributor

Re: Best practices on load balancing Vdisks across both EVA Controllers on ESX 3.5

Any comments, anyone? I have an HP case open but haven't made any major progress yet.
Uwe Zessin
Honored Contributor

Re: Best practices on load balancing Vdisks across both EVA Controllers on ESX 3.5

Ah, I knew I've forgotten something ;-)


Is the EVA you've described a 4-host-port or an 8-host-port model?
Is CA involved?


3. That is what I see, too.

4. Did you also check the traffic on the FC-switches to rule out an EVAperf bug?


ESX _should_ follow the preference setting made by CV-EVA. I have not experimented myself, but I did see it working in a demo last year.

Have you checked that the host's OS type selection in CV-EVA is "VMware"?
.