HPE EVA Storage

[EVA5000, MPIO, LUN Share] Static Load balancing

 
Fabian_Nowee
Super Advisor

[EVA5000, MPIO, LUN Share] Static Load balancing

Hello,

We want to do the following:

10 nodes (each with a single HBA)
1x Brocade switch
4x 2TB LUNs (vdisks)
LUN 1 (preferred path A / failback)
LUN 2 (preferred path B / failback)
LUN 3 (preferred path A / failback)
LUN 4 (preferred path B / failback)
All LUNs are presented to all the nodes.
LUN-sharing software to manage global volumes (SanBolic).

The volume is striped across the 4 LUNs (vdisks). If I check the switch, I only see one storage path active. How come?

I miss a static assignment to a port when presenting LUNs to a host.
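For illustration, the alternating preferred-path layout described above boils down to a simple even/odd mapping. A minimal sketch that generates SSSU-style command strings; the vdisk names and the PREFERRED_PATH keyword/values are illustrative assumptions, not verified syntax:

# Sketch: alternate preferred paths (A/failback, B/failback) across LUNs.
# The emitted SSSU-style lines are illustrative only; the vdisk paths and
# the PREFERRED_PATH keyword/values are assumptions, not verified syntax.
luns = ["Lun1", "Lun2", "Lun3", "Lun4"]
paths = ["PATH_A_FAILBACK", "PATH_B_FAILBACK"]

for i, lun in enumerate(luns):
    path = paths[i % 2]   # even index -> controller A, odd -> controller B
    print(f'SET VDISK "\\Virtual Disks\\{lun}" PREFERRED_PATH={path}')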

(I have only worked with HDS before :))

Also, is there no performance view on an EVA (MB/s, IO/s)?

What gives the best performance? I now have 84 disks in one disk group. (The EVA sizer recommends a disk group divisible by the number of available disk shelves; others say to stick to a multiple of 8.)
Looking for nice incentives? (www.kirsp.nl)
20 REPLIES
Steven Clementi
Honored Contributor

Re: [EVA5000, MPIO, LUN Share] Static Load balancing

Can you confirm the OS? You mention MPIO, which usually implies Windows, but I just want to make sure.

Q1 Answer: You will always see only one path active for one particular LUN unless you're using Secure Path, which, based on posts here in the forums, allows you to load balance one particular LUN across your HBAs. Since you only have one HBA in each server, there is not much you can do for host-level load balancing.

Q2 Answer: The only performance values you can easily get at are through the fabric management web utilities. Your SAN switch should have a performance view. It is not as robust as something like OVSAM, which should give you decent performance views, logging, and a slew of other options, but it works for a quick rough idea of what's happening.

Q3 Answer: HP claims that the more disks in a disk group, the better. There is a falloff at some point, though. I was told that the falloff was at about 59 disks: after the 59th disk is added, the performance gain is NOT what it was while adding disks 9 through 58.
HP also recommends that you add disks in groups of 8, mainly to assist in RSS distribution. If your RSS state is good and level, the controllers do not have to do too much to get back to that state after the disks are added to the group.

If you understand how RSS works, then you will know why 8 is the magic number. I have seen cases where customers were told to add disks in multiples of their disk shelves. For example:

You have 12 shelves, so you should add in groups of 24. Why? 24/12 = 2, and 24/8 = 3, so 24 disks is a multiple of 8 as well.

I always suggest multiples of 8, but I also suggest trying to keep the shelves evenly filled, that is, the same number of drives in each shelf (where possible).
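As a quick sanity check, a group size that satisfies both suggestions, a multiple of 8 that also spreads evenly across the shelves, can be computed like this (a minimal sketch of the sizing rules above, using the shelf counts from this thread):

# Sketch: largest group size <= total that is a multiple of 8 (the RSS
# guideline) and also fills every shelf with the same number of drives.
def best_group_size(total_disks, shelves):
    for n in range(total_disks, 0, -1):
        if n % 8 == 0 and n % shelves == 0:
            return n, n // shelves    # (group size, drives per shelf)
    return None

print(best_group_size(24, 12))   # the 12-shelf example above -> (24, 2)
print(best_group_size(84, 6))    # the 2C6D in this thread -> (72, 12)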


Steven
Steven Clementi
HP Master ASE, Storage, Servers, and Clustering
MCSE (NT 4.0, W2K, W2K3)
VCP (ESX2, Vi3, vSphere4, vSphere5, vSphere 6.x)
RHCE
NPP3 (Nutanix Platform Professional)
Fabian_Nowee
Super Advisor

Re: [EVA5000, MPIO, LUN Share] Static Load balancing

What I am trying to accomplish is that each LUN goes over a different controller/port.

I understand the principle of LUN ownership (same as with an HDS 9500). It should be possible to control the paths.

I see 4 target IDs on each node (4 EVA ports).

I would like the data to go over:
Tid0 Lun 1
Tid2 Lun 2
Tid0(1) Lun 3
Tid2(3) Lun 4

The problem is that all 10 nodes use Tid0 (C_A port 1) for all LUNs.

A workaround would be to make two zones
and spread the nodes across them:

node1 > zoneA > C_A fp1 C_B fp1
node2 > zoneB > C_A fp2 C_B fp2
node3 > zoneA > C_A fp1 C_B fp1
etc...
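A minimal sketch of that round-robin assignment (the node and port names are the placeholders from this post, not real zone aliases):

# Sketch: spread 10 single-HBA nodes alternately over two zones so half
# the hosts log in through fabric port 1 and half through fabric port 2.
zones = {
    "zoneA": ["C_A fp1", "C_B fp1"],
    "zoneB": ["C_A fp2", "C_B fp2"],
}
for i in range(1, 11):                        # node1 .. node10
    zone = "zoneA" if i % 2 == 1 else "zoneB"
    print(f"node{i} > {zone} > {' '.join(zones[zone])}")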

We've got a dilemma with our disk group. We have an EVA 2C6D (84 disks), and we need at least 4x 2TB LUNs.

For optimal performance HP recommends grouping in multiples of 8. This would result in 80, leaving 4 disks unused.


Looking for nice incentives? (www.kirsp.nl)
Fabian_Nowee
Super Advisor

Re: [EVA5000, MPIO, LUN Share] Static Load balancing

Yes, I forgot to mention that the OS is Windows, XP to be precise. (We are also going to try Windows 2003.) (Platform kit 3.0e)

All the nodes are HP xw8000 workstations
with a QLogic 2340 HBA (hp2414??).

I increased the execution throttle to 128.

My reads are 120-140 MB/s and writes are 50-70 MB/s.

Are there any other tuning options for video environments?

The average file size is 12.5 MB (one frame).
For realtime video we need 24 x 12.5 = 300 MB/s (two HBAs).
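A quick check of those numbers, using the frame size and rate quoted above; it also shows why a single HBA reading 120-140 MB/s falls just short of the per-HBA share:

# Quick check of the real-time video bandwidth quoted above.
frame_mb = 12.5          # average frame size in MB
fps = 24                 # frames per second for real-time playback
required = frame_mb * fps
print(f"required: {required:.0f} MB/s")              # 300 MB/s total
print(f"per HBA (2 HBAs): {required / 2:.0f} MB/s")  # 150 MB/s each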

Thanks so far.
Looking for nice incentives? (www.kirsp.nl)
Fabian_Nowee
Super Advisor

Re: [EVA5000, MPIO, LUN Share] Static Load balancing

Secure Path is not an option because it causes problems with other disk software solutions.

MPIO works as it should, at the right level, only with "commercial" limitations.

(BTW, it would be great if you could edit your messages.)
Looking for nice incentives? (www.kirsp.nl)
Steven Clementi
Honored Contributor

Re: [EVA5000, MPIO, LUN Share] Static Load balancing

I did not realize MPIO worked with Windows XP. There must be a support restriction on it, though. Windows 2003 would probably be the better choice of OS.

I understand your goal and do not think zoning will help out. Zoning would only help you restrict which servers see which physical paths. I think there might be an easier solution, though. Have you restarted the EVA since you created all the LUNs? If not, maybe you should give it a restart. I have seen that sometimes the preferred path setting doesn't take effect until after a restart of the EVA. If you can afford the restart, I would do that as the first step in trying to resolve the issue with manual load balancing.

What size disks are you using? From the sound of it, you need 8TB and can only accomplish that with 146GB disks, based on the number of disks you have. 80 146GB drives is about 11.5TB raw, more than the 8TB you need. What is the rest of the storage being used for?
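For reference, the raw-capacity arithmetic; the Vraid usable-capacity factors below are rough assumptions (Vraid5 stores parity at roughly a 4+1 ratio, Vraid1 mirrors everything), and sparing overhead is ignored:

# Raw-capacity check for 80 x 146 GB drives; the Vraid usable-capacity
# factors are rough assumptions, and sparing overhead is ignored.
disks, size_gb = 80, 146
raw_tb = disks * size_gb / 1000
print(f"raw:    {raw_tb:.1f} TB")               # ~11.7 TB
print(f"Vraid5: {raw_tb * 0.8:.1f} TB usable")  # ~9.3 TB (4+1 parity)
print(f"Vraid1: {raw_tb * 0.5:.1f} TB usable")  # ~5.8 TB (mirrored)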

On the workstation side there are probably some settings you can change, but I think the EVA comes standard at full throttle, without many tweaking options like the amount of read/write cache or the R/W cache ratio.

Which other disk software solutions are you using? And yes, it would be great to be able to edit messages.


Steven
Steven Clementi
HP Master ASE, Storage, Servers, and Clustering
MCSE (NT 4.0, W2K, W2K3)
VCP (ESX2, Vi3, vSphere4, vSphere5, vSphere 6.x)
RHCE
NPP3 (Nutanix Platform Professional)
Fabian_Nowee
Super Advisor

Re: [EVA5000, MPIO, LUN Share] Static Load balancing

Steven,

Currently the EVA is not yet in production, so it should be possible to reboot.
(We are going to try this next week; it would be great if it really works!)

The customer is in deep need of capacity (even 8TB is too little).

MPIO works fine with XP, and the platform kit works fine too (QLogic delivers XP drivers).

Video editing/rendering is mostly done on XP (or W2K Professional). What Windows 2003 lacks is some multimedia parts (DirectX); it should be possible to add these components manually. The big advantage of 2003 is of course the memory management (the workstations have 4GB).

Secure Path isn't very compatible with other disk management systems like LaScala (used to manage and stripe the LUNs). SP works at a different level than MPIO does.

The disks used are 146GB. We currently have
4x 2TB and 2x 350GB LUNs.

Yes, 80 disks would cover 4x 2TB.

But if you compare 84 disks to 80, will this be a significant increase? For example, would the write performance double? :)

You lose 4x 146GB (~500GB); it's a lot.
Looking for nice incentives? (www.kirsp.nl)
Fabian_Nowee
Super Advisor

Re: [EVA5000, MPIO, LUN Share] Static Load balancing

Software used:

SanBolic (www.sanbolic.com)
- Melio (file system)
- LaScala (disk management, to manage Melio partitions)
Looking for nice incentives? (www.kirsp.nl)
Fabian_Nowee
Super Advisor

Re: [EVA5000, MPIO, LUN Share] Static Load balancing

If I ungroup disks, can I do 4 at once, or do I need to do it one by one?

BTW, the system is not so easy to reboot; there is some production on it.

Isn't there a special command via the CLI?
Looking for nice incentives? (www.kirsp.nl)
Fabian_Nowee
Super Advisor

Re: [EVA5000, MPIO, LUN Share] Static Load balancing

Yes, the load balancing works!
We had to reboot the "Enterprise" Virtual Array to activate the preferred path option.
Looking for nice incentives? (www.kirsp.nl)
Steven Clementi
Honored Contributor

Re: [EVA5000, MPIO, LUN Share] Static Load balancing

Ungrouping disks is sometimes a pain. I do not think you can do multiples in one shot; you would need to do them individually.

I DO think that you can do them one after another, though, instead of waiting for the group to relevel after each removal.


Glad to hear that the load balancing worked.



Steven

Steven Clementi
HP Master ASE, Storage, Servers, and Clustering
MCSE (NT 4.0, W2K, W2K3)
VCP (ESX2, Vi3, vSphere4, vSphere5, vSphere 6.x)
RHCE
NPP3 (Nutanix Platform Professional)
Fabian_Nowee
Super Advisor

Re: [EVA5000, MPIO, LUN Share] Static Load balancing

What I see in performance tests is that I get constant write performance at about 160 MB/s. If I do a read test I get an average of 110 MB/s (20~180), over one HBA.

(Tests are done with MTI.)

If I ungroup 4 disks, will this increase my read performance?

It's capacity versus performance. If the performance increase is minimal, the capacity need will win.

Looking for nice incentives? (www.kirsp.nl)
Uwe Zessin
Honored Contributor

Re: [EVA5000, MPIO, LUN Share] Static Load balancing

During writes, the array buffers incoming data in the writeback cache and can then write the data to the individual disks when it is convenient. On a read, the array must wait for the disks to deliver the data unless it is already in the cache.

What does your I/O pattern look like?
.
Fabian_Nowee
Super Advisor

Re: [EVA5000, MPIO, LUN Share] Static Load balancing

It's video editing, so a lot of rendering, color correction, and also restoration of old historic movies (1890~1950).

Also lots of data movement; some data comes in on portable USB 2.0/FireWire 800 ATA disks and is sometimes also exported to the portable disks.
Looking for nice incentives? (www.kirsp.nl)
Fabian_Nowee
Super Advisor

Re: [EVA5000, MPIO, LUN Share] Static Load balancing

A small additional question:

Is the read-ahead function helpful if you stripe across two LUNs?

Doesn't it create overhead? The LUNs on their own are not really logically readable by the EVA, because each one only sees a split I/O and data pattern.

How does the EVA read ahead?
Looking for nice incentives? (www.kirsp.nl)
Uwe Zessin
Honored Contributor

Re: [EVA5000, MPIO, LUN Share] Static Load balancing

If you stripe across two or more LUNs you _may_ make it harder for the EVA to detect sequential access, because there are fewer I/Os per LUN.
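A minimal sketch of why: with host-side striping, each LUN only sees every Nth host I/O, so the array has fewer, more widely spaced requests per LUN from which to recognize a sequential stream (the stripe size and I/O counts below are made-up illustration values):

# Sketch: host-side striping across 2 LUNs. Each LUN still receives
# ascending offsets, but only every other host I/O, so the per-LUN
# request stream the array sees is half as dense.
stripe_kb = 64                        # made-up stripe unit
for io in range(8):                   # 8 sequential 64 KB host I/Os
    host_offset = io * stripe_kb
    lun = io % 2                      # round-robin across 2 LUNs
    lun_offset = (io // 2) * stripe_kb
    print(f"host {host_offset:4d} KB -> LUN {lun} @ {lun_offset:4d} KB")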

I do not understand the next sentence, and I have not found a description of how the EVA decides whether it should do read-ahead; I doubt that algorithm will be published.
.
Fabian_Nowee
Super Advisor

Re: [EVA5000, MPIO, LUN Share] Static Load balancing

With the second question I meant: what are the criteria for optimal read-ahead?

Tomorrow I'm going to try whether I can disable the read-ahead function and check what the impact is.
Looking for nice incentives? (www.kirsp.nl)
Sean Lee_2
Advisor

Re: [EVA5000, MPIO, LUN Share] Static Load balancing

I suspect you won't be able to do what you want. I hope I'm wrong, because I wanted to do something similar using Linux.
From my (limited) experience, I observed the same (on Linux): at any given time only one path is active, although failover works. In other words, I wasn't able to perform **dynamic** multipathing using the standard Linux MPIO drivers.

Have you already decided to use the software you mention? With the performance figures you mention, you could make it simpler by using a high-performance NAS cluster and having your clients (Win XP) use GbE with adapter teaming instead of being connected directly to the SAN.
Check out:
http://www.polyserve.com/sol_windows_file.html
It supports Secure Path on Windows.
Fabian_Nowee
Super Advisor

Re: [EVA5000, MPIO, LUN Share] Static Load balancing

After some testing, I saw that the static balancing is not very consistent.

After a while of rebooting and rearranging LUN sizes from 2TB to 8x 512, I see that the traffic is again running over one path, so I would have to reboot again to activate the preferred paths.

I find this an annoying bug... It's also annoying that I can't see in the console which controller is active (current).

PS: Sanbolic is the chosen vendor for this customer. PolyServe is also a nice alternative.
Looking for nice incentives? (www.kirsp.nl)
Sean Lee_2
Advisor

Re: [EVA5000, MPIO, LUN Share] Static Load balancing

I think read-ahead should be left on, as it has no negative impact on performance (from the Best Practices for EVA paper). From the way they worded it, it seemed as if it reads ahead if it's got nothing else to do...

Performance stats: I found FC switches to be the best place to look in the absence of higher-level software. You can install MRTG to make them easier to look at.

Look at Best Practices for EVA (search for "best practices pdf eva hp" without the quotes) and you'll find it.
Fabian_Nowee
Super Advisor

Re: [EVA5000, MPIO, LUN Share] Static Load balancing

I made an MRTG script to monitor all 16 fibre ports. It works fine, except that when I run an MTI benchmark the counters don't match.

For a Brocade 3800, is it OID * 4 or OID * 8?
Looking for nice incentives? (www.kirsp.nl)
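For what it's worth, the Brocade SW-MIB port counters (swFCPortTxWords/swFCPortRxWords) count 4-byte Fibre Channel words, so x4 is the usual conversion. A sketch of the math; the counter delta and poll interval below are placeholder values:

# Sketch: converting Brocade SW-MIB word counters to throughput.
# swFCPortTxWords / swFCPortRxWords count 4-byte FC words, hence x4.
delta_words = 39_321_600   # counter delta between two polls (placeholder)
interval_s = 300           # MRTG's default 5-minute poll interval
mb_per_s = delta_words * 4 / interval_s / 1_000_000
print(f"{mb_per_s:.2f} MB/s")   # ~0.52 MB/s for these placeholder numbers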