Disk Arrays

Maximum read performance using 2x EVA3000

Darius Rudzika
Occasional Advisor

Maximum read performance using 2x EVA3000

Hello,
here's the situation:
1x HP-UX 11.00 box with 2x HBAs connected to a SAN with 2x EVA3000. On both EVA3000s I've configured two LUNs of the same size and presented them through different preferred paths to utilize all 4 controllers.
On the host side all LUNs are located in the same VG, and logical volumes are striped across all 4 LUNs.
My application is very read intensive:
4-7 simultaneous large (50GB) sequential reads are usually performed. The application reads files of ~20-50 MB in 256-byte chunks into a 4MB buffer.
I see two possible RAID configurations on the EVAs - vRAID1 or vRAID5. Which one should I go for?
The second question is the LVM configuration on the host.
The VG is configured with 8MB PEs, and lvols are created with "lvcreate -i 4 -I 1024 -r N".
The FS is VxFS.
I assume the VxFS block size could also be tuned. There's also the buffer cache - currently I've set it to dynamic, 5-20% of 12GB. I don't think that's the best setting, but that's how it is at the moment.
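For reference, a back-of-the-envelope sketch of how this layout splits one application read (assuming, as on HP-UX LVM, that -I gives the stripe size in KB, so "-i 4 -I 1024" means 4 stripes with a 1 MB stripe unit; the LUN numbering is illustrative, not how LVM actually labels PVs):

```python
# Sketch: how a 4 MB application buffer maps onto a 4-way LVM stripe.
# Assumes -i 4 (4 stripes) and -I 1024 (1024 KB = 1 MB stripe unit),
# matching the lvcreate command above.

STRIPES = 4                 # lvcreate -i 4
STRIPE_SIZE_KB = 1024       # lvcreate -I 1024 (KB)
BUFFER_KB = 4 * 1024        # 4 MB application read buffer

def split_read(offset_kb, length_kb):
    """Return (lun, logical_offset_kb, chunk_kb) tuples for one logical read."""
    chunks = []
    pos = offset_kb
    end = offset_kb + length_kb
    while pos < end:
        unit = pos // STRIPE_SIZE_KB           # which stripe unit we are in
        lun = unit % STRIPES                   # stripe units round-robin over LUNs
        unit_end = (unit + 1) * STRIPE_SIZE_KB
        chunk = min(end, unit_end) - pos       # stay inside this stripe unit
        chunks.append((lun, pos, chunk))
        pos += chunk
    return chunks

print(split_read(0, BUFFER_KB))
# A stripe-aligned 4 MB read breaks into four 1 MB pieces, one per LUN,
# so all four LUNs (and their preferred controllers) can work in parallel.
```

With the current 1 MB stripe unit, each 4 MB buffer fill already fans out across all four LUNs, which is the behavior you want for parallel sequential reads.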

So what are your thoughts on the best configuration for such a case?

Br,
Darius

P.S. I'm sorry for my trashy topics below.
"In pure practice - everything works, but nothing is clear. In theory - everything is clear, but nothing works. In the most favorable case, when theory meets practice - nothing works and nothing is clear"
4 REPLIES
Leif Halvarsson_2
Honored Contributor

Re: Maximum read performance using 2x EVA3000

Hi,

I have done some performance tests with the EVA 3000, but my tests were perhaps more random access (I/O rate) than sequential (transfer rate).

I don't think you will get better performance with Vraid1 than with Vraid5 (I actually got marginally better performance with Vraid5). Vraid1 differs from "ordinary" RAID 10, so you can't compare it with other RAID systems. As I understand it, the "real" advantage of Vraid1 is slightly better data protection than Vraid5.
Mike Naime
Honored Contributor

Re: Maximum read performance using 2x EVA3000

Your best performance would have been from an EVA5000 instead of 2x EVA3000.

What is your config on the EVA3000? How many shelves do you have? How many disks per shelf?

Your bottleneck is most likely in getting your data from the spindles, not in pulling it from the controller. First of all, you have an arbitrated loop on the 3000, not a switched network. Secondly, the RSS groups that the EVA creates from 8 spindles (when multiples of 8 disks are available) perform best when spread across 8 different shelves. You cannot get this 8-shelf configuration from an EVA3000; you need the 5000 to get 8 or more shelves. We have 18 shelves of drives behind our EVA 5000's.

We recently moved some data from HSGs to EVAs. We transferred 85GB/hour per cluster member. The disk queue depth on the HSGs was 130; the queue depth on the EVA drives was 1!
VMS SAN mechanic
Orrin
Valued Contributor

Re: Maximum read performance using 2x EVA3000

Hi Darius,

I am assuming you have 2 EVA3000s, each with 2 shelves and 8 disks per shelf.

If that is the case, then you will get better performance with reads on vraid1.

The LVM and VxFS configuration will matter little, because of the manner in which the EVAs handle I/O to disk.

Basically, your HSV110 controllers have a mirrored cache, and the HSV110 will write to or read from disk once this cache is full, so the distinction between sequential and random I/O becomes largely irrelevant.

What I'm trying to say is: if you don't have space constraints, go for vraid1; otherwise go for vraid5, which will give you more usable disk space to play with. Either way, the performance difference is minimal.

You could also check the guides; the suggestions there will get you better performance.

http://h200001.www2.hp.com/bc/docs/support/UCR/SupportManual/TPM_ek-evabp-aa-b01/TPM_ek-evabp-aa-b01.pdf

Hope this helps,
Regards,
Orrin
Darius Rudzika
Occasional Advisor

Re: Maximum read performance using 2x EVA3000

There's a default configuration at the moment: 2x EVA3000, each with 2 shelves and 8 x 36GB 15K rpm disks per shelf.
I'm already striping via LVM, so I don't think the striping inside vRAID5 will bring any further improvement. Can anything be gained by tuning the LVM stripe size? Maybe the controllers' read cache hit ratio, or the amount of data transferred per I/O?
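One way to reason about the stripe-size question: with a 4 MB read buffer, a smaller stripe unit spreads each read over more LUNs (more parallelism, smaller per-LUN transfers), while a stripe unit of 4 MB or larger sends the whole read to a single LUN. A minimal sketch of that trade-off (the stripe sizes tried are illustrative, not recommendations; -I is in KB on HP-UX lvcreate):

```python
# Sketch: LUNs touched per read vs. LVM stripe size, for a 4-way stripe
# (-i 4) and a stripe-aligned 4 MB read. Illustrative arithmetic only.

import math

STRIPES = 4
READ_KB = 4 * 1024   # 4 MB application buffer

for stripe_kb in (256, 1024, 4096):
    units = math.ceil(READ_KB / stripe_kb)   # stripe units spanned by one read
    luns_touched = min(units, STRIPES)       # distinct LUNs engaged in parallel
    per_lun_kb = READ_KB // luns_touched     # data each busy LUN must serve
    print(f"-I {stripe_kb:>4}: {units:>2} stripe unit(s), "
          f"{luns_touched} LUN(s), {per_lun_kb} KB per LUN")
```

So the current -I 1024 already engages all four LUNs per 4 MB read; going much larger (e.g. 4096) would serialize each read onto one LUN, while going much smaller mainly shrinks the per-LUN transfer size without adding parallelism beyond the four stripes.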