Operating System - HP-UX
SOLVED
Darius Rudzika
Occasional Advisor

Maximum read performance using 2x EVA3000

Hello,

I'm cross-posting this from the storage forum, but it also has an LVM part.

So here's the situation:
1x HP-UX 11.00 box with 2x HBA connected to a SAN with 2x EVA3000. On both EVA3000s I've configured two LUNs of the same size and presented them through different preferred paths, with SecurePath's load balancing, to utilize all 4 controllers.
On the host side all LUNs are located in the same VG, and the logical volumes are striped across all 4 LUNs.
My application is very read-intensive:
usually 4-7 simultaneous large (50 GB) sequential reads are performed. The application reads files of ~20-50 MB in size in 256-byte chunks into a 4 MB buffer.
I assume two possible RAID configurations on the EVAs - vRAID1 or vRAID5. Which one should I go for?
The second problem is the LVM configuration on the host.
The VG is configured with 8 MB PEs, and the lvols are created with "lvcreate -i 4 -I 1024 -r N".
The FS is VxFS; the full layout is sketched below.
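Roughly, the setup was built like this (the device files, VG/LV names, and the -L size below are just examples, not the real ones on the box):

# pvcreate /dev/rdsk/c4t0d1
# pvcreate /dev/rdsk/c4t0d2
# pvcreate /dev/rdsk/c6t0d1
# pvcreate /dev/rdsk/c6t0d2
# vgcreate -s 8 /dev/vgeva /dev/dsk/c4t0d1 /dev/dsk/c4t0d2 /dev/dsk/c6t0d1 /dev/dsk/c6t0d2
# lvcreate -i 4 -I 1024 -r N -L 204800 -n lvdata /dev/vgeva
# newfs -F vxfs -o largefiles /dev/vgeva/rlvdata

(-s 8 = 8 MB PEs, -i 4 = stripe over 4 PVs, -I 1024 = 1024 KB stripe size. Depending on LUN size, the vgcreate -e option, max physical extents per PV, may also need raising.)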
I assume the VxFS block size could also be tuned. Also the buffer cache: currently I've set it to a dynamic 5-20% of 12 GB. I don't think that's the best way, but that's how it is at the moment; the relevant commands are below.
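For the record, this is roughly how I check/set those (kmtune syntax from memory; as far as I know, the dbc_* changes only take effect after a kernel rebuild and reboot, and the LV path is just an example):

# fstyp -v /dev/vgeva/lvdata | grep bsize    <- current VxFS block size
# kmtune -q dbc_min_pct                      <- query current dynamic buffer cache min
# kmtune -q dbc_max_pct
# kmtune -s dbc_min_pct=5                    <- stage new values
# kmtune -s dbc_max_pct=20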

So, what are your thoughts about the best configuration in such a case?

Br,
Darius
"In pure practice - everything woks, but nothing clear. In theory - everything clear, but nothing works. In most favorable case when theory meets practice - nothing works and nothing clear"
Curtis Wheatley_2
Occasional Advisor
Solution

Re: Maximum read performance using 2x EVA3000

A vraid1 configuration is going to give you better read IO performance. With SecurePath installed and the relatively small size of your data set, I don't think you will see much of a difference between vraid1 and vraid5 LUNs.

The best practice is to match your FS block size to your application's block size, and so on down to the array level. You could decrease dbc_max_pct a bit, but 20% is okay unless you want to free more memory for other OS processes vs. the buffer cache.

You could use the sar -d command to collect stats with a vraid1 and a vraid5 configuration and compare them, as in the example below. You will likely see very little difference in the numbers sar reports with this small dataset. Are you having a problem with IO performance?
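For example (output files and paths are just placeholders):

# sar -d 30 20 > /tmp/sar_vraid1.out    <- run while the reads are going against the vraid1 luns
# sar -d 30 20 > /tmp/sar_vraid5.out    <- same test after recreating the luns as vraid5

Then compare the avque, avserv and blks/s columns for the EVA disk devices in the two files. A simple timed sequential read, e.g. "timex dd if=/fsmount/bigfile of=/dev/null bs=1024k", would give you a quick throughput number as well.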

Regards,
Curtis M. Wheatley
Skilled workers are always in need