Disk Arrays

EVA8100 lousy read performance

I have an EVA8100 with 80 1 TB disks organized into two disk groups. The first disk group holds one 1 TB RAID 0 LUN and eleven 2 TB RAID 5 LUNs; the second holds three 50 GB RAID 0 LUNs and twelve 2 TB RAID 5 LUNs.
The EVA is connected to Linux and Windows 2003 servers through QLogic QMH2462 HBAs. The LUNs show very different read performance with dd: it is generally twice as good on the first disk group, although all the 2 TB LUNs were created identically. We use the HP QLogic driver with failover and load balancing enabled. Am I missing something?

for i in `lssd | awk '{print $1}'`; do echo $i; ./lmdd.linux if=/dev/$i of=/dev/null bs=4M count=2000; done

sda  8000.0000 MB in 88.1608 secs,  90.7433 MB/sec
sdb  8000.0000 MB in 38.6719 secs, 206.8687 MB/sec
sdc  8000.0000 MB in 43.1963 secs, 185.2011 MB/sec
sdd  8000.0000 MB in 43.2108 secs, 185.1389 MB/sec
sde  8000.0000 MB in 43.2491 secs, 184.9751 MB/sec
sdf  8000.0000 MB in 43.8267 secs, 182.5370 MB/sec
sdg  8000.0000 MB in 43.7166 secs, 182.9968 MB/sec
sdh  8000.0000 MB in 43.2994 secs, 184.7599 MB/sec
sdi  8000.0000 MB in 43.1783 secs, 185.2783 MB/sec
sdj  8000.0000 MB in 43.4021 secs, 184.3228 MB/sec
sdk  8000.0000 MB in 68.5391 secs, 116.7217 MB/sec
sdl  8000.0000 MB in 75.1867 secs, 106.4019 MB/sec
sdm  8000.0000 MB in 89.2745 secs,  89.6113 MB/sec
sdn  8000.0000 MB in 84.0652 secs,  95.1642 MB/sec
sdo  8000.0000 MB in 83.5998 secs,  95.6940 MB/sec
sdp  8000.0000 MB in 84.1919 secs,  95.0210 MB/sec
sdq  8000.0000 MB in 84.7010 secs,  94.4499 MB/sec
sdr  8000.0000 MB in 85.4065 secs,  93.6697 MB/sec
sds  8000.0000 MB in 85.0728 secs,  94.0371 MB/sec
sdt  8000.0000 MB in 80.7400 secs,  99.0835 MB/sec
sdu  8000.0000 MB in 78.8831 secs, 101.4159 MB/sec
sdv  8000.0000 MB in 77.6879 secs, 102.9762 MB/sec
sdw  8000.0000 MB in 77.6395 secs, 103.0404 MB/sec
sdx  8000.0000 MB in 78.2907 secs, 102.1832 MB/sec
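A convenience sketch (not part of the original test run): the interleaved device/result lines above can be condensed into one line per LUN and sorted by throughput with a short awk/sort pipeline, fed the lmdd output on stdin:

```shell
# Pair each device name with the MB/sec figure on its following result line,
# then sort by throughput, slowest first.
awk '/^sd/     { dev = $1; next }
     /MB\/sec/ { printf "%s %s MB/sec\n", dev, $(NF-1) }' | sort -k2 -n
```

Run against the full log, this puts sda and sdm at the top and makes the roughly 2x gap between the two disk groups obvious at a glance.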

4 REPLIES
IBaltay
Honored Contributor

Re: EVA8100 lousy read performance

Hi,
Are you using the driver's load balancing?
Are you using the EVA's set-preferred-path option to load-balance across the EVA controllers?
Are you using host LVM (striping)?
the pain is one part of the reality
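For anyone weighing the third question: host-side LVM striping across several EVA LUNs looks roughly like this. The device names, volume names, stripe count, and sizes below are hypothetical, and the commands are printed rather than executed so the sketch is safe to run anywhere (the real thing needs root, LVM2, and dedicated LUNs, and destroys any data on them):

```shell
# Illustrative only: stripe a logical volume across four EVA LUNs.
# Remove the surrounding for/echo scaffolding to actually run the commands.
for cmd in \
    'pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde' \
    'vgcreate vg_eva /dev/sdb /dev/sdc /dev/sdd /dev/sde' \
    'lvcreate -i 4 -I 64 -L 500G -n lv_stripe vg_eva'
do
    echo "$cmd"    # -i 4 = four stripes, -I 64 = 64 KB stripe size
done
```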

Re: EVA8100 lousy read performance

Hello

Yes, we use the following parameters in /etc/modprobe.conf:

alias scsi_hostadapter cciss
alias scsi_hostadapter1 qla2xxx_conf
alias scsi_hostadapter2 qla2xxx
alias scsi_hostadapter3 qla2300
alias scsi_hostadapter4 qla2400
options qla2xxx ql2xmaxqdepth=16 qlport_down_retry=30 ql2xloginretrycount=30 ql2xfailover=1 ql2xlbType=1 ql2xautorestore=0xa ConfigRequired=0
remove qla2xxx /sbin/modprobe -r --first-time --ignore-remove qla2xxx && { /sbin/modprobe -r --ignore-remove qla2xxx_conf; }

cat /etc/hp_qla2x00.conf
qdepth = 16
port_down_retry_count = 30
login_retry_count = 30
failover = 1
load_balancing = 1
auto_restore = 0xa
auto_compile = n
config_required = 0

We ran two tests: one with no preferred path, and one with the odd LUNs on one controller and the even LUNs on the other with failover/failback, but we saw strange results in both cases. We don't plan to use LVM but a third-party product.
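One way to confirm that options like those above actually took effect is to read the live values back from sysfs, which is read-only and unprivileged. A small sketch, assuming the qla2xxx module is loaded (the loop simply prints nothing on a box without it):

```shell
# Print the qla2xxx parameters the running kernel is actually using.
for p in /sys/module/qla2xxx/parameters/*; do
    [ -r "$p" ] || continue                   # module absent or parameter unreadable
    printf '%s = %s\n' "$(basename "$p")" "$(cat "$p")"
done
```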

Re: EVA8100 lousy read performance

Inserting elevator=noop doubles the read performance, because the standard Linux I/O scheduler defeats the prefetching algorithm of the EVA8100. This parameter should be mandatory on every Linux box attached to an EVA, yet HP makes no mention of it.

Re: EVA8100 lousy read performance

The answer is:

# echo noop > /sys/block/<device>/queue/scheduler

or edit /boot/grub/grub.conf and append elevator=noop to the kernel line.
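To check which scheduler each device is actually using, the same sysfs files can be read without root (writing noop back, as above, does require it). A small loop, assuming a Linux box with sysfs mounted; the name in [brackets] is the active scheduler:

```shell
# Show the active I/O scheduler for every block device.
for f in /sys/block/*/queue/scheduler; do
    [ -r "$f" ] || continue                   # no block devices, or file unreadable
    dev=${f#/sys/block/}; dev=${dev%/queue/scheduler}
    printf '%s: %s\n' "$dev" "$(cat "$f")"
done
```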