
Re: EVA6000 Performance

 
Chris Evans_5
Frequent Advisor

Re: EVA6000 Performance

LiPEnS,

Secondly, here is the run with a Business Copy job running in the background alongside the long-running job:

HP-UX wlux01 B.11.11 U 9000/800 01/24/06

19:01:39 device %busy avque r+w/s blks/s avwait avserv
19:01:44 c0t13d0 7.39 0.50 4 20 3.71 23.44
c0t3d0 1.20 0.50 3 33 2.64 6.66
c3t13d0 6.39 0.50 4 16 4.00 22.97
c3t3d0 1.20 0.50 2 32 2.84 6.80
c26t0d1 90.62 0.50 234 6111 5.21 16.19
c32t8d2 96.01 0.50 16 255 4.87 62.15
19:01:49 c0t13d0 2.79 0.50 4 21 4.25 12.42
c0t3d0 1.39 0.50 2 27 3.30 8.15
c3t13d0 2.59 0.50 3 17 4.63 14.44
c3t3d0 1.00 0.50 2 26 3.49 6.73
c26t0d1 98.80 0.50 82 2130 4.77 56.78
c27t0d2 0.20 0.50 1 11 4.80 0.32
c26t8d0 6.37 0.50 40 645 4.92 1.47
c33t8d3 45.82 0.50 27 430 5.11 16.78
c27t8d1 11.95 0.50 27 427 5.51 4.65
c32t8d2 37.05 0.50 12 194 5.07 34.39
19:01:54 c0t13d0 4.01 0.50 7 30 5.37 6.69
c0t3d0 2.00 0.50 4 44 2.85 9.46
c3t13d0 3.61 0.50 6 28 5.42 7.25
c3t3d0 1.60 0.50 3 41 2.82 7.21
c26t0d1 10.82 0.50 37 1008 5.10 17.08
c26t8d0 0.80 0.50 11 147 3.71 0.98
c32t8d2 99.80 0.50 12 196 4.98 80.88
19:01:59 c0t13d0 5.40 0.50 11 96 4.85 10.55
c0t3d0 1.40 0.50 3 30 2.66 8.05
c3t13d0 4.00 0.50 7 50 4.81 10.58
c3t3d0 1.00 0.50 2 29 2.76 7.42
c26t8d0 0.60 0.50 4 59 4.99 1.29
c33t8d3 0.80 0.50 1 10 4.80 9.92
c27t8d1 2.00 0.50 3 54 4.97 6.45
c32t8d2 98.80 0.50 10 160 6.25 102.85
19:02:04 c0t13d0 6.39 0.50 9 57 5.10 11.77
c0t3d0 19.96 0.50 34 109 4.99 7.48
c3t13d0 4.19 0.50 5 44 5.62 10.34
c3t3d0 14.97 0.50 33 103 5.06 5.66
c26t8d0 8.98 0.50 29 458 4.09 3.18
c33t8d3 28.14 0.50 25 406 5.02 11.25
c27t8d1 59.48 0.50 31 495 5.00 19.98
c32t8d2 9.98 0.50 3 45 3.58 27.18
19:02:09 c0t13d0 2.81 0.50 4 25 4.11 10.73
c0t3d0 1.80 0.50 4 48 3.42 7.34
c3t13d0 2.20 0.50 3 19 4.22 11.46
c3t3d0 1.40 0.50 3 45 3.45 6.48
c26t8d0 0.20 0.50 3 55 5.75 0.55
c33t8d3 3.41 0.50 0 6 6.48 80.76
c27t8d1 27.66 0.50 13 215 5.45 23.59
c32t8d2 87.58 0.50 6 103 6.22 137.43
19:02:14 c0t13d0 3.59 0.50 5 32 4.43 14.14
c0t3d0 0.20 0.50 1 16 2.81 5.25
c3t13d0 2.59 0.50 4 24 5.03 9.00
c3t3d0 0.20 0.50 1 16 2.82 5.87
c26t8d0 1.20 0.50 1 8 6.07 18.58
c32t8d2 99.40 0.50 12 195 4.84 78.62
19:02:19 c0t13d0 6.00 0.50 9 66 3.70 20.91
c0t3d0 1.40 0.50 3 37 3.39 6.61
c3t13d0 4.20 0.50 7 58 3.66 16.29
c3t3d0 0.80 0.50 2 35 3.32 7.11
c27t0d2 0.20 0.50 1 19 5.28 0.26
c26t8d0 9.60 0.50 13 214 5.50 6.96
c33t8d3 18.20 0.50 26 410 4.55 7.22
c27t8d1 0.40 0.50 9 154 4.84 0.25
c32t8d2 69.20 0.50 9 150 4.72 79.88
19:02:24 c0t13d0 3.60 0.50 5 29 3.52 11.05
c0t3d0 0.60 0.50 2 32 2.55 6.16
c3t13d0 3.20 0.50 4 22 3.74 13.03
c3t3d0 0.40 0.50 2 32 2.57 5.23
c26t8d0 13.00 0.50 13 211 4.80 10.38
c27t8d1 52.60 0.50 34 557 4.81 15.66
c32t8d2 31.20 0.50 3 42 7.09 118.40
19:02:29 c0t13d0 6.81 0.50 10 88 4.09 15.09
c0t3d0 2.00 0.50 3 25 5.11 12.18
c3t13d0 4.01 0.50 7 77 4.60 12.56
c3t3d0 1.00 0.50 2 22 4.95 7.60
c27t0d2 0.20 0.50 1 10 4.15 0.27
c27t8d1 1.60 0.50 7 119 3.78 2.60
c32t8d2 100.00 0.50 2 38 5.72 411.39

Average c0t13d0 4.88 0.50 7 46 4.38 13.60
Average c0t3d0 3.20 0.50 6 40 4.29 7.71
Average c3t13d0 3.70 0.50 5 35 4.61 12.47
Average c3t3d0 2.36 0.50 5 38 4.37 6.09
Average c26t0d1 20.07 0.50 35 926 5.10 25.69
Average c32t8d2 72.89 0.50 9 138 5.20 85.73
Average c27t0d2 0.06 0.50 0 4 4.87 0.28
Average c26t8d0 4.08 0.50 11 180 4.69 3.58
Average c33t8d3 9.66 0.50 8 126 4.91 12.18
Average c27t8d1 15.57 0.50 12 202 5.02 13.05

Chris Evans_5
Frequent Advisor

Re: EVA6000 Performance

Here's the rest of the info you wanted, LiPEnS:

SCSICTL:
/dev/rdsk/c26t8d0
immediate_report = 0; queue_depth = 128
/dev/rdsk/c33t8d3
immediate_report = 0; queue_depth = 128
/dev/rdsk/c27t8d1
immediate_report = 0; queue_depth = 128
/dev/rdsk/c32t8d2
immediate_report = 0; queue_depth = 128
wlux01 / #


VXTUNEFS:
Filesystem i/o parameters for /data/ALPHA/dbs01
read_pref_io = 65536
read_nstream = 1
read_unit_io = 65536
write_pref_io = 65536
write_nstream = 1
write_unit_io = 65536
pref_strength = 10
buf_breakup_size = 131072
discovered_direct_iosz = 262144
max_direct_iosz = 1048576
default_indir_size = 8192
qio_cache_enable = 0
max_diskq = 1048576
initial_extent_size = 1
max_seqio_extent_size = 2048
max_buf_data_size = 8192
Filesystem i/o parameters for /data/ALPHA/dbs02
read_pref_io = 65536
read_nstream = 1
read_unit_io = 65536
write_pref_io = 65536
write_nstream = 1
write_unit_io = 65536
pref_strength = 10
buf_breakup_size = 131072
discovered_direct_iosz = 262144
max_direct_iosz = 1048576
default_indir_size = 8192
qio_cache_enable = 0
max_diskq = 1048576
initial_extent_size = 1
max_seqio_extent_size = 2048
max_buf_data_size = 8192

and so on, up to 16 file systems

MOUNT -V (for the database file systems)
/dev/vgalpha/alp01 on /data/ALPHA/dbs01 type vxfs nosuid,delaylog,convosync=direct on Wed Jan 25 01:04:36 2006
/dev/vgalpha/alp02 on /data/ALPHA/dbs02 type vxfs nosuid,delaylog,convosync=direct on Wed Jan 25 01:04:41 2006
/dev/vgalpha/alp03 on /data/ALPHA/dbs03 type vxfs nosuid,delaylog,convosync=direct on Wed Jan 25 01:04:46 2006
/dev/vgalpha/alp04 on /data/ALPHA/dbs04 type vxfs nosuid,delaylog,convosync=direct on Wed Jan 25 01:04:49 2006
/dev/vgalpha/alp05 on /data/ALPHA/dbs05 type vxfs nosuid,delaylog,convosync=direct on Wed Jan 25 01:04:51 2006
/dev/vgalpha/alp06 on /data/ALPHA/dbs06 type vxfs nosuid,delaylog,convosync=direct on Wed Jan 25 01:04:54 2006
/dev/vgalpha/alp07 on /data/ALPHA/dbs07 type vxfs nosuid,delaylog,convosync=direct on Wed Jan 25 01:04:56 2006
/dev/vgalpha/alp08 on /data/ALPHA/dbs08 type vxfs nosuid,delaylog,convosync=direct on Wed Jan 25 01:04:58 2006
/dev/vgalpha/alp09 on /data/ALPHA/dbs09 type vxfs nosuid,delaylog,convosync=direct on Wed Jan 25 01:05:01 2006
/dev/vgalpha/alp10 on /data/ALPHA/dbs10 type vxfs nosuid,delaylog,convosync=direct on Wed Jan 25 01:05:07 2006
/dev/vgalpha/alp11 on /data/ALPHA/dbs11 type vxfs nosuid,delaylog,convosync=direct on Wed Jan 25 01:05:12 2006
/dev/vgalpha/alp12 on /data/ALPHA/dbs12 type vxfs nosuid,delaylog,convosync=direct on Wed Jan 25 01:05:14 2006
/dev/vgalpha/alp13 on /data/ALPHA/dbs13 type vxfs nosuid,delaylog,convosync=direct on Wed Jan 25 01:05:17 2006
/dev/vgalpha/alp14 on /data/ALPHA/dbs14 type vxfs nosuid,delaylog,convosync=direct on Wed Jan 25 01:05:19 2006
/dev/vgalpha/alp15 on /data/ALPHA/dbs15 type vxfs nosuid,delaylog,convosync=direct on Wed Jan 25 01:05:21 2006
/dev/vgalpha/alp16 on /data/ALPHA/dbs16 type vxfs nosuid,delaylog,convosync=direct on Wed Jan 25 01:05:23 2006


SWLIST:
wlux01 / # swlist | grep -i online
B3929CA B.11.11 HP OnLineJFS




The database is on file systems.

Yes, the 4 LUNs belong to the same VG, and all of the LVs are striped across them.

We are on VRAID5 on the EVA.

Command View is version 4.1.1.3 and we are on XCS revision 5030.
LiPEnS
Valued Contributor

Re: EVA6000 Performance

Time for some conclusions.
Things you can do in the system (on the host) that should improve performance somewhat:
1) Change the following vxfs file system parameters in accordance with the recommendations (http://docs.hp.com/en/B3929-90011/ch05s07.html and http://h71028.www7.hp.com/ERC/downloads/4AA0-2030ENW.pdf) - in this case: read_pref_io=1024Kb, read_nstream=4, write_nstream=1. Make the change online with the vxtunefs command, or persistently via the /etc/vx/tunefstab file.
2) Change the mount options for the Oracle file systems (OnLineJFS options) in accordance with the recommendations in http://h21007.www2.hp.com/dspp/files/unprotected/database/HP3KOracle.ppt. This should improve buffer utilization.
3) Redistribute some of the files (.dbf, .ctl, ...) between the file systems (LVs and PVs) so that the disks are similarly busy. In the current results, one disk (c32t8d2) is doing most of the work.
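As a sketch of how point 1 could be applied on one of the file systems in this thread: the byte values below assume a 1024 KB stripe unit across 4 columns, and the exact option syntax should be checked against vxtunefs(1M) and tunefstab(4) on your HP-UX release before use.

```shell
# Online change for one mounted file system (repeat for each tunable and each FS):
vxtunefs -o read_pref_io=1048576 /data/ALPHA/dbs01
vxtunefs -o read_nstream=4 /data/ALPHA/dbs01
vxtunefs -o write_pref_io=4194304 /data/ALPHA/dbs01
vxtunefs -o write_nstream=1 /data/ALPHA/dbs01

# Persistent equivalent: one line per device in /etc/vx/tunefstab, e.g.:
# /dev/vgalpha/alp01 read_pref_io=1048576,read_nstream=4,write_pref_io=4194304,write_nstream=1
```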

Things you can do on the EVA, and in methodology, that should improve performance:
4) To minimize the impact of a snapclone on system performance, transition the cache on the source virtual disks to write-through mode using Command View EVA before starting the snapshot or snapclone. The impact of creating one snapshot may be tolerable and may not require this action.
However, creating multiple snapshots of the same virtual disk does require it (see the EVA best-practices white paper).
5) If you are also taking a backup at the same time, you may want to move it until later. The snapclone actually behaves as a snapshot until the snapclone process (copying the data over) has completed.
6) Capture EVAPerf data and compare the situation during the snapclone with the situation without it.

Before making any changes, execute:
#timex dd if=/data/ALPHA/dbs02/some_big_file of=/dev/null bs=4096k count=10000&
#sar -d 5 10 > before_d.log
#sar -b 5 10 > before_b.log
and after the changes run the same commands to compare the results.

Regards LiPEnS
Chris Evans_5
Frequent Advisor

Re: EVA6000 Performance

Hi LiPEnS

Thanks for that, I couldn't find any of those documents when I was searching for them over the last couple of days.

I'm reading through them now, but I can't find anywhere that says which vxfs parameters to use. You said to use

read_pref_io=1024Kb, read_nstream=4, write_nstream=1.

but I can't find references to those in any of the documents. Is this from personal experience?

I'm looking at making the mount option changes later this afternoon.

Once again, thanks for all your help on this.

Chris.
LiPEnS
Valued Contributor
Solution

Re: EVA6000 Performance

Hi
The vxfs file system parameters are described in this document: http://docs.hp.com/en/B3929-90011/ch05s07.html
"Try to align the parameters to match the geometry of the logical disk. With striping or RAID-5, it is common to set read_pref_io to the stripe unit size and read_nstream to the number of columns in the stripe. For striping arrays, use the same values for write_pref_io and write_nstream, but for RAID-5 arrays, set write_pref_io to the full stripe size and write_nstream to 1."
In this case the "stripe unit size" is 1024Kb, so read_pref_io=1024Kb; the "number of columns in the stripe" (the number of disks in the stripe group) is 4, so read_nstream=4; and for RAID-5 "set write_nstream to 1", so write_nstream=1.
This also comes from training courses and my own experience.
For improving performance it may also be most important to redistribute some of the files (.dbf, .ctl, ...) between the file systems (LV -> PV) so that the disks are similarly busy. In the current results, one disk (c32t8d2) is doing most of the work.
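The stripe-geometry arithmetic above can be sketched as a small helper (a hypothetical function, simply encoding the rule quoted from the vxfs tuning document: read preference = stripe unit, read streams = columns, and for RAID-5 the write preference is the full stripe with a single stream):

```python
def vxfs_tuning(stripe_unit_kb: int, columns: int) -> dict:
    """Derive suggested vxfs tunables (in bytes) from LV stripe geometry,
    per the RAID-5 rule in the vxfs tuning chapter quoted above."""
    stripe_unit = stripe_unit_kb * 1024          # stripe unit size in bytes
    return {
        "read_pref_io": stripe_unit,             # one stripe unit per read
        "read_nstream": columns,                 # one stream per column
        "write_pref_io": stripe_unit * columns,  # full stripe for RAID-5 writes
        "write_nstream": 1,                      # single write stream for RAID-5
    }

# The 4-column, 1024 KB stripe described in this thread:
print(vxfs_tuning(1024, 4))
```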

Please write back with what you changed and whether anything improved.

Regards
Chris Evans_5
Frequent Advisor

Re: EVA6000 Performance

LiPEnS,

Yes, the changes to remove convosync=direct and to add nodatainlog did speed things up.

The job that used to take 15 minutes, and had recently been taking over an hour, is now down to 30 minutes.

That has given me enough of a performance increase to buy me some time to test out the other settings.

I'll try the vxfs tuning next.

Once again thanks for your help.

Chris.