Operating System - HP-UX

sar -d 100% Utilization disk33 eva san disk

 
Ron Marchak_1
Advisor

sar -d 100% Utilization disk33 eva san disk

pinto #> sar -d 5 10

HP-UX pinto B.11.31 U ia64 05/24/10

11:04:54 device %busy avque r+w/s blks/s avwait avserv
11:04:59 disk2 1.80 0.50 2 29 0.00 9.01
disk3 2.00 0.50 3 32 0.00 7.69
disk32 41.12 0.50 179 1436 0.00 2.31
disk33 100.00 0.50 155 1249 0.00 6.78
11:05:04 disk2 62.12 18.67 114 1564 108.66 31.36
disk3 63.13 20.70 106 1426 122.17 36.08
disk32 78.36 0.50 314 2324 0.00 2.66
disk33 84.17 0.50 153 1225 0.00 5.55
11:05:09 disk2 4.20 0.50 5 55 0.00 8.82
disk3 6.80 0.50 8 66 0.00 10.55
disk32 76.00 0.50 381 3052 0.00 2.00
disk33 100.00 0.50 289 2315 0.00 4.35
disk37 0.60 0.50 1 2 0.00 11.08

Also, avwait is higher than avserv on disk2/3 - an I/O bottleneck?
We have no problems, just opportunities !
Tingli
Esteemed Contributor

Re: sar -d 100% Utilization disk33 eva san disk

Although the %busy is high, disk33 is low in avque (<3) and avwait.
Ron Marchak_1
Advisor

Re: sar -d 100% Utilization disk33 eva san disk

Disk33 has the Informix database/datafiles on it and is running slow. Any suggestions on how to improve performance? This disk should not be sitting at 100%; something is bottlenecked.
We have no problems, just opportunities !

Re: sar -d 100% Utilization disk33 eva san disk

Don't worry about 100% utilisation on a SAN disk - it's pretty meaningless. All it tells you is that during the interval the disk was doing IO 100% of the time, which is not necessarily a problem for an EVA LUN that could be backed by anywhere from 8 to over 200 physical spindles...

I'd look first at what you see on disk2/3 - what do you have on those disks?
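One quick way to see what lives on those disks (a sketch using standard LVM commands; vg00 here is only an assumption for the root volume group):

# map the agile device file back to its legacy device file(s)
ioscan -m dsf /dev/rdisk/disk2
# see which volume group each disk belongs to
strings /etc/lvmtab
# list the logical volumes in that VG, then check which filesystems sit on them
vgdisplay -v vg00 | grep "LV Name"
bdf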

HTH

Duncan

I am an HPE Employee
Ron Marchak_1
Advisor

Re: sar -d 100% Utilization disk33 eva san disk

14:07:10 device %busy avque r+w/s blks/s avwait avserv
14:07:15 disk2 8.60 0.50 26 1185 0.00 7.16
disk3 17.20 0.50 48 1413 0.00 7.58
disk32 100.00 0.50 1343 29894 0.00 1.97
disk33 69.60 0.51 464 23280 0.92 5.02
disk34 0.20 0.50 0 3 0.00 6.54
disk37 0.20 0.50 1 6 0.00 1.68
14:07:20 disk2 5.39 0.50 18 1499 0.00 6.44
disk3 7.58 0.50 18 1464 0.00 12.18
disk32 100.00 0.50 1280 18766 0.00 2.02
disk33 72.26 0.50 315 8642 0.00 2.76
14:07:25 disk2 4.80 0.93 17 1714 0.51 6.22
disk3 9.60 0.79 35 1863 0.74 5.65
disk32 100.00 0.50 819 8880 0.00 2.37
disk33 66.00 0.50 267 9010 0.00 2.83
14:07:30 disk2 10.22 1.65 32 2895 2.20 8.21
disk3 16.83 1.40 43 3013 2.15 8.44
disk32 100.00 0.50 1148 10737 0.00 1.28
disk33 90.58 0.50 518 10609 0.00 2.34
disk34 2.00 0.50 7 495 0.00 4.98
14:07:35 disk2 6.97 4.04 41 4870 3.80 6.00
disk3 12.95 3.87 57 5090 5.20 7.42
disk32 100.00 0.50 919 10743 0.00 1.69
disk33 72.71 0.50 485 12677 0.00 1.78
disk34 0.60 0.50 1 5 0.00 11.11
14:07:40 disk2 7.62 4.02 30 3315 4.48 7.56
disk3 12.02 4.71 38 3384 6.19 8.95
disk32 100.00 0.50 975 16370 0.00 1.66
disk33 85.97 0.50 481 11111 0.00 2.11
disk34 6.21 0.50 12 533 0.00 9.22
14:07:45 disk2 5.40 3.75 36 4338 4.95 6.41
disk3 9.80 3.39 57 4586 4.21 6.06
disk32 100.00 0.50 1188 19468 0.00 1.71
disk33 81.80 0.50 480 4822 0.00 1.79
disk34 9.20 0.50 38 302 0.00 2.44
disk37 0.20 0.50 0 1 0.00 3.92
14:07:50 disk2 6.19 2.81 24 2751 6.22 9.02
disk3 8.38 3.42 27 2764 5.03 9.01
disk32 100.00 0.50 1042 18801 0.00 2.17
disk33 73.25 0.50 455 3748 0.00 1.68
disk34 0.80 0.50 0 3 0.00 19.04


Disk 2/3 is the root disk and its mirror. Not sure what was causing it, but it has not shown up since.

The Informix folks are complaining that disk I/O is slow. How about some suggestions on how to improve I/O performance?

We have no problems, just opportunities !
Tingli
Esteemed Contributor

Re: sar -d 100% Utilization disk33 eva san disk

Looks like disk32 and disk33 are in the same boat. Maybe Informix has an issue of its own; resetting the buffer size or relocating the tables might help (just a wild guess).
S. Ney
Trusted Contributor

Re: sar -d 100% Utilization disk33 eva san disk

You can isolate even further with some of the extended sar commands. Did the problem just start recently?
sar -L
sar -H
sar -R -d
iostat -L
ioscan -P health
depending on your 11.31 release date:
ioscan -P ms_scan_time
Ron Marchak_1
Advisor

Re: sar -d 100% Utilization disk33 eva san disk

#> sar -d 5 10

15:04:45 disk2 1.80 0.50 2 54 0.00 9.82
disk3 2.00 0.50 3 61 0.00 9.79
disk32 100.00 0.50 632 6259 0.00 2.20
disk33 78.96 0.50 287 4582 0.00 2.80

#> sar -L 5 10

15:05:43 lunpath %busy avque r/s w/s blks/s avwait avserv
%age num num num num msec msec
15:05:48 disk33_lunpath8 48.60 0.50 119 218 4998 0.00 1.55
disk32_lunpath6 24.20 0.50 95 122 1848 0.00 1.22
disk33_lunpath22 48.20 0.50 120 218 4914 0.00 1.53
disk32_lunpath20 30.60 0.50 110 108 2012 0.00 1.51
disk2_lunpath0 2.60 0.50 0 4 66 0.00 17.08
disk3_lunpath1 2.60 0.50 0 4 66 0.00 17.94

#> sar -H 5 10

15:07:04 ctlr util t-put IO/s r/s w/s read write avque avwait avserv
%age MB/s num num num MB/s MB/s num msec msec
15:07:09 sasd1 2 0.05 4 0 4 0.00 0.05 1 0 8
fcd0 71 5.46 755 269 486 2.07 3.39 1 0 1
fcd1 64 5.40 755 264 492 1.94 3.45 1 0 1

#> sar -R -d 5 10

15:07:59 device %busy avque r/s w/s blks/s avwait avserv
15:08:04 disk2 5.59 0.50 0 12 43 0.00 4.84
disk3 4.99 0.50 1 12 48 0.00 4.08
disk32 98.80 0.50 416 635 9478 0.00 1.06
disk33 83.43 0.50 280 485 6160 0.00 1.09

#> iostat -L

lunpath bps sps msps

disk33_lunpath8 3202 186.0 1.0
disk35_lunpath10 10 0.2 1.0
disk36_lunpath11 0 0.0 1.0
disk37_lunpath12 7 0.2 1.0
disk32_lunpath6 2141 94.8 1.0
disk34_lunpath14 3116 156.7 1.0
disk43_lunpath34 0 0.0 1.0
disk33_lunpath22 3194 186.0 1.0
disk35_lunpath24 10 0.2 1.0
disk36_lunpath25 0 0.0 1.0
disk37_lunpath26 7 0.2 1.0
disk32_lunpath20 2140 94.8 1.0
disk34_lunpath29 3110 156.7 1.0
disk43_lunpath36 0 0.0 1.0
disk2_lunpath0 589 13.1 1.0
disk3_lunpath1 654 17.5 1.0

We have no problems, just opportunities !
Ron Marchak_1
Advisor

Re: sar -d 100% Utilization disk33 eva san disk

#> ioscan -P health
Class I H/W Path health
=======================================
root 0 N/A
ioa 0 0 N/A
ba 0 0/0 N/A
tty 0 0/0/1/0 N/A
tty 1 0/0/1/1 N/A
tty 2 0/0/1/2 N/A
usb 0 0/0/2/0 N/A
usbcomp 0 0/0/2/0.1 N/A
usbhid 0 0/0/2/0.1.0 N/A
usbhid 1 0/0/2/0.1.1 N/A
usb 1 0/0/2/1 N/A
usb 2 0/0/2/2 N/A
graphics 0 0/0/4/0 N/A
ba 1 0/1 N/A
lan 0 0/1/1/0 online
lan 1 0/1/1/1 online
ba 2 0/2 N/A
escsi_ctlr 1 0/2/1/0 online
tgtpath 0 0/2/1/0.0x5000c50016c16379 online
lunpath 0 0/2/1/0.0x5000c50016c16379.0x0 online
tgtpath 1 0/2/1/0.0x5000c50016c23ea5 online
lunpath 1 0/2/1/0.0x5000c50016c23ea5.0x0 online
lan 2 0/2/2/0 online
lan 3 0/2/2/1 online
ba 3 0/3 N/A
ba 4 0/3/0/0 N/A
slot 0 0/3/0/0/0 N/A
fc 0 0/3/0/0/0/0 online
tgtpath 2 0/3/0/0/0/0.0x5001438004c673f8 online
lunpath 2 0/3/0/0/0/0.0x5001438004c673f8.0x0 online
lunpath 7 0/3/0/0/0/0.0x5001438004c673f8.0x4001000000000000 standby
lunpath 8 0/3/0/0/0/0.0x5001438004c673f8.0x4002000000000000 online
lunpath 9 0/3/0/0/0/0.0x5001438004c673f8.0x4003000000000000 standby
lunpath 10 0/3/0/0/0/0.0x5001438004c673f8.0x4004000000000000 online
lunpath 11 0/3/0/0/0/0.0x5001438004c673f8.0x4005000000000000 online
lunpath 12 0/3/0/0/0/0.0x5001438004c673f8.0x4006000000000000 online
lunpath 15 0/3/0/0/0/0.0x5001438004c673f8.0x4007000000000000 standby
lunpath 35 0/3/0/0/0/0.0x5001438004c673f8.0x4008000000000000 standby
tgtpath 3 0/3/0/0/0/0.0x5001438004c673fc online
lunpath 3 0/3/0/0/0/0.0x5001438004c673fc.0x0 online
lunpath 6 0/3/0/0/0/0.0x5001438004c673fc.0x4001000000000000 online
lunpath 13 0/3/0/0/0/0.0x5001438004c673fc.0x4002000000000000 standby
lunpath 14 0/3/0/0/0/0.0x5001438004c673fc.0x4003000000000000 online
lunpath 16 0/3/0/0/0/0.0x5001438004c673fc.0x4004000000000000 standby
lunpath 17 0/3/0/0/0/0.0x5001438004c673fc.0x4005000000000000 standby
lunpath 18 0/3/0/0/0/0.0x5001438004c673fc.0x4006000000000000 standby
lunpath 19 0/3/0/0/0/0.0x5001438004c673fc.0x4007000000000000 online
lunpath 34 0/3/0/0/0/0.0x5001438004c673fc.0x4008000000000000 online
fc 1 0/3/0/0/0/1 online
tgtpath 4 0/3/0/0/0/1.0x5001438004c673f9 online
lunpath 4 0/3/0/0/0/1.0x5001438004c673f9.0x0 online
lunpath 21 0/3/0/0/0/1.0x5001438004c673f9.0x4001000000000000 standby
lunpath 22 0/3/0/0/0/1.0x5001438004c673f9.0x4002000000000000 online
lunpath 23 0/3/0/0/0/1.0x5001438004c673f9.0x4003000000000000 standby
lunpath 24 0/3/0/0/0/1.0x5001438004c673f9.0x4004000000000000 online
lunpath 25 0/3/0/0/0/1.0x5001438004c673f9.0x4005000000000000 online
lunpath 26 0/3/0/0/0/1.0x5001438004c673f9.0x4006000000000000 online
lunpath 27 0/3/0/0/0/1.0x5001438004c673f9.0x4007000000000000 standby
lunpath 37 0/3/0/0/0/1.0x5001438004c673f9.0x4008000000000000 standby
tgtpath 5 0/3/0/0/0/1.0x5001438004c673fd online
lunpath 5 0/3/0/0/0/1.0x5001438004c673fd.0x0 online
lunpath 20 0/3/0/0/0/1.0x5001438004c673fd.0x4001000000000000 online
lunpath 28 0/3/0/0/0/1.0x5001438004c673fd.0x4002000000000000 standby
lunpath 29 0/3/0/0/0/1.0x5001438004c673fd.0x4003000000000000 online
lunpath 30 0/3/0/0/0/1.0x5001438004c673fd.0x4004000000000000 standby
lunpath 31 0/3/0/0/0/1.0x5001438004c673fd.0x4005000000000000 standby
lunpath 32 0/3/0/0/0/1.0x5001438004c673fd.0x4006000000000000 standby
lunpath 33 0/3/0/0/0/1.0x5001438004c673fd.0x4007000000000000 online
lunpath 36 0/3/0/0/0/1.0x5001438004c673fd.0x4008000000000000 online
ba 5 0/4 N/A
ba 6 0/4/0/0 N/A
slot 1 0/4/0/0/0 N/A
ba 7 0/5 N/A
ba 8 0/5/0/0 N/A
slot 2 0/5/0/0/0 N/A
processor 0 120 N/A
processor 1 122 N/A
processor 2 124 N/A
processor 3 126 N/A
processor 4 128 N/A
processor 5 130 N/A
processor 6 132 N/A
processor 7 134 N/A
ba 9 250 N/A
ipmi 0 250/0 N/A
tty 3 250/1 N/A
acpi_node 0 250/2 N/A
usbmsvbus 0 64000/0x0 N/A
escsi_ctlr 0 64000/0x0/0x0 online
esvroot 0 64000/0xfa00 N/A
disk 2 64000/0xfa00/0x0 online
disk 3 64000/0xfa00/0x1 online
ctl 8 64000/0xfa00/0x5 online
disk 32 64000/0xfa00/0x6 online
disk 33 64000/0xfa00/0x7 online
disk 34 64000/0xfa00/0x8 online
disk 35 64000/0xfa00/0x9 online
disk 36 64000/0xfa00/0xa online
disk 37 64000/0xfa00/0xb online
disk 38 64000/0xfa00/0xc online
disk 43 64000/0xfa00/0x10 online
We have no problems, just opportunities !
Ron Marchak_1
Advisor

Re: sar -d 100% Utilization disk33 eva san disk

#> ioscan -P ms_scan_time
Class I H/W Path ms_scan_time
=======================================
root 0 N/A
ioa 0 0 N/A
ba 0 0/0 N/A
tty 0 0/0/1/0 N/A
tty 1 0/0/1/1 N/A
tty 2 0/0/1/2 N/A
usb 0 0/0/2/0 N/A
usbcomp 0 0/0/2/0.1 N/A
usbhid 0 0/0/2/0.1.0 N/A
usbhid 1 0/0/2/0.1.1 N/A
usb 1 0/0/2/1 N/A
usb 2 0/0/2/2 N/A
graphics 0 0/0/4/0 N/A
ba 1 0/1 N/A
lan 0 0/1/1/0 N/A
lan 1 0/1/1/1 N/A
ba 2 0/2 N/A
escsi_ctlr 1 0/2/1/0 0 min 20 sec 488 ms
tgtpath 0 0/2/1/0.0x5000c50016c16379 0 min 0 sec 4 ms
lunpath 0 0/2/1/0.0x5000c50016c16379.0x0 0 min 0 sec 3 ms
tgtpath 1 0/2/1/0.0x5000c50016c23ea5 0 min 0 sec 4 ms
lunpath 1 0/2/1/0.0x5000c50016c23ea5.0x0 0 min 0 sec 3 ms
lan 2 0/2/2/0 N/A
lan 3 0/2/2/1 N/A
ba 3 0/3 N/A
ba 4 0/3/0/0 N/A
slot 0 0/3/0/0/0 N/A
fc 0 0/3/0/0/0/0 0 min 2 sec 104 ms
tgtpath 2 0/3/0/0/0/0.0x5001438004c673f8 0 min 0 sec 92 ms
lunpath 2 0/3/0/0/0/0.0x5001438004c673f8.0x0 0 min 0 sec 6 ms
lunpath 7 0/3/0/0/0/0.0x5001438004c673f8.0x4001000000000000 0 min 0 sec 83 ms
lunpath 8 0/3/0/0/0/0.0x5001438004c673f8.0x4002000000000000 0 min 0 sec 83 ms
lunpath 9 0/3/0/0/0/0.0x5001438004c673f8.0x4003000000000000 0 min 0 sec 83 ms
lunpath 10 0/3/0/0/0/0.0x5001438004c673f8.0x4004000000000000 0 min 0 sec 83 ms
lunpath 11 0/3/0/0/0/0.0x5001438004c673f8.0x4005000000000000 0 min 0 sec 83 ms
lunpath 12 0/3/0/0/0/0.0x5001438004c673f8.0x4006000000000000 0 min 0 sec 84 ms
lunpath 15 0/3/0/0/0/0.0x5001438004c673f8.0x4007000000000000 0 min 0 sec 84 ms
lunpath 35 0/3/0/0/0/0.0x5001438004c673f8.0x4008000000000000 0 min 0 sec 84 ms
tgtpath 3 0/3/0/0/0/0.0x5001438004c673fc 0 min 0 sec 98 ms
lunpath 3 0/3/0/0/0/0.0x5001438004c673fc.0x0 0 min 0 sec 10 ms
lunpath 6 0/3/0/0/0/0.0x5001438004c673fc.0x4001000000000000 0 min 0 sec 85 ms
lunpath 13 0/3/0/0/0/0.0x5001438004c673fc.0x4002000000000000 0 min 0 sec 85 ms
lunpath 14 0/3/0/0/0/0.0x5001438004c673fc.0x4003000000000000 0 min 0 sec 85 ms
lunpath 16 0/3/0/0/0/0.0x5001438004c673fc.0x4004000000000000 0 min 0 sec 85 ms
lunpath 17 0/3/0/0/0/0.0x5001438004c673fc.0x4005000000000000 0 min 0 sec 85 ms
lunpath 18 0/3/0/0/0/0.0x5001438004c673fc.0x4006000000000000 0 min 0 sec 85 ms
lunpath 19 0/3/0/0/0/0.0x5001438004c673fc.0x4007000000000000 0 min 0 sec 85 ms
lunpath 34 0/3/0/0/0/0.0x5001438004c673fc.0x4008000000000000 0 min 0 sec 85 ms
fc 1 0/3/0/0/0/1 0 min 2 sec 96 ms
tgtpath 4 0/3/0/0/0/1.0x5001438004c673f9 0 min 0 sec 92 ms
lunpath 4 0/3/0/0/0/1.0x5001438004c673f9.0x0 0 min 0 sec 6 ms
lunpath 21 0/3/0/0/0/1.0x5001438004c673f9.0x4001000000000000 0 min 0 sec 83 ms
lunpath 22 0/3/0/0/0/1.0x5001438004c673f9.0x4002000000000000 0 min 0 sec 84 ms
lunpath 23 0/3/0/0/0/1.0x5001438004c673f9.0x4003000000000000 0 min 0 sec 84 ms
lunpath 24 0/3/0/0/0/1.0x5001438004c673f9.0x4004000000000000 0 min 0 sec 84 ms
lunpath 25 0/3/0/0/0/1.0x5001438004c673f9.0x4005000000000000 0 min 0 sec 84 ms
lunpath 26 0/3/0/0/0/1.0x5001438004c673f9.0x4006000000000000 0 min 0 sec 84 ms
lunpath 27 0/3/0/0/0/1.0x5001438004c673f9.0x4007000000000000 0 min 0 sec 84 ms
lunpath 37 0/3/0/0/0/1.0x5001438004c673f9.0x4008000000000000 0 min 0 sec 84 ms
tgtpath 5 0/3/0/0/0/1.0x5001438004c673fd 0 min 0 sec 95 ms
lunpath 5 0/3/0/0/0/1.0x5001438004c673fd.0x0 0 min 0 sec 8 ms
lunpath 20 0/3/0/0/0/1.0x5001438004c673fd.0x4001000000000000 0 min 0 sec 85 ms
lunpath 28 0/3/0/0/0/1.0x5001438004c673fd.0x4002000000000000 0 min 0 sec 85 ms
lunpath 29 0/3/0/0/0/1.0x5001438004c673fd.0x4003000000000000 0 min 0 sec 85 ms
lunpath 30 0/3/0/0/0/1.0x5001438004c673fd.0x4004000000000000 0 min 0 sec 85 ms
lunpath 31 0/3/0/0/0/1.0x5001438004c673fd.0x4005000000000000 0 min 0 sec 85 ms
lunpath 32 0/3/0/0/0/1.0x5001438004c673fd.0x4006000000000000 0 min 0 sec 85 ms
lunpath 33 0/3/0/0/0/1.0x5001438004c673fd.0x4007000000000000 0 min 0 sec 85 ms
lunpath 36 0/3/0/0/0/1.0x5001438004c673fd.0x4008000000000000 0 min 0 sec 85 ms
ba 5 0/4 N/A
ba 6 0/4/0/0 N/A
slot 1 0/4/0/0/0 N/A
ba 7 0/5 N/A
ba 8 0/5/0/0 N/A
slot 2 0/5/0/0/0 N/A
processor 0 120 N/A
processor 1 122 N/A
processor 2 124 N/A
processor 3 126 N/A
processor 4 128 N/A
processor 5 130 N/A
processor 6 132 N/A
processor 7 134 N/A
ba 9 250 N/A
ipmi 0 250/0 N/A
tty 3 250/1 N/A
acpi_node 0 250/2 N/A
usbmsvbus 0 64000/0x0 N/A
escsi_ctlr 0 64000/0x0/0x0 0 min 0 sec 0 ms
esvroot 0 64000/0xfa00 N/A
disk 2 64000/0xfa00/0x0 0min 0sec 0ms
disk 3 64000/0xfa00/0x1 0min 0sec 0ms
ctl 8 64000/0xfa00/0x5 0min 0sec 0ms
disk 32 64000/0xfa00/0x6 0min 0sec 0ms
disk 33 64000/0xfa00/0x7 0min 0sec 0ms
disk 34 64000/0xfa00/0x8 0min 0sec 0ms
disk 35 64000/0xfa00/0x9 0min 0sec 0ms
disk 36 64000/0xfa00/0xa 0min 0sec 0ms
disk 37 64000/0xfa00/0xb 0min 0sec 0ms
disk 38 64000/0xfa00/0xc 0min 0sec 0ms
disk 43 64000/0xfa00/0x10 0min 0sec 0ms
We have no problems, just opportunities !
Ron Marchak_1
Advisor

Re: sar -d 100% Utilization disk33 eva san disk

We are currently running HP-UX 11i v3 patched to March 2010 on a Blade 370 (IA64), attached to an EVA 4400 SAN with the latest firmware.

Upgraded the FC driver to the latest version:
# FC-FCD B.11.31.1005 FibreChannel (FCD) Driver
FC-FCD.FC-FCD-KRN B.11.31.1005 Fibre Channel Driver (FCD) Kernel Module
FC-FCD.FC-FCD-RUN B.11.31.1005 Fibre Channel Driver (FCD) User Space files

Also added some patches .....
We have no problems, just opportunities !
chris huys_4
Honored Contributor

Re: sar -d 100% Utilization disk33 eva san disk

Hi Ron,

Execute sar -d 1 50.

Always use the smallest time interval possible when executing sar -d, 1 second in this case. Higher time intervals average things out too much, which makes interpretation hazardous.

Increase, i.e. double, the SCSI max queue depth parameter for disk32 and disk33. It looks like the EVA can take more I/O than the host is currently providing, so why not stress the EVA some more...
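For example, a minimal sketch of that kind of change (syntax from memory - verify against scsictl(1M) on your release before running it):

# show the current mode parameters, including queue_depth, for the busy LUN
scsictl -a /dev/rdisk/disk32
# double it (assuming the usual default of 8); takes effect immediately but does not survive a reboot
scsictl -m queue_depth=16 /dev/rdisk/disk32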

Greetz,
Chris
Ron Marchak_1
Advisor

Re: sar -d 100% Utilization disk33 eva san disk

pinto #> sar -d 1 50

17:16:07 device %busy avque r+w/s blks/s avwait avserv
17:16:08 disk2 3.00 0.50 4 102 0.00 10.94
disk3 5.00 0.50 5 106 0.00 9.25
disk32 77.00 0.50 596 5554 0.00 1.30
disk33 89.00 0.50 348 6776 0.00 2.67
17:16:09 disk32 76.00 0.50 647 6160 0.00 1.17
disk33 87.00 0.50 593 9736 0.00 1.54
17:16:10 disk32 96.00 0.50 582 5000 0.00 1.65
disk33 91.00 0.50 274 3944 0.00 3.28
17:16:11 disk2 2.02 0.50 2 32 0.00 8.99
disk3 1.01 0.50 2 32 0.00 5.19
disk32 61.62 0.50 611 6182 0.00 1.01
disk33 90.91 0.50 485 10214 0.00 2.33
17:16:12 disk2 1.00 0.50 1 8 0.00 6.59
disk3 1.00 0.50 1 8 0.00 12.59
disk32 60.00 0.50 423 4256 0.00 1.42
disk33 91.00 0.50 401 7160 0.00 2.52
17:16:13 disk3 0.99 0.50 1 8 0.00 9.92
disk32 63.37 0.50 635 5616 0.00 1.01
disk33 100.00 0.50 407 5766 0.00 2.44
17:16:14 disk2 2.02 0.50 2 85 0.00 15.17
disk3 1.01 0.50 2 85 0.00 10.02
disk32 52.53 0.50 395 4865 0.00 1.33
disk33 100.00 0.50 411 11442 0.00 4.13
17:16:15 disk2 1.96 0.50 2 16 0.00 6.83
disk3 2.94 0.50 3 20 0.00 8.87
disk32 70.59 0.50 490 4439 0.00 1.44
disk33 76.47 0.50 317 4047 0.00 2.45
17:16:16 disk2 2.02 0.50 3 63 0.00 8.62
disk3 2.02 0.50 3 63 0.00 8.12
disk32 65.66 0.50 566 5172 0.00 1.15
disk33 91.92 0.50 502 6747 0.00 1.85
17:16:17 disk2 0.99 0.50 1 8 0.00 7.68
disk3 0.99 0.50 1 8 0.00 10.06
disk32 94.06 0.50 481 4246 0.00 1.97
disk33 69.31 0.50 286 3976 0.00 2.43
We have no problems, just opportunities !
Ron Marchak_1
Advisor

Re: sar -d 100% Utilization disk33 eva san disk

I set it to 32 and the disks came off 100% utilization a bit. Then I went to 64; sar output is below.

My question is what is the magic number and how far can you go?

How can you make this permanent just for the SAN disks, and leave the root disks as-is?

for i in 32 33 34 35 36 37; do scsictl -m queue_depth=64 /dev/rdisk/disk$i;done
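One way to make it stick across reboots (an assumption based on scsimgr(1M) on 11i v3 - check that the attribute is called max_q_depth on your release) would be to save the value per LUN and simply leave the root disks off the list:

for i in 32 33 34 35 36 37
do
    # save_attr stores the setting persistently for this LUN only
    scsimgr save_attr -D /dev/rdisk/disk$i -a max_q_depth=64
done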

pinto #> sar -d 1 100

HP-UX pinto B.11.31 U ia64 05/24/10

18:47:14 device %busy avque r+w/s blks/s avwait avserv
18:47:15 disk2 3.00 0.50 4 32 0.00 7.06
disk3 3.00 0.50 4 32 0.00 7.51
disk32 70.00 0.50 1075 10808 0.00 0.66
18:47:16 disk32 79.21 0.50 725 7303 0.00 1.10
18:47:17 disk2 12.12 0.50 15 194 0.00 21.11
disk3 19.19 0.50 103 1362 0.00 4.24
disk32 63.64 0.50 1231 11996 0.00 0.52
18:47:18 disk2 1.00 0.50 3 32 0.00 5.11
disk3 2.00 0.50 3 32 0.00 5.56
disk32 63.00 0.50 1119 11650 0.00 0.56
18:47:19 disk32 60.61 0.50 1070 11135 0.00 0.57
18:47:20 disk2 1.98 0.50 3 44 0.00 5.53
disk3 3.96 0.50 4 48 0.00 9.95
disk32 69.31 0.50 1215 12586 0.00 0.57
18:47:21 disk2 2.00 0.50 2 24 0.00 7.77
disk3 3.00 0.50 3 28 0.00 9.53
disk32 65.00 0.50 1338 13768 0.00 0.48
18:47:22 disk32 68.00 0.50 1457 15848 0.00 0.47
18:47:23 disk2 2.00 0.50 3 34 0.00 7.96
disk3 6.00 0.50 23 386 0.00 3.11
disk32 70.00 0.50 1392 15448 0.00 0.51
We have no problems, just opportunities !

Re: sar -d 100% Utilization disk33 eva san disk

Ron,

>> My question is what is the magic number and how far can you go?


Well most EVA ports have a queue depth of 1536 (what model do you have?)

But that doesn't mean you can set the qdepth to 1536! You have to take into account _all_ the LUNs and _all_ the hosts that access the EVA, and make sure there's no way you could possibly have more than 1536 outstanding IOs on all LUNs to all hosts on a given port.

So if this host sees 4 different LUNs through one port on the EVA, and no other systems use the EVA, then you could set the queue depth for each of those LUNs on that port to 1536 / 4. Of course, if you also had a Windows system accessing 2 LUNs on that port, you'd need to take that into account too - I seem to recall that the Windows default queue depth is 32, so in that case your queue depths would be (1536 - (2 * 32)) / 4. Hopefully that makes sense.
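As a worked example of that arithmetic (using only the hypothetical numbers above, not a recommendation for any particular EVA):

port_queue=1536                 # queue depth of one EVA port
windows_luns=$((2 * 32))        # two Windows LUNs at their default depth of 32
luns_on_this_host=4
echo $(( (port_queue - windows_luns) / luns_on_this_host ))    # prints 368 per LUN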

So the first thing to ask is "is the EVA accessed by other systems as well as this one?" If the answer is _no_, then you could probably set up target-port-based queue depth management as described in this whitepaper (which you could do worse than read anyway):

http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c02054623/c02054623.pdf

HTH

Duncan

I am an HPE Employee
S. Ney
Trusted Contributor

Re: sar -d 100% Utilization disk33 eva san disk

Ron,

Your I/O looks pretty evenly balanced across the HBAs. Your avwait and avserv are also pretty good (0 for avwait and <5 for avserv), and your scan times were fine. This doesn't look like it's a server/storage issue.

I'd ask your DBA/application teams what specifically is on disk33. It may be a case where the busiest tables and indexes are on that disk. Can they move something to one of the less busy disks? Are there any patches available for Informix on your platform? I have Oracle databases on my servers, and most of the time an Oracle patch fixes any slow I/O.

Also, if you have Glance you could run export/extract to collect some metrics that you can present to your app/DBA team. They can see when the disk is busiest and correlate that with what the database is doing at the time.
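If Glance/OVPA isn't to hand, a rough substitute (a sketch using plain sar rather than the Glance export/extract described above; adjust interval and count to cover the busy window):

# collect an hour of samples at 60-second intervals into a binary file
sar -o /tmp/sar_disk.dat 60 60 > /dev/null
# later, replay just the disk activity so it can be lined up against Informix timestamps
sar -d -f /tmp/sar_disk.dat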
Ron Marchak_1
Advisor

Re: sar -d 100% Utilization disk33 eva san disk

Working with HP Support on this issue further.
We have no problems, just opportunities !