
sar -d

 
Susik
Advisor

sar -d

How much real information does sar -d show for disk LUNs presented from an XP12000 storage array? The server is an rp4440.
00:00:01 device %busy avque r+w/s blks/s avwait avserv
01:00:01 c2t1d0 2.01 0.74 3 25 1.12 9.41
c2t0d0 1.17 0.76 2 19 1.08 7.34
c7t0d0 99.97 3.50 15 1075 0.00 4.84
c6t0d1 98.97 1.50 9 671 0.00 6.04
c7t0d2 98.12 1.50 10 777 0.00 5.67
c6t0d3 99.99 2.50 8 637 0.00 5.73
c7t0d4 98.92 1.50 12 866 0.00 4.39
c6t0d5 99.02 1.50 8 623 0.00 5.46
c7t0d6 98.93 1.50 11 824 0.00 4.62
c6t0d7 6.12 0.50 11 796 0.00 5.89
c7t1d0 5.72 0.50 11 803 0.00 5.31
c6t1d1 98.84 1.50 10 765 0.00 5.50
c7t1d2 5.62 0.50 10 757 0.00 5.75
c6t1d3 6.59 0.50 13 967 0.00 5.23
c7t1d4 98.57 1.50 12 927 0.00 5.47
c6t1d5 6.73 0.50 12 909 0.00 5.58
c7t1d6 7.69 0.50 14 977 0.00 5.77
c6t1d7 8.59 0.50 15 1063 0.00 5.89
c7t2d0 5.80 0.50 10 791 0.00 5.68
c6t2d1 5.76 0.50 10 746 0.00 5.75
c7t2d2 5.88 0.50 10 774 0.00 5.73
c6t2d3 5.40 0.50 9 720 0.00 5.87
c7t2d4 5.75 0.50 10 767 0.00 5.82
c6t2d5 7.81 0.50 13 929 0.00 6.19
c7t2d6 8.58 0.50 15 1087 0.00 5.95
c6t2d7 8.67 0.50 14 1020 0.00 6.19
c7t3d0 7.80 0.50 13 850 0.00 6.30
c7t3d1 5.31 0.50 9 579 0.00 5.97
6 REPLIES
RAC_1
Honored Contributor

Re: sar -d

For SAN disks, a little extra study is required, coupled with the glance/sar/iostat information. The disk that appears as a single disk to the OS may in fact be several disks on the SAN, and may be striped. In such cases you need to look at the performance metrics provided by the SAN.

Looking at the sar output, everything looks OK. There is no avwait, and avserv looks OK.
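
If you want fresh samples rather than the hourly entries from the sa data files, a rough sketch like this could be run on the server (the interval and count are just an illustration, and the awk field handling assumes the HP-UX sar -d layout shown in the post above):

# take 12 samples, 5 seconds apart, and flag any device over 90% busy
sar -d 5 12 | awk '
    $1 ~ /^c[0-9]/ { dev=$1; busy=$2; serv=$7 }
    $2 ~ /^c[0-9]/ { dev=$2; busy=$3; serv=$8 }
    dev != "" && busy+0 > 90 { print dev, busy "% busy, avserv " serv " ms"; dev="" }'

Then compare those busy intervals with what the XP12000 performance tools report for the same LUNs.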
There is no substitute to HARDWORK
Sivakumar TS
Honored Contributor

Re: sar -d

Hi,

It is the current utilization of the disks.

The columns mean the following:


%busy Portion of time device was busy servicing a request;

avque Average number of requests outstanding for the device;

r+w/s Number of data transfers per second (read and writes) from and to the device;

blks/s Number of 512-byte blocks transferred per second from and to the device;

avwait Average time (in milliseconds) that transfer requests waited idly on queue for the device;

avserv Average time (in milliseconds) to service each transfer request (includes seek, rotational latency, and data transfer times) for the device.

From the above output,

c7t0d0 99.97 3.50 15 1075 0.00 4.84
c6t0d1 98.97 1.50 9 671 0.00 6.04
c7t0d2 98.12 1.50 10 777 0.00 5.67
c6t0d3 99.99 2.50 8 637 0.00 5.73
c7t0d4 98.92 1.50 12 866 0.00 4.39
c6t0d5 99.02 1.50 8 623 0.00 5.46
c7t0d6 98.93 1.50 11 824 0.00 4.62

the disks listed above are the busiest ones.
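
To put one of those lines in concrete terms: for c7t0d0, 99.97% busy with 15 transfers per second and 1075 blks/s works out to roughly 1075 x 512 bytes, about 550 KB/s, and each request is serviced in 4.84 ms on average with no time spent waiting in the queue (avwait 0.00).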

Regards,

Siva.



Nothing is Impossible !
Susik
Advisor

Re: sar -d

Is it true that the disks are loaded up to 99%, or should I be looking at the LUN utilization on the XP12000 storage array itself?
Or should I check disk utilization with glance/iostat on the server?
RAC_1
Honored Contributor

Re: sar -d

I am repeating what I said: for the OS, there is no way to know that the disk it sees may in fact be a number of disks in the SAN box. If you see high I/O or utilization in sar or glance for a SAN disk, you should not worry unless you see the same in the SAN performance metrics.
There is no substitute to HARDWORK
Susik
Advisor

Re: sar -d

Ok thanks!
Patrice Le Guyader
Respected Contributor

Re: sar -d

Hello,

As RAC said, it's not really a physical disk. I assume this array behaves like an XP1024, so you have two parts: one is called the frontend and the other the backend. What your server sees is the frontend, which is a part of one Array Group (LUN) or several of them (LUSE), through the cache of the array.

So a big part of your server's I/Os never reach the backend (ACP/disks); they are absorbed by the array cache, and on an XP1024, as far as I remember, an Array Group can support between 600 and 800 IO/s. What often happens is that the same AG holds many disks seen by many servers, and all of them together must not exceed that global 800 IO/s limit on the AG.
Unfortunately, if you want data on the backend you'll have to get Performance Advisor (I think).
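
As a rough cross-check against the sar numbers above: each LUN shows only about 8 to 15 r+w/s, so even summing all of them gives on the order of 300 IO/s in total, which would sit comfortably under an 800 IO/s per Array Group ceiling even if they all shared a single AG (they almost certainly do not; xpinfo will tell you).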

When you run xpinfo -f /dev/rdsk/c7t3d1 you get the Cu:Ldev info, which is the "address" of your part of the physical disk in the array. You can also see the Array Group reference.
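
As a sketch, assuming the same c6/c7 device naming as in the sar output above, a small loop can map every LUN to its Cu:Ldev and Array Group in one pass (exact xpinfo output varies by version, so treat this as illustrative):

for d in /dev/rdsk/c6t*d* /dev/rdsk/c7t*d*
do
    echo "=== $d ==="
    xpinfo -f $d
done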

Hope this helps
Pat
Good judgement comes with experience. Unfortunately, the experience usually comes from bad judgement.