Hi,
maybe you could use some numbers for comparison...
I've simply run
time find / 2>/dev/null 1>/dev/null & sar -d 30 1
for explanation:
- my system has two stone-aged 4.3GB SCSI drives; only one of them gets searched during those 30 seconds.
- I ran this right after boot, so the buffer cache is not too much of an issue.
- please understand that even a blazingly fast 15k drive would show 75% busy in that situation, but it would finish the task in 1/10 of the time. Your disks don't show high busy rates or extreme queue lengths, so I wouldn't point at them first.
Those disks don't have much to do at all.
(see below)
HP-UX snowwhit B.11.11 U 9000/800 01/13/05
23:35:57 device %busy avque r+w/s blks/s avwait avserv
23:36:07 c0t6d0 69.73 1.01 101 978 10.66 17.33
high busy, modest I/O rate, few blocks/sec -> this disk barely keeps up with its workload.
Not the case with yours, it seems to me.
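If you want to eyeball sar -d output without reading every line, a quick awk filter like the sketch below flags devices over a busy threshold. The 50% limit is an arbitrary choice of mine, not an HP-UX recommendation, and the sample data is inlined here just for illustration; normally you would pipe `sar -d 30 1` straight into the awk.

```shell
# Flag devices whose %busy exceeds a threshold in sar -d style output.
# Data lines have 8 fields: time device %busy avque r+w/s blks/s avwait avserv
# The header line also has 8 fields, but its $3 is "%busy", so the numeric
# pattern on $3 skips it.
awk -v limit=50 '
    NF == 8 && $3 ~ /^[0-9.]+$/ {
        if ($3 + 0 > limit)
            printf "%s: %.0f%% busy (avque %s, %s r+w/s)\n", $2, $3, $4, $5
    }
' <<'EOF'
23:35:57 device %busy avque r+w/s blks/s avwait avserv
23:36:07 c0t6d0 69.73 1.01 101 978 10.66 17.33
EOF
```

On the sample above this prints the one disk that is nearly 70% busy; on your output it would print nothing, which is rather the point.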
To get the raw disk throughput I often just use the below.
snowwhite:/var/adm/syslog# time dd if=/dev/rdsk/c0t6d0 of=/dev/null bs=1024k count=100
100+0 records in
100+0 records out
real 0m21.39s
that's roughly 5MB/s (100MB in 21.39s); the disk isn't exactly fast, but it's within its specs.
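For the arithmetic: 100 records of 1MB in 21.39 seconds works out to just under 5MB/s, which any awk can confirm:

```shell
# Throughput of the dd run above: 100 x 1MB blocks in 21.39 s.
awk 'BEGIN { printf "%.1f MB/s\n", 100 / 21.39 }'
# -> 4.7 MB/s
```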
@work, current drives spit out 75MB/s, unless 'someone' connected the tape/CD-ROM to the fast U160 HBA and the disks to UW.
With real applications the transfer rate is not that much of an issue, but it still means that a current disk would have needed only 1/15 of the time mine did. As simple as that.
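To put that ratio in numbers: the same 100MB that took my disk 21.4s would take a 75MB/s drive about 1.3s, roughly a factor of 16 against the measured time, in the same ballpark as the 1/15 from the rated 5MB/s:

```shell
# 100 MB at 75 MB/s vs. the ~21.4 s measured above.
awk 'BEGIN {
    fast = 100 / 75          # seconds on a 75 MB/s drive
    printf "%.1f s vs 21.4 s, ratio ~%.0f\n", fast, 21.4 / fast
}'
# -> 1.3 s vs 21.4 s, ratio ~16
```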
Anyway, as has already been said - look at the application (maybe even think about using tusc to gather some low-level syscall traces); the disks may be a bottleneck, but in your sar output they don't even get the chance to prove it. :)
yesterday I stood at the edge. Today I'm one step ahead.