Disk Enclosures

How do I get FC60 I/O stats

 
SOLVED
Sean OB_1
Honored Contributor

How do I get FC60 I/O stats

Hello.

I have an L2000 running 11.00 connected to an FC60 array.

There are two LUNs connected to the L2000 over four Fibre Channel links (two primary, two alternate). Each LUN has six 18 GB disks in a RAID 5 group.

Right now sar -d shows the disks at 96% usage.

Is there a way I can determine whether the bottleneck is the channels or the spindles themselves?
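
In case it helps, this is the kind of sampling I have been looking at (a sketch; the interval and count are arbitrary):

sar -d 5 12     # sample every 5 seconds, 12 times; %busy, avque, avwait and avserv per device file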

TIA,

Sean
4 REPLIES
Joaquin Gil de Vergara
Respected Contributor

Re: How do I get FC60 I/O stats

Install Glance Plus to see real-time statistics.
Install MeasureWare to collect historical data; this can be very useful to you. To see graphs, use PerfView.
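
For example (a minimal sketch, assuming the tools are installed under the standard /opt/perf/bin path):

/opt/perf/bin/mwa status     # check whether the MeasureWare agent is already logging
/opt/perf/bin/mwa start      # start the agent if it is not
/opt/perf/bin/glance         # interactive real-time display; the disk report shows per-device activity

The logged MeasureWare data can then be graphed in PerfView or pulled out with /opt/perf/bin/extract.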
Teach is the best way to learn
Sean OB_1
Honored Contributor

Re: How do I get FC60 I/O stats

I have Glance and MeasureWare on the server. I can get by-disk metrics, but that shows the I/O rate, not percentages. So I have stats, but I have no idea what the maximum I/O for a particular disk is.

I'm not that familiar with Glance. Can it give me percentages for a particular I/O card, and can it drill down to the disks and show percentages for each disk?
Ashwani Kashyap
Honored Contributor

Re: How do I get FC60 I/O stats

Have you tried any of the FC60 array manager commands?
They should all be in /opt/hparray/bin.

I am nowhere near my array, or else I could have dug in some more.
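
Something like this (a sketch; I believe the AM60 utilities live there, but treat the exact command names and options as assumptions and check the man pages first):

ls /opt/hparray/bin      # list whatever array manager utilities are installed
amdsp -i                 # (assumed) list the array IDs visible to the host
amdsp -a <ArrayID>       # (assumed) display status and configuration for one array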
Brian M Rawlings
Honored Contributor
Solution

Re: How do I get FC60 I/O stats

Sean:

You may have to do your own benchmarking. I/O maximums vary due to many factors, and I think Glance will only show the rate, not a percentage, since a percentage requires a maximum figure for the 100% mark, and that is indeterminate.

To do some rough benchmarking on your system, you need to find a time when the system is quiescent, and do solid I/O to one LUN or volume group. Use 'dd' to move data at the maximum rate:

dd if=/dev/dsk/c?t?d? of=/dev/null bs=8192 (for instance; the block size should equal the LUN stripe size. Be careful with dd -- get it backward and you wipe your LUN clean.)

Watch Glance while this is going on, or use MWA to capture data every 10 seconds, etc. This is not a good "real world" figure, since LVM cannot act like 'dd', but it is a good indication of the maximum I/O the channel and LUN can sustain.
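
For example, a timed read of one LUN (a sketch; the device file is a placeholder, and the count just bounds the test at about 800 MB so the run has a defined end):

timex dd if=/dev/dsk/c?t?d? of=/dev/null bs=8192 count=100000
# dd reports the records transferred; divide the total bytes moved by the
# elapsed (real) time that timex prints to get the sustained read rate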

Other things to try, for each LUN, to set rough benchmarks, would be (see the sketch after this list):
1> Copy a large file from each of the file systems on the array to /dev/null.
2> Copy a large file from a root volume (/tmp) to each FS on your array, and also copy it back (testing both read and write speeds).
3> Copy a large file from one array FS to another array FS, and back. This runs the array I/O channels, both directions, at max (well, LVM max).
4> If you can do these things before LVM is set up on the array, you can do them all with 'dd' as well, reading and writing at maximum channel and array speeds. This is as close to the 100% figure for your system as you can get. Most people can't use 'dd' this way, since they have LVM running on all the LUNs...
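
A rough sketch of tests 1 and 2 (the file names and mount points are placeholders; divide the file size by the elapsed time timex reports):

timex cp /array_fs1/bigfile /dev/null            # read test for one array file system
timex cp /tmp/bigfile /array_fs1/bigfile.test    # write test: root volume to array FS
timex cp /array_fs1/bigfile.test /tmp/bigfile.2  # and read it back to the root volume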

One other thing to think about: the FC60 supports up to 31 LUNs, and a LUN is the smallest unit of disk that HP-UX can see. Many people just add LUNs to VGs and run, which is all right, but with, say, two LUNs in a VG, you will fill one LUN before extents on the other LUN are used at all. You can improve performance by having multiple LUNs in a VG and then striping across the LUNs, so they all get used in round-robin fashion.

This is not a novel concept, just one that is hard to do after the fact, and a lot of folks don't think about it up front. I have seen startling performance improvements from striping across three LUNs in a VG rather than using them one at a time.
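
A minimal sketch of what that looks like with LVM (the VG name, sizes, and stripe size are placeholders; check lvcreate(1M) before using them):

# assumes /dev/vgarray already contains three array LUNs as physical volumes
lvcreate -i 3 -I 64 -L 4096 -n lvol_data /dev/vgarray
# -i 3    stripe across three physical volumes (LUNs)
# -I 64   stripe size in KB -- pick something sensible for the array's own stripe
# -L 4096 logical volume size in MB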

Good luck, and be careful doing this; 'dd' makes a wonderful LVM eradicator. "Measure twice, cut once" works for computers too.

Regards, --bmr
We must indeed all hang together, or, most assuredly, we shall all hang separately. (Benjamin Franklin)