- How do I get FC60 I/O stats
09-19-2002 09:52 AM
I have an L2000 running 11.00 connected to an FC60 array.
There are two LUNs presented to the L2000 over four Fibre Channel paths (two primary, two alternate). Each LUN is a RAID 5 group of six 18 GB disks.
Right now sar -d shows the disks at 96% utilization.
Is there a way I can determine if it is the channels that are bottlenecked, or the spindles themselves?
TIA,
Sean
09-20-2002 07:34 AM
Re: How do I get FC60 I/O stats
Install MeasureWare to collect historical data; this can be very useful to you. To see graphs, use PerfView.
09-20-2002 10:22 AM
Re: How do I get FC60 I/O stats
I'm not that familiar with Glance. Can it give me percentages for a particular I/O card, and can it drill down and show percentages for each disk?
09-20-2002 04:30 PM
Re: How do I get FC60 I/O stats
They should all be in /opt/hparray/bin. I am nowhere near my array, or else I could have dug in some more.
09-25-2002 07:43 AM
Solution
You may have to do your own benchmarking. I/O maximums vary due to many factors, and I think Glance will only show rates, not percentages, since a percentage requires a maximum figure for the 100% mark, and that is indeterminate.
To do some rough benchmarking on your system, find a time when the system is quiescent and drive solid I/O to one LUN or volume group. Use 'dd' to move data at the maximum rate:
dd if=/dev/dsk/c?t?d? of=/dev/null bs=8192
(for instance; the block size should equal the LUN stripe size. Be careful with dd -- swap if= and of= and you wipe your LUN clean.)
Watch Glance while this is going on, or use MWA to capture data every 10 seconds. This is not a good "real world" figure, since LVM cannot act like 'dd', but it is a good indication of the maximum I/O the channel and LUN can sustain.
Other things to try, for each LUN, to set rough benchmarks, would be:
1> Copy a large file from each of the file systems on the array to /dev/null
2> Copy a large file from a root volume (/tmp) to each FS on your array, and also copy them back (testing both read and write speeds).
3> Copy a large file from one array FS to another array FS, and back. This runs the array I/O channels, both directions, at max (well, LVM max).
4> If you can do this before LVM is set up on the array, you can do all these things with 'dd' as well, reading and writing at max channel and array speeds. This is as close to the 100% figure for your system as you can get. Most people can't do 'dd' this way, since they have LVM running all the LUNs...
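The timed-dd idea above can be sketched as a small script. The scratch path, block size, and file size below are assumptions chosen for illustration; on a real FC60 LUN you would read the raw device (as in the dd line above) rather than a file, and you would time each dd run to get a throughput figure:

```shell
# Rough sequential I/O benchmark sketch (paths and sizes are hypothetical).
# On the real array you would read the raw LUN instead, e.g.:
#   dd if=/dev/dsk/c?t?d? of=/dev/null bs=8192
# The file-based version below is safe to run anywhere.
SCRATCH=/tmp/ddbench.$$   # hypothetical scratch file
BS=8192                   # block size; match your LUN stripe size
COUNT=1280                # 1280 x 8 KB = 10 MB test file

# Sequential write: time this run and divide bytes moved by seconds.
dd if=/dev/zero of="$SCRATCH" bs="$BS" count="$COUNT" 2>/dev/null

# Sequential read back to /dev/null: tests the read path.
dd if="$SCRATCH" of=/dev/null bs="$BS" 2>/dev/null

# Clean up the scratch file.
rm -f "$SCRATCH"
```

Running this while sar -d or Glance is sampling shows which device the load actually lands on.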
One other thing to think about: the FC60 supports up to 31 LUNs, and a LUN is the smallest disk unit that HP-UX can see. Many people just add LUNs to VGs and run, which is all right, but with, say, two LUNs in a VG, you will fill one LUN before extents in the other LUN are used at all. You can improve performance by having multiple LUNs in a VG and striping across them, so all get used in round-robin fashion.
This is not a novel concept, just one that is hard to do after the fact, and a lot of folks don't think about it up front. I have seen startling performance improvements by adding striping across three LUNs in a VG, rather than just one at a time.
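As a sketch of the striped layout described above: the VG and LV names are hypothetical, and the commands are shown for illustration only (HP-UX LVM syntax), not meant to be run as-is.

```shell
# Hypothetical names (vg01, lvdata); illustration only, do not run blindly.
# Create a 1 GB logical volume striped across 3 LUNs (physical volumes)
# in vg01 with a 64 KB stripe size, so extents are allocated
# round-robin over all three LUNs:
#
#   lvcreate -i 3 -I 64 -L 1024 -n lvdata /dev/vg01
```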
Good luck, and be careful doing this; 'dd' makes a wonderful LVM eradicator. "Measure twice, cut once" works for computers too.
Regards, --bmr