07-26-2001 05:57 AM
FC60 LVM performance and iostat
Here's the scenario:
I've got an FC60 disk array shared between two N-Class machines in a Serviceguard environment.
I've built luns that are used by lvols. I've striped these lvols over the luns to increase performance. When I added these luns to the vg, I added both the primary and the alternate paths AND I alternated which path should be the primary, to allow the paths to share the data load.
Now - when I do an iostat, it shows only the primary path as being used. My understanding was that when you add both paths to the vg, LVM uses both paths to access the array.
Is this an iostat problem? Sar data also shows only the primary path.
thanks,
C
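One way to see which device file LVM treats as the primary for each PV is `vgdisplay -v`; on HP-UX 11.x, alternate paths are listed under the same physical volume. A sketch only - the volume group and `c#t#d#` device files below are invented examples:

```shell
# Show the VG's physical volumes; an alternate PV path appears as an
# "Alternate Link" entry under the same PV (HP-UX 11.x LVM).
# /dev/vg01 and the device files are made-up names for illustration.
vgdisplay -v /dev/vg01

# iostat and sar report per device file, so an idle alternate link
# (e.g. /dev/dsk/c6t0d0) shows zero activity while the primary
# (e.g. /dev/dsk/c4t0d0) carries all the I/O for that LUN.
iostat 5 3
```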
07-26-2001 06:17 AM
Re: FC60 LVM performance and iostat
When a LUN is bound, you must identify which disk array controller owns the LUN. The controller that is assigned ownership serves as the primary I/O path to the LUN.
The other controller serves as the secondary path to the LUN.
The primary I/O path established using LVM defines the owning controller for the LUN.
This may override the controller ownership defined when the LUN was bound. For example, if controller A was identified as the owning controller when the LUN was bound, and LVM subsequently established the primary path to the LUN through controller B, controller B becomes the owning controller.
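On the LVM side, the primary path for a LUN is simply the first device file added for that physical volume; the second becomes the alternate. A hedged sketch of alternating primaries across the two controllers - the device names are invented, with `c4*` paths assumed to run through controller A and `c6*` through controller B, and the `/dev/vg01` group file assumed to exist already:

```shell
# LUN 0: controller A path added first, so it is the primary link;
# the controller B path added next becomes the alternate.
vgcreate /dev/vg01 /dev/dsk/c4t0d0
vgextend /dev/vg01 /dev/dsk/c6t0d0

# LUN 1: order reversed, so controller B is the primary for this LUN
# and controller A the alternate - spreading ownership across both.
vgextend /dev/vg01 /dev/dsk/c6t0d1 /dev/dsk/c4t0d1
```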
My thought is that you should see I/O through both controller A and controller B across all the LUNs if the FC60 is configured to balance the I/O load.
Try other tools such as Glance or sar to make sure that one controller is not overloaded with I/O requests.
Hope this helps,
07-26-2001 06:35 AM
Re: FC60 LVM performance and iostat
The advanced user's guide shows how to change the controller ownership of a LUN:
amcfg -M
My understanding is that you can change the primary path of each LUN, assigning half of the LUNs to controller A and the other half to controller B, and then do the same on the HP-UX side. I also understand that if you add the device files to the VG in the wrong order, the ownership of the LUN can change - though I'm not fully sure on that point.
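If a LUN's primary link does end up on the wrong controller, one way to flip it from the LVM side (rather than with `amcfg`) is to drop and re-add the unwanted primary path so it becomes the alternate. A sketch with invented device names; `vgreduce` here removes only the path, not the data, but treat the whole sequence as an assumption to verify on a test VG first:

```shell
# Goal: make the controller-B path (c6t0d1) primary for this LUN.
# Removing the current primary link leaves the alternate in charge...
vgreduce /dev/vg01 /dev/dsk/c4t0d1
# ...and re-adding the old path makes it the new alternate link.
vgextend /dev/vg01 /dev/dsk/c4t0d1
```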
David
07-26-2001 06:40 AM
Re: FC60 LVM performance and iostat
Glance shows similar results: the primary path is in use and the alternate shows no activity.
07-26-2001 06:46 AM
Re: FC60 LVM performance and iostat
Thanks for the help. I think I've figured it out.
When adding the disks, you take advantage of the two paths by choosing which one is primary and alternating that choice across LUNs. That's the way I've got it laid out, and I'm OK with how it's working now; I was just a little confused about what LVM was (or wasn't) doing.
tx,
C
07-26-2001 06:54 AM
Re: FC60 LVM performance and iostat
07-26-2001 07:42 AM
Re: FC60 LVM performance and iostat
You did everything as you should have. However, LVM does *not* load balance between its primary and alternate links. The primary link is always used unless it fails, at which point the alternate is selected. What you have done is provide high availability for your disk array.
A welcome change comes with HP-UX 11.11 and Veritas Volume Manager (VxVM) version 3.1. With this, you *can* do load balancing.
Regards!
...JRF...
07-26-2001 07:44 AM
Re: FC60 LVM performance and iostat
I just want to underline a point: LUNs on the FC60 (as on other arrays) already use striping within a RAID 5 set. If you built the LUN at RAID 5, I don't think LVM striping will improve performance much; it would mainly help for RAID 1.
Regards.
PJA
07-30-2001 02:04 AM
Re: FC60 LVM performance and iostat
This is not really answering your question. But if you run MeasureWare, a good way of seeing how well your LUNs are performing is to take the average disk utilisation, multiply it by 10, and divide by the average physical I/O rate. This gives you the average I/O time in milliseconds.
AvgIOTime ~ [BYDSK_DISK_UTIL * 10]/[BYDSK_PHYS_IO_RATE]
You may be surprised how low it is. You can also estimate the transfer rate by multiplying the block size by the I/O rate.
If you then look up your disk's latency, seek, and transfer rates and compare them with the above, it gives a fairly good indicator of how well your disks are performing.
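The rule of thumb above, worked with made-up sample numbers (a disk 40% busy at 50 physical I/Os per second, with an assumed 8 KB block size):

```shell
# AvgIOTime ~ (BYDSK_DISK_UTIL * 10) / BYDSK_PHYS_IO_RATE
# 40% busy at 50 I/Os per second -> 8.0 ms per I/O on average.
awk 'BEGIN { printf "%.1f ms\n", (40 * 10) / 50 }'

# Estimated transfer rate: block size * I/O rate,
# e.g. 8 KB blocks at 50 I/Os per second -> 400 KB/s.
awk 'BEGIN { printf "%d KB/s\n", 8 * 50 }'
```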
Just a thought
Tim