
FC60 LVM performance and iostat

 
Charles McCary
Valued Contributor

FC60 LVM performance and iostat

Group,

Here's the scenario:

I've got an FC60 disk array shared between two N-Class machines in a Serviceguard environment.

I've built LUNs that are used by lvols. I've striped these lvols over the LUNs to increase performance. When I added these LUNs to the VG, I added both the primary and the alternate paths AND I alternated which path should be the primary, to allow the paths to share the data load.
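For reference, here's roughly how the paths were added (the device files below are examples, not my real ones):

   # LUN 0: controller A path added first, so it becomes the primary link
   pvcreate /dev/rdsk/c4t0d0
   vgextend /dev/vg01 /dev/dsk/c4t0d0    # primary (controller A)
   vgextend /dev/vg01 /dev/dsk/c6t0d0    # alternate (controller B)

   # LUN 1: order reversed, so controller B is the primary
   pvcreate /dev/rdsk/c6t0d1
   vgextend /dev/vg01 /dev/dsk/c6t0d1    # primary (controller B)
   vgextend /dev/vg01 /dev/dsk/c4t0d1    # alternate (controller A)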

Now, when I do an iostat, it shows only the primary path as being used, when it was my understanding that when you added both paths to the VG, LVM would use both paths for accessing the array.

Is this an iostat problem? Sar data only shows the primary path.


thanks,

C
8 REPLIES
Insu Kim
Honored Contributor

Re: FC60 LVM performance and iostat

From the FC60 manual:
When a LUN is bound, you must identify which disk array controller owns the LUN. The controller that is assigned ownership serves as the primary I/O path to the LUN.
The other controller serves as the secondary path to the LUN.

The primary I/O path established using LVM defines the owning controller for the LUN.
This may override the controller ownership defined when the LUN was bound. For example, if controller A was identified as the owning controller when the LUN was bound, and LVM subsequently established the primary path to the LUN through controller B, controller B becomes the owning controller.
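For example (device files are illustrative), if LUN 0 was bound with controller A as its owner but the controller B device file is added to the volume group first, B becomes the owning controller:

   vgextend /dev/vg01 /dev/dsk/c6t0d0   # path through controller B: B now owns the LUN
   vgextend /dev/vg01 /dev/dsk/c4t0d0   # path through controller A: alternate link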

My thought is that you should see I/O through both controller A and controller B to the data on the LUNs if the FC60 is configured properly to balance the I/O load.
Try other tools such as Glance or sar to make sure that a single controller is not being overloaded with I/O requests.
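For example, to watch per-device activity on both paths (the intervals are arbitrary):

   sar -d 5 12    # 12 samples of per-device utilisation at 5-second intervals
   iostat 5       # per-device throughput every 5 seconds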

Hope this helps,
Never say "no" first.
David Navarro
Respected Contributor

Re: FC60 LVM performance and iostat

Hi all,
The advanced user's guide shows how to change the controller ownership of a LUN:

amcfg -M -c

As I understand it, you can change the primary path of each LUN, so you can assign half of the LUNs to controller A and the other half to controller B, and make the same arrangement in HP-UX. I also understand that if you add the device files to the VG in the wrong order, the ownership of the LUN can change, but I'm not fully sure on that point.

David
Charles McCary
Valued Contributor

Re: FC60 LVM performance and iostat

Other info that I neglected to mention: we have three disk cabinets containing 10 disks each. When I built the LUNs, I built them using one disk from each cabinet, so the LUNs are laid out for performance. Then I added 4 of these LUNs to a VG and built each lvol striped across all 4 LUNs.
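The lvols were created along these lines (the name and size are examples):

   # 4-way stripe across the 4 LUNs in the VG, 64 KB stripe size
   lvcreate -i 4 -I 64 -L 4096 -n lvol1 /dev/vg01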

Glance shows similar results, with the primary being used and the alternate showing no use.
Charles McCary
Valued Contributor

Re: FC60 LVM performance and iostat

Guys,

thanks for the help. I think I've figured it out.

When adding the disks, you're basically taking advantage of the two paths by choosing which path will be the primary and alternating that choice from LUN to LUN. That's how I've got it laid out, and I'm OK with how it's working now; I was just a little confused about how LVM was working (or wasn't working).

tx,

C
David Navarro
Respected Contributor

Re: FC60 LVM performance and iostat

I can see that you're looking for the best performance. If you're not planning to upgrade the FC60 with new disks, I suggest you configure the SC10s in split-bus mode and use all six SCSI channels of the FC60; then you can make a 6-disk LUN spread across all six SCSI channels of the FC60 box. You would then have two LUNs and could assign one to each controller on the FC60. If you do this, you can vgcreate without OS striping; I don't think that would hurt performance much (maybe)...
James R. Ferguson
Acclaimed Contributor

Re: FC60 LVM performance and iostat

Hi Charles:

You did everything as you should have done. However, LVM does *not* load balance between its primary and its alternate links. The primary link is always used unless it fails, and then the alternate is selected. What you have done is provide high availability for access to your disk array.

A welcome change comes with HP-UX 11.11 and Veritas Volume Manager (VxVM) version 3.1. With this, you *can* do load balancing.
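You can see which link is the standby with vgdisplay (the VG name is an example); the path flagged as an alternate link carries no I/O until the primary fails:

   vgdisplay -v /dev/vg01 | grep "PV Name"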

Regards!

...JRF...
JACQUET
Frequent Advisor

Re: FC60 LVM performance and iostat

Hello,

I just want to underline a point: LUNs in the FC60 (as in other arrays) already use striping when bound at RAID 5. If you built your LUNs at RAID 5 level, I don't think LVM striping on top of that would increase performance much; it would only be worthwhile for RAID 1.

Regards.

PJA
Tim D Fulford
Honored Contributor

Re: FC60 LVM performance and iostat

Charles

This is not really answering your question, but if you run MeasureWare, a good way of seeing how well your LUNs are performing is to take the average disk utilisation, multiply it by 10, and divide by the average physical I/O rate. This gives you the average I/O time in milliseconds.

AvgIOTime ~ [BYDSK_DISK_UTIL * 10]/[BYDSK_PHYS_IO_RATE]

You may be surprised how low it is. You can also estimate the transfer rate by multiplying the block size by the I/O rate.

If you then look up your disks' latency, seek, and transfer rates and compare them with the above, you get a fairly good indicator of how well your disks are performing.
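For example, with made-up numbers:

   BYDSK_DISK_UTIL    = 40    (percent busy over the interval)
   BYDSK_PHYS_IO_RATE = 80    (physical I/Os per second)

   AvgIOTime ~ (40 * 10) / 80 = 5 ms per I/O
   Transfer rate ~ 8 KB block size * 80 I/O per second = 640 KB/s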

Just a thought

Tim