Operating System - HP-UX

Re: lvmpvg and load balancing

 
R.O.
Esteemed Contributor

lvmpvg and load balancing

Hi,

In a volume group with PVGs (11.23), apart from alternating the disk paths, does also alternating the paths listed in "/etc/lvmpvg" have any effect on load balancing?

Regards,
"When you look into an abyss, the abyss also looks into you"
4 REPLIES
Torsten.
Acclaimed Contributor

Re: lvmpvg and load balancing

Using physical volume groups only assures that one side of a mirror is in one group, the other side in the other group.

Hope this helps!
Regards
Torsten.

__________________________________________________
There are only 10 types of people in the world -
those who understand binary, and those who don't.

__________________________________________________
No support by private messages. Please ask the forum!

If you feel this was helpful please click the KUDOS! thumb below!   
James R. Ferguson
Acclaimed Contributor

Re: lvmpvg and load balancing

Hi:

In 11.23 and prior, alternate links are purely active/passive. That is, all I/O uses the primary (first link defined in '/etc/lvmtab'). Secondary links are only for high-availability should the primary fail. No load-balancing occurs.
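A quick way to check which path is the primary for a given disk (the VG name below is made up; adjust for your system):

# /etc/lvmtab is binary, but strings(1) shows the PV paths in the
# order they were added -- the first path listed for a disk is its primary link
strings /etc/lvmtab

# vgdisplay -v also lists every path; if I recall correctly, the
# alternate links are shown beneath the primary PV Name
vgdisplay -v /dev/vg01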

If you add a product like EMC Powerpath, then you can have I/O among all pvpaths (alternate links) concurrently.

Again, though, as Torsten noted, all of this falls outside the scope and objective of '/etc/lvmpvg'.

Regards!

...JRF...
Bob_Vance
Esteemed Contributor

Re: lvmpvg and load balancing

Actually, PVGs have little to do with load balancing, as you described it.

To achieve that load balancing (on HP-UX releases before 11.31), you must manually change the PV link order within the volume group itself, via vgreduce/vgextend.
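For example (device files and VG name are invented here; the point is just that whichever path you add back last ends up as the alternate):

# Suppose c1t0d0 is currently the primary link and c2t0d0 the alternate,
# and you want this LUN's I/O to go out the other HBA.

# Drop the current primary path (the alternate keeps the PV reachable) ...
vgreduce /dev/vg01 /dev/dsk/c1t0d0

# ... then add it back, so it becomes the last link in /etc/lvmtab,
# i.e. the alternate; c2t0d0 is now the primary for this LUN.
vgextend /dev/vg01 /dev/dsk/c1t0d0

Do that for half of your LUNs and you have, in effect, spread the primary paths across both HBAs.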

As Torsten says, PVG applies to mirroring.

If you are mirroring, then two (or 3) I/Os are done, and *properly* configured mirroring will make sure that the mirroring I/Os go over different HBAs (controllers).
So, in this sense, there could be load balancing.

PVGs simply make it easier to maintain "proper" mirroring when extending LVOLs and VGs.



When you mirror an LVOL, there are 3 possible allocation policies (see the lvcreate sketch after the list):

1) no restriction
this actually allows you to mirror to the SAME PV !

((Why would you ever do that, you ask?
The only reason I've ever thought of is that you want to
be able to split a mirror for backup or testing and
you don't have, or don't want to waste, extra disks.
Contrived, I know.
))


2) strict
mirrored extents must be on different PVs.


3) strict-PVG
mirrored extents must be on PVs within different PVGs. (Duh ;>)
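In lvcreate terms (from memory, so double-check the -s values on your box; names and sizes are invented), those map roughly to:

lvcreate -L 512 -m 1 -s n -n lv_a /dev/vg01    # 1) no restriction
lvcreate -L 512 -m 1 -s y -n lv_b /dev/vg01    # 2) strict: copies on different PVs
lvcreate -L 512 -m 1 -s g -n lv_c /dev/vg01    # 3) strict-PVG: copies in different PVGs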


The original point of PVGs was to make sure that mirroring went across two different controllers (HBAs), so that if you lost a controller, all mirrored extents would still be available on PVs on the other controller.

So, you create the PVGs in /etc/lvmpvg, making *sure* that the devices that you specify are actually on the different controllers.
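Something like this, as a sketch (device files are invented; the point is that PVG0 holds only disks on one controller and PVG1 only disks on the other):

VG      /dev/vg01
PVG     PVG0
/dev/dsk/c1t0d0
/dev/dsk/c1t1d0
PVG     PVG1
/dev/dsk/c2t0d0
/dev/dsk/c2t1d0

You can also let LVM maintain the file for you by naming the PVG when you extend the VG, e.g. "vgextend -g PVG0 /dev/vg01 /dev/dsk/c1t0d0", if I remember the option correctly.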

Then, whenever you create a mirrored LVOL, you simply specify:
-m for mirroring
-s g for strict-by-PVG
And, when you extend such an LVOL's size, if you do nothing special it will automatically mirror across the controllers.
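A rough end-to-end sketch (names and sizes are invented):

# Create a 1 GB LVOL with one mirror copy, strict by PVG
lvcreate -L 1024 -m 1 -s g -n lv_data /dev/vg01

# Grow it later; the new extents and their mirror copies are again
# allocated PVG-strict, i.e. across the controllers, with no extra flags
lvextend -L 2048 /dev/vg01/lv_data

# See which PVs each copy landed on
lvdisplay -v /dev/vg01/lv_data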

So, the existence of PVGs only made your mirroring across HBAs easier.


bv
"The lyf so short, the craft so long to lerne." - Chaucer
Ismail Azad
Esteemed Contributor

Re: lvmpvg and load balancing

Hola R.O,

Well, the policies Bob mentioned are also termed *allocation* policies, under the umbrella of mirroring policies. Looked at from another angle, the whole point is addressing a single point of failure, and here the component under consideration is the HBA, or let's say the controller: if the HBA or controller were a single point of failure and it failed, that would defeat the purpose of mirroring.

As JRF mentioned, the classic (and now deprecated) PV link normally transfers load across only one path, which is the limitation of a PV link. However, I would be interested to know why you thought that /etc/lvmpvg would have something to do with load balancing.

Running the load across all paths to a LUN is by far the most amazing feature of 11.31. People can say goodbye to EMC PowerPath, Secure Path, etc., because multipathing is now *native* to HP-UX in 11i v3.
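For example, on 11.31 (going from memory on the syntax, so treat this as a sketch with a made-up device file):

# Map the legacy device files to the new agile (persistent) DSFs
ioscan -m dsf

# Show the load balancing policy the native stack uses for a LUN
scsimgr get_attr -D /dev/rdisk/disk4 -a load_bal_policy

# Change it, e.g. to least_cmd_load or round_robin
scsimgr set_attr -D /dev/rdisk/disk4 -a load_bal_policy=least_cmd_load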

Regards
Ismail Azad
Read, read and read... Then read again until you read "between the lines".....