
kaushikbr
Frequent Advisor

LVM Mirroring issues when using PV Groups

Hi

We have a two-node Serviceguard cluster running HP-UX 11.23, with two DS2405 disk shelves of 13 disks each.

All 26 disks on the two shelves belong to one volume group, organized as two PVGs of 13 disks each. These PVGs are configured as mirror sets.

This volume group is a Serviceguard package volume group.

The problem: the mirroring was fine on one machine. Since it is a Serviceguard cluster, I took a VG export from node1 and imported it on node2, then updated the /etc/lvmpvg file on node2.

When I moved the package and volume group from node1 to node2, we found that the mirror sets contained disks from both PV groups.

For example, ideally:
disks 1, 2, 3 in PVG1 / mirror set 1
disks 4, 5, 6 in PVG2 / mirror set 2

After the package is moved to the alternate node, I have:
disks 1, 2, 5 in mirror set 1
disks 3, 4, 6 in mirror set 2

Not sure what is going on.

Thanks in advance for all your help.
Regards
Kaushik

========================
lvdisplay -v /dev/vgwork/lvol_work | more
--- Logical volumes ---
LV Name /dev/vgwork/lvol_work
VG Name /dev/vgwork
LV Permission read/write
LV Status available/syncd
Mirror copies 1
Consistency Recovery MWC
Schedule parallel
LV Size (Mbytes) 1818624
Current LE 56832
Allocated PE 113664
Stripes 0
Stripe Size (Kbytes) 0
Bad block on
Allocation PVG-strict/distributed
IO Timeout (Seconds) default

--- Distribution of logical volume ---
PV Name LE on PV PE on PV
/dev/dsk/c4t2d0 4372 4372
/dev/dsk/c4t3d0 4372 4372
/dev/dsk/c4t4d0 4372 4372
/dev/dsk/c4t5d0 4372 4372
/dev/dsk/c4t6d0 4372 4372
/dev/dsk/c4t7d0 4372 4372
/dev/dsk/c4t8d0 4372 4372
/dev/dsk/c4t9d0 4372 4372
/dev/dsk/c4t10d0 4372 4372
/dev/dsk/c4t11d0 4371 4371
/dev/dsk/c4t12d0 4371 4371
/dev/dsk/c4t13d0 4371 4371
/dev/dsk/c4t14d0 4371 4371
/dev/dsk/c5t2d0 4372 4372
/dev/dsk/c5t3d0 4372 4372
/dev/dsk/c5t4d0 4372 4372
/dev/dsk/c5t5d0 4372 4372
/dev/dsk/c5t6d0 4372 4372
/dev/dsk/c5t7d0 4372 4372
/dev/dsk/c5t8d0 4372 4372
/dev/dsk/c5t9d0 4372 4372
/dev/dsk/c5t10d0 4372 4372
/dev/dsk/c5t11d0 4371 4371
/dev/dsk/c5t12d0 4371 4371
/dev/dsk/c5t13d0 4371 4371
/dev/dsk/c5t14d0 4371 4371

--- Logical extents ---
LE PV1 PE1 Status 1 PV2 PE2 Status 2
00000 /dev/dsk/c4t2d0 00000 current /dev/dsk/c5t2d0 00000 current
00001 /dev/dsk/c4t3d0 00000 current /dev/dsk/c5t3d0 00000 current
00002 /dev/dsk/c4t4d0 00000 current /dev/dsk/c5t4d0 00000 current
00003 /dev/dsk/c4t5d0 00000 current /dev/dsk/c5t5d0 00000 current
00004 /dev/dsk/c4t6d0 00000 current /dev/dsk/c5t6d0 00000 current
00005 /dev/dsk/c4t7d0 00000 current /dev/dsk/c5t7d0 00000 current
00006 /dev/dsk/c4t8d0 00000 current /dev/dsk/c5t8d0 00000 current
00007 /dev/dsk/c4t9d0 00000 current /dev/dsk/c5t9d0 00000 current
00008 /dev/dsk/c4t10d0 00000 current /dev/dsk/c5t10d0 00000 current
00009 /dev/dsk/c5t11d0 00000 current /dev/dsk/c4t11d0 00000 current
00010 /dev/dsk/c5t12d0 00000 current /dev/dsk/c4t12d0 00000 current
00011 /dev/dsk/c5t13d0 00000 current /dev/dsk/c4t13d0 00000 current
00012 /dev/dsk/c5t14d0 00000 current /dev/dsk/c4t14d0 00000 current
00013 /dev/dsk/c4t2d0 00001 current /dev/dsk/c5t2d0 00001 current
00014 /dev/dsk/c4t3d0 00001 current /dev/dsk/c5t3d0 00001 current
00015 /dev/dsk/c4t4d0 00001 current /dev/dsk/c5t4d0 00001 current
00016 /dev/dsk/c4t5d0 00001 current /dev/dsk/c5t5d0 00001 current
=======================================

vgdisplay -v vgwork
--- Volume groups ---
VG Name /dev/vgwork
VG Write Access read/write
VG Status available, exclusive
Max LV 255
Cur LV 1
Open LV 1
Max PV 64
Cur PV 26
Act PV 26
Max PE per PV 35003
VGDA 52
PE Size (Mbytes) 32
Total PE 113724
Alloc PE 113664
Free PE 60
Total PVG 2
Total Spare PVs 0
Total Spare PVs in use 0

--- Logical volumes ---
LV Name /dev/vgwork/lvol_work
LV Status available/syncd
LV Size (Mbytes) 1818624
Current LE 56832
Allocated PE 113664
Used PV 26


--- Physical volumes ---
PV Name /dev/dsk/c4t2d0
PV Status available
Total PE 4374
Free PE 2
Autoswitch On
Proactive Polling On

PV Name /dev/dsk/c4t3d0
PV Status available
Total PE 4374
Free PE 2
Autoswitch On
Proactive Polling On

PV Name /dev/dsk/c4t4d0
PV Status available
Total PE 4374
Free PE 2
Autoswitch On
Proactive Polling On

PV Name /dev/dsk/c4t5d0
PV Status available
Total PE 4374
Free PE 2
Autoswitch On
Proactive Polling On

PV Name /dev/dsk/c4t6d0
PV Status available
Total PE 4374
Free PE 2
Autoswitch On
Proactive Polling On

PV Name /dev/dsk/c4t7d0
PV Status available
Total PE 4374
Free PE 2
Autoswitch On
Proactive Polling On

PV Name /dev/dsk/c4t8d0
PV Status available
Total PE 4374
Free PE 2
Autoswitch On
Proactive Polling On

PV Name /dev/dsk/c4t9d0
PV Status available
Total PE 4374
Free PE 2
Autoswitch On
Proactive Polling On

PV Name /dev/dsk/c4t10d0
PV Status available
Total PE 4374
Free PE 2
Autoswitch On
Proactive Polling On

PV Name /dev/dsk/c4t11d0
PV Status available
Total PE 4374
Free PE 3
Autoswitch On
Proactive Polling On

PV Name /dev/dsk/c4t12d0
PV Status available
Total PE 4374
Free PE 3
Autoswitch On
Proactive Polling On

PV Name /dev/dsk/c4t13d0
PV Status available
Total PE 4374
Free PE 3
Autoswitch On
Proactive Polling On

PV Name /dev/dsk/c4t14d0
PV Status available
Total PE 4374
Free PE 3
Autoswitch On
Proactive Polling On

PV Name /dev/dsk/c5t2d0
PV Status available
Total PE 4374
Free PE 2
Autoswitch On
Proactive Polling On

PV Name /dev/dsk/c5t3d0
PV Status available
Total PE 4374
Free PE 2
Autoswitch On
Proactive Polling On

PV Name /dev/dsk/c5t4d0
PV Status available
Total PE 4374
Free PE 2
Autoswitch On
Proactive Polling On

PV Name /dev/dsk/c5t5d0
PV Status available
Total PE 4374
Free PE 2
Autoswitch On
Proactive Polling On

PV Name /dev/dsk/c5t6d0
PV Status available
Total PE 4374
Free PE 2
Autoswitch On
Proactive Polling On

PV Name /dev/dsk/c5t7d0
PV Status available
Total PE 4374
Free PE 2
Autoswitch On
Proactive Polling On

PV Name /dev/dsk/c5t8d0
PV Status available
Total PE 4374
Free PE 2
Autoswitch On
Proactive Polling On

PV Name /dev/dsk/c5t9d0
PV Status available
Total PE 4374
Free PE 2
Autoswitch On
Proactive Polling On

PV Name /dev/dsk/c5t10d0
PV Status available
Total PE 4374
Free PE 2
Autoswitch On
Proactive Polling On

PV Name /dev/dsk/c5t11d0
PV Status available
Total PE 4374
Free PE 3
Autoswitch On
Proactive Polling On

PV Name /dev/dsk/c5t12d0
PV Status available
Total PE 4374
Free PE 3
Autoswitch On
Proactive Polling On

PV Name /dev/dsk/c5t13d0
PV Status available
Total PE 4374
Free PE 3
Autoswitch On
Proactive Polling On

PV Name /dev/dsk/c5t14d0
PV Status available
Total PE 4374
Free PE 3
Autoswitch On
Proactive Polling On


--- Physical volume groups ---
PVG Name PVG2
PV Name /dev/dsk/c4t2d0
PV Name /dev/dsk/c4t3d0
PV Name /dev/dsk/c4t4d0
PV Name /dev/dsk/c4t5d0
PV Name /dev/dsk/c4t6d0
PV Name /dev/dsk/c4t7d0
PV Name /dev/dsk/c4t8d0
PV Name /dev/dsk/c4t9d0
PV Name /dev/dsk/c4t10d0
PV Name /dev/dsk/c4t11d0
PV Name /dev/dsk/c4t12d0
PV Name /dev/dsk/c4t13d0
PV Name /dev/dsk/c4t14d0

PVG Name PVG3
PV Name /dev/dsk/c5t2d0
PV Name /dev/dsk/c5t3d0
PV Name /dev/dsk/c5t4d0
PV Name /dev/dsk/c5t5d0
PV Name /dev/dsk/c5t6d0
PV Name /dev/dsk/c5t7d0
PV Name /dev/dsk/c5t8d0
PV Name /dev/dsk/c5t9d0
PV Name /dev/dsk/c5t10d0
PV Name /dev/dsk/c5t11d0
PV Name /dev/dsk/c5t12d0
PV Name /dev/dsk/c5t13d0
PV Name /dev/dsk/c5t14d0



Torsten.
Acclaimed Contributor

Re: LVM Mirroring issues when using PV Groups

What were the exact vgexport/vgimport commands?

However, I don't think this is a problem as long as each disk has a mirror in a different chassis.

Hope this helps!
Regards
Torsten.

kaushikbr
Frequent Advisor

Re: LVM Mirroring issues when using PV Groups

Hi

Thanks for your reply.
Commands used:

vgexport -v -p -s -m /var/tmp/vgwork.map vgwork

Before importing, the usual:

mkdir /dev/vgwork
mknod
vgimport -v -s -m /var/tmp/vgwork.map vgwork
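
Spelled out with the group-file node included, the import sequence was along these lines (the minor number 0x010000 below is illustrative, not the value we actually used; it has to be unique on the node and, by convention, is kept identical on both cluster nodes):

mkdir /dev/vgwork
# LVM group file: character device, major 64; minor number is illustrative
mknod /dev/vgwork/group c 64 0x010000
vgimport -v -s -m /var/tmp/vgwork.map vgwork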

The concern is: if we lose an HBA or an entire disk shelf, how will this affect the mirror copies?

Regards
Kaushik
Ganesan R
Honored Contributor

Re: LVM Mirroring issues when using PV Groups

Hi,

That's strange. It is supposed to show exactly the same as on the first node.

As Torsten said, there is no problem as long as each disk has a mirror in a different chassis.

If you still wish to see PE1 from one enclosure and PE2 from the other, you can try this:

Reduce the mirror by explicitly specifying the c5 chassis, so that the LV keeps only the PE1 copies on the c4 controller. After that, extend the LV again; PE2 will then be distributed on c5.
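
A rough sketch of those commands (untested; disk names are taken from your lvdisplay output above, and the lvextend relies on your PVG-strict/distributed allocation policy to place the new copies in the other PVG):

# remove the mirror copies held on the c5 shelf (all thirteen c5 disks);
# note the LV is unmirrored until the lvextend below completes
lvreduce -m 0 /dev/vgwork/lvol_work \
    /dev/dsk/c5t2d0 /dev/dsk/c5t3d0 /dev/dsk/c5t4d0 /dev/dsk/c5t5d0 \
    /dev/dsk/c5t6d0 /dev/dsk/c5t7d0 /dev/dsk/c5t8d0 /dev/dsk/c5t9d0 \
    /dev/dsk/c5t10d0 /dev/dsk/c5t11d0 /dev/dsk/c5t12d0 /dev/dsk/c5t13d0 \
    /dev/dsk/c5t14d0

# re-create the mirror; PVG-strict allocation forces the new copies into the other PVG
lvextend -m 1 /dev/vgwork/lvol_work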
Best wishes,

Ganesh.
Torsten.
Acclaimed Contributor

Re: LVM Mirroring issues when using PV Groups

IMHO it will just continue: in the worst case you lose one copy of every mirrored pair, but you always keep one working disk per pair.

Imagine this:

original ==> mirror

failed ==> ok
ok ==> failed
ok ==> failed
...

Your data is OK, because one disk of each mirrored "pair" is OK.

Hope this helps!
Regards
Torsten.

Ganesan R
Honored Contributor

Re: LVM Mirroring issues when using PV Groups

Hi Again,

>>>
The concern is, if we lose an HBA / an entire disk shelf, how will this impact the mirror copies.
<<<

It will have no impact unless the other controller also fails, because the /etc/lvmpvg file guarantees that the two copies are split across the PVGs. Assume the c5 controller fails: the c4 enclosure/disks still hold one complete copy of the data. The only catch is that lvdisplay will keep showing the mixed ordering.
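
If you want to double-check that every extent really has one copy per controller, here is a quick sketch against the lvdisplay output (it assumes the cXtYdZ device naming shown above, where the controller identifies the shelf):

lvdisplay -v /dev/vgwork/lvol_work | \
awk '$2 ~ /^\/dev\/dsk\// && $5 ~ /^\/dev\/dsk\// {
    # split "/dev/dsk/c4t2d0" at "t": a[1] holds "/dev/dsk/c4" (the controller)
    split($2, a, "t"); split($5, b, "t");
    if (a[1] == b[1]) print "both copies on one shelf:", $0
}'

It should print nothing if the copies are properly split.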
Best wishes,

Ganesh.
kaushikbr
Frequent Advisor

Re: LVM Mirroring issues when using PV Groups

Hi Ganesan

I tried doing that. I reduced the LV to only the disks on one disk shelf, did a pvmove to move the contents from the odd disk to the right one, and mirrored it back again. Everything was OK. Since this is a Serviceguard cluster, I then exported and imported the VG configuration on the alternate node, modified the /etc/lvmpvg file, and started the package there. As soon as I failed the package over to the alternate node, everything went back to square one. Moving the package back to the primary node did not help either.

Regards
Kaushik
Ganesan R
Honored Contributor
Solution

Re: LVM Mirroring issues when using PV Groups

Hi kaushikbr,

Some time back I saw this kind of scenario, and per HP it is not an issue at all. You may see this type of output if the LV was created once and extended later on.

This is the exact HP explanation of this issue:

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
This is not a problem. Mirroring doesn't care which disk is the primary or
secondary, or how it is listed by lvdisplay -v. Strict allocation will not
allow extents to mirror to the same disk or to disks in the same PVG
(physical volume group).

There are several ways to fix this:

1.) The easiest and least impacting method.

a.) lvreduce -m 0 /dev/vgXX/lvolX /dev/dsk/cxtxdx (it doesn't matter
which disk, but be sure one of them is listed)

b.) lvextend -m 1 /dev/vgXX/lvolX /dev/dsk/cxtxdx (the disk that was
reduced)

2.) This method would require file systems to be unmounted.

a.) vgchange -a n /dev/vgXX

b.) vgchange -a y /dev/vgXX

3.) reboot the system

NOTE: It does not matter which disk is first, because as soon as the system
is rebooted or the VG is deactivated and then reactivated, the PV numbers are
used to determine which disk will be listed first.

See doc ULVMKBQA00000381 regarding pvnums.

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
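
For your VG, method 1 would look something like this (c5t2d0 is just one mirrored disk from your lvdisplay output; per the note above, any of them works). For method 2, halt the package first; since the VG is activated in exclusive mode, reactivation uses -a e:

# method 1: break and re-create the mirror on one disk
lvreduce -m 0 /dev/vgwork/lvol_work /dev/dsk/c5t2d0
lvextend -m 1 /dev/vgwork/lvol_work /dev/dsk/c5t2d0

# method 2: deactivate and reactivate the VG (package halted)
vgchange -a n /dev/vgwork
vgchange -a e /dev/vgwork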

Hope this clarifies your concern. If you have access to that document, you can read it.
Best wishes,

Ganesh.
kaushikbr
Frequent Advisor

Re: LVM Mirroring issues when using PV Groups

Hi Ganesan

That is a good document.

We learn something new every day.

Thank you all for the valuable comments.

Regards
Kaushik