added new devices to vgs now the snap jobs don't work

 
SOLVED

I've been pounding away at this all day and I'm nothing if not confused.

I added 2 devs to vg05 and 1 dev to vg06. The devices in those vgs get snapped and mounted on the same system as vg21 and vg22 respectively.

I've ensured that the snap source/clone pairings are correct: all 3 devs are different sizes from one another, and the snap would fail if a source and clone differed in size.

I've added the source devs to vg05 and vg06 and created the new raw lvols.

NOTE: vg04 and vg07 are also part of this process but no new devices were added. Everything still works for them.

My vgexport command succeeds:

vgexport -p -v -m /tmp/vg05.map /dev/vg05

Creating the vg directories works fine.
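For anyone following along, the directory-creation step is the usual mkdir/mknod pair on HP-UX. The minor number below (0x150000) is a placeholder, not taken from the thread; it must be unique among the group files already on the system:

```shell
# Create the device directory and group file for the clone VG.
# Major number 64 is the HP-UX LVM group driver; the minor number
# 0x150000 is hypothetical -- check the ones already in use with:
#   ll /dev/*/group
mkdir /dev/vg21
mknod /dev/vg21/group c 64 0x150000
```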

vgchgid works fine and the three new target devs all succeed on this command.

The vgimport command fails for vg21 and vg22:

/usr/local/sbin# vgimport -m /tmp/scripts/vg05.map -f /usr/local/sbin/vg21.infile /dev/vg21
vgimport: The Physical Volumes specified on the command line
do not belong to the same Volume Group.

/usr/local/sbin/vg21.infile contains:
/dev/dsk/c49t3d6
/dev/dsk/c54t3d6
/dev/dsk/c55t3d6
/dev/dsk/c49t9d5
/dev/dsk/c54t9d5
/dev/dsk/c55t9d5
/dev/dsk/c49t9d7
/dev/dsk/c54t9d7
/dev/dsk/c55t9d7

which are correct.

I'm confused about the result of the vgimport:

"vgimport: The Physical Volumes specified on the command line
do not belong to the same Volume Group."

Since vg21 doesn't even exist at this point, how can the PVs "not belong" to it?

I feel like I'm missing something simple here.

Any help would be appreciated.
4 REPLIES
smatador
Honored Contributor

Re: added new devices to vgs now the snap jobs don't work

Hi,
I think you have to check the VGID of each disk. There are some threads about reading the VGID, for example
http://forums11.itrc.hp.com/service/forums/questionanswer.do?threadId=523849
Make sure the VGID is the same on every PV; if it isn't, the vgimport cannot succeed.
When you ran vgchgid you mention the "3 new target devs". Try checking the VGID on those new targets first.
Hope it helps
Solution

Re: added new devices to vgs now the snap jobs don't work

When you issued the vgchgid, did you do it for *all* the disks in a single invocation?

Example: I have a VG with 2 disks (let's ignore alternate links for this example):

c6t0d0
c6t0d1

Now I add another disk to the VG, which is c6t0d2

I go through my vgexport and snapclone process, and when I get to the vgchgid bit, the *only* correct invocation of vgchgid is:

vgchgid /dev/rdsk/c6t0d0 /dev/rdsk/c6t0d1 /dev/rdsk/c6t0d2

Anything else is incorrect, so for example if I did this:

vgchgid /dev/rdsk/c6t0d0
vgchgid /dev/rdsk/c6t0d1
vgchgid /dev/rdsk/c6t0d2

I would get the errors you describe, also if I only did the vgchgid on the new disk:

vgchgid /dev/rdsk/c6t0d2

but didn't run it for the others, I'd get the errors you describe...

vgchgid writes a new VGID into the VG reserved area on the disk. IIRC this is made up of the machine identification number (i.e. the output of uname -i) concatenated with the time expressed in seconds past the epoch (1/1/70). So if you run it for just 1 disk in a VG, or run it multiple times for separate disks in a VG, you'll get a different VGID written into the VGRA on each disk - hence the problem you see.
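If it helps to verify this, the VGID each disk currently carries can be dumped straight from the LVM reserved area. The byte offsets below are the commonly quoted ITRC values and are an assumption on my part, not something confirmed in this thread:

```shell
# Dump 16 bytes starting at offset 8200 of each raw disk:
# the first 8 bytes are the PVID, the next 8 the VGID (assumed layout).
# Every PV in one VG must show the same VGID in that second field.
for d in /dev/rdsk/c49t3d6 /dev/rdsk/c49t9d5 /dev/rdsk/c49t9d7; do
  echo "== $d"
  xd -An -j8200 -N16 "$d"
done
```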

HTH

Duncan

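Sketching the fix against the devices from the original post (assuming c49/c54/c55 are alternate paths to the same three LUNs, so one path per LUN is enough; that pathing is my assumption, not stated in the thread):

```shell
# Stamp one common VGID across *all* three clone disks in a
# single vgchgid invocation -- this is the step that was missed.
vgchgid /dev/rdsk/c49t3d6 /dev/rdsk/c49t9d5 /dev/rdsk/c49t9d7

# Then re-run the import with the map and infile already in use above.
vgimport -m /tmp/scripts/vg05.map -f /usr/local/sbin/vg21.infile /dev/vg21
```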

Re: added new devices to vgs now the snap jobs don't work

Thank you both for the responses... that was my exact mistake...

Can't tell you how often I looked over that job :)

Re: added new devices to vgs now the snap jobs don't work

The above answers were exactly the fix I needed. Thanks.