Operating System - HP-UX

BCVs, vgchgid and restore

 
lallemand_3
Occasional Advisor

hi all,

I have an MC/ServiceGuard OPS cluster with three nodes sharing database VGs (activated with vgchange -a s). These VGs are mirrored with Business Copy. The problem is that I want to mount the BCVs on one of the nodes.

So I take my BC devices and import them into new VGs with the map file (this works perfectly in a standalone configuration). In cluster mode, the vgchange -a s hangs because of the duplicate VGID, and I saw that I have to use vgchgid.
Okay, no problem with that, but what happens if, after doing a vgchgid on the BC devices, I run a pairresync -restore?

The primary volumes' VGIDs would suddenly be corrupted, wouldn't they?
Is there an easy way to automate this?

thanks in advance,

regards,
7 REPLIES
Simon Hargrave
Honored Contributor

Re: BCVs, vgchgid and restore

You say the vgchange -a s hangs due to the same VGID. Are you sure you actually split the BCVs before you tried to do anything with them (i.e. their status is SSUS)? Usually a vgchange hangs when you try to work with a secondary disk in a pair that is still actively mirroring.
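As a quick check, the pair status can be queried with Raid Manager before touching the S-VOLs (a sketch; the device group name dbvg and HORCM instance are placeholders for whatever is defined in your horcm.conf):

```shell
# Show the status of every pair in the group: the S-VOLs must
# report SSUS (split/suspended) before you import or mount them.
pairdisplay -g dbvg -fcx

# If the pairs are still in PAIR state, split them first:
pairsplit -g dbvg
```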
lallemand_3
Occasional Advisor

Re: BCVs, vgchgid and restore

Yes Simon, the status is really SUS on my volumes.
Simon Hargrave
Honored Contributor

Re: BCVs, vgchgid and restore

Okay, after you split off the pairs you execute vgchgid:

vgchgid /dev/dsk/cXtXdX /dev/dsk/cYtYdY /dev/dsk/cZtZdZ

Specify ALL the disks of the backup volume group at once. Then do your mknod, vgimport etc. You should then be able to vgchange the "new" volume group, as all its disks now differ from the primary's. Mount and use as required, then unmount, vgchange -a n, vgexport, and finally pairresync. The resync copies the primary back to the secondary, keeping your mirror up to date.

The only way the primaries will get corrupted is if you perform your pairresync the wrong way around. (It helps to be very careful with your HORCMINST=0 environment variable before executing this!)
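The whole sequence above can be sketched as a script (a sketch only: the device group dbvg, the VG name vgdbcopy, the minor number, the map file path and the disk device files are all placeholders, and it assumes a map file was previously created with vgexport -p -m on the primary):

```shell
# 1. Split the pair so the S-VOLs reach SSUS state
pairsplit -g dbvg

# 2. Give the BC disks a new VGID (list ALL disks of the VG in one call)
vgchgid /dev/dsk/cXtXdX /dev/dsk/cYtYdY /dev/dsk/cZtZdZ

# 3. Create the group file and import the clone as a new VG
mkdir /dev/vgdbcopy
mknod /dev/vgdbcopy/group c 64 0x040000   # pick an unused minor number
vgimport -m /tmp/vgdb.map /dev/vgdbcopy \
    /dev/dsk/cXtXdX /dev/dsk/cYtYdY /dev/dsk/cZtZdZ

# 4. Activate and mount
vgchange -a y /dev/vgdbcopy
mount /dev/vgdbcopy/lvol1 /mnt/dbcopy

# 5. Tear down, then resynchronise P-VOL -> S-VOL
umount /mnt/dbcopy
vgchange -a n /dev/vgdbcopy
vgexport /dev/vgdbcopy
pairresync -g dbvg
```

Note the final pairresync is the normal direction (primary to secondary); it is the -restore direction that raises the VGID question discussed below.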
lallemand_3
Occasional Advisor

Re: BCVs, vgchgid and restore

Okay, but what I want to know is what happens if I do a pairresync -restore (i.e. a resync from S-VOL to P-VOL).
In that case /etc/lvmtab is no longer coherent with the on-disk VGID...
The only way out seems to be a vgcfgrestore? That looks like a really low-end solution, and quite a dangerous one, especially when it's late and there has been a crash (it's always during a crash that you have to do this)... I think.
Geoff Wild
Honored Contributor

Re: BCVs, vgchgid and restore

IMHO - I would NOT mount BCVs on any node in a cluster... the nodes shouldn't even have access to the BCVs...

I would (and do) give another server outside the cluster access to them and mount them there.

Main reason: when you do a vgimport, it is going to use all the disks it sees with the same VGID - including the BCVs!

Repeat - don't allow node access to the BCV's!

Rgds...Geoff
Proverbs 3:5,6 Trust in the Lord with all your heart and lean not on your own understanding; in all your ways acknowledge him, and he will make all your paths straight.
Simon Hargrave
Honored Contributor

Re: BCVs, vgchgid and restore

If you are restoring from S-VOL to P-VOL, then yes, you would need to either use vgcfgrestore, or vgexport and then vgimport (on all nodes).
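The vgexport/vgimport route can be sketched like this (a sketch only: the VG name vgdb, map file path, minor number and disk names are placeholders; it has to be repeated on every node that knows the VG):

```shell
# Re-teach LVM the new on-disk VGID after a pairresync -restore
vgchange -a n /dev/vgdb               # deactivate the VG first
vgexport -m /tmp/vgdb.map /dev/vgdb   # drop the stale lvmtab entry, keep a map

# Recreate the group file and re-import from the restored disks
mkdir /dev/vgdb
mknod /dev/vgdb/group c 64 0x010000   # reuse the original minor number
vgimport -m /tmp/vgdb.map /dev/vgdb /dev/dsk/cXtXdX /dev/dsk/cYtYdY
vgchange -a y /dev/vgdb               # or vgchange -a s on the cluster nodes
```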

However, in what scenario would you need to do this? If you split purely to provide a point-in-time recovery while you perform work on the primary system, then I would not mount the S-VOLs. If you need to mount the S-VOLs, then you would write these to tape, and restore onto the S-VOLs in the case of a disaster.

Maybe I'm missing the point you are trying to achieve and using Business Copy in a unique way?
lallemand_3
Occasional Advisor

Re: BCVs, vgchgid and restore

> Maybe I'm missing the point you are trying to achieve and using Business Copy in a unique way?

In fact, the Oracle DBAs want to see the BC raw devices because they would like to restore just a portion of the database in case of a disaster.
If I mount the BCVs on a different server, restoring a raw device could take some time because of the network and the quite large raw devices we have. If I work on the same server, the time is really reduced.