Operating System - HP-UX

MC Service|Guard with mirror

 
SOLVED
G.Carslow
Advisor

MC Service|Guard with mirror

Hi,

We run a 10-node ServiceGuard cluster with VA7400 disk arrays. We are currently setting up mirrors between two VA7400s. One of our tests brought out the need to change the vgchange command in the package startup script from:
vgchange -a e vgXX to vgchange -a e -q n vgXX. The vgchange man page advises against this because of the risk of data corruption.
I would appreciate any comments on this change and on the real risks (if any) of data corruption.
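
For reference, in our legacy package control script the activation command is set through the VGCHANGE variable; the change we are testing looks roughly like this (vgXX and the layout are only an example, your template may differ):

# Volume group activation section of the package control script (example)
# VGCHANGE="vgchange -a e"        # previous setting: exclusive activation, quorum enforced
VGCHANGE="vgchange -a e -q n"     # setting under test: exclusive activation, quorum check disabled
VG[0]="vgXX"                      # placeholder volume group name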

Thank you
Greg
6 REPLIES
Geoff Wild
Honored Contributor

Re: MC Service|Guard with mirror

I can't comment on the -q n option, but as far as your configuration goes: in some of the SAN migrations I have done, I mirrored across frames online without data corruption, and then removed the mirror copy on the old frame.

I would put your question to HP Support and see what they have to say.

Rgds...Geoff
Proverbs 3:5,6 Trust in the Lord with all your heart and lean not on your own understanding; in all your ways acknowledge him, and he will make all your paths straight.
Rita C Workman
Honored Contributor

Re: MC Service|Guard with mirror

I've never used this option... and since that command turns quorum off, which, as you say, is not recommended, I wonder why it needs to be off. Obviously, by turning this off you are accepting the risk.

But are you setting up additional mirrors in case a disk goes bad? A sort of short-distance DR? Or are you looking to mirror and then split off the other VA7400 to run your backups from?
I guess what I'm saying is: maybe what you're trying to cover for is handled properly by a product designed to manage the additional mirrors, like HP's Continuous Access.

Just a thought,
rcw
Armin Kunaschik
Esteemed Contributor
Solution

Re: MC Service|Guard with mirror

vgchange -q n skips the quorum check on the VG (number of
available disks = number of previously defined disks) and
activates the VG even if not all disks are available.

vgchange -q n makes sense if one of your VAs crashes.
If one VA crashes, you cannot switch the packages anymore,
because you don't have all disks online (you need them all
online because of -a e).
-a e -q n enables you to start the package with only one "side"
of the mirror available.

On the other hand, it can corrupt your data if
more disks (in addition to one VA box) are unavailable.
It can also corrupt the filesystem if the zoning of your disks
changes accidentally.
There are many scenarios where data corruption is possible.

Therefore it is safer to use "-a e" normally and, only in an emergency,
use "-a e -q n".
After the other box returns you have to sync
the mirror. If you forget that, it will sync
at the next package start in the foreground(!!!), which usually
takes hours to complete.
"-a e -q n -s" activates the VG and syncs the mirror in the background.
But use this with care! There is the risk of
syncing old data from the wrong side of the mirror.
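
As a rough sketch of the emergency path (from memory, vgXX is only a placeholder, check the man pages before relying on it):

# Emergency only: activate exclusively with the quorum check disabled
vgchange -a e -q n vgXX

# ... package runs on the surviving VA ...

# After the failed VA is back and its disks are visible again,
# resynchronize the stale mirror extents (can take hours)
vgsync vgXX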

Regards,
Armin
And now for something completely different...
G.Carslow
Advisor

Re: MC Service|Guard with mirror

Hi,

The reason for having mirrored VAs is the situation where a VA fails. The packages can't fail over without the -q n option. I was thinking of putting the vgchange in an if statement and checking disk availability in the package startup script, along the lines of the sketch below. Other than that it would stay a manual process, as a VA should not fail often.
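
Something like this is what I had in mind (untested sketch; the VG name and device files are placeholders, one disk per VA side):

# Sketch only - check that a disk from each VA answers before activation
VG=vgXX
DISKS="/dev/rdsk/c4t0d1 /dev/rdsk/c12t0d1"

MISSING=0
for d in $DISKS
do
    diskinfo $d > /dev/null 2>&1 || MISSING=1
done

if [ $MISSING -eq 0 ]
then
    vgchange -a e $VG        # normal case: all disks present, quorum enforced
else
    vgchange -a e -q n $VG   # one VA missing: activate without the quorum check
fi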

Does anyone have a good link to the sync rules for mirrors?

Thanks for the help so far. Cheers... Greg
Armin Kunaschik
Esteemed Contributor

Re: MC Service|Guard with mirror

I don't know of such a document,
but have you tried searching this forum
and the ITRC Knowledge Base for the buzzwords
mirror, quorum, sync, etc.?

Armin
And now for something completely different...
Greg OBarr
Regular Advisor

Re: MC Service|Guard with mirror

Personally, I wouldn't use the "-q n" option. If your package fails over under normal circumstances (some problem with the CPU, not the disks), then both arrays are going to be available anyway.

There are unique considerations when using a mirror. For example, your mirror scheduling policy is important here. If you're using Parallel, and one of your mirrors fails at the same time the CPU fails, you could be left with partially completed writes on the remaining array. On the other hand, if your policy is Sequential, and the primary array fails at the same time as the CPU, then you could also have some partially completed writes on the second array.
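
If it helps, this is roughly how you can check which policy each mirrored volume is using (the lvol and VG names are only examples):

# Show the scheduling policy of a mirrored logical volume
lvdisplay /dev/vgXX/lvol1 | grep -i schedule
# the Schedule field reports parallel or sequential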

I would use MC/SG to handle failover under "normal" circumstances. If you have both a CPU and mirror failure, it is best to take some time to evaluate the situation and analyze the data (especially if you're running a database).

It's easy to make a bad situation a lot worse. I can tell you from personal experience that replacing hardware is a lot easier and results in much less down time than recovering from data corruption.

If this isn't acceptable in your environment, then scrap the LVM mirroring altogether and use hardware mirroring.

That's my $.02

-greg