MC Service|Guard with mirror
07-20-2004 06:25 PM
We run a 10-node ServiceGuard cluster with VA7400 disk arrays and are currently setting up mirrors between two VA7400s. One of our tests showed that we need to change the vgchange command in the package startup script from
vgchange -a e vgXX
to
vgchange -a e -q n vgXX
The vgchange man page recommends against doing this because of the risk of data corruption.
I would appreciate any comments on this change and on the real risks (if any) of data corruption.
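For reference, a minimal sketch of where this change sits in a legacy package control script; vgXX is a placeholder, and the exact variable layout may differ in your template:

    # Default: exclusive activation with the quorum check enabled
    #VGCHANGE="vgchange -a e"
    # Proposed change: exclusive activation without the quorum check
    VGCHANGE="vgchange -a e -q n"
    # Volume group(s) used by this package
    VG[0]="vgXX"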
Thank you
Greg
07-21-2004 12:56 AM
Re: MC Service|Guard with mirror
I would put your question to HP Support and see what they have to say.
Rgds...Geoff
07-21-2004 02:10 AM
Re: MC Service|Guard with mirror
But... are you setting up additional mirrors in case a disk goes bad, as a sort of close-distance DR? Or are you looking to mirror and then split off the other VA7400 to run your backups from?
I guess what I'm saying is: maybe what you're trying to cover for is already included in a product that would handle the additional mirrors properly, like HP's Continuous Access.
Just a thought,
rcw
07-21-2004 08:46 PM
Solution
Re: MC Service|Guard with mirror
The -q n option turns off the quorum check (quorum: number of available disks = number of formerly defined disks) and activates the VG even if not all of its disks are available.
vgchange -q n makes sense if one of your VAs has crashed. If one VA crashes you cannot switch the packages anymore, because you don't have all disks online (and you need them all online because of -a e).
-a e -q n enables you to start the package with only one side of the mirror available.
On the other hand, it can corrupt your data if more disks (in addition to one VA box) are unavailable. It can also crash the filesystem if the zoning of your disks changes accidentally. There are many scenarios where data corruption is possible. Therefore it is safer to stick with "-a e" and only use "-a e -q n" in an emergency.
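For illustration, the emergency activation itself is a single command; vgXX is a placeholder here:

    # Exclusive activation without the quorum check - only when one
    # mirror side (one VA) is known to be down and the surviving side
    # is known to hold good data
    vgchange -a e -q n /dev/vgXX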
After the other box returns you have to sync the mirror. If you forget, it will sync in the foreground(!) at the next package start, which usually takes hours to complete.
"-a e -q n -s" activates the VG and syncs the mirror in the background. But use this with care: there is a risk of syncing stale data from the wrong side of the mirror.
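As a rough sketch, the resync step once the failed array is visible again could be done with vgsync (vgXX is a placeholder; run it at a quiet time, since it walks every stale extent):

    # Resynchronize all stale mirror extents in the volume group
    # after the failed VA is back online
    vgsync /dev/vgXX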
Regards,
Armin
07-21-2004 09:44 PM
Re: MC Service|Guard with mirror
The reason for having mirrored VAs is the situation where a VA fails; the packages can't fail over without the -q n option. I was thinking of putting the vgchange in an if statement and checking disk availability in the package startup script. Beyond that it stays a manual process, since a VA should not fail often.
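One way to sketch the idea (placeholders only; instead of probing the disks directly, this variant retries without quorum only when an operator-maintained flag file is present):

    # Try the normal, quorum-protected activation first
    vgchange -a e /dev/vgXX
    if [ $? -ne 0 ]
    then
        # Fall back to activation without the quorum check only if an
        # operator has confirmed that one VA is down (hypothetical flag file)
        if [ -f /etc/cmcluster/ALLOW_NO_QUORUM ]
        then
            vgchange -a e -q n /dev/vgXX
        fi
    fi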
Does anyone have a good link to the sync rules for mirrors?
Thanks for the help so far..Cheers...Greg
07-22-2004 10:43 PM
Re: MC Service|Guard with mirror
But have you tried searching this forum and the ITRC Knowledge Base for the buzzwords mirror, quorum, sync, etc.?
Armin
07-23-2004 05:25 AM
Re: MC Service|Guard with mirror
There are unique considerations when using a mirror. For example, your Mirror Scheduling Policy is important here. If you're using Parallel, and one of your mirrors fails at the same time the CPU fails, you could be left with partially completed writes on the remaining array. On the other hand, if your policy is Sequential, and the primary array fails at the same time as the CPU fails, then you could also have some partially completed writes on the second array.
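If you're not sure which policy a mirrored logical volume is using, lvdisplay will show it; the VG and LV names below are placeholders:

    # Check the "Schedule" (parallel/sequential) and "Mirror copies"
    # lines in the output for the logical volume in question
    lvdisplay /dev/vgXX/lvol1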
I would use MC/SG to handle failover under "normal" circumstances. If you have both a CPU and mirror failure, it is best to take some time to evaluate the situation and analyze the data (especially if you're running a database).
It's easy to make a bad situation a lot worse. I can tell you from personal experience that replacing hardware is a lot easier and results in much less down time than recovering from data corruption.
If this isn't acceptable in your environment, then scrap the LVM mirroring altogether and use hardware mirroring.
That's my $.02
-greg