Migrating from Quorum Disk to Quorum Node
04-20-2006 04:51 AM
The steps I was planning on taking for the transition from quorum disk to quorum node are the following:
1. Change the modparams files for the three ES45 systems to indicate DISK_QUORUM = "" and QDSKVOTES = 0, leaving EXPECTED_VOTES at 5. Run Autogen to set the parameters.
2. Copy CLUSTER_AUTHORIZE.DAT from the ES45 system disk to SYS$COMMON:[SYSEXE] on the DS10.
3. Change EXPECTED_VOTES on the DS10 to 5 and VOTES to 2.
4. Shut down all systems.
5. Boot the DS10, expect it to hang waiting for one additional vote.
6. Boot one of the ES45 systems, and there should be a working cluster.
7. Boot the second and third ES45.
Am I missing anything in this plan?
What problems or issues might I expect to encounter?
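For reference, steps 1 and 3 might look roughly like the following MODPARAMS.DAT fragments (a sketch only; the comment annotations and the assumption that each ES45 contributes one vote are inferred from the vote counts discussed in this thread):

```
! On each ES45 (step 1); run AUTOGEN afterwards to set the parameters
DISK_QUORUM = ""        ! stop pointing at the quorum disk
QDSKVOTES = 0
EXPECTED_VOTES = 5      ! unchanged: 3 x ES45 (1 vote each) + DS10 (2 votes)

! On the DS10 (step 3)
EXPECTED_VOTES = 5
VOTES = 2
```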
04-20-2006 05:02 AM
Re: Migrating from Quorum Disk to Quorum Node
> [...] we want the cluster to survive with
> only one system [...]
So, in the new scheme, where's the quorum if
the DS10 and one ES45 go down?
What makes the DS10 better (more reliable?)
than the quorum disk?
I don't see why you'd do it, but your
procedure looks plausible.
04-20-2006 05:09 AM
Re: Migrating from Quorum Disk to Quorum Node
"Using an internal disk on the DS-10" -- does this mean a single SCSI disk? For a quorum node, you may want to consider some form of redundancy on the system disk, either with shadowing or with hardware-based RAID.
Your plan looks good. Another, less immediate, option is to have Availability Manager/AMDS running on the clustered nodes; with it you can force quorum to be recalculated on the fly. The downside, of course, is that this requires manual intervention; the cluster won't continue on its own. But if you have outages on multiple nodes, it can be a useful tool.
Andy
04-20-2006 05:11 AM
Re: Migrating from Quorum Disk to Quorum Node
It's gone. Works by design ;-)
Learn about AV/AMDS/IPC to recover the cluster
> What makes the DS10 better (more reliable?) than the quorum disk?
A quorum disk requires multiple watchers that are directly connected to it. Last time I fiddled with it, it did not play nicely on a shared parallel SCSI bus.
Too many I/Os to the disk can cause failures, too.
04-20-2006 05:23 AM
Re: Migrating from Quorum Disk to Quorum Node
Think over Steven's remark twice (or maybe 3 or 4 times).
I can only think of ONE reason: your "production" systems are running some app(s) that are so flaky that THEY regularly crash your systems. (Which in itself would be reason for a redesign, but I also know situations where that deep desire is not an option.)
Other than that, an uneven number of nodes with equal votes is the most stable config (a nice mathematical exercise to prove that!).
A quorum node is really only a real gain if you have two active nodes (it has SOME advantage over a quorum disk), or, most specifically, if you have 2 active SITES (provided the quorum node is at a third site).
If your active sites have more than one node, there are good reasons to have equal total votes PER SITE, spread evenly within each site. (Just use high-enough values per node to reach that condition.)
Proost.
Have one on me (maybe in May in Nashua?)
jpe
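jpe's claim about odd node counts can be checked with a quick sketch. The helper names below are made up for illustration; the sketch uses the standard OpenVMS rule that quorum is (EXPECTED_VOTES + 2) divided by 2, rounded down, and that the cluster stays up while the surviving members' votes still reach quorum:

```python
# Quorum arithmetic sketch (hypothetical helper names).
# OpenVMS computes QUORUM = (EXPECTED_VOTES + 2) // 2 with integer division.

def quorum(expected_votes):
    return (expected_votes + 2) // 2

def max_node_failures(votes_per_node):
    """Worst-case number of nodes that can drop out with the cluster alive."""
    q = quorum(sum(votes_per_node))
    remaining = sorted(votes_per_node)  # lose the biggest voters first
    failures = 0
    while remaining and sum(remaining) - remaining[-1] >= q:
        remaining.pop()
        failures += 1
    return failures

print(max_node_failures([1, 1, 1]))        # 3 equal nodes: survives 1 failure
print(max_node_failures([1, 1, 1, 1]))     # 4 equal nodes: still only 1
print(max_node_failures([1, 1, 1, 1, 1]))  # 5 equal nodes: survives 2
print(max_node_failures([1, 1, 1, 2]))     # this thread's config: 1 worst-case
```

A fourth equal vote buys nothing over three, which is the exercise jpe alludes to; and in the thread's 3x1+2 configuration the worst case (losing the two-vote member plus one ES45) drops below quorum, exactly the scenario Steven raises.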
04-20-2006 05:23 AM
Re: Migrating from Quorum Disk to Quorum Node
I'm just trying to see how the new scheme satisfies the stated requirement.
> Learn about AV/AMDS/IPC to recover the cluster
I've done this. ("D/I 14 C" is stuck in my
head for some reason.) As above, I fail to
see how the new scheme satisfies the stated
requirement.
If manual intervention is allowed, who needs
more than the "the three ES45 systems"
(without even the quorum disk)?
04-20-2006 06:08 AM
Re: Migrating from Quorum Disk to Quorum Node
Quorum disks are fine as long as they do not fail. I know it is not supposed to happen in theory, but in my experience (and I have worked with clusters since the field test of VAX/VMS V4 in 1984) the typical scenario when a quorum disk fails is that the cluster hangs and cannot be recovered without a complete cluster reboot. Not always, but far too often to overlook; it is a very likely, even expected, outcome.
I don't expect that the DS10 will have a better MTBF than the quorum disk, but replacing the quorum disk with the DS10 will yield better performance, and I suspect a better recovery scenario when the DS10 fails than what has been experienced when a quorum disk fails.
Regarding the question about what happens when the DS10 AND an ES45 fail: in our current configuration, what happens when the quorum disk AND an ES45 fail at the same time? I don't see a real difference between those two scenarios.
04-20-2006 06:13 AM
Re: Migrating from Quorum Disk to Quorum Node
Agreed, when I played with this, cluster state transitions went _much_ faster - even with a reduced quorum disk polling interval.
04-20-2006 06:16 AM
Solution
Because we want the cluster to survive ..
... if that implies you want your cluster to also survive this operation, you just have to change the order of your actions somewhat:
1.
2.
3.
Extra action: dismount the quorum disk clusterwide.
5. (without the hang!)
4, 6, 7 hybrid: reboot the ES45s, one at a time.
Result: cluster still running, quorum disk replaced by the DS10.
At no time is there any danger of a split cluster.
Only between the extra action and step 5 are you running on the verge of quorum (an unexpected node leaving would hang the cluster, to be cancelled by the DS10 joining).
-- just a thought experiment by someone who HAS replaced hardware while keeping the cluster available --
Proost.
Have one on me (maybe in May in Nashua?)
jpe
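jpe's reordering can be dry-run in code. A minimal sketch (Python; node names are invented, the vote counts come from the thread, and quorum stays fixed at 3 because EXPECTED_VOTES remains 5 throughout):

```python
# Dry run of the reordered procedure: verify that the sum of live votes
# never drops below quorum at any step. EXPECTED_VOTES stays 5 -> quorum 3.
QUORUM = (5 + 2) // 2

votes = {"ES45-1": 1, "ES45-2": 1, "ES45-3": 1, "QDSK": 2}
log = []

def record(event):
    log.append((event, sum(votes.values()) >= QUORUM))

del votes["QDSK"]                  # extra action: dismount the quorum disk
record("dismount quorum disk")     # 3 votes: on the verge, but alive
votes["DS10"] = 2                  # step 5: boot the DS10 -- no hang,
record("boot DS10")                # it joins a cluster that has quorum
for node in ("ES45-1", "ES45-2", "ES45-3"):  # steps 4/6/7: rolling reboots
    n = votes.pop(node)
    record(node + " down")
    votes[node] = n
    record(node + " rejoined")

for event, ok in log:
    print(event, "->", "quorum held" if ok else "HANG")
```

Every step keeps at least three votes present, so the cluster holds quorum throughout, matching jpe's "cluster still running" result.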
04-20-2006 06:17 AM
Re: Migrating from Quorum Disk to Quorum Node
We typically have scheduled outages for various things every 2-3 months, and unscheduled individual node outages are running about 1 per quarter since we moved the systems to a new data center a year ago. Prior to that, we had system hardware failures about 2 or 3 times per year. Most failures are memory problems, but 4 or 5 per year does not really make a strong trend.