
Turning a standalone node into a two-node VMScluster with MSA1000

 
SOLVED
Jeremy Begg
Trusted Contributor

Turning a standalone node into a two-node VMScluster with MSA1000

Hi,

For the past couple of years I have been the system manager for a site running OpenVMS V8.2 on a single AlphaServer DS25. They recently became concerned about the business-critical nature of the OpenVMS application and have decided to implement a two-node VMScluster.

For reasons I won't go into here, they have two identical AlphaServer DS25s. Each machine has a SmartArray 5300A RAID controller and there is a total of 12 physical drives. Currently only one of these machines is in use (the other one has been shut down almost since the day they arrived).

The site has received the necessary hardware to form a two-node VMScluster with shared MSA1000 storage and I will be going there next weekend to set it up.

I'm intending to run the cluster with each DS25 having its own system disk, i.e. booting from its SA5300A controller. The application software and critical shared system files (SYSUAF, NETPROXY, etc) will go onto the MSA1000.
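The redirection itself would be via logical names in SYS$MANAGER:SYLOGICALS.COM on each node, something like this ($1$DGA100: and the directory are just placeholders for wherever the shared copies end up on the MSA1000):

$ DEFINE/SYSTEM/EXEC SYSUAF      $1$DGA100:[VMS$COMMON.SYSEXE]SYSUAF.DAT
$ DEFINE/SYSTEM/EXEC NETPROXY    $1$DGA100:[VMS$COMMON.SYSEXE]NETPROXY.DAT
$ DEFINE/SYSTEM/EXEC RIGHTSLIST  $1$DGA100:[VMS$COMMON.SYSEXE]RIGHTSLIST.DAT
$ DEFINE/SYSTEM/EXEC VMSMAIL_PROFILE $1$DGA100:[VMS$COMMON.SYSEXE]VMSMAIL_PROFILE.DATA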

My only problem is that I'm a little unsure about a couple of the cluster configuration details. (I have configured and managed clusters before, but always from scratch.)

One thing I'm not sure about is how to set the ALLOCLASS parameter. In the "Guidelines for OpenVMS Cluster Configurations" it says, "A Fibre Channel storage disk device name is formed by the operating system from the constant $1$DGA and a device identifier, nnnn. Note that Fibre Channel disk device names use an allocation class value of 1 whereas Fibre Channel tape device names use a value of 2".

In other words, FC devices ignore the system's ALLOCLASS value?

Currently the running DS25 has an ALLOCLASS of "1". Given that each system will have at least one "local" SCSI disk (the system disk) would I be better off changing the ALLOCLASS to a unique value on each DS25 (e.g. 2 and 3)?

When it comes to preparing the system disk for the second DS25, I see two possibilities (assuming I'm not going to install VMS from scratch):

1. Use option 4 in CLUSTER_CONFIG.COM to clone the existing system disk to a scratch disk, then use option 5 to create a new system root (e.g. [SYS1]) on the cloned disk. Once this has been done the cloned disk will be moved to the second DS25.

2. Alternatively, could I just restore an image backup of the original DS25 to the second DS25 and then change the node name (in MODPARAMS.DAT, DECnet, etc)?
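For option 2 the mechanics would be roughly as follows (DKA0: and DKB0: are placeholders, and the clone would be booted on the second DS25 before the rename):

$ BACKUP/IMAGE/IGNORE=INTERLOCK DKA0: DKB0:
$! Booted from the clone on the second machine:
$! - change SCSNODE and SCSSYSTEMID in SYS$SYSTEM:MODPARAMS.DAT
$! - apply them with AUTOGEN:
$ @SYS$UPDATE:AUTOGEN GETDATA SETPARAMS
$! - then reconfigure DECnet (e.g. @SYS$MANAGER:NETCONFIG for Phase IV)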

Thanks,
Jeremy Begg
15 REPLIES
Jon Pinkley
Honored Contributor
Solution

Re: Turning a standalone node into a two-node VMScluster with MSA1000

Jeremy,

Yes, FC devices ignore ALLOCLASS.

What is the compelling reason for multiple system disks? If that is what you are familiar with, that may be a sufficient reason. However, unless you cannot schedule downtime for system upgrades, etc., I see very little reason to go to the extra complexity, duplicated upgrades, etc. that come with multiple system disks.

In a two-node cluster you will need a quorum disk on the MSA1000, and a place for the shared cluster files. It is much less complex to have a single system disk, a common SYS$COMMON, etc. That's my opinion.
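The quorum-disk SYSGEN parameters would be something like this on both nodes ($1$DGA10 is a placeholder for whichever MSA1000 unit you choose); in MODPARAMS.DAT:

VOTES = 1                ! each node contributes one vote
EXPECTED_VOTES = 3       ! two nodes plus the quorum disk
DISK_QUORUM = "$1$DGA10" ! the quorum disk on the MSA1000
QDSKVOTES = 1            ! the quorum disk's vote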

You can still have your page/swap files on local devices, and even the system dump files, although it can be nice to have those on a disk that can be seen by the other system. So unless you are really tight on MSA space, I would recommend putting the dump files on the MSA as well.
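If you do keep page/swap files on the local disks, you'd install them from SYS$MANAGER:SYPAGSWPFILES.COM, roughly like this (DKA100: is a placeholder for the node-private disk):

$ SYSGEN :== $SYS$SYSTEM:SYSGEN
$ SYSGEN INSTALL DKA100:[SYSEXE]PAGEFILE.SYS /PAGEFILE
$ SYSGEN INSTALL DKA100:[SYSEXE]SWAPFILE.SYS /SWAPFILE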

There are good reasons for multiple system disks, but do consider the reasons before heading down that road, especially if you haven't had experience with multiple system disks already.

Jon
it depends
Jeremy Begg
Trusted Contributor

Re: Turning a standalone node into a two-node VMScluster with MSA1000

Hi Jon,

Thanks for confirming the FC ALLOCLASS issue.

I am familiar with both common-system-disk and multiple-system-disk clusters. I agree a common system disk would be simpler to manage but going with one system disk per node lets me perform rolling O/S updates and allows a node to be booted without the MSA1000 being on-line. (OK those might not be strong reasons, but they work for me for now.)

Thanks,
Jeremy Begg
Martin Vorlaender
Honored Contributor

Re: Turning a standalone node into a two-node VMScluster with MSA1000

Jeremy,

>>>
In other words, FC devices ignore the system's ALLOCLASS value?
<<<

Yes.

>>>
Currently the running DS25 has an ALLOCLASS of "1". Given that each system will have at least one "local" SCSI disk (the system disk) would I be better off changing the ALLOCLASS to a unique value on each DS25 (e.g. 2 and 3)?
<<<

Why not choose something that won't collide with an FC tape, e.g. 3 and 4?

>>>
When it comes to preparing the system disk for the second DS25, I see two possibilities (assuming I'm not going install VMS from scratch):
<<<

I'd go with option 1. Much cleaner, as the SCSNODE gets used in lots of places (see http://labs.hoffmanlabs.com/node/589 ).

HTH,
Martin
Jeremy Begg
Trusted Contributor

Re: Turning a standalone node into a two-node VMScluster with MSA1000

Hi Martin,

Good point about the host ALLOCLASS; I think I'll go with 10 and 20.
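So SYS$SYSTEM:MODPARAMS.DAT gets ALLOCLASS = 10 on one node and ALLOCLASS = 20 on the other, followed by AUTOGEN through a reboot:

$ @SYS$UPDATE:AUTOGEN GETDATA REBOOT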

Thanks for the suggestion for disk preparation.

Regards,
Jeremy Begg
Jan van den Ende
Honored Contributor

Re: Turning a standalone node into a two-node VMScluster with MSA1000

Jeremy,

>>>
They recently became concerned about the business-critical nature
<<<

To me, that should imply (at the very least) the deployment of (host-based!) Volume Shadowing.

Combine that with

>>>
lets me perform rolling O/S updates
<<<

... and you have practically described a strong case FOR a single, common, system disk!
- Split off a shadow-set member, mount it privately, change the volume label, and set VAXCLUSTER=0 and STARTUP_P1="MIN".
- Boot one system from this disk, and upgrade.
- Reset VAXCLUSTER and STARTUP_P1, and reboot (now a mixed-version, dual-sysdisk cluster).
- Shut down the other node and reboot it from the new disk, and the rolling upgrade is done.
At any point in time, if necessary, roll-back is as simple as booting from (a member of) the original sysdisk. Keep it intact until you are satisfied with the upgrade. A sketch of the split-off step is below.
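Roughly, the split-off step (assuming the system disk is the shadow set DSA0: and $1$DGA3: is the member being split off; all names are placeholders):

$ DISMOUNT $1$DGA3:                ! drop one member from DSA0:
$ MOUNT/OVERRIDE=SHADOW_MEMBERSHIP $1$DGA3: ALPSYS
$ SET VOLUME/LABEL=ALPSYS_UPG $1$DGA3:
$ DISMOUNT $1$DGA3:
$! VAXCLUSTER and STARTUP_P1 can then be set during a conversational
$! boot from that disk on the node doing the upgrade:
$!   >>> BOOT -FLAGS 0,1
$!   SYSBOOT> SET VAXCLUSTER 0
$!   SYSBOOT> SET STARTUP_P1 "MIN"
$!   SYSBOOT> CONTINUE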

(At current disk prices, I strongly suggest 3-member shadow sets. Among other things, this lets your production stay shadowed while upgrading, and it makes single-point-in-time backups much easier as well.)
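E.g. a three-member shadow set for a production volume (device names are placeholders for MSA1000 units):

$ MOUNT/SYSTEM DSA100: /SHADOW=($1$DGA101:,$1$DGA102:,$1$DGA103:) DATA1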

Btw, I have always felt comfortable with NOT using SYS0 as a system root in a cluster. It prevents accidental "wrong root" booting, especially by people who are not really familiar with the site (such as maintenance engineers, but I have also been surprised by "outside" software installers). Don't say it will not happen at your site; upper management tends to make decisions they will not discuss with you in advance. Better safe than sorry!
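The default root is selected at the SRM console with the BOOT_OSFLAGS environment variable (root,flags); e.g. to default to SYS1:

>>> SET BOOT_OSFLAGS 1,0
>>> SHOW BOOT_OSFLAGS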

hth

Proost.

Have one on me.

jpe
Don't rust yours pelled jacker to fine doll missed aches.