
BRUCE BROWN_2
Advisor

New Node

Hello Folks,

Currently I have two ES47 nodes attached to an EVA8000 SAN, using $1$DGA1 as the OS disk for both nodes. The nodes are configured as a cluster running VMS 7.3-2. Quorum is maintained using a quorum disk on the EVA8000.

I want to add a third node to the cluster, an ES45. Reviewing HP documents and forums, I find myself asking more questions than finding answers:

The correct usage of cluster_config?
How to correctly present the new node to the SAN system disk?
The correct procedure for removing the quorum disk, as we will now have three nodes.

Any advice?

Thank You

Bruce

Bruce M. Brown
Millennium Systems Engineer
Clinical Information System Project
bmbrown@ihis.org
PH 902-368-5031


7 REPLIES
Hoff
Honored Contributor

Re: New Node

What you need to do here depends on your particular requirements. There's no single right answer here.

Here's an intro to the area that might help:

http://64.223.189.234/node/169

Whether or not you keep the quorum disk depends on several factors, not the least of which is whether you need to survive the loss of two nodes from your cluster. If you want that, keep the quorum disk, set its votes to 2, VOTES on each node to 1, and EXPECTED_VOTES to 5 (1+1+1+2).

As for removing the quorum disk, should you go that way: shut down the cluster, reset the quorum disk (DISK_QUORUM) and quorum disk votes (QDSKVOTES) system parameters to blank and zero respectively, reset the cluster VOTES and EXPECTED_VOTES (via conversational bootstrap) to the end-state values, CONTINUE the bootstrap, and off you go. Once booted, clean up the old settings in MODPARAMS.DAT, too. Details and such are here:

http://64.223.189.234/node/153
http://64.223.189.234/node/204
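A rough sketch of the conversational-bootstrap step described above; the boot device, root, and end-state vote counts below are example values, not ones from this thread:

```
$ ! After the cluster-wide shutdown, on each node at the SRM console:
>>> boot -flags 0,1 dga1          ! conversational bootstrap; device and flags are examples
SYSBOOT> SET DISK_QUORUM " "      ! blank out the quorum-disk name
SYSBOOT> SET QDSKVOTES 0
SYSBOOT> SET VOTES 1              ! end-state value for this node
SYSBOOT> SET EXPECTED_VOTES 3     ! example: three 1-vote nodes, no quorum disk
SYSBOOT> CONTINUE
```

Remember to mirror the same values in each root's MODPARAMS.DAT afterwards, so a later AUTOGEN does not quietly revert them.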

If you're using cluster_config or cluster_config_lan, you can go that route to deal with the quorum disk, too. Adding a system disk boot root (adding a third system to the existing system disk) is easiest with cluster_config or cluster_config_lan.

IIRC, somewhere around V8 cluster_config and cluster_config_lan were finally merged, so the tool did the right thing rather than depending on you to remember whether you were using DECnet Phase IV MOP, DECnet-Plus MOP, or (as I usually recommend) LANCP and its MOP; LANCP is what underlies cluster_config_lan.

And when the dust settles and you clean up MODPARAMS.DAT, you'll want to AUTOGEN with FEEDBACK to reset the system parameters appropriately. Adding a third node usually isn't a big deal in this regard, but then I've seen some pretty crufty MODPARAMS files and some configurations that haven't seen an AUTOGEN since original installation.

Stephen Hoffman
HoffmanLabs LLC
The Brit
Honored Contributor
Solution

Re: New Node

I assume that you have referred to

http://h71000.www7.hp.com/DOC/731final/4477/4477pro_013.html

See Chapter 8, "Configuring an OpenVMS Cluster System".

Run cluster_config and select option 1 to add a new node to the cluster.

For the quorum disk, select option 3 (CHANGE),
then select the appropriate option to disable the quorum disk.

(I'm not sure whether this needs to be done on all nodes)

I believe it will take effect at the next boot.
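For reference, a minimal sketch of invoking the procedure; the option numbers are those mentioned above, and the exact menu wording varies by version:

```
$ @SYS$MANAGER:CLUSTER_CONFIG       ! or CLUSTER_CONFIG_LAN if using LANCP MOP
$ ! Option 1: ADD a node (creates the new system root on the shared disk)
$ ! Option 3: CHANGE, then pick the item that disables the quorum disk
```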

As far as presenting the new node to the SAN, it is really a case of presenting the SAN to the new node.

Make the physical connections from the new node to the SAN fabric (making sure the connections are in the correct zones, if applicable).
On the EVA8000, add the new host.
The new HBAs should have been detected automatically; assign the new HBA ports to the new host.
Assign the appropriate LUNs so that they are presented to the new host.

On the host at the console, run wwidmgr and do the following:

show wwid (check that you can see your LUNs)
run quickset (set up your boot disk)
exit wwidmgr
init
set the correct values for bootdef_dev and boot_osflags
boot!
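As a sketch only, that console dialog might look like the following; the UDID, device path, and root number are placeholders, and you should use the values quickset reports for your own boot LUN:

```
>>> set mode diag                      ! some Alpha consoles require diag mode for wwidmgr
>>> wwidmgr -show wwid                 ! verify the presented LUNs are visible
>>> wwidmgr -quickset -udid 1          ! example UDID; use your boot LUN's identifier
>>> init                               ! required after wwidmgr, before booting
>>> set bootdef_dev dga1.1001.0.1.0    ! example path taken from the quickset output
>>> set boot_osflags 2,0               ! example: boot from root SYS2, no flags
>>> boot
```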
BRUCE BROWN_2
Advisor

Re: New Node

Thank you for the wonderful information.

One more: how much free space will I require on the system disk? I am adding a node with 32GB of RAM.

Regards

Bruce
The Brit
Honored Contributor

Re: New Node

Not all that much. Almost all code requirements are already present in the SYS$COMMON directory structures. You only have to be concerned about what will be in the SYS$SPECIFIC directories for the new root, i.e. pagefile, swapfile, dumpfile, etc.

Just note: when you run cluster_config and it asks if a separate disk will be used for page and swap files, SAY YES (even if it's not true). The procedure will then generate only minimal page and swap files in the new root. You can adjust them later, or put them on other disks if necessary.

As far as the memory size is concerned, the only effect this has on system disk space requirements is the size of the dumpfile, which in turn depends on the value of the system parameter DUMPSTYLE. You probably want to consider moving the dump off the system disk (to allow a full dump without using up system disk space).
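If you later need to resize or relocate those files, a sketch along these lines may help; the sizes (in blocks) and device names are examples only:

```
$ MCR SYSGEN
SYSGEN> CREATE SYS$SPECIFIC:[SYSEXE]PAGEFILE.SYS /SIZE=1000000   ! example size
SYSGEN> CREATE SYS$SPECIFIC:[SYSEXE]SWAPFILE.SYS /SIZE=200000
SYSGEN> EXIT
$ ! Files on another disk are installed at boot from SYS$MANAGER:SYPAGSWPFILES.COM, e.g.:
$ ! SYSGEN> INSTALL DKA100:[PAGEFILES]PAGEFILE1.SYS /PAGEFILE
```

Newly created or resized files take effect at the next reboot.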

Dave.
Jon Pinkley
Honored Contributor

Re: New Node

Bruce,

Your phrasing of the question implies you are asking about page/swap and dump files.

It really depends on what is using your 32GB of memory. You may not need much of a page file, but only by creating one and looking at the usage after a week of uptime will you really know.

If you use a compressed selective dump (SYSGEN parameter DUMPSTYLE 9, or 13 if DOSD, Dump Off System Disk, is used), you can get by with a much smaller dump file. 1GB or even 500MB will probably be sufficient, but Volker would be the most knowledgeable resource here on dump file sizing. It depends on how the memory is being used; if most of it is XFC cache or on the free list, it won't contribute much to the size of the dump.

The above files are static; once created, they won't grow (unless you allow AUTOGEN to keep adjusting them; I set those to 0 in my MODPARAMS.DAT so AUTOGEN doesn't touch them).

If your operator.log, accounting, and auditing files are in their default locations, these are the variable files that will probably contribute most to the increase in space requirements. You may want to move them off the system drive anyway, preferably to a disk with a larger cluster factor than your system device, as these files tend to get large unless new versions are created frequently.
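Jon's DUMPSTYLE and "set to 0" advice translates to a MODPARAMS.DAT fragment roughly like this; the values are illustrative:

```
! SYS$SPECIFIC:[SYSEXE]MODPARAMS.DAT fragment, example values:
DUMPSTYLE = 9        ! compressed selective dump on the system disk (13 with DOSD)
PAGEFILE = 0         ! tell AUTOGEN not to resize the page file
SWAPFILE = 0         ! likewise for the swap file
DUMPFILE = 0         ! likewise for the dump file
```

Run AUTOGEN after editing so the settings are picked up.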

Jon
it depends
comarow
Trusted Contributor

Re: New Node

The simple fact is, simply adding a node will do almost everything automatically via cluster_config if you are sharing a system disk. It will add the root and the basic files, and run AUTOGEN.

If not, there are a lot of specifics you need to answer.

You should take control of the voting scheme. I recommend giving the node 0 votes until the node is stable; that way it won't affect quorum.

The way to do that is, after running cluster_config, go into its new root, edit the new MODPARAMS.DAT, and change VOTES to 0 for that root.

That will speed up cluster transitions and testing.
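The zero-votes suggestion amounts to one line in the new root's MODPARAMS.DAT; the root name SYS2 here is an example:

```
! SYS$SYSDEVICE:[SYS2.SYSEXE]MODPARAMS.DAT
VOTES = 0            ! the new node casts no votes until it has proven stable
```

Once the node has been stable for a while, set VOTES back to 1, adjust EXPECTED_VOTES cluster-wide, and run AUTOGEN.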

Robert Gezelter
Honored Contributor

Re: New Node

Bruce,

I wholeheartedly second the comment that "most of the executables are in SYS$COMMON". In fact, I often create additional system roots pre-configured for expected scenarios (same node IDs, but altered parameters to deal with various contingencies). This avoids the need to manually change startup settings.

On a second front, I generally recommend moving the page file (and other potentially active files) to a private disk per node. A small initial page file is a good thing to have for emergencies, but the larger file should be elsewhere (as should the place where scratch files are located).

- Bob Gezelter, http://www.rlgsc.com