The Brit
Honored Contributor

Clustering Problem.

I am trying to add a second blade to a bl860c cluster (8.3-1H1).
The first blade has been up and running for several years as a single node cluster.

I ran CLUSTER_CONFIG on the existing node and entered all of the information for the new blade/node. The new root was created and all looked OK at this point. The boot was to be from SYS1 on the common cluster system disk. The new blade had 1 vote and Expected_Votes set to two.

Before booting the new blade, I enabled OPCOM for (CLUSTER, NETWORK), and entered "show cluster /cont"
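
(For reference, those commands in their standard DCL form are roughly the following; the terminal receives OPCOM cluster/network messages and keeps a live display of cluster membership.)

$ REPLY/ENABLE=(CLUSTER,NETWORK)   ! deliver OPCOM CLUSTER and NETWORK class messages to this terminal
$ SHOW CLUSTER/CONTINUOUS          ! continuously updated display of cluster membership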

Now I booted the new node. This is the console output:

Loading.: DGA370 (TAB Clust) FGB0.5001-4380-025C-52C8
Starting: DGA370 (TAB Clust) FGB0.5001-4380-025C-52C8

PGQBT-I-INIT-UNIT, IPB, PCI device ID 0x2432, FW 4.00.90
PGQBT-I-BUILT, version X-30A3, built on Nov 12 2010 @ 15:25:18
PGQBT-S-SERDES, status 0x0001, mbxsts 0x4000, 1g 0x0400, 2g 0x1D00, 4g 0x2500
PGQBT-I-LINK_WAIT, waiting for link to come up
PGQBT-I-TOPO_WAIT, waiting for topology ID
%SYSBOOT-W-NOERLDUMP, Unable to locate SYS$ERRLOG.DMP


HP OpenVMS Industry Standard 64 Operating System, Version V8.3-1H1
© Copyright 1976-2009 Hewlett-Packard Development Company, L.P.


PGQBT-I-INIT-UNIT, boot driver, PCI device ID 0x2432, FW 4.00.90
PGQBT-I-BUILT, version X-30A3, built on Nov 12 2010 @ 15:25:18
PGQBT-S-SERDES, status 0x0001, mbxsts 0x4000, 1g 0x0400, 2g 0x1D00, 4g 0x2500
PGQBT-I-LINK_WAIT, waiting for link to come up
PGQBT-I-TOPO_WAIT, waiting for topology ID
%DECnet-I-LOADED, network base image loaded, version = 05.16.00

%DECnet-W-NOOPEN, could not open SYS$SYSROOT:[SYSEXE]NET$CONFIG.DAT

%SMP-I-CPUTRN, CPU #1 has joined the active set.
%SYSINIT-I- waiting to form or join an OpenVMS Cluster
%VMScluster-I-LOADSECDB, loading the cluster security database
%EWA0, Auto-negotiation mode assumed set by console
%EWA0, BladeLOM located in 64-bit, 133-mhz PCI-X slot
%EWA0, Device type is BCM5704S (Fiber) Rev B0 (21000000)
%EWB0, Auto-negotiation mode assumed set by console
%EWB0, BladeLOM located in 64-bit, 133-mhz PCI-X slot
%EWB0, Device type is BCM5704S (Fiber) Rev B0 (21000000)
%EWC0, Auto-negotiation mode assumed set by console
%EWC0, BladeLOM located in 64-bit, 66-mhz PCI-X slot
%EWC0, Device type is BCM5704S (Fiber) Rev B0 (21000000)
%EWD0, Auto-negotiation mode assumed set by console
%EWD0, BladeLOM located in 64-bit, 66-mhz PCI-X slot

At this point it hangs. I waited 5-10 minutes to be sure that it wasn't just being slow. I assume that the reason it didn't time out was that this was the first time booting into the cluster.

Normally, the next console output would be a series of "Link Up" messages for the Network devices, followed by the "Now a cluster node" message. However, it never reached this point.

On the original node I saw NO messages indicating any connection or membership requests.

I also tried shutting down the original node and booting the new blade from SYS1 with 1 vote and EXPECTED_VOTES set to 1. Same result.

In the past I have had issues with bl860c blades when connected to Flex10 modules, which required autonegotiation to be disabled in LANCP; however, I need the system to boot up so that I can log in and make that change.

Normally, a blade would boot up even if the Links were down, but in the past the nodes involved were established cluster members, so the membership request would time out and the node would come up as the first cluster node.

I am assuming that my current problem is that the new blade is NOT an established cluster member yet.

Right now I am considering:

1. Shut down the original node.
2. Do a conversational boot of the new blade and turn off clustering, i.e. boot standalone from root SYS1 (sketched just below this list).

I assume at this point that AUTOGEN will run and a reboot will occur.

3. After the reboot (conversational again if necessary), take care of the Network devices.

4. Turn clustering back on and shut down.
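
(A minimal sketch of the conversational part of step 2, assuming the usual SYSBOOT dialogue; VAXCLUSTER is the standard parameter that controls whether the node tries to form or join a cluster.)

SYSBOOT> SET VAXCLUSTER 0   ! boot standalone; do not attempt to form or join a cluster
SYSBOOT> CONTINUE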

Can anyone make any other suggestions?

Dave.
Bob Blunt
Respected Contributor

Re: Clustering Problem.

Dave, at this point when adding a node, most people (I feel) have their own tests they pull out. Personally, when I see this I first try to stop the boot on the 2nd node and boot (sorry for the Alpha-ese here) with -FL (1,30000) to get all the diagnostics. See where it stops after the last salient message that you've seen on the console with the normal boot flags. I would suspect you'll see your system stop while trying to load some driver or another. In my work in the lab I've usually seen this sort of problem go one of three ways.
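
(For the record, on Alpha that boot would look something like the line below; the device name is only an example, and on an Integrity blade the same flags would normally be supplied through the EFI boot option for the VMS loader instead.)

>>> BOOT -FLAGS 1,30000 DKA0   ! root SYS1, maximum diagnostic output during boot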

First, the basic system parameters may not have been sufficient to get the system to boot. I've tried booting conversational and issuing a "use default" at the SYSBOOT> prompt, or when I know there's a specific shortcoming I might hand-tweak specific parameters or (when booting a node that HAD been running in the cluster before) "USE" an older parameter file.
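
(That is, something along the lines of:)

SYSBOOT> USE DEFAULT   ! start from the default parameter set
SYSBOOT> CONTINUE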

Second, when I know that I've loaded drivers or patches and I see the system stop when trying to load one of them, I'll check on the working system to see if there's some ownership or other problem with that file. On blade systems I would *hope* that vast differences in configuration between blades are *less* likely, so the potential for something like an ES47 vs DS20 mismatch *should* be improbable.

Third, when I'm in a rush the first time attempting a boot in the LAB (never as likely to do this in production) I might try to boot first with a lower setting for expected votes. I know that this is NOT recommended and would only try it in a situation where I already know that the data and system integrity I risk is not of major importance. In several cases I've been forced to resort to this when trying to bring up a cluster using a quorum disk for the first time, in order to get QUORUM.DAT created.

I'd be leaning toward the conversational boot to tweak parameters, though. With blades I'd hope that configurations are so similar that you might read in the parameter file from the other, working, blade and change SCSSYSTEMID and SCSNODE on the fly to prevent conflicts. This should be early enough in the setup process that other important setup factors may not have been completed yet (DECnet and IP databases not finalized, startup command procs not in place).

Of course there's also the possibility that you have a command procedure that has caused your problem while, for instance, it waits for some device or resource to come on-line that isn't ready or capable yet. Using the full diagnostic flags in the boot command won't be as helpful there because there's too much data. For that sort of problem I eliminate the diagnostic boot flags, boot conversational, enter SET STARTUP_P2 "CCC" at the SYSBOOT> prompt and continue. That *should* set the "verbose" flag in the startup driver command procedures, and it should list for you each command procedure it activates as the system starts. That way you can see which command procedure "froze" and look further at what it does.

bob
The Brit
Honored Contributor

Re: Clustering Problem.

Thanks for your response Bob,

I finally figured it out (I was actually quite close in my initial description), and I offer this description as a solution for anyone facing the same problem.
---------------------------------------------
The Problem.

1. When OpenVMS 8.3-1H1 is installed on a BL860c blade, the onboard NICs are configured with AUTONEGOTIATE as the default setting.

2. If the BladeSystem Ethernet Interconnects in Bays 1 and 2 are Flex10 VC modules, the uplinks between the blade and the VC module will not come up until AUTONEGOTIATE is disabled.
-------------------------------------------
This is a serious issue, particularly when building a new cluster, since a new node cannot communicate with the existing cluster without a significant amount of additional work.

It could, of course, be simplified if there were a way to preset the Network devices to "NOAUTONEGOTIATE" as part of CLUSTER_CONFIG, or maybe default them to "NOAUTONEGOTIATE".

Anyway, the bottom line is that the new node cannot join the cluster while the network devices are set to "AUTONEGOTIATE", because the membership request cannot reach the existing cluster members (the network links are down).

To resolve this, it is necessary to boot the new node standalone, log in, and disable AUTONEGOTIATE on all Ethernet devices in LANCP.
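
(For illustration only - the qualifiers and device names will vary with your configuration, so treat this as a sketch rather than a recipe - the LANCP change is along these lines, repeated for each device:)

$ MCR LANCP
LANCP> SET DEVICE EWA0/NOAUTONEGOTIATE      ! change the running (volatile) setting
LANCP> DEFINE DEVICE EWA0/NOAUTONEGOTIATE   ! record it in the permanent LAN device database
LANCP> EXIT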

What I had forgotten was that on the first boot, the new node has votes set to zero, so even trying to boot standalone was doomed to failure.

Anyway, I shut down the original node to avoid any issues, and did a conversational boot of the new node into the new root (SYS1).

I realized the problem when I booted conversational. At SYSBOOT I saw that VOTES = 0, so I set it to 1, the same as EXPECTED_VOTES.
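
(The conversational dialogue was essentially:)

SYSBOOT> SHOW VOTES    ! showed 0, the value left in the new root
SYSBOOT> SET VOTES 1
SYSBOOT> CONTINUE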

This time I was (almost) able to boot standalone. Unfortunately, during the initial run of NET$CONFIGURE (during initialization), it got stuck in the DECdts startup, with recurring messages about the timezone not being specified (see attachment), requiring me to HALT the system.

I rebooted the original node and edited [SYS1.SYSEXE]STARTUP1.COM, commenting out the execution of the NET$CONFIGURE "AUTOCONFIGURE" command.

Now I was able to boot the new node standalone, and reconfigure the network devices.

Once the Network devices were configured, the Links came up immediately. I manually ran NET$CONFIGURE without any problem and set up TCP/IP Services.

I set Votes = 1 and Expected_Votes = 2, and shut down.
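
(However the values are set interactively, the usual way to make them survive the next AUTOGEN run is to put them in that root's MODPARAMS.DAT - a sketch, using the SYS1 root from this thread:)

$ ! In SYS$SYSDEVICE:[SYS1.SYSEXE]MODPARAMS.DAT add:
$ !     VOTES = 1
$ !     EXPECTED_VOTES = 2
$ @SYS$UPDATE:AUTOGEN GETDATA SETPARAMS NOFEEDBACK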

I rebooted the original node, and then booted the new node. The new node joined the cluster without any problem.

If anyone knows a simple solution to this problem, I would appreciate hearing about it.

Dave.
The Brit
Honored Contributor

Re: Clustering Problem.

This is the attachment for the previous entry.

Dave.
Colin Butcher
Esteemed Contributor
Solution

Re: Clustering Problem.

There should be a way to use LANCP on the existing node to edit the LAN device database in the system root of the new node, once you've created the directory structure for the new node with CLUSTER_CONFIG.

If it works, then maybe that should become part of the actions performed automatically for you by CLUSTER_CONFIG.

You really don't want to boot a node stand-alone from the same system disc as a currently running cluster that has other nodes booted and running from that disc. That's why, when setting the boot paths up in the first place, you boot from the distribution media and mount the target system disc /read/nocache before running boot_options.

Cheers, Colin (http://www.xdelta.co.uk).
Entia non sunt multiplicanda praeter necessitatem (Occam's razor).
Colin Butcher
Esteemed Contributor

Re: Clustering Problem.

Use the logical name LAN$DEVICE_DATABASE to create/modify the target node's LAN device database, which is: SYS$SYSDEVICE:[SYSn.SYSEXE]LAN$DEVICE_DATABASE.DAT.
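
(In other words - untested here, and with the SYS1 root and EWA0 device purely as examples from this thread - something along these lines, run on the existing node before the new one ever boots:)

$ DEFINE/PROCESS LAN$DEVICE_DATABASE SYS$SYSDEVICE:[SYS1.SYSEXE]LAN$DEVICE_DATABASE.DAT
$ MCR LANCP
LANCP> DEFINE DEVICE EWA0/NOAUTONEGOTIATE   ! permanent database entry; repeat for EWB0, EWC0 and EWD0
LANCP> EXIT
$ DEASSIGN LAN$DEVICE_DATABASE
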
Entia non sunt multiplicanda praeter necessitatem (Occam's razor).
The Brit
Honored Contributor

Re: Clustering Problem.

Thanks Colin.

Just to be clear, your comment about booting standalone into a shared system disk "while other nodes are up" is well taken, and is part of why this was such a "pain in the butt" procedure. I was having to continually boot and shut down blades to ensure that I didn't corrupt the disk.

To your main point: this looks to be exactly what I needed. (I had a feeling that I had read something about this somewhere, but I couldn't find it documented anywhere.) I haven't tried this yet, but it looks like it should work. I will test it over the next couple of days.

If it does work, I will incorporate it into my "New Node" procedure when needed. (It is not always required, since the "AUTONEGOTIATE" issue only applies to the Flex10 modules; it was not a problem with the older 1/10Gb VC Ethernet Modules.)

Thanks again.

Dave.