
Open VMS Cluster

 
DURAY
Occasional Contributor

Open VMS Cluster

I am trying to add a second member to a VMS cluster. The first boot of the second member into the cluster fails with the messages: SRM>>> Failed to send read to dkb101.... SRM>>> Failed to read...... SRM>>> Failed bootstrap.
The first member is a DS20 and the second is an AlphaServer 4000. The common system disk resides on an RA8000 configured in multibus failover, with two shared SCSI buses. The creation of the second member's disk infrastructure with cluster_config was successful. The problem occurs during the first boot of the second member (b dkb101 -fl 1,0).

Any idea?
7 REPLIES
Kris Clippeleyr
Honored Contributor

Re: Open VMS Cluster

Does the Alpha4000 SRM see the boot disk?
What does
>>> SHOW DEVICE
tell you?
Have you tried an INIT on the SRM before attempting to boot?
Could you give us the output of the SRM commands
SHOW CONFIG
SHOW DEVICE
SHOW BOOT*
etc. ?
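For instance, a check sequence at the 4000 console could look roughly like this (the boot flags are taken from your b dkb101 -fl 1,0 command; adjust the device name to whatever SHOW DEVICE actually reports):

>>> INIT
>>> SHOW CONFIG
>>> SHOW DEVICE
>>> SHOW BOOTDEF_DEV
>>> SHOW BOOT_OSFLAGS
>>> b dkb101 -fl 1,0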
Regards,
Kris (aka Qkcl)
I'm gonna hit the highway like a battering ram on a silver-black phantom bike...
DURAY
Occasional Contributor

Re: Open VMS Cluster

Hi Kris,

All the checks you mentioned have been done. The boot disk and the other disks are seen by all the cluster members. I think it is a SCSI problem, because the error message appears before the VMS banner, so it is not an allocation class issue or something like that. If I try a conversational boot, the failure occurs before the expected "SYSBOOT>" prompt.
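For reference, the conversational boot attempt looks roughly like this (the ",1" in the flags is what should bring up the SYSBOOT> prompt):

>>> b dkb101 -fl 1,1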

>>> sho dev result:
...
$1$dkb101
$1$dkb102
$1$dkb201
$1$dkb202
...

Best regards

The two messages correspond to the same issue.
DURAY
Occasional Contributor

Re: Open VMS Cluster

I also did an INIT before the first boot.

bootdef_dev = $1$dkb101

DS20 root : SYS0
4000 root : SYS1

Regards,
DURAY
Occasional Contributor

Re: Open VMS Cluster

VMS Version 7.2-1 & several patches
DS20 SRM 6.9-2
4000 5/300 SRM 6.1
....
Robert_Boyd
Respected Contributor

Re: Open VMS Cluster

I'm curious about something. You say that the console >>> SHOW DEV output includes device names with $1$dkxx in them.

Are those the actual device names as presented at the console? I didn't think that devices connected over a SCSI bus would display as $1$DKxx at the console level.

It would be educational to see the output of SHOW *BOOT* and SHOW DEVICE and to make sure that the default boot device and the device list show exactly the same device name(s).

I have had situations where I thought I had the correct device name, but the device as seen by the console was not the exact spelling/name that I was telling it to use to boot from.

Have you tried booting the system from the VMS Install CD? If you do, you should be able to use the DCL menu selection to see what devices are actually visible from the machine that has trouble booting.
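For example, once you are at a DCL prompt from the installation menu, something like this would show what the 4000 can actually see (the wildcard and device name here are just examples):

$ SHOW DEVICE DK
$ SHOW DEVICE/FULL $1$DKB101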

Robert
Master you were right about 1 thing -- the negotiations were SHORT!
Volker Halle
Honored Contributor

Re: Open VMS Cluster

Duray,

if the HSZ80 is configured in multibus failover mode, your hosts each need to be configured with 2 SCSI adapters (KZPBA-CB) connecting to both HSZ80 controller modules.

On the console, you should see your HSZ disks via the 2 paths (probably as DKAxxx and DKBxxx). You would need to enter both paths in BOOTDEF_DEV.
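Something along these lines at the 4000 console, assuming the two paths really do show up as DKA101 and DKB101:

>>> SET BOOTDEF_DEV dka101,dkb101
>>> SHOW BOOTDEF_DEV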

Assuming that your boot disk dkb101 is online to the OTHER HSZ80 (i.e. not the one connected to your SCSI adapter 'B'), the console still sees the logical unit, but can't access it. In that case, you would need to boot from dkx101 (x being your other KZPBA-CB).

If your other host (DS20) is connected to the HSZ80s via 2 SCSI buses, you could try to switch access to DKx101 to the other SCSI adapter (path) with SET DEV/SWITCH/PATH=PKx0 $1$DKx101 and try to boot the AlphaServer 4000 again...
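On the running DS20 that could be something like the following (PKB0 is just a guess for the other adapter's name; check SHOW DEVICE/FULL for the actual path names):

$ SET DEVICE/SWITCH/PATH=PKB0 $1$DKB101
$ SHOW DEVICE/FULL $1$DKB101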

Chapter 6 of the Cluster Config manual is worth reading:

http://h71000.www7.hp.com/doc/82FINAL/6318/6318pro_005.html#mult_sup_ch

Volker.
DURAY
Occasional Contributor

Re: Open VMS Cluster

Many thanks for the information and suggestions, but the problem was not a software problem but a hardware one.

It was not a VMS/allocation class problem, because the failure occurred before VMS started.

The delivered SCSI terminator was not so much defective as inappropriate: LVD instead of HVD. After replacing it, the second node on the shared SCSI bus started up fine.

Regards,

Eric