11-27-2005 08:56 PM
Open VMS Cluster
The first member is a DS20 and the second is an AlphaServer 4000. The common system disk resides on an RA8000 configured in multibus failover, with two shared SCSI buses. Creating the second member's disk infrastructure with CLUSTER_CONFIG was successful. The problem occurs during the second member's first boot (b dkb101 -fl 1,0).
Any idea?
11-27-2005 09:05 PM
Re: Open VMS Cluster
What does
>>> SHOW DEVICE
tell you?
Have you tried an INIT on the SRM before attempting to boot?
Could you give us the output of the SRM commands
SHOW CONFIG
SHOW DEVICE
SHOW BOOT*
etc. ?
Regards,
Kris (aka Qkcl)
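A sketch of the console session being suggested here, using the boot device and root from the original post (device names and flags will differ on other systems):

```
>>> INIT                 ! reinitialize the console before probing
>>> SHOW CONFIG          ! adapters, modules, firmware revisions
>>> SHOW DEVICE          ! disks visible from this console
>>> SHOW BOOT*           ! BOOTDEF_DEV, BOOT_OSFLAGS, etc.
>>> B DKB101 -FL 1,0     ! boot the second member from root SYS1
```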
11-28-2005 03:02 AM
Re: Open VMS Cluster
All the checks you mentioned are done. The boot disk and the other ones are seen by all the cluster members. I think it is a SCSI problem, because the error message occurs before the VMS banner, so it is not an allocation class issue or something like that. If I try a conversational boot, the problem occurs before the expected "SYSBOOT>" prompt.
>>> sho dev result:
...
$1$dkb101
$1$dkb102
$1$dkb201
$1$dkb202
...
Best regards
The two messages correspond to the same issue.
11-28-2005 03:05 AM
Re: Open VMS Cluster
bootdef_dev = $1$dkb101
DS20 root : SYS0
4000 root : SYS1
Regards,
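For reference, console variables like the ones listed above are typically set at the SRM prompt roughly like this (a sketch using the values from this post; verify against your own configuration):

```
>>> SET BOOTDEF_DEV DKB101    ! default boot device
>>> SET BOOT_OSFLAGS 1,0      ! root SYS1, boot flags 0 (for the 4000)
>>> SHOW BOOT*                ! verify the settings
```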
11-28-2005 03:12 AM
Re: Open VMS Cluster
DS20 SRM 6.9-2
4000 5/300 SRM 6.1
....
11-28-2005 03:33 AM
Re: Open VMS Cluster
Are those the actual device names as presented at the console? I didn't think that devices connected over a SCSI bus would display as $1$DKxx at the console level.
It would be educational to see the output of the commands SHOW *BOOT* and SHOW DEVICE, and to make sure the device listed as the default boot device matches exactly one of the device names in that list.
I have had situations where I thought I had the correct device name, but the device as seen by the console was not the exact spelling/name that I was telling it to use to boot from.
Have you tried booting the system from the VMS Install CD? If you do, you should be able to use the DCL interface menu selection to see what devices are actually visible from the machine with trouble booting.
Robert
11-28-2005 03:53 AM
Re: Open VMS Cluster
If the HSZ80 is configured in multibus failover mode, each of your hosts needs two SCSI adapters (KZPBA-CB), connecting to both HSZ80 controller modules.
On the console, you should see your HSZ disks via the two paths (probably as DKAxxx and DKBxxx). You would need to enter both paths in BOOTDEF_DEV.
Assuming that your boot disk dkb101 is online to the OTHER HSZ80 controller (i.e. not the one connected to your SCSI adapter 'B'), the console still sees the logical unit but can't access it. In that case, you would need to boot from dkx101 (x being your other KZPBA-CB).
If your other host (DS20) is connected to the HSZ80s via two SCSI buses, you could try to switch access to DKx101 to the other SCSI adapter (path) with SET DEV/SWITCH/PATH=PKx0 $1$DKx101 and then try to boot the AlphaServer 4000 again...
Chapter 6 of the Cluster Systems manual is worth reading:
http://h71000.www7.hp.com/doc/82FINAL/6318/6318pro_005.html#mult_sup_ch
Volker.
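Once a member is up, the path switch described here can be done from DCL roughly as follows (a sketch; PKA0 is a placeholder port name, and the actual current and alternate paths should be taken from SHOW DEVICE/FULL):

```
$ SHOW DEVICE/FULL $1$DKB101:              ! lists the I/O paths and which one is current
$ SET DEVICE/SWITCH/PATH=PKA0 $1$DKB101:   ! force I/O over the other adapter's path
```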
12-06-2005 11:24 PM
Re: Open VMS Cluster
It was not a VMS/ALLOCLASS problem, because the failure occurred before VMS started.
The delivered SCSI terminator was not so much defective as inappropriate: LVD instead of HVD. After the replacement, the second node on the shared SCSI bus started fine.
Regards,
Eric