Operating System - OpenVMS

Boot Alpha with multiple 5300A

Micah Nerren
Occasional Visitor

Boot Alpha with multiple 5300A

Hi All,

I am having an issue with an Alpha ES45 system. I inherited management duties for this system about a year ago, and I'm not what you'd call an expert on Alpha/OpenVMS. I'm a long-time Unix guy, and I've learned quite a lot about this particular system, though obviously not enough.

What I am trying to do is install a second Smart Array 5300A card into this machine. We bought a new drive shelf (a 4354R) and a new card to expand storage. The card is identical to the one already installed in the system, which supports the internal drive bays.

I was able to configure the RAID 1 sets for the new drive shelf, via:

>>> set heap_expand 2mb
>>> run bios pyb0

However, when it comes time to boot the operating system (OpenVMS 7.3-1), it fails to boot properly unless I remove the NEW card from the system. I've tried moving the new card to different slots, above and below the old card, which changes the device names, but no matter what, the system won't boot from the original card.

The error message is something like:

Failed to boot dya0.....
Hit Control/C to stop

(Something like that, anyway; it keeps repeating. This isn't verbatim, as I had to reboot quickly and didn't write it down, sorry.)

If I hit CTRL-C it drops me into the system console. "show dev" shows both cards, pya0 and pyb0.

Do I need to set something in the SRM console that I am missing? Are the bootflags different when booting with multiple cards? If I just yank the card and reboot everything works fine.

Unfortunately, I have limited time for this, since it's a production system and shutting it down isn't easy, so I haven't had much time to experiment. Each time something doesn't work, I yank the card and boot the OS back up.

What would be the proper procedure for installing a secondary 5300A card in a machine like this? I must be missing something, and I'm not an expert on this hardware platform. I need the new card to act as a secondary card, and I need to ensure that no existing device names change as a result of its installation. Unfortunately, many parts of the system refer to drive names rather than logical names (which I plan to change slowly, but that will take time on a production box).

Any help you could provide would be much appreciated!

Thanks, and have a happy new year!

Honored Contributor

Re: Boot Alpha with multiple 5300A

You might need a larger heap size.

For the typical SRM BIOS configuration steps, see the article:

Micah Nerren
Occasional Visitor

Re: Boot Alpha with multiple 5300A

Actually, that page is what got me this far :-) Your site has been invaluable to me, thank you for that.

I had to do "set heap_expand 4mb" in order to configure the card and create the RAID arrays. Your page pointed me in the right direction for that; 2mb wasn't working.

When you do "set heap_expand", is that setting saved permanently, or is it only used for that session? Do I need to set bootbios on the card even though it won't be booting? I haven't done that.

I made sure the firmware was current on the cards using the OpenVMS Alpha 8.4 firmware CD, so that shouldn't be an issue.

Honored Contributor

Re: Boot Alpha with multiple 5300A

Are these the two-port or the four-port boards?

At your next opportunity, please gather the exact error message text from the console. (SRM can be a maze of diagnostics, all slightly different.) In preparation, connect a serial terminal to the box's serial console port, and configure the terminal emulator's capture buffer to a suitably capacious value.

That heap size is implemented as a console environment variable. You can look at its value with the show heap_expand command.
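If memory serves, heap_expand is stored as a nonvolatile console environment variable, so it should survive power cycles; a console init may be needed before a new value takes effect. A sketch of the checks at the SRM prompt (treat this as illustrative; output formats vary by SRM version):

>>> show heap_expand
>>> set heap_expand 4mb
>>> init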

Check your SCSI wiring and see if the cards got mixed up or a cable got disconnected or swapped somewhere. Also check the wiring and termination on that new SCSI box, as that's a split-bus box.

Is this "new" gear known to work?

V7.3-1 is stale, and you're right at the bleeding edge of support for that controller, too. V7.3-2 would be a better choice if you're looking for newer bits; it's almost identical to what you're running, and it is still under PVS (Prior Version Support).

The official manual and documentation are here:



Check the controller firmware:

Micah Nerren
Occasional Visitor

Re: Boot Alpha with multiple 5300A

This is a two-port board, dual channel. Each channel of the drive enclosure is on its own channel on the card. As far as I can tell, everything is cabled and terminated properly. I was able to create the arrays in the card's BIOS. There are only two SCSI ports on the back of the enclosure, and each is connected to a cable, so there is nothing to terminate, unless there is a termination switch or something that I am missing.

This "new" gear is not known by me personally to work, other than the fact that it was bought from a reputable dealer who will replace it if faulty. It's obviously older, used equipment. However, the drives appear in the card's BIOS utility, so it's at least sort of working.

I agree about 7.3-1; that's a primary reason for the new drive shelf. I want to duplicate the system over to the new, larger drives, get them bootable as well, and perform an upgrade there. That way I can always quickly switch back to the original system should any problems arise. I plan on going to 7.3-2 ASAP, and then migrating up the path toward 8.4.
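A minimal sketch of that duplication using BACKUP/IMAGE, with hypothetical device names (DKA0: standing in for the current system disk and DKB100: for one of the new mirrored sets; these names are assumptions, not taken from the actual system):

$ ! Ideally run from a quiesced or standalone-booted system
$ BACKUP/IMAGE/VERIFY DKA0: DKB100:

An image backup carries the bootable system files, so the target disk can then be booted and upgraded independently of the original.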

When I inherited the system, it was stock 7.3-1, no patches at all. So I've applied all the latest 7.3-1 patches, but those are 5-6 years old. The software on the system should be fine with any upgrade all the way to 8.4, as it's just COBOL programs doing simple RMS file access. Nothing fancy or too brittle that I can see. We have a current support contract, so no worries there either.

I'll try this again in the next week or so if I can find a window and get the detailed errors.

Thanks for your help!
Bob Blunt
Respected Contributor

Re: Boot Alpha with multiple 5300A

Micah, I would think that if you're booting from one 5300A, booting with two shouldn't be that much different. You might try booting with full diagnostic boot flags enabled to see whether you're really getting into OpenVMS or whether the failure is in the console itself, for example B -FL 0,30000 or similar. The following URL does say that support started in V7.3-2:
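A sketch of that diagnostic boot at the SRM prompt, assuming dya0 is the device name the console reports for your boot disk (the 0,30000 flag value requests verbose boot diagnostics; exact messages vary by version):

>>> b -fl 0,30000 dya0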


You may be booting from one controller by virtue of some latent hooks in the O/S. Do arrays on the 5300A appear to OpenVMS as DY devices? That puzzles me a bit, because that device type, back in the dark ages, was a floppy disk, and driver names don't usually get "reused" like that. You might also try booting from a V7.3-2 (or newer) CD to see what it reports as device names. I'd also check which version of the Alpha firmware is installed on your ES45 and move to the latest version if available.
Steve Reece_3
Trusted Contributor

Re: Boot Alpha with multiple 5300A

Hi Bob/Micah,

If I dredge the darkest corners of my memory, I think the disks on the SmartArray come up as DY devices at console level but then switch to DK once VMS sees them.

The SET BOOTBIOS command is important for seeing the partitions at console level on the Alpha. If you don't set it, you won't see the disk sets at console level at all, and so you won't be able to boot from them. This setting is maintained across power downs, so you might want to check what it is set to, Micah. Type SHOW BOOTBIOS at the >>> prompt.

So, from this, you should realize that you'll only be able to set one card at a time to show devices at console level. This shouldn't affect the ability to use the other card's devices once VMS has booted.
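A sketch of the console check, assuming pya0 is the existing boot card (the exact SET BOOTBIOS argument syntax may vary by SRM and Smart Array firmware version, so treat this as illustrative rather than definitive):

>>> show bootbios
>>> set bootbios pya0
>>> init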

It's important to make sure that all of the disks have the latest firmware and, if necessary, to update them. If you don't, there is a fair chance that you'll get phantom errors on the drives as they fail to respond when the card asks them to, and the card marks them as bad. You can't update the drive firmware on an AlphaServer; you've got to do that on a ProLiant, although you don't need to do it through a SmartArray card, as a plain SCSI card will do.

You also need to ensure that the card really is an Alpha one, a 5302A or 5304A, and NOT one of the ProLiant-only cards (5302 or 5304, i.e. without the A at the end). The ProLiant-only cards seem to work in Alpha systems until the load on the devices increases, at which point they fall to bits (I speak from experience!)

I've never got to the bottom of what the difference is, and I suspect it's actually just firmware, but either way, the cards without the A at the end of the part number are a lot cheaper than the Alpha ones, and in my experience they just don't make the grade.

You haven't said what you're doing with the disks at the VMS level: whether they're presented as JBODs or plain RAIDsets, or whether you're using the SW-RAID driver to slice the disks into smaller partitions. Remember that with the SW-RAID driver, the maximum drive size to partition is 500GB, due to the maximum file size on VMS. This limit doesn't apply if you're just using the RAIDsets with VMS (i.e. using the DK devices directly), so you'll be able to go to the 1TB that VMS V7.3-1 supports.
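For the direct-DK case, a minimal sketch of bringing one of the hardware RAID sets into VMS, assuming a hypothetical device name DKB100: and volume label NEWDATA:

$ SHOW DEVICE DK                 ! list the DK devices VMS sees
$ INITIALIZE DKB100: NEWDATA     ! write a fresh volume structure
$ MOUNT/SYSTEM DKB100: NEWDATA   ! make it available system-wide

No SW-RAID layering is involved here; the card presents each mirrored pair to VMS as a single DK unit.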

Hope this helps.

Micah Nerren
Occasional Visitor

Re: Boot Alpha with multiple 5300A

Ok, I'll try to clarify each item as requested.

The card is in every way identical to the existing card in the system, part numbers and all, so I'm pretty sure it's an Alpha-supported card. Both have the latest firmware, installed via the Firmware Update V7.3 CD.

There is no way I can update the firmware of the drives on a ProLiant system. However, the drives should be fine; I'm pretty sure they are "refurbished" drives from another Alpha.

The OS never loads. The error happens at the console level, and if I hit CONTROL-C while the error is repeating, it drops me to a console prompt. OpenVMS never even starts.

Regarding SET BOOTBIOS: I can see that it is set for the current card, and I don't need to boot off the new card. If I move the new card to a slot before or after the old card, the old card gets a different console device name, pya0 or pyb0. Either way, the error follows the card name: if the original card is PYA0, the error says it cannot boot from PYA0; if the name changes to PYB0, the error says it cannot boot from PYB0.

I noticed that if I do a show dev with the new card installed, the console does not show the new card's DY devices; however, it also does not show the devices for the original card. I can only see and boot from the existing drives when the new card is not physically installed in the system.

I am trying to present the new drives as hardware mirrored drive sets. We have 10 drives, mirrored into 5 sets. No software raid or shadowing going on.

So maybe the new drive has some settings on it that cause the system to go wacky when it's installed? I don't know. When I next get a chance to plug it in and see what's going on, I'll try to get more detailed errors (not that there's much; the message isn't very descriptive).

Micah Nerren
Occasional Visitor

Re: Boot Alpha with multiple 5300A

I double checked the SCSI cards, both the original one and the additional one we are trying to add to the system.

BOTH of them have part number 171383-001. I can't find whether this card is a 5302A or a 5302. It appears to be a ProLiant Smart Array card in Google searches, but I'm not sure. At any rate, the current card has worked fine in this system for many years.

Bob Blunt
Respected Contributor

Re: Boot Alpha with multiple 5300A

Micah, you said something about configuring the card(s) using (I think) an on-board BIOS? I was never able to get any Smart Array cards for our lab, so this is *just* a SWAG. Are there any settings in the BIOS configuration that could indicate or set which card is primary or bootable? I vaguely remember that some of the older RAID cards had special settings that controlled device numbering, so I'd look for something like that. Those cards do seem to be viable for Alpha systems and are, generally speaking, supported by OpenVMS. I'm not sure multiple cards were ever tested, so you might be plowing new ground.