OVMS 7.2.1 cluster with RA3000
05-26-2009 05:40 AM
I'm going to receive some hardware soon and need to set up a 2-node OpenVMS cluster using OVMS 7.2.1.
The hardware will be two DS20Es with 3X-KZPDC and a 6-slot drive cage. The cluster storage will be a KZPBA differential SCSI adapter in each Alpha and a StorageWorks RA3000 with dual HSZ22 controllers. For the cluster interconnect I'm going to use CCMAB Memory Channel.
I'm not a beginner, but unfortunately I haven't done this for a long time and don't remember many of the details anymore. I will have to set everything up from scratch, including the cabling.
If I remember correctly I will use cluster_config, which also creates a local MODPARAMS.DAT in each system root. I should set ALLOCLASS=1 and VOTES=1. The location of SYSDUMP.DMP is defined at the SRM console, where I can direct it to a local drive. I should not forget to remove the termination resistors on each HSZ22. What about the allocation class and device naming on the RA3000 console?
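For reference, the MODPARAMS.DAT entries in question might look like the following sketch. This is illustrative only: cluster_config normally generates these entries itself, and EXPECTED_VOTES depends on the final configuration (e.g. whether a quorum disk is used).

```
! Illustrative MODPARAMS.DAT fragment for one node of a 2-node cluster
VAXCLUSTER = 2        ! always participate in a cluster at boot
VOTES = 1             ! this node contributes one vote
EXPECTED_VOTES = 2    ! total votes expected across the cluster
ALLOCLASS = 1         ! node allocation class
MSCP_LOAD = 1         ! load the MSCP server
MSCP_SERVE_ALL = 2    ! serve locally attached disks to the cluster
```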
Do you have some more tips for setting up the system?
I have installed a PC with RA3000 SW 2.1 and will use serial connections and the Windows GUI to configure the storage. I found a good document, "StorageWorks RA3000 for OpenVMS SCSI Clusters Application Note", Version 1.0, on the CD, but many pages of the document (cluster.pdf) are truncated, in particular the tables.
Does anyone have a better version of this document?
Best Regards,
Markus
05-26-2009 06:41 AM
Re: OVMS 7.2.1 cluster with RA3000
Memory Channel:
Meanwhile I have reviewed some more documentation, and I need to set jumpers on each of the Memory Channel cards accordingly: J1 hub mode, node 0 jumpered 2-3, node 1 no jumper. I will use a BN39B-10 cable to connect the two cards. J3 needs to be on pins 1-2 for OVMS. Once I have installed and connected the cards I can run "mc_diag" and "mc_cable" from each system's SRM console to verify the installation.
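Assuming DS20E SRM firmware, the verification might look like the following sketch (the console prompt and output format vary with firmware version; mc_cable is normally started on both nodes at once and stopped with Ctrl/C after it reports the link as good):

```
P00>>> show config      ! check that the CCMAB shows up on the PCI bus
P00>>> mc_diag          ! self-test of the local Memory Channel adapter
P00>>> mc_cable         ! run on both nodes; reports end-to-end cable status
```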
Question: J5 is "AlphaServer 8x00 mode" - should I set this on a DS20E?
05-26-2009 11:31 AM
Re: OVMS 7.2.1 cluster with RA3000
fwiw
05-26-2009 12:52 PM
Re: OVMS 7.2.1 cluster with RA3000
J5:
J5 - AlphaServer 8X00 Mode
Increases the maximum sustainable bandwidth of 8X00 platforms by 10 MB/s. If this jumper is inadvertently set on any other platform, the maximum sustainable bandwidth will decrease by 10 MB/s. This jumper may be overridden by the Module Configuration Register (MODCFG) in case it is not installed properly.
hth,
05-26-2009 08:04 PM
Re: OVMS 7.2.1 cluster with RA3000
I have a collection of RA3000 stuff on our public server.
http://ftp.vsm.com.au/ra3000/
or
ftp://ftp.vsm.com.au/ra3000/
The OpenVMS clustering documentation has notes on setting port-specific allocation classes for shared SCSI storage.
http://h71000.www7.hp.com/doc/82final/6318/6318pro.html
http://h71000.www7.hp.com/doc/731final/4477/4477pro.html (esp. Chapter 6)
Regards,
Jeremy Begg
05-26-2009 08:25 PM
Re: OVMS 7.2.1 cluster with RA3000
When you use the same value for the host allocation classes you will have to deal with duplicate device names. If I remember correctly, V7.2-1 was the first release with a completely working implementation of port allocation classes, so a cluster with all-unique names and not too many surprises became possible.
Be careful with the RA3000, though! It is a low-end OEMed box that does not even have embedded cache batteries and requires an external UPS. Good luck with it.
05-28-2009 02:20 AM
Re: OVMS 7.2.1 cluster with RA3000
Regarding local drives: shouldn't they show up as nodename$DKxxx, or do I need to set this somewhere?
Where do I specify that I want to use Memory Channel as the cluster interconnect? Is there just the change option in the cluster_config utility, or can I use it right from the beginning?
Thanks.
05-28-2009 11:03 PM
Re: OVMS 7.2.1 cluster with RA3000
It's been a while since I set up the shared SCSI cluster here, but I recall that it certainly wasn't necessary to set ALLOCLASS the same on both nodes. In fact, it's recommended that ALLOCLASS *not* be the same on all nodes if they have local (non-shared) storage.
The shared SCSI bus must be set up using "port" allocation classes as explained in the OpenVMS documentation. Paraphrasing from Appendix A in the "Guidelines for OpenVMS Cluster Configurations"...
"All host adapters attached to a shared SCSI bus must have the same OpenVMS device name (e.g. PKA0) unless port allocation classes are used. Each system attached to a shared SCSI bus must have a non-zero disk allocation class value (ALLOCLASS)."
Port allocation classes are enabled by setting the SYSGEN parameter DEVICE_NAMING to 1; then use CLUSTER_CONFIG to set the port allocation class value. This creates the file SYS$SYSTEM:SYS$DEVICES.DAT, which you can edit if you wish (but probably shouldn't).
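As an illustration, a SYS$DEVICES.DAT entry for a port allocation class of 2 on adapter PKB might look something like the sketch below. The node name VMS1 is made up here, and the exact syntax should be checked against the OpenVMS Cluster Systems manual before editing the file by hand.

```
! SYS$SYSTEM:SYS$DEVICES.DAT -- one section per local port
[Port VMS1$PKB]
Allocation Class = 2
```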
Note that if ALLOCLASS is non-zero, local (non-shared) drives will be named $ccc$ddcu: where 'ccc' is the ALLOCLASS value. You can still refer to them as node$ddcu:, but SHOW DEVICE will show the ALLOCLASS form of the name.
I'm not sure about Memory Channel; I've never used it. I notice there is a whole load of MC_SERVICES_Pn SYSGEN parameters, some of which may be relevant. Note that SCS will use whatever paths are available for inter-node communication, so it's quite possible you don't have to do anything.
Regards,
Jeremy Begg
05-29-2009 03:55 AM
Re: OVMS 7.2.1 cluster with RA3000
Thanks a lot for your message. Meanwhile I have also done some reading in the OpenVMS Cluster Systems manual. Together with your info I'm concluding the following so far:
Port allocation classes are enabled by the DEVICE_NAMING=1 and SCSSYSTEMIDH=0 SYSGEN parameters. A port allocation class is a designation for all ports attached to a single interconnect. It replaces the node allocation class (ALLOCLASS) in the device names.
Port allocation class 0 does not become part of the device name. Instead, the name of the node to which the device is attached becomes the first part of the device name. The controller letter (PKA, PKB) remains in the device name; the result will be e.g. node$DKA100, node$DKB100.
Port allocation class -1 disables port allocation and uses the node ALLOCLASS for that bus.
A port allocation class of 1-32767 is used as the first part of the device names of the attached devices; e.g. with port allocation class 2 or 3, the device names come out as $2$DKA100 or $3$DKA100.
All nodes connecting to a shared SCSI bus must use a port allocation class, and all nodes of a cluster must use the same device names for devices attached to the cluster storage. The port allocation class for the shared bus can therefore not be 0 or -1.
My plan is:
- port allocation class 0 for all local controllers
- port allocation class 2 for the shared scsi bus.
A port allocation class can be assigned when using cluster_config.com to set up the cluster, by using SET/CLASS PKB 2 at the SYSBOOT> prompt (boot -fl 0,1), or by editing SYS$SYSTEM:SYS$DEVICES.DAT manually.
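Putting the conversational-boot variant together, the sequence on each node might be sketched as follows (the system disk name dka0 is an assumption; substitute the actual boot device):

```
>>> boot -fl 0,1 dka0        ! conversational boot, stops at SYSBOOT>
SYSBOOT> SET DEVICE_NAMING 1 ! enable port allocation classes
SYSBOOT> SET/CLASS PKB 2     ! port allocation class 2 for the shared bus
SYSBOOT> CONTINUE            ! resume booting with the new settings
```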
Result:
Local devices will be nodeX$DKA100 on nodeX and nodeY$DKA100 on nodeY. For the shared bus I will use port allocation class 2, and devices will show up as $2$DKA0 and $2$DKA1 on both nodes X and Y of the cluster.
Questions:
What could be a reason for assigning -1 to a bus and disabling port allocation? Anything to do with SRM console variables, e.g. the dumpfile?
I plan to use the DUMPSTYLE=14 SYSGEN parameter and to specify a local directory on a local drive in the SRM sysdump variable.
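As a sketch, the two halves of the dump setup might look like this (the target drive dka100 is illustrative, and the full dump-off-system-disk procedure, including creating a correctly sized dump file on the target disk, is described in the system management documentation):

```
! SYSGEN side, e.g. in MODPARAMS.DAT (then run AUTOGEN):
DUMPSTYLE = 14          ! see the documentation for the bit meanings

! SRM console side -- point the dump at a local drive:
>>> set dump_dev dka100
```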
I should disable SCSI reset on the adapters. In the case of the KZPBA, I read that I need to set the SRM os_type variable to windows_nt to boot into the ARC utility, and then change it back after modifying the card parameters. Is this correct?
Thanks,
Markus