04-26-2011 07:49 AM
bl860c Itanium Blades in enclosure with Flex10 VC modules.
This is an informational message for anyone who is running, or considering running, bl8x0c blades in an enclosure with Flex10 VC modules (note: this may not apply to the newer "i2" blades).
This issue has been mentioned and/or discussed in several earlier threads; however, I wanted to put out a solution which appears to get around the problem.
The PROBLEM.
The onboard network devices on the bl8x0c blades, running OpenVMS, default to AUTONEGOTIATE=enabled.
The downlinks on the Flex10 VC module will not negotiate.
HP recommends that the Flex10 downlink ports be set to the "1Gb Custom" setting, and that autonegotiation be disabled at the VMS (LANCP) level.
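That recommendation translates, roughly, into LANCP commands along the following lines. This is a sketch only: EIA0 is a placeholder device name, and the exact qualifier spellings differ between OpenVMS versions, so verify against LANCP's built-in HELP before relying on it.

    $ MCR LANCP
    LANCP> SHOW DEVICE EIA0/CHARACTERISTICS           ! inspect current settings
    LANCP> SET DEVICE EIA0/SPEED=1000/AUTONEGOTIATE=DISABLE    ! running system
    LANCP> DEFINE DEVICE EIA0/SPEED=1000/AUTONEGOTIATE=DISABLE ! permanent database
    LANCP> EXIT

The DEFINE form is what writes the setting into LAN$DEVICE_DATABASE.DAT, which is the same permanent database file that the shortcut described later copies to the new blade's root.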
Scenario:
You have an existing Blade Cluster, and you want to add a new blade as a new cluster member.
1. Run CLUSTER_CONFIG
2. Try to boot the new blade into the cluster.
Result:
During initialization, the network devices are initialized (right after "Waiting to form or join a VMScluster"), but the links do not come up (because of the autonegotiation mismatch), and so the membership request never reaches the existing cluster. Startup hangs at this point. (Note also that since this is the first boot into the cluster, VOTES is set to 0 to protect the system disk, so the hang never ends.)
It is possible to get past this by:
1. shut down the cluster.
2. conversational boot of new blade.
3. set votes=1 expected_votes=1, continue
4. The blade will boot up as the first node in the cluster, allowing you to log in and configure the network devices (i.e., disable autonegotiation).
It is then necessary to shut down the node, reboot the old cluster, and then reboot the new blade.
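For reference, steps 2 and 3 above happen at the SYSBOOT> prompt, which you reach by booting conversationally (boot flags 0,1):

    SYSBOOT> SET VOTES 1
    SYSBOOT> SET EXPECTED_VOTES 1
    SYSBOOT> CONTINUE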
Obviously this is a very cumbersome procedure.
--------------------------------------
Thanks to some discussions with Colin Butcher, it appears that there is a simple shortcut which solves the problem. It might even be optimal given that the bl8x0c blades have pretty much identical hardware.
1. Run cluster_config as normal
2. from the existing (blade) node, copy SYS$SPECIFIC:[SYSEXE]LAN$DEVICE_DATABASE.DAT (and .SEQ), to SYS$SYSDEVICE:[.SYSEXE]
3. Do conversational boot of new blade.
4. Check that VOTES=0, EXPECTED_VOTES=<# of voting systems currently up>
5. Continue.
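Step 2 amounts to a pair of COPY commands along these lines, where [SYS1] is a hypothetical stand-in for whatever system root CLUSTER_CONFIG allocated to the new blade:

    $ ! [SYS1] is a placeholder - substitute the new blade's actual root
    $ COPY SYS$SPECIFIC:[SYSEXE]LAN$DEVICE_DATABASE.DAT SYS$SYSDEVICE:[SYS1.SYSEXE]
    $ COPY SYS$SPECIFIC:[SYSEXE]LAN$DEVICE_DATABASE.SEQ SYS$SYSDEVICE:[SYS1.SYSEXE]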
You should see (after "Waiting to form or join a VMScluster") the network devices not only initialize, but the links come up as well. This all occurs before the node joins the cluster.
The node should then continue to join the cluster, run NET$CONFIGURE and AUTOGEN, and then reboot, as you would expect.
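A minimal sketch of that final configuration pass, assuming DECnet-Plus and a standard AUTOGEN run from GETDATA through REBOOT:

    $ @SYS$MANAGER:NET$CONFIGURE
    $ @SYS$UPDATE:AUTOGEN GETDATA REBOOT NOFEEDBACK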
Again, thanks to Colin for the suggestion, and I hope this helps someone in the future. I guess that many in the forum would have come up with this solution independently; however, for me to figure these things out, I need it to be engraved on the head of a 3 lb block hammer and then have someone crack me over the skull with it (the light goes off when I regain consciousness).
Disclaimer: I don't know how applicable this solution would be, and I wouldn't recommend it, when there is a significant difference in the hardware between nodes.
For blades, however, where the hardware configuration is more or less pre-ordained, it does seem like an acceptable option.
An additional benefit: if your existing cluster node has (locally) standard configuration options (e.g., LLA devices for NIC teaming), then these are also automatically set up on the new blade.
Dave.
1 REPLY
04-26-2011 10:59 AM
Re: bl860c Itanium Blades in enclosure with Flex10 VC modules.
This was for information only.