Operating System - OpenVMS

daferreira
Occasional Contributor

HP cClass enclosures for OpenVMS blades

the lit. on the C7000 enclosure mentions 4 redundant interconnect fabrics... being a total "rube" about VMS blades, does this mean there is a new interconnect technology that is fiber based and not copper based, as in the old days?
I realize that there would be distance limitations, perhaps... but it would be preferable to using NIC cards, I would think
Robert Gezelter
Honored Contributor

Re: HP cClass enclosures for OpenVMS blades

daferreira,

First, welcome to the OpenVMS ITRC Forum!

Since you are referring to a piece of actual HP documentation (presumably from the web), a citation to the actual document and page would be sincerely appreciated. That way, we can be precisely clear about what question is being asked.

There are integral LANs built into the C-class enclosures, with the "interfaces" integral to the blades.

When you provide the precise citation, I can be more specific with my clarification.

- Bob Gezelter, http://www.rlgsc.com
Bill Hall
Honored Contributor

Re: HP cClass enclosures for OpenVMS blades

No new interconnect technology with regard to "cable media" and distances. As the QuickSpecs detail, up to 4 redundant interconnects are supported in the enclosure.

"Up to 4 different interconnect fabrics (Ethernet, FC, IB, iSCSI, SAS, etc.) supported simultaneously within the enclosure."

The FC is 4Gb or 8Gb; Ethernet is 1Gb or 10Gb. Read up on the Virtual Connect modules; that's where I think most of the "new" and cool stuff is in an HP BladeSystem.

Bill
Bill Hall
daferreira
Occasional Contributor

Re: HP cClass enclosures for OpenVMS blades

THX to you both for replying...
when you create a Virtual Connect domain, are you creating a cluster? ... or does this just expand the range of your system management capability?
Robert Gezelter
Honored Contributor

Re: HP cClass enclosures for OpenVMS blades

daferreira,

First, FC does not necessarily imply the use of fiber (see http://en.wikipedia.org/wiki/Fibre_Channel ). It is an unfortunate naming choice in that respect.

Virtual Connect is not a replacement for clusters, it is a complement. The page for HP Virtual Connect is presently http://h18000.www1.hp.com/products/blades/virtualconnect/infrastructure.html . What Virtual Connect allows one to do is manage the LAN/SAN connectivity and addresses, making hardware substitution possible. It enables replacement of hardware, but it does not magically move the existing cluster member to the new blade. It only affects the off-board connectivity.

The question of whether one would want to use a VM migration scheme to move an OpenVMS cluster member is a more complex one. IMHO, such facilities are a complement, but by no means a replacement for an OpenVMS cluster.

The interrelationship between OpenVMS clusters, HPVM, and blade migration is a complex and subtle question of balancing strengths and weaknesses, something I noted in my recent HP Enterprise Technology Forum presentation "Evolving OpenVMS Environments: An Exercise In Continuous Computing", slides and audio available at http://www.rlgsc.com/hptechnologyforum/2009/continuous-openvms.html

- Bob Gezelter, http://www.rlgsc.com
Bill Hall
Honored Contributor

Re: HP cClass enclosures for OpenVMS blades

Virtual Connect has nothing to do with clustering. It has nothing to do with the host OS running on a Blade.

Virtual Connect puts an abstraction layer between the servers (blades) and the external networks, so the LAN and SAN connections, and the security configuration/setup for those connections, only have to be done once.

VC allows you to assign "private" MAC and WWID addresses, used in place of the embedded hardware addresses, to "profiles" that are assigned to physical blades. You configure SAN switch/fabric zoning, storage array LUN security, and/or Ethernet switch security once, against the assigned addresses. Then, in the future, you can move profiles or replace the underlying hardware (which has factory addressing on it) without having to go back and update security on the SAN or Ethernet after the hardware change.
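The profile mechanism described above can be sketched in a few lines of Python. This is purely a conceptual illustration; the `VCProfile` class and the address values are hypothetical, not the real Virtual Connect Manager interface.

```python
# Conceptual sketch of Virtual Connect-style address virtualization.
# The classes and address values here are hypothetical illustrations,
# not the actual Virtual Connect Manager API.

class Blade:
    """A physical blade with factory-burned addresses."""
    def __init__(self, factory_mac, factory_wwn):
        self.factory_mac = factory_mac
        self.factory_wwn = factory_wwn

class VCProfile:
    """A server profile: virtual addresses that belong to the profile/bay,
    not to whatever physical hardware is currently installed."""
    def __init__(self, name, virtual_mac, virtual_wwn):
        self.name = name
        self.virtual_mac = virtual_mac
        self.virtual_wwn = virtual_wwn
        self.blade = None

    def assign(self, blade):
        # The external LAN/SAN only ever sees the virtual addresses,
        # so swapping the blade changes nothing upstream.
        self.blade = blade

    def visible_addresses(self):
        return (self.virtual_mac, self.virtual_wwn)

profile = VCProfile("bay7-openvms", "02:16:0A:00:00:07",
                    "50:06:0B:00:00:C2:62:00")
profile.assign(Blade("AC:DE:48:11:22:33", "10:00:00:05:1E:AA:BB:CC"))
before = profile.visible_addresses()

# Replace failed hardware: new factory addresses, same profile.
profile.assign(Blade("AC:DE:48:44:55:66", "10:00:00:05:1E:DD:EE:FF"))
after = profile.visible_addresses()

# Zoning and switch security keyed on the virtual addresses stay valid.
print(before == after)
```

The point of the sketch is the last line: the fabric-visible identity is a property of the profile, so the hardware swap is invisible to SAN and Ethernet security.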

Take a look at some of HP's Technology Briefs such as this one: http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00814156/c00814156.pdf. It may clarify some things for you. It takes a lot of upfront complexity to simplify managing in the future :-).

Bill
Bill Hall
Hoff
Honored Contributor

Re: HP cClass enclosures for OpenVMS blades

"Virtual Connect"? Think "NAT meets Fibre Channel".

Basically, a whole infrastructure for tracking the pieces and parts of a Fibre Channel SAN got built up (with a whole lot of baggage, UIs, and other fun), and it became increasingly difficult to manage the results over time. Virtual Connect is a means of dealing with the technical, configuration, and organizational problems that ensued.
Clarete Riana
Valued Contributor

Re: HP cClass enclosures for OpenVMS blades

Virtual Connect (VC) is a feature provided by the Virtual Connect Ethernet and Virtual Connect FC interconnect modules. These modules are placed in the appropriate interconnect bays of your blade enclosure according to the port mapping (which depends on the placement of NIC cards and FC adapters in the blade mezzanine slots). You can also have two interconnect modules of the same type in adjacent bays for redundancy.

There is a Virtual Connect Manager software that runs on the VC Ethernet module. This means that in order to use VC functionality at all, one has to have the VC Ethernet interconnect module. If Virtual Connect functionality is desired for FC as well, then a Virtual Connect FC module must also be present in the enclosure in the appropriate bay. Only one instance of VC Manager is active for both Ethernet and FC, and it runs on the VC Ethernet module.

Coming to the functionality of VC: it provides ease of management, eases replacement of failed hardware, and simplifies network connections. When Virtual Connect is used, the VC Manager assigns a virtual MAC ID and virtual WWNs (World Wide Names) to the NIC and FC cards, respectively, for a particular server bay. A default boot device can also be configured for a particular server bay. So in the event of failure of an entire server or any adapter, the replacement hardware in the bay assumes the virtual MACs and WWNs without any manual intervention. Since the external network sees only the virtual WWNs and MAC IDs, no changes are required to any selective storage presentation rules already in place. In the absence of VC, any hardware replacement would introduce new MAC IDs and WWNs into the network, and any storage presentation rules would have to be redone manually.
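The selective storage presentation point can be illustrated from the array's side with a small sketch. Again, this is a conceptual model only; the function, dictionary, and WWN values are hypothetical, not a real array or VC interface.

```python
# Conceptual sketch: why storage presentation rules survive a hardware
# swap under Virtual Connect. All names and WWNs are hypothetical.

# The array's LUN-presentation list is keyed on the WWN it sees log in
# on the fabric.
lun_presentation = {"50:06:0B:00:00:C2:62:00": ["LUN_0", "LUN_1"]}

def luns_visible(wwn_seen_on_fabric):
    """What the array presents to a host logging in with this WWN."""
    return lun_presentation.get(wwn_seen_on_fabric, [])

# Without VC, the fabric sees the factory WWN; a replacement blade
# shows up with a new factory WWN and loses access until the array
# is reconfigured:
print(luns_visible("10:00:00:05:1E:DD:EE:FF"))  # []

# With VC, the fabric always sees the profile's virtual WWN, so the
# replacement blade inherits access with no storage-side change:
print(luns_visible("50:06:0B:00:00:C2:62:00"))  # ['LUN_0', 'LUN_1']
```

Either way, the array is doing the same lookup; VC just guarantees the key it sees never changes.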

Thus you can see that VC has nothing to do with OpenVMS clusters.