Supported VC configuration and our position on it?

 
chuckk281
Trusted Contributor


Question from Andrea on a particular VC configuration:

 

****************

 

I'm speaking about VC Flex-10 as the Ethernet module, so: no FCoE and no FlexFabric modules.

 

I've seen in the VC Install Guide that this is a supported config:

[Bay 1] VC Ethernet    [Bay 2] Empty
[Bay 3] VC-FC          [Bay 4] Empty
[Bay 5] Other/empty    [Bay 6] Other/empty
[Bay 7] Other/empty    [Bay 8] Other/empty

 

But… are these configs supported too?

1)

[Bay 1] VC Ethernet    [Bay 2] VC Ethernet
[Bay 3] VC-FC          [Bay 4] Empty
[Bay 5] VC-FC          [Bay 6] Empty
[Bay 7] Other/empty    [Bay 8] Other/empty

2)

[Bay 1] VC Ethernet    [Bay 2] VC Ethernet
[Bay 3] VC-FC          [Bay 4] Empty
[Bay 5] Empty          [Bay 6] VC-FC
[Bay 7] Other/empty    [Bay 8] Other/empty

 

 

I can't see any technical constraints that could create problems:

- Each HBA has one port connected instead of two;

- Each VC-FC will be managed by VCM;

- No mezzanine/LOM has mixed module types across its ports.

 

But I would like to understand whether the config is OK from a supportability standpoint.

This is not a "must"; I simply want to optimize blade availability, since someone sold my customer 2 QLogic HBAs per blade but included only 2x VC-FC modules (…no comment…).
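
For reference, here is a minimal Python sketch (not part of the original question) that encodes the two proposed layouts as bay-to-module maps and mechanically checks the third point above, that no adapter faces mixed module types. The adapter-to-bay pairing used here (LOM to bays 1/2, Mezzanine 1 to bays 3/4, Mezzanine 2 to bays 5/6) is the usual c7000 half-height mapping and is an assumption of the sketch, not something stated in the post.

```python
# Illustrative only: Andrea's two candidate layouts as bay -> module maps.
LAYOUT_1 = {1: "VC-Eth", 2: "VC-Eth", 3: "VC-FC", 4: None, 5: "VC-FC", 6: None}
LAYOUT_2 = {1: "VC-Eth", 2: "VC-Eth", 3: "VC-FC", 4: None, 5: None, 6: "VC-FC"}

# Assumed c7000 half-height pairing: which two bays a dual-port adapter reaches.
ADAPTER_BAY_PAIRS = {"LOM": (1, 2), "Mezz1": (3, 4), "Mezz2": (5, 6)}

def mixed_module_types(layout):
    """Return adapters whose two ports would face different module types."""
    mixed = []
    for adapter, (a, b) in ADAPTER_BAY_PAIRS.items():
        types = {layout.get(a), layout.get(b)} - {None}  # ignore empty bays
        if len(types) > 1:
            mixed.append(adapter)
    return mixed

for name, layout in (("Layout 1", LAYOUT_1), ("Layout 2", LAYOUT_2)):
    print(name, "- adapters with mixed module types:", mixed_module_types(layout) or "none")
```

Both layouts come back clean under this check, which matches the observation that each VC-FC module only ever sits opposite an empty bay on the same mezzanine.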

 

***************

 

Reply from Robert:

We do not typically see a single Ethernet module in a configuration, but it is supported. It just does not support high availability (redundancy and failover).

 

Here are the first two bullets from the supported configurations section (page 35) of the 2010 Setup and Installation Guide:

 

  • In all Virtual Connect configurations, a HP VC FlexFabric 10Gb/24-port Module must be installed in interconnect bay 1. The embedded Virtual Connect Manager typically operates on this module.

  • To support high availability of the Virtual Connect environment, HP recommends that HP VC FlexFabric 10Gb/24-port Modules be used in interconnect bays 1 and 2. The embedded Virtual Connect Manager operates in an active/standby configuration.

 

To address the FC: I have seen many configurations with the FC modules in bays 3 and 5, and even 3 and 6. These designs provide separation for controller-level redundancy, i.e., the redundant/failover ports are not ports on the same dual-port HBA. It is supported.
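
Purely as an illustration of the rules above (the quoted guide bullets plus the note on FC placement), a layout check could look like the following Python sketch. The function name, module labels, and messages are invented for this example; it is not an HP tool and does not cover every supportability rule.

```python
# Minimal, assumption-laden sketch of the checks discussed above.
def check_vc_layout(layout):
    """layout: dict of interconnect bay number -> module label (None = empty)."""
    findings = []
    # Guide rule: the VC Ethernet/FlexFabric module must sit in bay 1,
    # because the embedded Virtual Connect Manager runs on that module.
    if layout.get(1) not in ("VC-Eth", "VC-FlexFabric"):
        findings.append("Unsupported: bay 1 must hold a VC Ethernet/FlexFabric module")
    # Guide recommendation: a second module in bay 2 gives active/standby VCM.
    if layout.get(2) is None:
        findings.append("Supported, but no HA: bay 2 empty, so VCM has no standby")
    # Per the reply above, VC-FC modules in bays 3 and 5 (or 3 and 6) are
    # supported, so non-adjacent FC placement is not flagged here.
    return findings or ["No issues found"]

# A single-Ethernet layout with FC modules in bays 3 and 5.
print(check_vc_layout({1: "VC-Eth", 2: None, 3: "VC-FC", 5: "VC-FC"}))
```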

 

And info from Chad:

A good example of a single Flex-10 and/or FlexFabric installation is a high-performance computing (HPC) implementation.

 

1) It allows for adoption of Virtual Connect into a typically large networking solution
2) It allows for internal traffic flows of HPC workloads
3) HPC implementations typically do not require redundancy for most components
4) If multiple networks are required, the fractionalized LOM can handle the traffic load (enterprise/private) split over 10Gb (see the sketch after this list)
5) Typically HPC solutions do not require SAN storage except for head node or storage node utilization (these nodes usually sit outside the blade enclosure)
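
As a rough illustration of point 4, here is a small sketch of how one 10Gb Flex-10 port might be carved into FlexNICs for an enterprise/private split. The network names and per-FlexNIC shares are invented example values; the only constraint enforced is that the allocations cannot exceed the port's 10Gb.

```python
# Example values only: carve one 10Gb Flex-10 LOM port into FlexNICs.
FLEXNIC_SHARES_GB = {
    "hpc-internal": 6.0,  # east-west HPC / MPI traffic
    "enterprise":   3.0,  # corporate network access
    "management":   1.0,  # provisioning and monitoring
}

total = sum(FLEXNIC_SHARES_GB.values())
assert total <= 10.0, f"FlexNIC allocation of {total}Gb exceeds the 10Gb port"
for network, share in FLEXNIC_SHARES_GB.items():
    print(f"{network}: {share}Gb of 10Gb")
```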

 

I run this setup for a large insurance company's internal HPC solution. It works very well for their needs and significantly reduces the amount of cabling required.

 

****************

 

Comments?