BladeSystem - General


Lash J

BL460c G6 -> Network -> Lom1 Lom2 Redundancy

We have BL460c G6 blades in our infrastructure, with Flex-10 NICs.

Each blade initially has 2 LOMs (LAN on Motherboard), which are then split into 4 virtual NICs each, 8 virtual NICs in total.

The NICs appear as LOM 1a, 1b, 1c, 1d, 2a, 2b, 2c, 2d.

We have Virtual Connect Ethernet modules with shared uplink networks that are already redundant at the module level.

My question is:

Since there is already redundancy at the Virtual Connect module level:

(a) Is it then useless to set up HP NIC teaming at the blade level for redundancy?

(b) Within a blade, all connections pass through a single physical connector to the enclosure chassis. If LOM1 loses its connection due to a fault within the blade, does that mean LOM2 will also lose its connection?

I don't know if I've made myself clear enough.

Let me know if a diagram would help.

Will be glad to get some feedback.

Thank you.
Adrian Clint
Honored Contributor

Re: BL460c G6 -> Network -> Lom1 Lom2 Redundancy

(a) No, you still need NIC teaming. It is recommended that you team two NICs (one on LOM1 and one on LOM2) to cope with the failure of a single NIC.

(b) They are different physical NICs connecting to physically different VC modules, so if "fault within the blade" means a NIC failure, the other LOM will not lose its connection.

There is no point, though, in having two NICs connected to the same network when that network uplinks through only one VC module.

The two NICs must either be connected to two different networks that uplink through two different VC modules (active/active), or to a single network that uses uplinks in more than one VC module (active/passive).
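To make the failover logic described above concrete, here is a small toy model (illustrative only; the class and NIC names are invented and this is not the HP teaming software's actual implementation) of an active/passive team whose IP stays reachable as long as at least one member NIC still has a healthy path through its VC module:

```python
# Toy model of active/passive NIC teaming failover (illustrative only).

class VcModule:
    """A Virtual Connect module in one enclosure bay."""
    def __init__(self, name):
        self.name = name
        self.up = True

class Nic:
    """A blade NIC wired to exactly one VC module."""
    def __init__(self, name, vc_module):
        self.name = name
        self.vc_module = vc_module
        self.up = True

class Team:
    """Active/passive team: traffic uses the first member with a healthy path."""
    def __init__(self, members):
        self.members = members

    def active_nic(self):
        for nic in self.members:
            if nic.up and nic.vc_module.up:
                return nic
        return None  # team down: no healthy path left

vc1, vc2 = VcModule("VC bay 1"), VcModule("VC bay 2")
team = Team([Nic("LOM1a", vc1), Nic("LOM2a", vc2)])

assert team.active_nic().name == "LOM1a"  # normal operation
vc1.up = False                            # whole VC module 1 fails
assert team.active_nic().name == "LOM2a"  # failover via VC bay 2
team.members[1].up = False                # second NIC also fails
assert team.active_nic() is None          # now the team is down
```

Note that the single-module failure is only survivable because the team spans both LOMs, and hence both VC modules; a team built from two virtual NICs on the same LOM would lose both paths at once.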
Honored Contributor

Re: BL460c G6 -> Network -> Lom1 Lom2 Redundancy


I'll try to answer your questions.

(a) The teaming software provides redundancy at the IP level as well. If either side of the Virtual Connect goes down, you will still have one NIC carrying the IP address, provided you team one NIC from each bay.

(b) You may have one big connector, but within it there are many different paths.

If LOM1 loses its connection, that doesn't automatically mean LOM2 fails as well; obviously that depends on the fault in the blade. As I understand your description, the LOMs are no different from the LOMs of any rack or tower server.
Adrian Clint
Honored Contributor

Re: BL460c G6 -> Network -> Lom1 Lom2 Redundancy

Go have a look at the different connectivity scenarios in the Virtual Connect documentation: http://bizsupport.austin.hp.com/bc/docs/support/SupportManual/c01471917/c01471917.pdf
The Brit
Honored Contributor

Re: BL460c G6 -> Network -> Lom1 Lom2 Redundancy

Additional comment.

VCM software failover between modules 1 <--> 2 can take up to 25-30 seconds. If you are not teamed across modules 1 and 2, you will almost certainly have issues.

With teamed NICs, the failover is almost instantaneous.

Lash J

Re: BL460c G6 -> Network -> Lom1 Lom2 Redundancy

Thanks for the answers.

My issue is as such.

The blades are running Windows 2008 R2 Core with Hyper-V along with Clustered Shared Volume.

We first tried a setup with teaming at the blade level plus redundancy at the Virtual Connect level.

Everything is fine until a virtual network at the Hyper-V level needs to be configured and bound to a network team created with the HP utility.

At that stage, the network team is lost, and Hyper-V cannot bind to the team to create its virtual network.

HOWEVER, the same settings work fine in Windows 2008 R2 full installation with the GUI.

BUT, according to the design, we must stay on Windows 2008 R2 Core only.

That's why we wanted to bypass NIC teaming at the blade level.

Any comments regarding this?

Honored Contributor

Re: BL460c G6 -> Network -> Lom1 Lom2 Redundancy

Hi Abdool,

The other posters are absolutely right: do not leave your blade NICs unteamed, or you will sacrifice your ability to fail over quickly.

Are you saying that you successfully built a team, but the team was dissolved when you attempted to configure Hyper-V to use it? Make sure the team is configured properly. For a 2008 Core server you should use the CQNICCMD utility documented here:
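For what it's worth, the usual CQNICCMD workflow on Server Core looks roughly like the sketch below. Treat the switch names and paths as assumptions from memory, not the authoritative syntax; verify them against the utility's own help output before using them.

```shell
REM Sketch of the CQNICCMD teaming workflow on Server Core.
REM Switch names are assumptions -- confirm with "cqniccmd /?" first.

REM On a reference server where the team is already configured via the GUI,
REM capture the teaming configuration to an XML file:
cqniccmd /c C:\team-config.xml

REM Copy the XML file to the Core server, then apply it there:
cqniccmd /s C:\team-config.xml
```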

good luck!
Lash J

Re: BL460c G6 -> Network -> Lom1 Lom2 Redundancy

Hi, the team under Core is configured properly; we can reach the teaming GUI by running 'hpteam.cpl' at the command prompt.

The utility creates the team and connection to it is also good.

But once we try, in Hyper-V, to bind the virtual network to that same team, what would usually be a 15-second process takes more than 20-30 minutes and finally ends in an error.

Afterwards, there is no more connectivity to the IP of the team.

Honored Contributor

Re: BL460c G6 -> Network -> Lom1 Lom2 Redundancy

Check this white paper; it may have the answers:

The short answer from this document is to uninstall Hyper-V AND teaming software, then reinstall them in that order.
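On Server Core, that uninstall-then-reinstall sequence can be sketched roughly as follows. The DISM feature name is the standard one for the Hyper-V role; the HP installer step is a placeholder, since the exact uninstaller invocation depends on the NCU package you have.

```shell
REM Sketch of the reinstall order on Windows Server 2008 R2 Core.

REM 1. Remove the Hyper-V role:
Dism /online /Disable-Feature /FeatureName:Microsoft-Hyper-V

REM 2. Uninstall the HP teaming/NCU software
REM    (placeholder -- use your actual HP installer's uninstall method).

REM 3. Reinstall Hyper-V first:
Dism /online /Enable-Feature /FeatureName:Microsoft-Hyper-V

REM 4. Then reinstall the HP teaming software and recreate the team.
```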