HPE OneView

Update from Logical Interconnect Group - Drop Network Connectivity

gilestownsend
Frequent Visitor

Update from Logical Interconnect Group - Drop Network Connectivity

Hi,

We have been testing OneView (originally 1.1 but recently upgraded to 1.20) for our infrastructure. When modifying a Logical Interconnect Group, the Logical Interconnects that are part of that group display the message 'the logical interconnect is inconsistent with the logical interconnect group...'.

I select 'Update from Group' in the Logical Interconnect task menu.

It then presents a warning that updating from group will "result in a reconfiguration of the entire interconnect. This may disrupt network connectivity for server profiles using the logical interconnect."

 

Curious to see the impact of this, I ran a continuous ping to one of the ESXi hosts within the enclosure being updated.

I applied the update but did not see any packet drops.

I did check vmkwarning.log on the ESXi host, and it showed the physical network adapters going down briefly (approximately 1.5 seconds).
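A ping test like the one above can miss a sub-second blip unless you look at the timestamps. A small sketch for quantifying the outage window: run `ping -D <host>` (the Linux iputils flag that prefixes each reply with an epoch timestamp) and feed the output to something like this. The host address in the sample is just a placeholder.

```python
import re

# "ping -D <host>" (Linux iputils) prefixes each reply with an epoch
# timestamp, e.g. "[1717000000.123456] 64 bytes from ...".
TS = re.compile(r"^\[(\d+\.\d+)\]")

def longest_gap(ping_output):
    """Return the longest gap in seconds between consecutive ping replies."""
    stamps = [float(m.group(1))
              for line in ping_output.splitlines()
              if (m := TS.match(line))]
    if len(stamps) < 2:
        return 0.0
    return max(b - a for a, b in zip(stamps, stamps[1:]))

# Example: replies at t=0s and t=1s, then nothing until t=2.5s --
# a ~1.5 s blip like the one seen in vmkwarning.log.
sample = """\
[1000.0] 64 bytes from 10.0.0.5: icmp_seq=1 ttl=64 time=0.2 ms
[1001.0] 64 bytes from 10.0.0.5: icmp_seq=2 ttl=64 time=0.2 ms
[1002.5] 64 bytes from 10.0.0.5: icmp_seq=4 ttl=64 time=0.2 ms
"""
print(longest_gap(sample))  # 1.5
```

This makes the "no packet drops seen" vs. "adapter down 1.5 s" discrepancy measurable from the same data.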

 

I ran the test a few more times on different LIs; some did not show a down adapter, but most did.

 

Our preference would be to apply any changes (i.e. assigning new networks to Uplink Sets) to the logical interconnect group and then push the changes down to each logical interconnect. That way we only have to make the change once rather than on each logical interconnect (as well as the group). However, if the price is a disconnection for all server profiles/hosts/blades in every enclosure updated, then that is too risky for us.

 

Is there any way to do this but update only one interconnect at a time, so a server profile with A/A connections across both would lose only half of its connections rather than all of them? Or is the only way to update each LI individually?
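Even if per-interconnect updates aren't exposed, the blast radius can be limited by serializing the update across Logical Interconnects and checking host reachability between each one. A sketch of that flow, with stand-in callables; in a real script `put_compliance` would wrap the appliance's REST call (the "Update from Group" operation is exposed via something like `PUT /rest/logical-interconnects/{id}/compliance` — verify the exact endpoint against your appliance's API reference):

```python
# Sketch: apply "Update from Group" to Logical Interconnects one at a time,
# pausing for a reachability check between each, so at most one LI's uplinks
# are being reconfigured at any moment.
def update_lis_sequentially(put_compliance, check_hosts_ok, li_uris):
    """put_compliance(uri) -> bool  (True if the update task succeeded)
       check_hosts_ok()    -> bool  (True if all monitored hosts respond)"""
    updated = []
    for uri in li_uris:
        if not check_hosts_ok():
            raise RuntimeError("hosts unreachable before updating %s" % uri)
        if not put_compliance(uri):
            raise RuntimeError("update-from-group failed for %s" % uri)
        updated.append(uri)
    return updated

# Dry-run with stand-in callables (URIs are illustrative):
done = update_lis_sequentially(lambda uri: True, lambda: True,
                               ["/rest/logical-interconnects/li-1",
                                "/rest/logical-interconnects/li-2"])
print(done)
```

This doesn't help within a single LI (where both A-side and B-side modules belong to the same Logical Interconnect), but it does keep an A/A profile from losing paths in two enclosures at once.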

 

I suspect not; I get the impression that this is really designed for configuring a new enclosure rather than updating an existing one (at least not one with active servers).

I did look at the PowerShell library, but there did not appear to be a cmdlet for updating an existing Uplink Set?

 

What is your position on updating from LI group and the risk to active servers?

 

Thanks for any help. I really do like a lot of the improvements OneView looks like it can bring to simplifying management of medium-to-large infrastructures like ours.

 

Giles

5 REPLIES
ChrisLynchHPE
Neighborhood Moderator

Re: Update from Logical Interconnect Group - Drop Network Connectivity

Hello, and welcome to the HP OneView Community Forums @gilestownsend.

 

There were fixes introduced in the 1.20 release that address certain situations where Uplink Set configuration changes would cause a momentary loss of connectivity.

 

What sort of configuration changes were you making that would cause the LI to become inconsistent with the LIG?  Also, have you verified the vSphere Solution Recipe matches your host configuration?

 

If we can't solve your issue here quickly, then I would suggest you open a support case so our support organization can assist you with resolution.  And if this is a bug that needs a patch to address it, a support case is the only method to elevate the issue to engineering to fix.

gilestownsend
Frequent Visitor

Re: Update from Logical Interconnect Group - Drop Network Connectivity

Hi,

 

I tested on 1.20.  I couldn't find any reference to the Uplink Set fix you mention in the release notes. Can you point me to where this is documented?

 

We use the HP ESXi 5.5u2 custom image in conjunction with SPP 201409, so we should have the firmware/drivers specified in the recipe.

 

The change is that when creating new (Ethernet) networks (an A and a B network for Active/Active across two interconnects), we add the A version to Uplink Set A (on the first interconnect) and the B version to Uplink Set B (on the second interconnect).

This is something we would do for any new VLAN for our environment.

The existing process on Virtual Connect is similar, but obviously the concepts have changed a little bit (an Ethernet Network is added to a Shared Uplink Set at creation).
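The Active/Active pattern described above can be sketched as a simple plan: each new VLAN yields two Ethernet networks, the A copy assigned to the uplink set on the first interconnect and the B copy to the uplink set on the second. The network and uplink-set names below are hypothetical placeholders, not OneView defaults.

```python
# Illustration of the A/A change being pushed per VLAN: two networks,
# one per uplink set / interconnect. Names are placeholders.
def aa_network_plan(vlan_id):
    return {
        "Net-%d-A" % vlan_id: "UplinkSet-A (interconnect 1)",
        "Net-%d-B" % vlan_id: "UplinkSet-B (interconnect 2)",
    }

print(aa_network_plan(100))
```

Making this change once at the LIG level and pushing it down is exactly the "Update from Group" flow that triggers the warning.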

 

This change can be done manually on each LI, or it can be retrieved from the LIG, which is where we encounter the warning.

 

If it's by design and not a bug, as the warning suggests, then that's how it is, but I was after some clarity/best practices.

 

Cheers

 

Giles

 

ChrisLynchHPE
Neighborhood Moderator

Re: Update from Logical Interconnect Group - Drop Network Connectivity

The fix was actually in the 1.10.05 and 1.10.07 patches, and carried over to the 1.20 update.  What you are doing is best practice.  Can you tell me what version of VC firmware you are using?  What adapters, and what are their firmware/driver versions?  Do they match what is documented in the vSphere 5 Solution Recipe Guides I linked to?

gilestownsend
Frequent Visitor

Re: Update from Logical Interconnect Group - Drop Network Connectivity

Hi,

 

We have two enclosures managed in OneView, both on VC FW v4.21 (although we need to upgrade this to v4.31 very soon to permit Gen9 blades).

We use HP BL460c blades with the HP FlexFabric 10Gb 554FLB LOM. The driver/firmware will be whatever is in the SPP 201409/HP 5.5u2 ISO.

I'll double-check on the host and come back to you on that.

 

So is the bug you mentioned related to the warning I am seeing?

 

In short, should we be able to update config from the LI group without loss of network connectivity?

 

Cheers

 

Giles

ChrisLynchHPE
Neighborhood Moderator

Re: Update from Logical Interconnect Group - Drop Network Connectivity

The warning message will always be displayed, as the changes you are about to make could impact existing traffic.  As operations are asynchronous background tasks, we have no way of knowing whether systems use a particular network, or whether a network that is being removed is in use.  So we generically state that the operation could impact systems if you are removing or moving uplink assignments or networks (just to provide a few examples) in your Logical Interconnect Edit or Update From Group operation.

 

The bug I was referring to was that we would redeploy the entire Uplink Set configuration and cause an outage, even when you were only adding a new network to an existing Uplink Set in a Logical Interconnect (again, either by an Edit or an Update From Group action).