HPE SimpliVity

Strange MTU findings on a newly deployed 4 Node stretched cluster

 
fahlis
Frequent Advisor


Hello
I have recently deployed a 4 Node stretched cluster with OmniStack 4.0.1 U1 for a customer and to my best effort validated the correct MTU settings all over.

Physical switches, vSwitch1 (Storage/Federation), and the vmks on vSwitch1 are all set to Jumbo Frame 9000. However, for this 4 node cluster, as well as another customer's 4 node cluster, when I check in vCenter under "Host > Configure > Physical Adapters" and choose one of the vmnics for vSwitch1, I see the value for MTU is 1500.

When comparing this with four other customers' 2 node clusters, I see the value MTU 0 for all of these.
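For anyone wanting to cross-check these values outside the vCenter GUI, the MTU on the vSwitch, the VMkernel interfaces, and the physical uplinks can also be read from the ESXi shell. A sketch (vSwitch1 matches my setup; adjust names for yours):

```shell
# MTU configured on the standard vSwitch (should show 9000 for vSwitch1)
esxcli network vswitch standard list -v vSwitch1

# MTU on the VMkernel interfaces (storage/federation vmks)
esxcli network ip interface list

# MTU reported for the physical uplinks (vmnics)
esxcli network nic list
```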

Can you please explain why I see MTU 1500 for both of the 4 node clusters but not for the 2 node clusters?

I see two things that the two 4 node clusters have in common:

1. They are both stretched 4 node clusters.

2. They both use Nexus Switches.

See below info from one of these environments.

If I had to guess, are the settings taken from the physical switch port?

The network team has validated that all involved switch ports are set to Jumbo Frame, so I can't understand why we see MTU 1500 here on the vmnics.

Any ideas are welcome.

MTU Settings.jpg

 

Mohsina_4
HPE Pro

Re: Strange MTU findings on a newly deployed 4 Node stretched cluster

Hi @fahlis 

Thank you for posting your query regarding the MTU findings on your newly deployed 4 Node stretched cluster.

Our engineers are looking into this and we will reply to you shortly.

Kindly bear with us.

Mohsina


Imobi
Occasional Advisor

Re: Strange MTU findings on a newly deployed 4 Node stretched cluster

Hi @fahlis,

The issue reported is quite unique. To understand it better, it would be good to know how the cluster was deployed.

The Deployment Orchestrator logs will provide the first clue as to whether the MTU size was set for the particular NICs, if that was part of the deployment selected in the Deployment Manager. Furthermore, a look into the OS (ESXi) logs will also give more detail on how this parameter was set up.

In short, this requires log analysis to find the cause of any issue.

Thanks and regards

Imobi


I am an HPE Employee
db13
HPE Pro

Re: Strange MTU findings on a newly deployed 4 Node stretched cluster

Hi @fahlis, the MTU is configured on the vSwitch, and those values are reflected on the vmnics, as there is no MTU configuration on the physical adapters themselves. The VMkernel ports are configured separately and require you to explicitly set the MTU on those as well. If you are seeing 1500 on the vmnics, it is because the vSwitch (to which they are uplinks) is configured at 1500. Setting the vSwitch to 9000 would cause the vmnics to then report 9000.
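As a quick sketch of what this looks like from the ESXi CLI (vSwitch1 and vmk1 here are assumed names; substitute your own):

```shell
# Raise the vSwitch MTU to 9000; the vmnic uplinks then report 9000
esxcli network vswitch standard set -v vSwitch1 -m 9000

# The VMkernel ports must be set explicitly as well
esxcli network ip interface set -i vmk1 -m 9000
```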

Furthermore, the information displayed on the physical adapter page is pulled from the Cisco device via the CDP protocol. This would indicate either that the switch has the interface set to 1500 (statically or by negotiation), or that the interface uses a feature such as QoS, which defaults to 1500.

Also, as a side note regarding MTU: The default MTU for the Omnistack Virtual Controller (OVC) is 1500 for the Management and should only be reconfigured for jumbo frames if the end-to-end connectivity to all other OVCs can support that MTU. 

 

I am an HPE Employee


fahlis
Frequent Advisor

Re: Strange MTU findings on a newly deployed 4 Node stretched cluster

Hi @db13
Thanks for your reply and thorough explanation.
I am well aware of the requirements. It was deployed with MTU 1500 for management (vSwitch0) and MTU 9000 for storage/federation (vSwitch1). I have also double-checked every part. The vmks are also set correctly. The network team says the involved ports are set correctly (Nexus switches). No QoS, and so on.

Just to add, I have verified jumbo-frame packet flow end to end using both ping and vmkping.
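For reference, the jumbo-frame test I ran looks roughly like this (vmk1 and the peer address are placeholders from my environment; 8972 is 9000 minus 28 bytes of IP/ICMP headers):

```shell
# Don't-fragment ping with a near-9000-byte payload over the storage vmk
vmkping -d -s 8972 -I vmk1 10.0.0.2
```

If this succeeds with the don't-fragment flag set, jumbo frames are passing the full path between the hosts.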
I also checked the orchestrator log myself and could not find anything odd there.
Could it be a GUI bug in vCenter perhaps?