02-16-2021 05:53 AM
Strange MTU findings on a new deployed 4 Node stretched cluster
Hello
I recently deployed a 4-node stretched cluster with OmniStack 4.0.1 U1 for a customer and, to the best of my ability, validated the MTU settings everywhere.
The physical switches, vSwitch1 (Storage/Federation), and the vmks on vSwitch1 are all set for jumbo frames (MTU 9000). However, for this 4-node cluster, as well as another customer's 4-node cluster, when I check in vCenter under "Host > Configure > Physical Adapters" and select one of the vmnics for vSwitch1, the MTU value shown is 1500.
When I compare this with four other customers' 2-node clusters, I see MTU 0 for all of those.
Can you explain why I see MTU 1500 on both of the 4-node clusters but not on the 2-node clusters?
I see two things the two 4-node clusters have in common:
1. They are both stretched 4-node clusters.
2. They both use Nexus switches.
See below for info from one of these environments.
My guess is that the value is taken from the physical switch port?
The network team has validated that all involved switch ports are set for jumbo frames, so I can't understand why we see MTU 1500 on the vmnics here.
Any ideas are welcome.
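For anyone validating the same chain, the checks described above can be sketched from an ESXi shell. The vmk name (vmk1) and the remote address are assumptions, so adjust them to your environment; the vmkping payload is the MTU minus 28 bytes of headers.

```shell
# MTU validation on an ESXi host (vmk1 and the remote address are assumed names):
#   esxcli network vswitch standard list     # vSwitch1 should report MTU: 9000
#   esxcli network ip interface list         # vmk1 should report MTU: 9000
#
# A jumbo-frame vmkping must carry the MTU minus 28 bytes of headers
# (20-byte IP header + 8-byte ICMP header):
MTU=9000
PAYLOAD=$((MTU - 20 - 8))
# -d sets the don't-fragment bit so an oversized frame fails instead of fragmenting
echo "vmkping -I vmk1 -d -s ${PAYLOAD} <remote-storage-vmk-ip>"
```

If the vmkping with the don't-fragment bit succeeds at 8972 bytes, jumbo frames are passing end to end regardless of what the GUI reports.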
02-17-2021 08:16 PM
Re: Strange MTU findings on a new deployed 4 Node stretched cluster
Hi @fahlis,
Thank you for posting your query. As I understand it, you are asking about strange MTU findings on a newly deployed 4-node stretched cluster.
Our engineers are looking into this and will reply to you shortly.
Kindly bear with us.
Mohsina
02-18-2021 05:24 AM
Re: Strange MTU findings on a new deployed 4 Node stretched cluster
Hi @fahlis,
The issue reported is quite unique. To understand it better, it would help to know how the cluster was deployed.
The Deployment Orchestrator logs will provide the first clue as to whether an MTU size was set for the particular NICs during deployment, if that option was selected in the Deployment Manager. A look into the ESXi logs will also give more detail on how this parameter was set up.
In short, this requires log analysis to find the cause or rule out an issue.
Thanks and regards,
Imobi
02-18-2021 09:47 PM - edited 02-26-2021 07:53 AM
Re: Strange MTU findings on a new deployed 4 Node stretched cluster
Hi @fahlis, the MTU is configured on the vSwitch, and that value is reflected on the vmnics, because there is no MTU configuration on the physical adapters themselves. The VMkernel ports are configured separately and require you to explicitly set the MTU on them as well. If you are seeing 1500 on the vmnics, it is because the vSwitch they uplink to is configured at 1500; setting the vSwitch to 9000 would cause the vmnics to report 9000.
Furthermore, the information displayed on the physical adapter page is pulled from the Cisco device via CDP. This would indicate that the switch has the port set to 1500 on the interface (statically or by negotiation), or that the interface uses a feature such as QoS, which defaults to 1500.
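One hedged possibility given the Nexus correlation: on some Nexus platforms (e.g. the 5000/5500 series and their fabric extenders), jumbo frames are enabled through a system-wide network-qos policy rather than a per-interface `mtu` command, and CDP can still advertise 1500 on such a port even while jumbo frames pass. An NX-OS sketch of that style of configuration (the policy name `jumbo` is illustrative):

```
policy-map type network-qos jumbo
  class type network-qos class-default
    mtu 9216
system qos
  service-policy type network-qos jumbo
```

On platforms configured this way, the frames flow at 9216 even though `show interface` and CDP still report 1500, which would match the symptoms in this thread.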
Also, a side note regarding MTU: the default MTU for the OmniStack Virtual Controller (OVC) management interface is 1500, and it should only be reconfigured for jumbo frames if the end-to-end connectivity to all other OVCs can support that MTU.
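For reference, the two explicit settings described above (vSwitch and VMkernel port) can be sketched with esxcli; vSwitch1 matches the name used in this thread, vmk1 is an assumed vmk name, and the commands must be run on each host.

```shell
# Raise the MTU on the standard vSwitch (its uplink vmnics then report 9000)
esxcli network vswitch standard set -v vSwitch1 -m 9000
# The VMkernel port must be raised explicitly as well; vmk1 is an assumed name
esxcli network ip interface set -i vmk1 -m 9000
```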
03-02-2021 08:01 PM
Re: Strange MTU findings on a new deployed 4 Node stretched cluster
Thanks for your reply and the thorough explanation.
I am well aware of the requirements. The cluster was deployed with MTU 1500 for management (vSwitch0) and MTU 9000 for storage/federation (vSwitch1). I have also double-checked every part, and the vmks are set correctly too. The network team says the ports involved are configured correctly (Nexus switches), with no QoS and so on.
Just to add: I have verified jumbo-frame packet flow on all ends using both ping and vmkping.
I also checked the orchestrator log myself and could not find anything odd there.
Could it be a GUI bug in vCenter, perhaps?
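To help rule the GUI in or out: the host's own CLI also reports an MTU value per vmnic, so comparing that output with the vCenter page would show whether the 1500 comes from the host's view or only from the display. A sketch, assuming shell access on one of the ESXi hosts:

```shell
# Each vmnic's MTU as seen directly by the host (MTU column in the output)
esxcli network nic list
# The same information from the older esxcfg tool
esxcfg-nics -l
```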