Converged Data Center Infrastructure

Jumbo Frames-Lesson Learned

chuckk281 on ‎10-23-2013 06:20 AM

Some info from Malcolm:




I wanted to share a quick Jumbo Frame experience.


First, thank you Dan for your help on this.  Dan had a similar experience, which helped point me in the right direction when this issue came in last week.  And of course this all started with the network team and their Cisco team pointing to Virtual Connect as the most likely culprit for the slow client-server throughput through their switches and firewalls.  Now Cisco is scrambling to make their ASA run faster, and the customer is looking at other firewall vendors that can push more I/O.


My customer just deployed a pair of HP Blade Enclosures with BL660s spread across them, using Flex-10 and 8-24 VC FC interconnects.  Each enclosure has 4 x 10G uplinks using a single SUS (so they are 20G Active and 20G Passive).  Using LACP and vPC on their Nexus 7Ks, the enclosures wire to 10G line cards, which route to 10G ASAs and then to 10G Exadata hosts.  These are hosted in a facility whose team always uses single SUSs and Active-Passive VC uplinks; since we have Flex-10, we have numerous 10G uplink ports we can wire up if and when we need them, before going Active-Active (dual SUSs).  Their goal is to push the 40G of Exadata Active ports to the max from the BL660s running the Oracle Client SW.


Lessons Learned:


  1.  Blade to blade, through a 10G Nexus line card, you can push close to 9 Gbps (8.7) using the default frame/MTU size of 1500.  So as long as you have a recent Linux or Windows build, the default TCP settings should move packets very fast between hosts.  But Jumbo Frames took the test results over 9 Gbps.
    NIC = 9000 (default 1500) to VC = 9216 (the default) to Nexus switch = 9216 (default 1500) to VC = 9216 to NIC = 9000, all on the same VLAN (skipping the firewall for now), went to 9.7 Gbps using iperf (default settings and 5-minute runs).  Different versions of iperf performed differently (v2 was faster than v3); netperf is another option.  And short tests perform worse than the 5-minute ones, so stick with longer tests.
  2. Dan’s customer was testing vMotion traffic (I believe) between blades and was not getting the performance they expected.  Enabling Jumbo Frames in the ESX NIC settings really helped prove the blade-to-blade performance.
  3. When you put a firewall in the middle, you definitely want Jumbo Frames enabled end to end, so you need to enable Jumbo Frames on the firewall as well.  They have an ASA rated for 40 Gbps of throughput, and adding Jumbo Frames gave about a 30%+ boost through the links.
  4. When using M1 (and especially F1) 10G Nexus line cards, watch out for their oversubscription: it is 4 to 1 once you leave each 4-port group.  You might need to spread your uplinks across dedicated 4-port groups to get the full 10G to other hosts or switches (depending on where they are plugged in).  The customer is now looking at spending hundreds of thousands of dollars upgrading a 1.2M pair of switches they bought just 3 years ago, when they bought the c7000s and Flex-10s (which are still solid).
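The wire-level benefit of jumbo frames in lessons 1-3 is easy to estimate. Here is a minimal Python sketch using standard Ethernet/IPv4/TCP header sizes; these constants are textbook values, not measurements from the tests above:

```python
# Payload efficiency: fraction of on-the-wire bytes that carry TCP data.
# Assumes plain IPv4/TCP with no options and no VLAN tag.
ETH_OVERHEAD = 14 + 4 + 20   # L2 header + FCS + preamble/SFD + inter-frame gap
IP_TCP_OVERHEAD = 20 + 20    # IPv4 header + TCP header

def payload_efficiency(mtu: int) -> float:
    payload = mtu - IP_TCP_OVERHEAD   # TCP payload bytes per packet
    wire = mtu + ETH_OVERHEAD         # total bytes consumed on the wire
    return payload / wire

print(f"MTU 1500: {payload_efficiency(1500):.3f}")  # ~0.949
print(f"MTU 9000: {payload_efficiency(9000):.3f}")  # ~0.991
```

Note that wire efficiency alone accounts for only a few percent, which suggests the bigger gains reported above (8.7 to 9.7 Gbps, and 30%+ through the ASA) come mostly from the roughly 6x drop in packets per second: fewer interrupts on the hosts and fewer per-packet inspection decisions in the firewall.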
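The 4:1 figure in lesson 4 can be sanity-checked with simple arithmetic. A hedged sketch, assuming each 4-port group of 10G ports shares 10G of fabric bandwidth behind it (exact numbers vary by line-card model, so treat the constants as assumptions):

```python
PORT_SPEED_GBPS = 10
PORTS_PER_GROUP = 4
GROUP_FABRIC_GBPS = 10  # assumed shared bandwidth behind one 4-port group

# Total front-panel demand vs. fabric capacity for one group.
oversub_ratio = PORTS_PER_GROUP * PORT_SPEED_GBPS / GROUP_FABRIC_GBPS  # 4.0

def per_port_gbps(active_ports: int) -> float:
    """Worst-case per-port throughput when N ports in one group are busy at once."""
    return min(PORT_SPEED_GBPS, GROUP_FABRIC_GBPS / active_ports)

print(per_port_gbps(1))  # 10.0 - a lone uplink still gets line rate
print(per_port_gbps(4))  # 2.5  - four busy ports split the group's fabric bandwidth
```

This is why spreading uplinks so that each lands in its own port group (one active port per group) recovers the full 10G per link, while packing them into one group caps each at a fraction of line rate under load.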
