Re: Jumbo frames end-to-end? Or not?

Nick_Dyer
Honored Contributor

Re: Jumbo frames end-to-end? Or not?

I have to +1 Arne Polman here. Unless you're looking to absolutely stress a 10GbE network with heavy sequential IO, Jumbo Frames give little performance benefit versus the management and support overhead it takes to administer and troubleshoot the environment when it goes wrong. The reason is that it acts like a "house of cards": one misconfiguration in the environment and the whole thing falls down.
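
To illustrate the end-to-end part (a hypothetical sketch, not specific to any one array or switch): every hop in the iSCSI path - initiator NIC, any vSwitch/vmkernel ports, the physical switch ports and the array data interfaces - has to agree on the MTU. On a Linux initiator, for example, you might set and check it with something like the following, where eth0 is just a placeholder interface name:

ip link set dev eth0 mtu 9000
ip link show eth0 | grep mtu

If even one hop is still at 1500, large frames get dropped or fragmented and the whole setup misbehaves.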

Jumbo Frames should really only be considered on 10GbE networks (1GbE rarely sees any improvement), and only after the environment has been run at 1500 MTU first to check performance and latency. In fact, Support's best practice is to install and configure at 1500 MTU, and then look at switching to Jumbos only if there's a good case to do so.

I think of KISS - Keep It Simple, Stupid

Nick Dyer
twitter: @nick_dyer_
alex_goltz
Advisor

Re: Jumbo frames end-to-end? Or not?

Add me to the list of people running jumbo frames on iSCSI initiators at 10Gbit. Our heaviest workloads are SQL 2012 sequential reads. At times our data warehouse VM will get close to 2 GB/sec if the cache hit ratio is near 100%.

This is probably the only scenario we have that gets more mileage out of the additional payload size.

I would agree with Nick Dyer.  Test 1500 first, then move to 9000 if you really feel the calling to do so.

To whoever is interested:

One command I like to run on the guest VM to make sure the jumbo frame settings are correct in all areas is: ping <array iSCSI IP> -f -l 8972. Don't use the iSCSI discovery IP; use one of the actual data IPs. And note that the second switch is a lowercase "L".
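
For context, the 8972 comes from the 9000-byte MTU minus the 20-byte IP header and the 8-byte ICMP header, and -f sets the Don't Fragment bit, so the ping only succeeds if a full jumbo frame makes it through every hop unfragmented. Rough equivalents on other platforms (swap in your own interface and array data IP) would be something like:

Linux guest:  ping -M do -s 8972 <array iSCSI data IP>
ESXi host:    vmkping -d -s 8972 <array iSCSI data IP>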

davecramp16
Occasional Advisor

Re: Jumbo frames end-to-end? Or not?

I find it's best to explain Jumbo Frames as a way to reduce the workload on switches and network adapters rather than as something that's really going to increase performance.

Increasing the packet size reduces the number of packets needed to carry the same traffic, which in turn reduces the workload on switches and other network equipment (adapters, etc.), especially in a TCP world where the equipment runs checksums and other sequence checking on every packet.

This is much more visible on 10Gb networks, where the number of packets per second is typically far higher than on a 1Gbit network.
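
As a rough worked example: a 1500-byte frame is about 1538 bytes on the wire once you add the Ethernet header, FCS, preamble and inter-frame gap, so a saturated 10Gb link is moving roughly 10,000,000,000 / (1538 x 8) ≈ 813,000 frames per second. At 9000 MTU (about 9038 bytes on the wire) the same link carries roughly 138,000 frames per second, i.e. around six times fewer packets for the switches, NICs and TCP stack to process.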