MSA Storage

CedricB
Occasional Advisor

iSCSI jumbo frame MSA 2050

Context
=======
We have an HPE MSA 2050 with 4 iSCSI 10Gb adapters (2 per controller) connected to a Nexus switch.
A1: 10.10.1.10  /  B1: 10.10.2.10
A2: 10.10.1.11  /  B2: 10.10.2.11
The "Enable jumbo frames" option is activated.

ESXi 6.5 U3 uses this storage through dedicated 10Gb cards.
On ESXi, we have 2 VMkernel adapters on different IP subnets, each bound to a dedicated card:
- one 10Gb card on subnet 10.10.1.0/22
- a second 10Gb card on subnet 10.10.2.0/22
The VMkernel adapters are set to MTU 9000.

In the middle, we use a Nexus switch with MTU 1500, which is capable of converting 1500 to 9000 and 9000 to 1500.

Problem :
========
ESXi hosts at MTU 9000 are unable to see a new datastore. vmkping with sizes 1472 and 8972 both fail. If I disable jumbo frames on the MSA, it works!!?

If I change the ESXi hosts to MTU 1500 and disable jumbo frames on the MSA, vmkping with size 1472 fails!! I have to set "enable jumbo" on the MSA to get vmkping to succeed.

Questions
==========
What MTU value does the MSA use for jumbo frames?
Is the MSA capable of receiving 8972-byte jumbo pings from VMware?
Why does disabling the jumbo frame option on the MSA break the connection at MTU 1500?


Any ideas would be welcome !

9 REPLIES
StorageMike
HPE Pro

Re: iSCSI jumbo frame MSA 2050

Hi

Take a look at the following https://psnow.ext.hpe.com/doc/a00015961enw?jumpid=in_lit-psnow-red (page 43)
I work for HPE


CedricB
Occasional Advisor

Re: iSCSI jumbo frame MSA 2050

Hi,

Yes, I've seen it.

But I don't understand why vmkping fails with size 8972. Is the VMware value too high to be accepted by the MSA? If I disable "Enable jumbo frames" on the MSA, I lose the storage at VMware's size of 1472. So strange.

Cali
Honored Contributor

Re: iSCSI jumbo frame MSA 2050

Hi,

I expect your problem is in the middle:

"In the middle, we use a Nexus with MTU 1500 which is capable of converting 1500 to 9000 and 9000 to 1500."

I do a lot of configurations, and many times the customers want to have the magic jumbo frames.

They always expect a great performance boost.

In the end, we waste a lot of time, and the boost is about 5-7% for most customers.

And that only when using performance testing tools.

Most of the time you see no performance gain in the real world and get only a lot of trouble.

(If you lose one jumbo frame, you lose 9 times more data than with a 1k frame, and you have to retransmit 9x more data. This only works perfectly using DCN switches with no side traffic.)

Cali


======================
That was not planned in this way.
ArunKKR
HPE Pro

Re: iSCSI jumbo frame MSA 2050

Hi,
The current maximum frame size supported by the MSA is 1400 for a normal frame and 8900 for a jumbo frame.

 

Please keep the frame size below 1400 while jumbo frames are disabled and below 8900 while jumbo frames are enabled on the ESXi side. This should help resolve the issue you are facing. Either enable jumbo frames end to end or disable them end to end, including on the switch.
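
Taking the 1400/8900 limits above at face value, both standard vmkping test sizes would overshoot them, which would match the failing pings reported by the OP. A small sketch (mapping the ping payload directly onto the MSA frame limit is my assumption, not something stated in the thread):

```python
# Frame-size limits quoted in the reply above (MSA-reported values).
MSA_LIMIT = {"normal": 1400, "jumbo": 8900}

# Standard non-fragmenting vmkping payloads: MTU minus 28 bytes
# of IPv4 header (20) + ICMP header (8).
VMKPING_PAYLOAD = {1500: 1500 - 28, 9000: 9000 - 28}

for mtu, payload in sorted(VMKPING_PAYLOAD.items()):
    mode = "jumbo" if mtu > 1500 else "normal"
    verdict = "fits" if payload <= MSA_LIMIT[mode] else "too large"
    print(f"MTU {mtu}: vmkping -s {payload} vs {mode} limit {MSA_LIMIT[mode]}: {verdict}")
```

If those limits are literal payload ceilings, the standard test sizes 1472 and 8972 would both be rejected, so a smaller `-s` value (e.g. 1372 or 8872) would be needed to test the path.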
I am an HPE Employee


Wolf-P
Occasional Advisor

Re: iSCSI jumbo frame MSA 2050

I cannot tell you exactly the jumbo frame support of the MSA 2050. But for the MSA 1060, 2060 and 2062, jumbo frames mean an MTU of exactly 9000, and you should configure your VMware hosts with this value. Of course, any intermediate switch has to be configured for the same MTU (or more), but not less than that. And on the VMware side, the vSwitch and the vmkernel adapter you use have to be set to the same MTU for a successful connection.

On switches you can always configure a higher MTU than needed, but never less: most switches do not fragment packets when needed, but have no problem transmitting packets shorter than their MTU.

Of course, an MTU of 9000 does not mean that the payload is 9000, but that the total packet size including header overhead is 9000.

An MTU of 9000 corresponds to a ping size on Windows or Linux (or a vmkping size on VMware) of 8972, and an MTU of 1500 corresponds to a ping size of 1472 on these platforms. The ICMP header makes the packets 8 bytes longer than the payload, and the minimal IP header size is 20 bytes; together that gives the 28 bytes you have to subtract from the MTU for a successful unfragmented ping. Windows deducts the header overhead from the displayed length of the reply packets and therefore shows you an identical packet size for sent and received packets, whereas Linux and VMware don't do that, so you see a reply packet size that is always 8 bytes more than the sent packet size.
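
The arithmetic above can be written out as a quick check (standard IPv4 and ICMP header sizes):

```python
# The 28 bytes to subtract from the MTU: minimal IPv4 header (20 bytes)
# plus ICMP echo header (8 bytes).
IP_HEADER_MIN = 20
ICMP_HEADER = 8

def max_ping_size(mtu: int) -> int:
    """Largest ping/vmkping -s payload that fits the MTU unfragmented."""
    return mtu - IP_HEADER_MIN - ICMP_HEADER

print(max_ping_size(9000))  # 8972 -- the jumbo-frame test size
print(max_ping_size(1500))  # 1472 -- the standard-frame test size
```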

I hope that answers the questions of the OP.
CedricB
Occasional Advisor

Re: iSCSI jumbo frame MSA 2050

Hello,

Thank you for your explanations and advice! Very helpful for understanding.

In fact :

1/ vDS in 1500, Nexus in 1500 and MSA in 1500 = KO

2/ vDS in 1500, Nexus in 1500 and MSA in 9000 = OK

Could it be that the 1472-byte packets from vSphere are not accepted by the MSA, i.e. too long, since the MSA doesn't accept more than 1400?

Wolf-P
Occasional Advisor

Re: iSCSI jumbo frame MSA 2050

Have you ever tried setting the Nexus to accept jumbo frames (I think that's an MTU of 9216 on those devices)? If you have a special need to change the MTU on the switch, as far as I know you can set the MTU differently for each port.

But did you test which is the maximum supported MTU in your non-working cases? If you are using VLANs, the (tagged) packets are bigger on the switch than on the untagged connections.

It is my impression that the MSA supports an MTU of 9000 on the initiator side, and it has to support an MTU of 1500 to be compliant with standards. But the largest payload the MSA sends is only 1400 or 8900 bytes, to avoid fragmentation if the packets (e.g. for remote replication) have to go through tunnels and other encapsulations.

If the MTU is not set at the initiator and the MSA (which means MTU = 1500), you should have a working connection in any case. If that does not work, your problem probably lies between those endpoints, and that is the switch.

If your host and MSA are not too far apart physically, you could try connecting with a direct cable. That would avoid any problems with the switch.
CedricB
Occasional Advisor

Re: iSCSI jumbo frame MSA 2050

I have tested with an MTU of 9000 on the Nexus, but it is not per port. So, not possible... And my short test failed anyway.

What I don't understand is that if the problem exists with 9000, it exists with MTU 1500 too!

Wolf-P
Occasional Advisor

Re: iSCSI jumbo frame MSA 2050

Setting the MTU or frame size to 9000 on the switch is not enough. Use the maximum frame size the switch supports. These are somewhere between 9150 and 9300, and with Cisco it is 9216, if I remember correctly. The additional frame size accommodates overhead in the switch, like VLAN tagging etc. The MTU of 9000 on the VMware side has a real on-wire size of a little over 9000, because if you e.g. enable VLAN tagging on the VMware side, the payload length needs no adaptation to the additional frame overhead.
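
As a rough check of the on-wire sizes involved (textbook Ethernet field sizes; the exact per-platform accounting may differ):

```python
# Approximate on-wire Ethernet frame size for a given IP MTU.
ETH_HEADER = 14  # destination MAC + source MAC + EtherType
VLAN_TAG = 4     # 802.1Q tag, present on tagged ports
FCS = 4          # frame check sequence

def frame_size(mtu: int, tagged: bool) -> int:
    return mtu + ETH_HEADER + (VLAN_TAG if tagged else 0) + FCS

print(frame_size(9000, tagged=True))   # 9022 -- comfortably under a 9216 switch limit
print(frame_size(1500, tagged=True))   # 1522
```

So a switch frame size of 9216 leaves plenty of headroom for an MTU-9000 tagged frame, while a switch set to exactly 9000 would not.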

Your strange situation is surely a matter of the switch settings. As you say, it only works if you enable jumbo frames on one side and disable them on the other, and it does not work if you enable jumbo frames on both sides or disable them on both sides. This means it only works if the switch is fragmenting the packets between the two sides. Unfragmented traffic through the switch seems not to work in your setup.

BTW - at the moment I have no Cisco Nexus switches, but I am pretty sure that on switches of that range/class the frame size can be set on a per-port basis.