StoreVirtual Storage

Re: Very poor performance on our 10 node Lefthand Cluster

Bart_Heungens
Honored Contributor

Re: Very poor performance on our 10 node Lefthand Cluster

I agree on the switches... The absolute minimum is the HP 2920, because of the larger port buffers needed for iSCSI traffic...


--------------------------------------------------------------------------------
If my post was useful, click on my KUDOS! "White Star"!
Peter J West
Frequent Advisor

Re: Very poor performance on our 10 node Lefthand Cluster

Thanks for the comments everyone.


I will speak to our vendor and see what we can do - I'll hold this open for now and update once I have more news.


Sollievo
New Member

Re: Very poor performance on our 10 node Lefthand Cluster

I wanted to post on this because I’ve been troubleshooting this same issue with a cluster of 4530 nodes on version 11.0.

We deleted our bonds, set flow control and the MTU on each interface, rebooted, recreated the bond, and rebooted again.
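
For anyone wanting to double-check that kind of setup, here is a minimal sketch of the per-interface checks on a generic Linux host. SAN/iQ nodes themselves are managed through the CMC, so the interface name eth0 and these commands are illustrative assumptions, not the appliance's own tooling:

    # Show negotiated flow control (RX/TX pause) for an interface
    ethtool -a eth0
    # The "mtu" field here is the value actually in effect
    ip link show eth0
    # Apply jumbo frames to the raw interface before (re)creating the bond
    sudo ip link set eth0 mtu 9000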

Flow control still shows as off, but the real issue is throughput: when we run iperf between two 4530 nodes with the command given above, we get very poor performance, 59-100 Mbits/sec, and iperf reports an MTU of 8988. When we change the MTU to around 7000 it improves to about 600 Mbits/sec, and with the MTU at 1328 we get almost 800 Mbits/sec. I think it is a bug and the software is not actually setting the MTU to 9000.

We have 4500 and 4730 nodes that are set up exactly the same, run fine, and show flow control as on/on. We have not determined whether the bonding or setting it from the CMC is the root cause; we are still evacuating the datastores. Hope this helps someone else.
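
In case it helps anyone reproduce this, here is a rough sketch of the throughput-vs-MTU test described above, assuming a generic Linux host; the address 10.0.0.11 and interface eth0 are placeholders. Run iperf -s on one node first, then from the other:

    # Confirm jumbo frames actually traverse the path:
    # 8972 = 9000 MTU - 20-byte IP header - 8-byte ICMP header.
    # "message too long" means the effective MTU is not really 9000.
    ping -M do -s 8972 10.0.0.11

    # Re-test TCP throughput at each MTU, as described above
    for mtu in 9000 7000 1500; do
        sudo ip link set eth0 mtu $mtu
        iperf -c 10.0.0.11 -t 30
    done

If the jumbo-sized ping fails while the interfaces claim an MTU of 9000, that points at the same symptom described above: the configured MTU is not what is actually in effect on the path.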