BladeSystem Virtual Connect

iSCSI issues over stacking links

Occasional Advisor


We have two enclosures with a bunch of BL460c G6 blades.


In one enclosure we have 2x 1/10Gb-F modules and in the second enclosure we have 2x 10Gb Flex-10 modules.

We have a single VC domain and 2x CX-4 stacking links joining both enclosures.


We have been sharing network profiles across any blades in the enclosures, and all blades run ESXi 5.5 and can pass network traffic using uplinks on either enclosure without issue.


Recently we shared an iSCSI SSD SAN, attached to the second enclosure via 2x 10Gb SFP+ links into the Flex-10 modules, with the blades in the first enclosure, so those blades access iSCSI via the stacking links.


The performance is terrible: reads are all over the place, between 20MB/s and 100MB/s, and traffic is really erratic.

Writes are not much better, between 60MB/s and 100MB/s, jumping up and down constantly with no steady flow of traffic.

Sometimes the throughput drops off to 5-10MB/s, then jumps to 50-60MB/s, and back down to 5-6MB/s.


The blades in the second enclosure, which have local access to the SAN (not over the stacking links), can pull data at 1GB/s to 1.2GB/s with very steady throughput. The lowest it ever gets is 900MB/s reads and 800MB/s writes, so I know the network config and the SAN in general are fine.
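For an apples-to-apples comparison between blades, a quick sequential-read test from the ESXi shell gives repeatable numbers on each host. A rough sketch; the datastore name and file name below are placeholders, not anything from this setup:

```shell
# Rough sequential-read check from the ESXi shell.
# "DS1" and "iscsi-test.bin" are placeholder names - substitute your own.
# Write a 1 GiB test file on the iSCSI datastore once:
dd if=/dev/zero of=/vmfs/volumes/DS1/iscsi-test.bin bs=1M count=1024

# Then, from each blade, time a read of the same file.
# Throughput is roughly 1024 MiB divided by the elapsed seconds.
time dd if=/vmfs/volumes/DS1/iscsi-test.bin of=/dev/null bs=1M
```

Running the same read from a blade in each enclosure should reproduce the local-versus-stacking-link gap with one consistent workload, instead of whatever mix the VMs happen to generate.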


I pulled one blade from the first enclosure and inserted it in the second enclosure, and iSCSI traffic instantly jumped to the ~1GB/s range.


I pulled a different blade from the second enclosure and put it in the first enclosure, and iSCSI traffic was back in the crappy 10-20MB/s range, so I know it's nothing to do with blade hardware or ESXi config or jumbo frames etc., because all that config just moves with the blade.


Has anyone else experienced traffic flowing over stacking links being really poor compared to traffic flowing within an enclosure and out to local uplinks?


Could this be a flow control problem or something similar?
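Flow control seems worth ruling out first. A sketch of commands, run from the ESXi shell on an affected blade, to see whether pause frames are negotiated and whether the NIC is dropping receives; `vmnic2` is a placeholder for whichever vmnic carries the iSCSI traffic:

```shell
# List the host's NICs to identify the iSCSI vmnics (names vary per host).
esxcli network nic list

# Show pause-frame (flow control) settings for a vmnic ("vmnic2" is a
# placeholder - use the vmnic that carries iSCSI).
ethtool -a vmnic2

# Per-NIC packet and drop counters; this esxcli namespace is assumed to
# be present on ESXi 5.5.
esxcli network nic stats get -n vmnic2
```

A steadily climbing receive-drop or pause counter on the blades behind the stacking links, but not on the local blades, would point toward congestion or flow-control behaviour on the stack rather than the hosts.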


Really need to get this one resolved as it's becoming a big issue performance-wise.



Occasional Advisor

Re: iSCSI issues over stacking links

So a couple of weeks have passed and we are still no further with this issue.


I've read on the forums that we should try increasing the buffers when experiencing poor RX performance:


- Modify the packet buffer overallocation ratio to 2:

    ->set advanced-networking PacketBufferOverallocationRatio=2


However, changing this setting is potentially disruptive.
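Before flipping it, it may be worth confirming the current value from the Virtual Connect CLI. A sketch, assuming the usual `show` counterpart of the `set` command quoted above exists on this firmware:

```shell
# From the Virtual Connect CLI: display the current advanced networking
# settings (including the packet buffer overallocation ratio) before
# changing anything. Assumes a "show" counterpart to the documented
# "set advanced-networking" command.
show advanced-networking

# The change itself, as suggested on the forums:
set advanced-networking PacketBufferOverallocationRatio=2
```

Knowing the current value also gives a known-good setting to revert to if the change doesn't help.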


Has anyone changed this setting before? Any real experience of just how disruptive it is?

Do links drop and reset, or does traffic just pause for a few moments?


Would we lose iSCSI links and potentially drop storage to our hypervisors? What about WAN uplinks? We have a pair of uplinks to the DC through this switch; would those connections drop and reset?


Thanks for any feedback.