08-17-2015 08:16 AM - edited 08-17-2015 08:31 AM
iSCSI issues over stacking links
We have two enclosures with a bunch of BL460c G6 blades.
In one enclosure we have 2x 1/10Gb-F modules and in the second enclosure we have 2x 10Gb Flex-10 modules.
We have a single VC domain and 2x CX-4 stacking links joining both enclosures.
We have been sharing network profiles across blades in both enclosures; all blades run ESXi 5.5 and can pass network traffic over uplinks on either enclosure without issue.
Recently we presented an iSCSI SSD SAN, which connects into the Flex-10 modules in the second enclosure via 2x 10Gb SFP+ links, to the blades in the first enclosure, so those blades reach the iSCSI targets over the stacking links.
The performance is terrible: reads are all over the place, between 20MB/s and 100MB/s, and the traffic is very erratic.
Writes are not much better, between 60MB/s and 100MB/s, jumping up and down constantly with no steady flow of traffic.
Sometimes the throughput drops off to 5-10MB/s, then jumps to 50-60MB/s and back down to 5-6MB/s.
The blades in the second enclosure, which have local access to the SAN (not over the stacking links), can pull data at 1GB/s to 1.2GB/s with very steady throughput. The lowest it ever drops to is 900MB/s reads and 800MB/s writes, so I know the network config and the SAN in general are fine.
I pulled one blade from the first enclosure and inserted it in the second enclosure, and its iSCSI traffic instantly jumped to the ~1GB/s range.
I pulled a different blade from the second enclosure and put it in the first enclosure, and its iSCSI traffic dropped back to the poor 10-20MB/s range, so I know it's nothing to do with the blade hardware, the ESXi config, jumbo frames etc., because all of that moves with the blade.
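For anyone who wants to compare numbers: a quick sequential test along these lines, run from a Linux guest whose virtual disk sits on the iSCSI datastore, should give roughly comparable figures (the path and sizes below are just placeholders):
# sequential write, bypassing the guest page cache
dd if=/dev/zero of=/data/vc-test.bin bs=1M count=4096 oflag=direct
# sequential read of the same file
dd if=/data/vc-test.bin of=/dev/null bs=1M iflag=direct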
Has anyone else seen traffic over stacking links performing this badly compared to traffic that stays within an enclosure and goes out over local uplinks?
Could this be a flow control problem or something similar?
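If it is flow control, one thing I plan to check is whether the uplinks are actually negotiating and honouring pause frames. As far as I know ethtool is still available in the ESXi 5.5 shell, so something like this per host (the vmnic name is just an example) should show the current pause settings, and depending on the NIC driver the statistics output may also expose pause frame counters:
# show pause (flow control) parameters for an uplink
ethtool -a vmnic0
# driver statistics; look for rx/tx pause counters if the driver exposes them
ethtool -S vmnic0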
I really need to get this one resolved as it's becoming a big performance issue.
09-26-2015 01:44 AM - edited 09-27-2015 12:02 PM
Re: iSCSI issues over stacking links
A couple of weeks have passed and we are still no further on with this issue.
I've read on the forums that we should try increasing the packet buffers if we are experiencing poor RX performance:
- Modify the packet buffer overallocation ratio to 2:
->set advanced-networking PacketBufferOverallocationRatio=2
However, changing this setting is potentially disruptive.
Has anyone changed this setting before? Any real experience of just how disruptive it is?
Do links drop and reset, or does traffic just pause for a few moments?
Would we lose iSCSI links and potentially drop storage to our hypervisors? And what about WAN uplinks? We have a pair of uplinks to the DC through this switch; would those connections drop and reset?
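Before we touch it I would at least want to confirm that every host really has redundant, active iSCSI paths, so that a single module pausing or resetting cannot take a datastore offline. Something along these lines from each host is what I had in mind (standard esxcli on 5.5; device and adapter names will obviously differ in your environment):
# list every storage path and its state - expect at least two per iSCSI device
esxcli storage core path list
# per-device view showing the path selection policy and working paths
esxcli storage nmp device list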
Thanks for any feedback.