02-10-2012 02:29 AM
1810-24G LACP/Trunking problem
Hi,
I hope someone can help.
I have two ESXi boxes, each with an NC360T, running MPIO with round robin to load balance across the two NICs.
These are connected, via an 1810-24G, to a server running an iSCSI target; that server has an NC365T quad-port NIC which is trunked to the switch.
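(For clarity, the round robin here just means ESXi rotates iSCSI I/O across the two paths, one per NIC port. A minimal Python sketch of the idea, with made-up path names, not ESXi's actual implementation:)

```python
from itertools import cycle

# Minimal sketch of round-robin MPIO: the initiator simply rotates I/O across
# the available paths, one per NIC. Path names are made up for illustration;
# this is not ESXi's actual implementation.
paths = ["vmnic2 -> target:3260", "vmnic3 -> target:3260"]
next_path = cycle(paths)

for io_number in range(6):
    print(f"I/O {io_number} goes out via {next(next_path)}")
```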
Now, back in August this was working great: the LACP team was set up with the Network Configuration Utility so that all four links from the ESXi hosts would load balance when writing (incoming) to the iSCSI target.
After a reinstall of the iSCSI server and a firmware upgrade on the 1810 switch, it now only EVER uses three out of the four links when uploading to the quad NIC.
When I pull out the cables and reconnect them, the switch briefly load balances across all 4 NICs as it should, but after a few seconds it goes back to using just three of the 4 links.
Now, when I pull the four cables in the trunk and connect them to a configured 3Com switch, it load balances the incoming traffic across ALL four links perfectly.
Each ESXi host, having 2 x 1Gb connections, uploads over two links to the iSCSI target. When both hosts are writing to the target, all four links are used, as you would expect.
With the HP switch, one host will upload using two links, but when the other host uploads as well, it will only ever use one link.
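From what I understand, an LACP trunk doesn't round-robin frames; the switch hashes each conversation onto one member link, so with only four flows two of them can end up on the same link. A rough Python sketch of that idea (the hash and MAC addresses are made up, not necessarily what the 1810 actually does):

```python
# Toy illustration of hash-based trunk distribution. Many switches pick the
# member link from the low bits of the source/destination MACs; the exact
# hash and the MAC addresses below are made up for illustration.
def pick_link(src_mac: str, dst_mac: str, links: int = 4) -> int:
    src_low = int(src_mac.split(":")[-1], 16)
    dst_low = int(dst_mac.split(":")[-1], 16)
    return (src_low ^ dst_low) % links

target_mac = "00:1b:21:aa:bb:10"          # quad NIC on the iSCSI target (made up)
esx_macs = [
    "00:50:56:00:00:01",                  # esx1 nic1 (made up)
    "00:50:56:00:00:02",                  # esx1 nic2
    "00:50:56:00:00:03",                  # esx2 nic1
    "00:50:56:00:00:05",                  # esx2 nic2
]

for mac in esx_macs:
    print(mac, "-> trunk link", pick_link(mac, target_mac))

# With only four flows, two of them can hash onto the same member link,
# leaving one of the four links idle -- which looks like the
# "three out of four links" symptom.
```

That would explain three of four links being used in principle, but not why the 3Com spreads the same traffic over all four.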
Does anyone have any idea why the 1810-24G will NOT work correctly for receive load balancing?!
It's driving me INSANE, as it used to work.
Now, if I disable the trunk completely and just have the links teamed, it will load balance across all four, but the speeds are terrible and there is no redundancy like there is with an LACP trunk.
LACP does work, but ONLY with three out of the four links being used for uploads/writes to the server.
I did go back to the 1.17 firmware, but this made no difference.
10-07-2015 11:42 AM
Re: 1810-24G LACP/Trunking problem
The solution to this problem was the cables. I had cheap Cat6 cables on my end; once I got gold-plated, shielded Cat6 and set up a completely new 4-NIC LACP group on the switch, everything worked well. I reached a full speed of 460 MB/s, with four users dragging the same files at the same time getting 4x a full 1Gb.
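For reference, 460 MB/s is roughly what four 1Gb links can deliver once overhead is taken off; a quick back-of-the-envelope check in Python:

```python
# Quick sanity check of the quoted speed for a 4 x 1 Gb/s LACP group.
links = 4
line_rate_gbit_per_link = 1.0
raw_max_mbytes = links * line_rate_gbit_per_link * 1000 / 8   # 500 MB/s before overhead
print(f"Theoretical maximum: {raw_max_mbytes:.0f} MB/s")

# The observed 460 MB/s is about 92% of that, which is in the right ballpark
# once Ethernet/IP and protocol framing overhead is taken off.
efficiency = 460 / raw_max_mbytes
print(f"Observed 460 MB/s ~= {efficiency:.0%} of line rate")
```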
I have an 1810-24G v1 and FreeNAS 9.3 running on an ML110 G6 with a quad-port NIC. There's no need to run a static trunk on the HP switch; just get good-quality, reliable cables, since LACP is a very sensitive thing, especially if you have more than 2 ports in the group.
Hope this helps someone as well.