09-26-2010 04:24 AM - last edited on 03-03-2014 06:11 PM by Maiko-I
increase aggregated gigabit nic throughput from source to destination
What methods are available to switch from hash-based per-port load balancing to round-robin, in-sequence packet distribution, so that two hosts can achieve greater-than-gigabit throughput?
Using several Intel gigabit NICs, I would like to increase transfer speed for ZFS send/receive from one machine to another.
I have tested LACP-active aggregated dual NICs between two OpenSolaris b134 machines connected by an HP ProCurve 2810-48G switch, with LACP and flow control enabled on the aggregated links. Using mbuffer, zfs send/receive transfer speeds remain constrained to 105 MBytes/sec.
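For reference, the setup described above corresponds roughly to the following commands on each OpenSolaris host (the interface names, aggregation name, dataset names, port, and mbuffer sizes are illustrative, not taken from the original post):

```shell
# Aggregate two gigabit ports into one link with LACP in active mode
# (e1000g0/e1000g1 and aggr1 are example names)
dladm create-aggr -L active -l e1000g0 -l e1000g1 aggr1

# Receiver: listen with mbuffer and feed the stream to zfs receive
mbuffer -I 9090 -s 128k -m 1G | zfs receive tank/backup

# Sender: stream a snapshot through mbuffer to the receiver
zfs send tank/data@snap1 | mbuffer -s 128k -m 1G -O receiver-host:9090
```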
What is the effect of setting the port group to lacp rather than trunk? And should flow control be on or off?
Thanks for any help.
P.S. This thread has been moved from Switches, Hubs, Modems (Legacy ITRC forum) to ProCurve / ProVision-Based. - HP Forum Moderator
10-05-2010 10:31 AM
Re: increase aggregated gigabit nic throughput from source to destination
LACP mode (active or passive) lets you use standby links if you need 8 links always active in a trunk.
But it also has some limitations: you can't apply some configurations to a dynamic LACP trunk.
In general, it's better to use static trunking.
Whether you use the "trunk 21-25 trk1 lacp" or the "trunk 21-25 trk1 trunk" command, you will get the same result: a static trunk.
Dynamic and static trunking use the same SA/DA (source/destination address) load-balancing method.
This discussion may also help you: http://forums11.itrc.hp.com/service/forums/questionanswer.do?threadId=1173151
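The SA/DA load balancing mentioned above is also why a single zfs send stream stays capped at roughly one link's speed: the switch hashes the source/destination address pair of each frame to choose an egress link, so every frame of one host-to-host conversation takes the same physical link. A minimal sketch of the idea (the real hash is switch-internal; the function and MAC addresses here are purely illustrative):

```python
def pick_link(src_mac: str, dst_mac: str, num_links: int) -> int:
    """Pick an egress link from a hash of the source/destination pair
    (illustrative stand-in for the switch's internal SA/DA hash)."""
    return hash((src_mac, dst_mac)) % num_links

# Every frame of a single host-to-host flow hashes to the same link,
# so one flow is limited to one link's line rate regardless of trunk width.
choices = {pick_link("aa:aa:aa:aa:aa:01", "bb:bb:bb:bb:bb:02", 2)
           for _ in range(1000)}
print(len(choices))  # 1 -- the whole flow rides a single link
```

Aggregation in this scheme raises throughput only across multiple concurrent flows (different address pairs), which matches the 105 MBytes/sec ceiling observed for one sender/receiver pair.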