07-25-2012 08:09 AM
Re: NIC Bond setup - which option is best
Yes, I would say otherwise.
As stated in my previous post, ALB does load balancing on both transmit and receive.
Link aggregation does the same; the only difference is that link aggregation uses a single MAC address for both NICs, with the driver inserting that same MAC when it builds the packet.
However, with either implementation it is impossible to get true 2 Gb performance.
I don't want to get too technical, but here is some information for you regarding the standard:
43.2.4 Frame Distributor
…
This standard does not mandate any particular distribution algorithm(s);
however, any distribution algorithm shall ensure that, when frames are received
by a Frame Collector as specified in 43.2.3, the algorithm shall not cause
a) Mis-ordering of frames that are part of any given conversation, or
b) Duplication of frames.
The above requirement to maintain frame ordering is met by ensuring that all
frames that compose a given conversation are transmitted on a single link in
the order that they are generated by the MAC Client; hence, this requirement
does not involve the addition (or modification) of any information to the MAC
frame, nor any buffering or processing on the part of the corresponding Frame
Collector in order to re-order frames.
So, because of that:
– It does what it was intended to do
– It is relatively easy to implement and use
– It does not always provide a linear multiple of the data rate of a single link; N aggregated links usually do not provide N times the bandwidth
Hope this helps.
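To make the consequence of that rule concrete, here is a minimal sketch (not HPE's or any vendor's actual implementation; the conversation IDs and link count are invented for illustration) of a frame distributor that satisfies 43.2.4: hash each conversation to one physical link, so frames of a conversation stay in order, but a single conversation can never use more than one link's bandwidth.

```python
import zlib
from collections import Counter

NUM_LINKS = 2  # two bonded 1G NICs

def pick_link(conversation_id: str) -> int:
    """Deterministically map a conversation (e.g. a src/dst MAC pair)
    to a single link, as the standard's ordering rule requires."""
    return zlib.crc32(conversation_id.encode()) % NUM_LINKS

# One busy iSCSI conversation: every frame takes the same link, so its
# throughput tops out at one link's speed regardless of the bond width.
frames = [pick_link("mac-A->mac-B") for _ in range(1000)]
assert len(set(frames)) == 1

# Many conversations can spread across links, but nothing guarantees an
# even split; that is why N links rarely deliver N times the bandwidth.
spread = Counter(pick_link(f"host-{i}->san") for i in range(100))
```

Since the distributor may not split or reorder a conversation, aggregate throughput only approaches 2 Gb when many independent conversations happen to hash to different links.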
07-25-2012 11:37 PM
Re: NIC Bond setup - which option is best
In other words, load on LACP trunk ports is not round-robin balanced per packet (which in my opinion is the only option that spreads load equally across all ports), but per source MAC (or, on more advanced switches, there is an option to choose balancing based on src-dst MAC, src IP, dst IP, src-dst IP address pair, TCP port numbers, etc.). In any case, this usually means that iSCSI traffic from any particular initiator will always go through one particular 1G link in the LACP trunk. Of course, if there are a few iSCSI initiators, their load will be distributed across all links of the LACP trunk. But if one initiator saturates its bandwidth, other initiators mapped to the same LACP trunk link will starve, even if the other trunk links are unloaded.
I'm not a big Linux networking guru, so I can't tell how a P4500 node balances traffic on an LACP trunk. But I guess the algorithm does not differ much and the balancing behaves the same.
So, going back to the original question: I believe the best option is to go with ALB and connect the 2 NICs to different switches. You get both the bandwidth advantage and reliability (you can always take one switch offline for maintenance, etc.). Of course, keep in mind that the connection between switches needs to be carefully monitored; it is easy to overload a 1G link between them (or, keeping the above in mind, any 1G link in an LACP trunk between them).
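The hash-based distribution just described can be sketched as follows. This is a hypothetical illustration, not any particular switch's algorithm: the MAC and IP values are invented examples, and real switches hash real frame headers, but the pigeonhole effect it demonstrates is the same.

```python
import zlib

N_LINKS = 2  # a 2 x 1G LACP trunk

def layer2_link(src_mac: str, dst_mac: str) -> int:
    """src-dst MAC based distribution."""
    return zlib.crc32((src_mac + dst_mac).encode()) % N_LINKS

def layer34_link(src_ip: str, sport: int, dst_ip: str, dport: int) -> int:
    """src-dst IP plus TCP port based distribution."""
    key = f"{src_ip}:{sport}->{dst_ip}:{dport}"
    return zlib.crc32(key.encode()) % N_LINKS

# Three initiators talking to one target over two links: by pigeonhole,
# at least two initiators must share a link, so one saturating initiator
# can starve another even while the other link sits idle.
links = [layer2_link(f"00:11:22:33:44:0{i}", "aa:bb:cc:dd:ee:ff")
         for i in range(3)]
shared = len(links) > len(set(links))
assert shared  # some pair of initiators landed on the same link
```

Hashing on IP addresses and TCP ports gives finer spreading than MAC pairs, but any per-flow policy still pins each flow to a single link.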
Gediminas
08-02-2012 06:37 AM
Re: NIC Bond setup - which option is best
It really depends on your switching environment. By far the easiest is ALB, which works best for most scenarios.
If you have a more advanced switching environment, LACP can win: in our case we are running ProCurve 5400s and 8200s with distributed trunking configured, so LACP is the best method for us.
By "best method" I'm looking at the 99.9...% availability scenario. To this day the P4500s, even with the current SAN/iQ, are not hitless when they re-join the cluster after lost quorum. I've seen too many times where a switch reboot while using ALB causes a resync event, which really screws up some systems. So if you need network availability above all else, use LACP.
ALB works with any old switch, which is good on the wallet, and offers slightly better performance in terms of raw throughput to a single host.
When testing the many-to-many scenario, does ALB or LACP work better? It depends on your environment.
For me, if ALB doesn't provide enough bandwidth, then look at jumbo frames to squeeze the last bit out of the link. If the extra ~5% isn't enough, it's time to look at 10Gig.
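As a rough sanity check of that ~5% figure, here is a back-of-the-envelope sketch of usable TCP payload per on-wire byte at standard versus jumbo MTU. It assumes plain Ethernet framing and IPv4/TCP headers without options; real traffic mixes will vary.

```python
# Per-frame overhead: preamble (8) + Ethernet header (14) + FCS (4)
# + inter-frame gap (12), all in bytes on the wire.
ETH_OVERHEAD = 8 + 14 + 4 + 12
IP_TCP_HEADERS = 20 + 20  # IPv4 header + TCP header, no options

def tcp_efficiency(mtu: int) -> float:
    """Fraction of on-wire bytes that carry TCP payload for a
    full-size frame at the given MTU."""
    payload = mtu - IP_TCP_HEADERS
    on_wire = mtu + ETH_OVERHEAD
    return payload / on_wire

standard = tcp_efficiency(1500)  # roughly 0.95
jumbo = tcp_efficiency(9000)     # roughly 0.99
gain = jumbo / standard - 1      # roughly a 4-5% throughput gain
```

So jumbo frames buy a few percent of extra goodput on a saturated link; if that is not enough headroom, 10GbE is the real fix.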