06-05-2012 01:34 PM
NIC Bond setup - which option is best
I have three P4300 nodes in a cluster: two are old, in place for over 2 years, and one is brand new. All are G2.
I added the new one today and the restripe started. I noticed a best-practice warning in the cluster regarding the network bond setup.
Apparently when I set up the two old nodes I chose 802.3ad for the bond type, and on the new one I went with ALB. It seems ALB is the recommended option.
What is required if you want to use 802.3ad? Also, what does it take to change the bond type: just break the bond and recreate it?
All nodes are in a stack of Cisco Catalyst 3750 switches (3 switches acting like 1).
There are no EtherChannels set up for the ports dedicated to the P4300 nodes.
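For reference, my understanding is that 802.3ad would need a matching LACP port-channel per node on the switch side, something along these lines on the 3750 stack (interface and VLAN numbers are just placeholders, not my actual ports):

```
! Hypothetical LACP (802.3ad) setup for one P4300 node's two ports
! on a Catalyst 3750 stack - all numbers are placeholders
interface range GigabitEthernet1/0/10 - 11
 switchport mode access
 switchport access vlan 100
 channel-group 10 mode active
!
! "mode active" runs LACP; the matching Port-channel10 interface is
! created automatically and must carry the same switchport settings.
```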
The P4300s are all running SAN/iQ 9.5.
I was going to change the older nodes to use ALB, but have to wait for downtime to do that.
If anyone has any input on this I would appreciate it.
Thanks,
- Tags:
- NIC
06-06-2012 07:16 AM
Re: NIC Bond setup - which option is best
I just put in four P4500 nodes and did testing between LACP and ALB. I found that ALB's throughput was about 30 MB/s faster than the LACP (802.3ad) configuration.
06-06-2012 11:09 AM
Re: NIC Bond setup - which option is best
ccavanna, thanks for the info. That helps.
Now, does anyone know of a good way to make this change? I assume it means breaking the bond and recreating it; I'm just wondering what that would do to communications.
It seems like it should be OK, maybe with a blip when the node hosting the VIP changes.
Any other thoughts?
06-13-2012 11:38 PM
Re: NIC Bond setup - which option is best
Hey!
From the VMware and HP best practices guide:
You can see that ALB is fine, but LACP, if your network design can support it, gives you better write performance.
So, if you can, go LACP.
/Christian
06-22-2012 08:24 PM
Re: NIC Bond setup - which option is best
"The best bonding option is to leverage Link Aggregation Control Protocol (LACP) 802.3AD if the switching infrastructure supports it. From the storage node, LACP bonding supports both sending and receiving data from both adapters. Network teams on vSphere [...]"
This information is not correct.
SAN/iQ 9.x uses Linux balance-alb (mode 6), adaptive load balancing: it includes balance-tlb plus receive load balancing (rlb) for IPv4 traffic, and does not require any special switch support. The receive load balancing is achieved by ARP negotiation. So it receives and transmits on both NICs.
If you are using ALB now, run diagnostics and download the ifconfighist.log; you will see that it balances both receive and transmit.
I used to be an advocate of link aggregation, but after seeing the numbers for myself and the additional work of setting up the trunk group, I am not so sure it is worth the trouble.
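For what it's worth, on a generic Linux box a balance-alb bond looks roughly like the sketch below; this is only to illustrate the mode, since SAN/iQ configures the bond on the nodes itself and you only get to see the diagnostic logs:

```
# Generic Linux balance-alb (mode 6) bond - illustration only;
# SAN/iQ sets this up internally on the storage nodes.
# /etc/modprobe.d/bonding.conf
options bonding mode=balance-alb miimon=100

# Check the bond state and watch the per-NIC counters to confirm
# that both slaves carry transmit AND receive traffic:
cat /proc/net/bonding/bond0
grep -E 'eth0|eth1' /proc/net/dev
```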
07-19-2012 12:37 AM
Re: NIC Bond setup - which option is best
@Emilo wrote: "The best bonding option is to leverage Link Aggregation Control Protocol (LACP) 802.3AD if the switching infrastructure supports it. From the storage node, LACP bonding supports both sending and receiving data from both adapters. Network teams on vSphere [...]"
This information is not correct.
SAN/iQ 9.x uses Linux balance-alb (mode 6), adaptive load balancing: it includes balance-tlb plus receive load balancing (rlb) for IPv4 traffic, and does not require any special switch support. The receive load balancing is achieved by ARP negotiation. So it receives and transmits on both NICs.
If you are using ALB now, run diagnostics and download the ifconfighist.log; you will see that it balances both receive and transmit.
I used to be an advocate of link aggregation, but after seeing the numbers for myself and the additional work of setting up the trunk group, I am not so sure it is worth the trouble.
How does "the receive load balancing is achieved by ARP negotiation" work? Is it sending gratuitous ARPs, or what? And won't the "flow" be affected if the MAC address changes in the middle of the stream? Something you don't have when using LACP?
/Christian
07-21-2012 07:59 AM
Re: NIC Bond setup - which option is best
Hello Christian, and thanks for reading and commenting on my post.
If you have link aggregation set up and are happy with the results, continue to use it. I am not trying to convince anyone to switch from one bonding method to the other. However, SAN/iQ 9.0 and above uses balance-alb (mode 6):
http://www.linuxfoundation.org/collaborate/workgroups/networking/bonding#Bonding_Driver_Options
Adaptive load balancing: includes balance-tlb plus receive load balancing (rlb) for IPV4 traffic, and does not require any special switch support. The receive load balancing is achieved by ARP negotiation
When an ALB bond is configured, it creates an interface. This interface balances traffic across both NICs. But how does this work with the iSCSI protocol? In RFC 3720 (http://www.ietf.org/rfc/rfc3720.txt), iSCSI uses command connection allegiance:
For any iSCSI request issued over a TCP connection, the corresponding response and/or other related PDU(s) MUST be sent over the same connection. We call this “connection allegiance”.
Hope this helps.
You can also see this in the logs; just look at the ifconfighist.log.
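If you want to see the ARP side of it for yourself, on any ordinary Linux host running a balance-alb bond (a generic example, nothing node-specific) you can watch the ARP traffic on each slave and note that different peers get answered from different slave MAC addresses:

```
# Watch ARP on both slaves of a balance-alb bond (generic example).
# Different peers are answered with different slave MACs, which is
# how receive traffic ends up spread across the NICs.
tcpdump -n -e -i eth0 arp
tcpdump -n -e -i eth1 arp    # run in a second terminal
```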
07-23-2012 03:50 AM
Re: NIC Bond setup - which option is best
Reading the referenced article, I get the feeling that the load balancing is done per IP / ARP, so multiple IPs are spread over the NICs in the bond.
Thus, if one IP needs more than 1 GbE, it won't get that unless you use an LACP bond, i.e. one presenting a single MAC all the time.
Right?
/Christian
07-23-2012 12:42 PM
Re: NIC Bond setup - which option is best
All bonds only use 1 Gb at a time; no bond will give you a true 2 Gb at the same time.
07-25-2012 01:41 AM
Re: NIC Bond setup - which option is best
Hmm, this I'm not so sure of. My impression is that when going LACP you get a new MAC address presented, from which both send and receive are done. And this allows you to fully use both NICs' bandwidth (i.e. 2 GbE), both receive-wise and send-wise...
ALB, on the other hand, will give you 2 GbE send and only 1 GbE max receive...
Would you say otherwise?
Kind regards,
Christian
07-25-2012 08:09 AM
Re: NIC Bond setup - which option is best
Yes, I would say otherwise.
As stated in my previous post, ALB does load balancing on both transmit and receive.
Link aggregation does the same; the only difference is that with link aggregation it uses only one MAC address for both NICs, with the driver inserting the same MAC when it builds the packet.
However, with either implementation it is impossible to get true 2 Gb performance.
I don't want to get too technical, but here is some information for you regarding the standard:
43.2.4 Frame Distributor
…
This standard does not mandate any particular distribution algorithm(s); however, any distribution algorithm shall ensure that, when frames are received by a Frame Collector as specified in 43.2.3, the algorithm shall not cause
a) Mis-ordering of frames that are part of any given conversation, or
b) Duplication of frames.
The above requirement to maintain frame ordering is met by ensuring that all frames that compose a given conversation are transmitted on a single link in the order that they are generated by the MAC Client; hence, this requirement does not involve the addition (or modification) of any information to the MAC frame, nor any buffering or processing on the part of the corresponding Frame Collector in order to re-order frames.
So, because of that:
- It does what it was intended to do.
- It is relatively easy to implement and use.
- It does not always provide a linear multiple of the data rate of a single link: N aggregated links usually do not provide N times the bandwidth.
Hope this helps.
07-25-2012 11:37 PM
Re: NIC Bond setup - which option is best
In other words, load on LACP trunk ports is not round-robin balanced per packet (which in my opinion is the only option to spread load equally on all ports), but per source MAC (or, on more advanced switches, there is an option to choose balancing based on src-dst MAC, src IP, dst IP, src-dst IP pair, TCP port numbers, etc.). In any case, this usually means that iSCSI traffic from any particular initiator will always go through one particular 1G link in the LACP trunk. Of course, if there are a few iSCSI initiators, load from them will be distributed across all links of the LACP trunk. But if one initiator saturates the bandwidth, other initiators mapped onto the same LACP trunk link will starve, even if other trunk links are unloaded.
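On the Catalyst side you can at least check and change which header fields feed that hash, though it stays per-flow either way (just an illustration; the available options depend on platform and IOS version):

```
! Catalyst 3750 - the load-balance method is global for the whole stack
show etherchannel load-balance
configure terminal
 port-channel load-balance src-dst-ip
end
```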
I'm not a big Linux networking guru, so I can't tell how a P4500 node balances traffic on an LACP trunk. But I guess the algorithm does not differ much and the balancing is the same.
So, going back to the original question: I believe the best option is to go with ALB and connect the 2 NICs to different switches. You get both the bandwidth advantage and reliability (you can always take one switch offline for maintenance, etc.). Of course, keep in mind that the connection between the switches needs to be carefully monitored; it is easy to overload a 1G link between them (or, keeping in mind the above, any 1G link in an LACP trunk between them).
Gediminas
08-02-2012 06:37 AM
Re: NIC Bond setup - which option is best
It really depends on your switching environment. By far the easiest is ALB, which works best for most scenarios.
If you have a more advanced switching environment it can be different; in our case we are running ProCurve 5400s and 8200s with distributed trunking configured, so LACP is the best method for us.
By "best method" I'm looking at the 99.9...% availability scenario. To this day the P4500s, even with the current SAN/iQ, are not hitless when they rejoin the cluster after losing quorum. I've seen too many times where a switch reboot, when using ALB, causes a resync event that really messes up some systems. So if you need network availability above all else, use LACP.
ALB works with any old switch, which is good on the wallet, and offers slightly better performance in terms of raw throughput to a single host.
When testing the many-to-many scenario, does ALB or LACP work better? It depends on your environment.
For me, if ALB doesn't provide enough bandwidth, then look at jumbo frames to squeeze the last bit out of the link. If that extra ~5% still isn't enough, it's time to look at 10GbE.
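If you do go the jumbo frame route on a Catalyst 3750 stack, keep in mind the jumbo MTU is a global setting that only takes effect after a reload, and it has to match end to end (storage nodes, switches and initiators); roughly:

```
! Catalyst 3750 - jumbo MTU is global and needs a reload to apply
configure terminal
 system mtu jumbo 9000
end
! then schedule a reload, and set MTU 9000 on the storage node NICs
! and on the iSCSI initiators / vSwitch as well
```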