
Network speed slow in network teaming

 
Dess_sg
Occasional Contributor

Network speed slow in network teaming

Hi to all the experts,

I have a RHEL5 server with two NIC cards configured in network teaming (bonding), using Gigabit NICs. However, my statistic showed that the throughput is throttled at only 100 Mbps. Below are the config files; kindly advise if there's something wrong with them.


# cat ifcfg-bond0
DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
NETWORK=x.x.x.x
NETMASK=x.x.x.x
IPADDR=x.x.x.x
USERCTL=no
GATEWAY=x.x.x.x
TYPE=Ethernet
IPV6INIT=no
PEERDNS=yes



# cat ifcfg-eth2
TYPE=Ethernet
DEVICE=eth2
HWADDR=00:24:81:81:c2:9a
MASTER=bond0
SLAVE=yes
BOOTPROTO=dhcp
ONBOOT=yes
USERCTL=no
IPV6INIT=no
PEERDNS=yes



# cat ifcfg-eth3
# Please read /usr/share/doc/initscripts-*/sysconfig.txt
# for the documentation of these parameters.
TYPE=Ethernet
DEVICE=eth3
HWADDR=00:24:81:81:c2:9b
BOOTPROTO=dhcp
MASTER=bond0
SLAVE=yes
ONBOOT=yes
USERCTL=no
IPV6INIT=no
PEERDNS=yes



# /sbin/ethtool eth2
Settings for eth2:
Supported ports: [ TP ]
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Advertised auto-negotiation: Yes
Speed: 1000Mb/s
Duplex: Full
Port: Twisted Pair
PHYAD: 1
Transceiver: internal
Auto-negotiation: on
Supports Wake-on: pumbag
Wake-on: g
Current message level: 0x00000001 (1)
Link detected: yes



# /sbin/ethtool eth3
Settings for eth3:
Supported ports: [ TP ]
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Advertised auto-negotiation: Yes
Speed: 1000Mb/s
Duplex: Full
Port: Twisted Pair
PHYAD: 1
Transceiver: internal
Auto-negotiation: on
Supports Wake-on: d
Wake-on: d
Current message level: 0x00000001 (1)
Link detected: yes



# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.2.4 (January 28, 2008)

Bonding Mode: fault-tolerance (active-backup) (fail_over_mac)
Primary Slave: None
Currently Active Slave: eth2
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth2
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:24:81:81:c2:9a

Slave Interface: eth3
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:24:81:81:c2:9b
11 REPLIES
Steven Schweda
Honored Contributor

Re: Network speed slow in network teaming

> [...] my statistic showed [...]

What, exactly, is "my statistic", and how, exactly, was it obtained?

I hope you realize that a "1 Gb/s" speed is one giga_bit_ per second, not one giga_byte_ per second.
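If you want a number that isn't open to interpretation, sample the interface counters while the load is running. Something along these lines would do it (sar comes with the sysstat package, which may or may not be installed on your system):

# sar -n DEV 1 30 | egrep 'bond0|eth2|eth3'

That prints receive/transmit bytes per second for the bond and each slave, which you can compare directly against 100 Mbps (about 12.5 MB/s).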
Dess_sg
Occasional Contributor

Re: Network speed slow in network teaming

Hi Steven,

Thanks for your reply. Based on my calculation of the number of concurrent users on the server, and the data rate of the streaming service, the throughput should hit more than 100 Mbps.

And yes, I'm aware of the difference between bits and bytes.

Thanks.
Rob Leadbeater
Honored Contributor

Re: Network speed slow in network teaming

Hi,

Looking at your configuration files, you've got the bond set to use a fixed IP address, but the individual interfaces are set to use DHCP.

I doubt this is causing the issue, but I'd change:

BOOTPROTO=dhcp
to
BOOTPROTO=none
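For example, ifcfg-eth2 would end up looking like this (the same file you posted above, with only BOOTPROTO changed; ifcfg-eth3 likewise):

# cat ifcfg-eth2
TYPE=Ethernet
DEVICE=eth2
HWADDR=00:24:81:81:c2:9a
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
ONBOOT=yes
USERCTL=no
IPV6INIT=no
PEERDNS=yes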


What is the physical configuration - server type, NIC type, etc.?

Cheers,
Rob
Steven Schweda
Honored Contributor

Re: Network speed slow in network teaming

> [...] Based on my calculation of the number
> of concurrent users on the server, and the
> data rate of the streaming service, the
> throughput should hit more than 100 Mbps.

Which calculation is that? The "data rate" of what, exactly? Which "streaming service"? The "throughput" of what, exactly?

> What, exactly, is "my statistic", and how,
> exactly, was it obtained?

Still wondering...
Tim Nelson
Honored Contributor

Re: Network speed slow in network teaming

I would also like to mention that NIC teaming typically gives you an aggregate bandwidth increase (note the word aggregate).

Many processes together get a bigger pipe to share, but any single process may not be any better off; it could even be worse while the NICs round-robin back and forth between the switch ports.

Try your test with only one NIC attached (see the sketch below). Is it any worse?

There are a number of posts out here claiming that mode=0 round-robin is NOT the best solution.
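One way to do that without touching the cabling is to drop a slave out of the bond for the duration of the test; a rough sketch, using the slave names from your configs (check /proc/net/bonding/bond0 afterwards to confirm the bond recovered):

# ifdown eth3
(re-run the throughput test with only eth2 carrying bond0)
# ifup eth3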

Dess_sg
Occasional Contributor

Re: Network speed slow in network teaming

Hi Tim, thanks for your reply. The bonding is not configured in round-robin; it's actually configured as active-backup.

Under /etc/modprobe.conf:
alias bond0 bonding
options bond0 mode=1 miimon=100 fail_over_mac=1
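Worth noting: mode=1 (active-backup) means only one slave carries traffic at any time, so the bond tops out at the speed of a single link no matter how many slaves it has. If aggregate throughput across both NICs were the goal, the options line would need an aggregating mode instead; a sketch, assuming the two switch ports are configured as an 802.3ad/LACP group:

alias bond0 bonding
options bond0 mode=4 miimon=100 lacp_rate=1

(fail_over_mac only applies to active-backup, so it is left out here.)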
Dess_sg
Occasional Contributor

Re: Network speed slow in network teaming

Hi Rob, thanks for your reply too. I have changed the slaves from dhcp to none, but didn't see any significant increase in throughput.

What do you mean by physical configuration type?
Rob Leadbeater
Honored Contributor

Re: Network speed slow in network teaming

What type of NICs are they?

If they're Broadcom, be aware of Red Hat KB article DOC-36394:

https://access.redhat.com/kb/docs/DOC-36394

"Slow network transfers with bnx2x NICs on Red Hat Enterprise Linux 5.5"

Cheers,

Rob
rick jones
Honored Contributor

Re: Network speed slow in network teaming

It really would help if you would describe the nature of your testing in greater detail. For example, is it just a single stream, or more than one stream of traffic? What is the destination of this traffic? Is it, by any chance, a 100Mb-connected system? Is there, by any chance, a 100Mb link somewhere in the middle of everything?

Broadly speaking, short of using round-robin mode, bonding will only improve the throughput of aggregates of connections - any single TCP connection, or whatever the bonding software considers a "flow", will not go faster than a single link.

Also, while the bonding software controls what happens on the way out, it does not control what happens on the way in - that is the province of the switch(es). Switches may have a very different packet scheduling algorithm (a different idea of what defines a "flow") from the bonding software. And as always, even if the bonding spreads a single connection out, if the destination is connected with but a single link, *that* will be the gating factor.
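If you want to see the single-stream versus multi-stream difference for yourself, a quick iperf test illustrates it (a sketch only: 192.0.2.10 is a placeholder for your test peer, which would need iperf running in server mode with "iperf -s"):

# iperf -c 192.0.2.10 -t 30
(one TCP stream: at best the speed of a single link)
# iperf -c 192.0.2.10 -t 30 -P 4
(four parallel TCP streams: gives an aggregating bond mode and the switch something to spread across the links)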
there is no rest for the wicked yet the virtuous have no pillows