Operating System - Linux

3 NICs in a DL380 = bad performance

 
SOLVED
saavik
Advisor

3 NICs in a DL380 = bad performance

Hello!

I have an HP DL380 running SLES 9.

I have three NICs installed:

0000:03:01.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5704 Gigabit Ethernet (rev 10)
Subsystem: Compaq Computer Corporation NC7782 Gigabit Server Adapter (PCI-X, 10,100,1000-T)
Flags: bus master, 66Mhz, medium devsel, latency 64, IRQ 25
Memory at fddf0000 (64-bit, non-prefetchable)
Capabilities: [40] Capabilities: [48] Power Management version 2
Capabilities: [50] Vital Product Data
Capabilities: [58] Message Signalled Interrupts: 64bit+ Queue=0/3 Enable-

0000:03:01.1 Ethernet controller: Broadcom Corporation NetXtreme BCM5704 Gigabit Ethernet (rev 10)
Subsystem: Compaq Computer Corporation NC7782 Gigabit Server Adapter (PCI-X, 10,100,1000-T)
Flags: bus master, 66Mhz, medium devsel, latency 64, IRQ 26
Memory at fdde0000 (64-bit, non-prefetchable)
Capabilities: [40] Capabilities: [48] Power Management version 2
Capabilities: [50] Vital Product Data
Capabilities: [58] Message Signalled Interrupts: 64bit+ Queue=0/3 Enable-

0000:06:01.0 Ethernet controller: Intel Corp. 82545GM Gigabit Ethernet Controller (rev 04)
Subsystem: Intel Corp. PRO/1000 MF Server Adapter
Flags: bus master, 66Mhz, medium devsel, latency 64, IRQ 74
Memory at fdfe0000 (64-bit, non-prefetchable)
Memory at fdf80000 (64-bit, non-prefetchable) [size=256K]
I/O ports at 5000 [size=64]
Capabilities: [dc] Power Management version 2
Capabilities: [e4] PCI-X non-bridge device.
Capabilities: [f0] Message Signalled Interrupts: 64bit+ Queue=0/0 Enable

Two NICs are activated:

eth0 Link encap:Ethernet HWaddr xx:xx:xx:xx:xx
inet addr:xx.xx.xx.xx Bcast:10.255.255.255 Mask:255.0.0.0
inet6 addr: xx-xx-xx-xx-xx Scope:Link
UP BROADCAST NOTRAILERS RUNNING MULTICAST MTU:1500 Metric:1
RX packets:18582 errors:0 dropped:0 overruns:0 frame:0
TX packets:15082 errors:0 dropped:0 overruns:0 carrier:0
collisions:4057 txqueuelen:1000
RX bytes:1535437 (1.4 Mb) TX bytes:22089994 (21.0 Mb)
Interrupt:25

eth2 Link encap:Ethernet HWaddr xx:xx:xx:xx:xx
inet addr:xx.xx.xx.xx Bcast:10.255.255.255 Mask:255.0.0.0
inet6 addr: xx.xx.xx.xx.xx.xx/xx Scope:Link
UP BROADCAST NOTRAILERS RUNNING MULTICAST MTU:1500 Metric:1
RX packets:14201175 errors:0 dropped:0 overruns:0 frame:0
TX packets:11976847 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:9679183158 (9230.7 Mb) TX bytes:9946898037 (9486.1 Mb)
Base address:0x5000 Memory:fdfe0000-fe000000

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:9370 errors:0 dropped:0 overruns:0 frame:0
TX packets:9370 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:529135 (516.7 Kb) TX bytes:529135 (516.7 Kb)

With this configuration I get a throughput of 58 KB per second via FTP.

If I do:

ifconfig eth0 down

I get a throughput of about 11 MB per second.

Why is that?
7 REPLIES
Vitaly Karasik_1
Honored Contributor

Re: 3 NICs in a DL380 = bad performance

Did you configure two NICs with IPs on the same network? That's not good...
saavik
Advisor

Re: 3 NICs in a DL380 = bad performance

Yes, I did!

I see the problem, but can't that be solved by a good route?
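For reference, "a good route" here would mean source-based policy routing plus the arp_filter sysctl; a plain static route does not help. A rough sketch, with made-up addresses (10.0.0.1 on eth0, 10.0.0.2 on eth2):

# answer ARP only on the interface that owns the address
echo 1 > /proc/sys/net/ipv4/conf/all/arp_filter

# per-source routing tables, so replies leave the NIC they arrived on
ip route add 10.0.0.0/8 dev eth0 src 10.0.0.1 table 100
ip rule add from 10.0.0.1 table 100
ip route add 10.0.0.0/8 dev eth2 src 10.0.0.2 table 101
ip rule add from 10.0.0.2 table 101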

Vitaly Karasik_1
Honored Contributor
Solution

Re: 3 NICs in a DL380 = bad performance

You should use bonding, not routing tricks, to double your bandwidth.
(See the RH manual for bonding: http://www.redhat.com/docs/manuals/enterprise/RHEL-4-Manual/ref-guide/s1-networkscripts-interfaces.html#S2-NETWORKSCRIPTS-INTERFACES-CHAN; SUSE should provide something similar.)
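On SLES 9 the equivalent config lives under /etc/sysconfig/network/. A minimal sketch, assuming eth0 and eth2 as the slaves (IP address, netmask and bonding mode are placeholders):

# /etc/sysconfig/network/ifcfg-bond0
BOOTPROTO='static'
IPADDR='10.0.0.1'
NETMASK='255.0.0.0'
STARTMODE='onboot'
BONDING_MASTER='yes'
BONDING_MODULE_OPTS='mode=active-backup miimon=100'
BONDING_SLAVE0='eth0'
BONDING_SLAVE1='eth2'

After "rcnetwork restart", /proc/net/bonding/bond0 shows the slave states. Note that modes other than active-backup generally need matching configuration on the switch.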

But, IMHO, in 99% of cases you won't be able to saturate even 1 Gb.
Steven E. Protter
Exalted Contributor

Re: 3 NICs in a DL380 = bad performance

Linux on Intel does not have a problem with two NICs on the same network. If they are supported natively by the kernel, you might be able to bond them.

Network performance issues come from the following general sources:

1) Cabling. Make sure it's good Cat 5.
2) Switches. Make sure they are gigabit and that the ports the Linux boxes are on support a high-speed connection.
3) Configuration. Sometimes you need to tell the configuration in /etc/sysconfig/network-scripts what speed you want the card to run at; mii-tool and ethtool will help you assess this (see the sketch after this list).
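A quick sketch of that check (interface name assumed; the available options depend on the driver):

ethtool eth0                                        # show current speed, duplex, autonegotiation
ethtool -s eth0 speed 100 duplex full autoneg off   # force 100 Mb/s full duplex
mii-tool -v eth0                                    # older alternative for 10/100 NICs

If you force speed/duplex, the switch port must be set to match, otherwise you get a duplex mismatch, which shows up as exactly the kind of collisions visible on eth0 above.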

The fact that bringing eth0 down improves performance indicates the problem may not be on your Linux machine; it may be in the realm of network switch and infrastructure configuration.

SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
Vitaly Karasik_1
Honored Contributor

Re: 3 NICs in a DL380 = bad performance

>Linux on Intel does not have a problem with two NICs on the same network

Steven, can you point me to some doc about such a config?
Dave Falloon
Trusted Contributor

Re: 3 NICs in a DL380 = bad performance

Because traffic will flow out the interface of the default route in most cases, you may need to enable IP forwarding on the machine so the default interface can hand packets to the secondary interface.
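Enabling forwarding at runtime is a one-liner (persist it in /etc/sysctl.conf if it helps):

echo 1 > /proc/sys/net/ipv4/ip_forward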

That only fixes the symptom, though; you are WAY better off bonding the NICs and running two IP aliases on the bonded interface. In addition to fixing your routing problems, you get failover, so services bound to either IP are still available after a hardware failure.
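A sketch of the alias part, assuming the bond is already up as bond0 and using placeholder addresses:

ifconfig bond0 10.0.0.1 netmask 255.0.0.0 up
ifconfig bond0:1 10.0.0.2 netmask 255.0.0.0 up   # second IP on the same bonded link

Both addresses then ride the same NIC pair, so either one survives a single card failure.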

--Dave
Clothes make the man, Naked people have little to no effect on society
saavik
Advisor

Re: 3 NICs in a DL380 = bad performance

So here is what I finally tested:

1.) Only one NIC (Intel fiber): NIC link is up at 1000 Mbps, full duplex.

Did an FTP download from two clients at the same time. Both get about 11 MB/s.

2.) Activated the second NIC: link is up at 100 Mbps, full duplex.

Did an FTP download from two clients at the same time (using the old IP from the first card).

Both get only about 5 MB/s.

=========================================

So it finally seems to me that (maybe only in my special case) it is a problem to use two NICs in the same PC connected to the same subnet.

Well, there really seems to be no way around bonding the two NICs!