01-16-2017 05:08 AM
Hadoop 10GbE network configuration on DL380 Gen9 using JBOD
We have configured 12 x DL380 Gen9 servers in a Hadoop environment using two HPE 5900AF-48XGT-4QSFP+ switches in IRF mode, forming one logical switch, and configured LACP across ports on both switches. 16 servers (DL380 Gen9) are connected to the switches, two ports to each switch, four ports per server in total.
We aggregated the four ports, two from each switch, to get the desired 40Gbps. But when we transfer a file from one server to another on the same switch, the transfer rate is only 250-300 MB/s.
Even if we remove LACP the result remains the same.
On the server side we configured bonding in mode 802.3ad, and we also tried mode 6 (balance-alb), but we are still unable to push a single session beyond 150-300 MB/s.
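As a sanity check, it is worth confirming what the bonding driver actually negotiated (a minimal sketch using the standard /proc interface of the Linux bonding driver):

# confirm the negotiated mode, LACP partner details and per-slave state
cat /proc/net/bonding/bond0
# watch per-NIC counters during a transfer to see which slaves carry the traffic
watch -n1 "grep -E 'eno49|eno50|ens2f0|ens2f1' /proc/net/dev"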
The server configuration is as follows.
LAN cards as slaves:
ifcfg-eno49
ifcfg-eno50
ifcfg-ens2f0
ifcfg-ens2f1
Master bond:
ifcfg-bond
The slave configurations are:
[root@Data-Node-01 network-scripts]# cat ifcfg-eno49
HWADDR=14:02:EC:7E:CF:3C
TYPE=Ethernet
NAME=eno49
UUID=49e1b24c-939f-41cd-bb07-150405287bfc
DEVICE=eno49
ONBOOT=yes
MASTER=bond0
SLAVE=yes
MTU=9000
[root@Data-Node-01 network-scripts]# cat ifcfg-eno50
HWADDR=14:02:EC:7E:CF:3D
TYPE=Ethernet
NAME=eno50
UUID=9d339cc4-fa92-4bfd-9144-1ed3471dd5f8
DEVICE=eno50
ONBOOT=yes
MASTER=bond0
SLAVE=yes
MTU=9000
[root@Data-Node-01 network-scripts]# cat ifcfg-ens2f0
HWADDR=14:02:EC:83:73:48
TYPE=Ethernet
NAME=ens2f0
UUID=761072c0-6d32-4080-9a34-0a4d4d4a0371
DEVICE=ens2f0
ONBOOT=yes
MASTER=bond0
SLAVE=yes
MTU=9000
[root@Data-Node-01 network-scripts]# cat ifcfg-ens2f1
HWADDR=14:02:EC:83:73:49
TYPE=Ethernet
NAME=ens2f1
UUID=0c049364-2a97-4cde-9b1f-bdddebdb3998
DEVICE=ens2f1
ONBOOT=yes
MASTER=bond0
SLAVE=yes
MTU=9000
And the master configuration is:
[root@Data-Node-01 network-scripts]# cat ifcfg-bond
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BOOTPROTO=none
DNS1=10.50.28.1
NAME=bond
UUID=303f279d-31f2-49bb-8bf6-d46f13a54182
ONBOOT=yes
MTU=9000
#IPV6_PEERDNS=yes
#IPV6_PEERROUTES=yes
#IPV6_PRIVACY=no
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=no
BONDING_OPTS="miimon=100 updelay=0 downdelay=0 mode=802.3ad"
IPADDR=10.1.1.67
PREFIX=24
GATEWAY=10.1.1.1
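To separate network throughput from disk I/O, it may help to benchmark the raw link between two nodes first. A minimal sketch with iperf3 (assumptions: iperf3 is installed on both nodes, and 10.1.1.68 stands in here for the second node's bond IP):

# on node 2 (server side)
iperf3 -s
# on node 1 (client side): one stream, then four parallel streams
iperf3 -c 10.1.1.68 -t 30
iperf3 -c 10.1.1.68 -t 30 -P 4

With 802.3ad every TCP flow hashes onto a single slave, so the single-stream run should top out near 10Gbps (~1.2 GB/s) while the four-stream run can approach the aggregate; if even one stream stays at 250-300 MB/s, the bottleneck is probably not the bond.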
Please help me increase the throughput of the aggregated 40Gbps link from node 1 to node 2, and so on.
Regards,
Asif Sharif
02-23-2017 03:03 PM
Re: Hadoop 10GbE network configuration on DL380 Gen9 using JBOD
Hi,
If you still have this issue, you can set up your config like the one below.
Make sure the bond0 interface itself has no IP; only the VLAN sub-interface does.
VLAN 117 carries the customer network IP; you can use the same approach for the cluster private network. With this config you can reach 2.2 GB/s per node. I hope it helps with your setup; if you still have issues, let me know. On the switch side there is also some configuration needed for the ports you use, such as hybrid port mode and native VLAN.
# cat /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
USERCTL=no
BOOTPROTO=none
IPADDR=
NETMASK=
BONDING_OPTS="mode=balance-rr primary=eno49 miimon=500 updelay=1000"
IPV6INIT=no
ONBOOT=yes
MTU=9000
# cat /etc/sysconfig/network-scripts/ifcfg-bond0.117
DEVICE=bond0.117
USERCTL=no
BOOTPROTO=none
IPADDR=10.117.0.50
NETMASK=255.255.0.0
GATEWAY=10.117.0.1
IPV6INIT=no
ONBOOT=yes
MTU=9000
VLAN=yes
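One caveat: balance-rr is the only mode that stripes a single flow across all slaves (which is why it can beat 802.3ad for one session), but it needs a matching static link aggregation on the switch and can deliver TCP segments out of order. The kernel bonding documentation suggests raising the TCP reordering threshold in that case; a sketch, assuming a RHEL 7 style system:

# tolerate more out-of-order segments before TCP treats them as loss
sysctl -w net.ipv4.tcp_reordering=127
# persist the setting across reboots (the file name is just an example)
echo 'net.ipv4.tcp_reordering = 127' >> /etc/sysctl.d/90-bonding.conf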
Regards,
Bilgin
03-03-2017 04:09 AM
Re: Hadoop 10GbE network configuration on DL380 Gen9 using JBOD
Hint...
You may need to check the "xmit hash policy". Take a look at this Red Hat article, which explains why bonding two 1Gb NICs does not yield 2Gb of bandwidth for a single session:
https://access.redhat.com/solutions/328453
A "layer2+3" hash policy yields more balanced output compared to the default "layer2" policy, though any one TCP session still travels over a single slave. See the bonding documentation:
https://www.kernel.org/doc/Documentation/networking/bonding.txt
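For example, the hash policy goes into the same BONDING_OPTS line used in the original post (a minimal sketch; restart the bond afterwards):

# in /etc/sysconfig/network-scripts/ifcfg-bond
BONDING_OPTS="miimon=100 updelay=0 downdelay=0 mode=802.3ad xmit_hash_policy=layer2+3"
# verify the active policy after the bond comes back up
grep -i hash /proc/net/bonding/bond0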
SimplyLinuxFAQ