NIC Bonding & Teaming on CentOS 7

Posted: July 7, 2015 in Linux

Network interface bonding combines multiple network connections into a single logical interface. A bonded interface can increase data throughput by load balancing across its members, or provide redundancy by failing over from one NIC to another.

The following bonding modes are available:

(balance-rr)
Round-robin policy: transmits packets in sequential order from the first available slave through the last. This mode provides load balancing and fault tolerance.

(active-backup)
Active-backup policy: only one slave in the bond is active. A different slave becomes active if the active slave fails.

(balance-xor)
XOR policy: transmits based on the selected transmit hash policy.

(broadcast)
Broadcast policy: transmits everything on all slave interfaces.

(802.3ad)
Creates aggregation groups that share the same speed and duplex settings. Utilizes all slaves in the active aggregator according to the 802.3ad specification.

(balance-tlb)
Adaptive transmit load balancing: outgoing traffic is distributed according to the current load on each slave; incoming traffic is received by the current active slave.

(balance-alb)
Adaptive load balancing: includes balance-tlb plus receive load balancing for IPv4 traffic, achieved through ARP negotiation.

I used a KVM virtual machine with three NICs; two of them (eth1 and eth2) will be used for bonding.

[root@localhost ja]# ip addr

3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:6d:17:fb brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:4b:f3:ed brd ff:ff:ff:ff:ff:ff

For bonding to work properly, we need to load the kernel module named bonding:

[root@localhost ja]# modprobe bonding
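modprobe loads the module only for the current boot. To have the bonding module loaded automatically at every boot on CentOS 7, a one-line file under /etc/modules-load.d/ is enough (a sketch; the file name bonding.conf is my choice):

```shell
# Load the bonding module automatically at every boot (run as root)
echo "bonding" > /etc/modules-load.d/bonding.conf

# Confirm the module is loaded right now
lsmod | grep bonding
```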

Inspect the module information and available parameters:

[root@localhost ja]# modinfo bonding
filename: /lib/modules/3.10.0-229.el7.x86_64/kernel/drivers/net/bonding/bonding.ko
alias: rtnl-link-bond
author: Thomas Davis, tadavis@lbl.gov and many others
description: Ethernet Channel Bonding Driver, v3.7.1
version: 3.7.1
license: GPL
rhelversion: 7.1
srcversion: 25506952906F95B699162DB
depends:
intree: Y
vermagic: 3.10.0-229.el7.x86_64 SMP mod_unload modversions
signer: CentOS Linux kernel signing key
sig_key: A6:2A:0E:1D:6A:6E:48:4E:9B:FD:73:68:AF:34:08:10:48:E5:35:E5
sig_hashalgo: sha256
parm: max_bonds:Max number of bonded devices (int)
parm: tx_queues:Max number of transmit queues (default = 16) (int)
parm: num_grat_arp:Number of peer notifications to send on failover event (alias of num_unsol_na) (int)
parm: num_unsol_na:Number of peer notifications to send on failover event (alias of num_grat_arp) (int)
parm: miimon:Link check interval in milliseconds (int)
parm: updelay:Delay before considering link up, in milliseconds (int)
parm: downdelay:Delay before considering link down, in milliseconds (int)
parm: use_carrier:Use netif_carrier_ok (vs MII ioctls) in miimon; 0 for off, 1 for on (default) (int)
parm: mode:Mode of operation; 0 for balance-rr, 1 for active-backup, 2 for balance-xor, 3 for broadcast, 4 for 802.3ad, 5 for balance-tlb, 6 for balance-alb (charp)
parm: primary:Primary network device to use (charp)
parm: primary_reselect:Reselect primary slave once it comes up; 0 for always (default), 1 for only if speed of primary is better, 2 for only on active slave failure (charp)
parm: lacp_rate:LACPDU tx rate to request from 802.3ad partner; 0 for slow, 1 for fast (charp)
parm: ad_select:803.ad aggregation selection logic; 0 for stable (default), 1 for bandwidth, 2 for count (charp)
parm: min_links:Minimum number of available links before turning on carrier (int)
parm: xmit_hash_policy:balance-xor and 802.3ad hashing method; 0 for layer 2 (default), 1 for layer 3+4, 2 for layer 2+3 (charp)
parm: arp_interval:arp interval in milliseconds (int)
parm: arp_ip_target:arp targets in n.n.n.n form (array of charp)
parm: arp_validate:validate src/dst of ARP probes; 0 for none (default), 1 for active, 2 for backup, 3 for all (charp)
parm: fail_over_mac:For active-backup, do not set all slaves to the same MAC; 0 for none (default), 1 for active, 2 for follow (charp)
parm: all_slaves_active:Keep all frames received on an interfaceby setting active flag for all slaves; 0 for never (default), 1 for always. (int)
parm: resend_igmp:Number of IGMP membership reports to send on link failure (int)

Generate UUIDs for our interfaces:

[root@localhost ja]# uuidgen eth1
6f19ffe4-99d1-4a13-92b3eb602172869b
[root@localhost ja]# uuidgen eth2
d46459b8-4500-9a0b-79f0c65ca109

Create a file named ifcfg-bond0 in the /etc/sysconfig/network-scripts directory (for the bond0 interface):

TYPE=Bond   #Interface type set to bond
BOOTPROTO=static
BONDING_MASTER=yes
BONDING_OPTS="mode=active-backup"  # mode set to active-backup
DEFROUTE=yes 
IPADDR=192.168.122.100  #IP address of bond0 interface
NETMASK=255.255.255.0 
GATEWAY=192.168.122.1
IPV4_FAILURE_FATAL=no 
IPV6INIT=no
NAME=bond0
DEVICE=bond0
ONBOOT=yes

In the same directory, create files for the eth1 (ifcfg-eth1) and eth2 (ifcfg-eth2) interfaces. Both interfaces are in slave mode with no IP addresses of their own:

TYPE=Ethernet
MASTER=bond0
SLAVE=yes
NAME=eth1
UUID=6f19ffe4-99d1-4a13-92b3eb602172869b
DEVICE=eth1
ONBOOT=yes

 

TYPE=Ethernet
MASTER=bond0
SLAVE=yes
NAME=eth2
UUID=d46459b8-4500-9a0b-79f0c65ca109
DEVICE=eth2
ONBOOT=yes

Deactivate and reactivate bond0, then restart the network service:

[root@localhost ja]#ifdown bond0;ifup bond0
[root@localhost ja]#systemctl restart network
[root@localhost ja]#ifconfig
bond0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST>  mtu 1500
inet 192.168.122.100  netmask 255.255.255.0  broadcast 192.168.122.255
inet6 fe80::38a5:eaff:fe70:b497  prefixlen 64  scopeid 0x20<link>
ether 3a:a5:ea:70:b4:97  txqueuelen 0  (Ethernet)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 0  bytes 0 (0.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
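The bonding driver also exposes its runtime state under /proc, which is a quick way to double-check that the intended mode is in effect and which slave currently carries the traffic (a sketch for this setup):

```shell
# Full state of the bond: mode, MII status, and per-slave details
cat /proc/net/bonding/bond0

# For active-backup, these two lines are the interesting ones
grep -E "Bonding Mode|Currently Active Slave" /proc/net/bonding/bond0
```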

Alternatively, we can use the NetworkManager command-line tool (nmcli) to create the bond without editing the NIC files manually:

[root@localhost ja]#nmcli con add type bond con-name bond0 ifname bond0 mode active-backup ip4 192.168.122.100/24 gw4 192.168.122.1

This command also created an ifcfg-bond0 file in the /etc/sysconfig/network-scripts directory.

Add eth1 and eth2 interfaces as slaves to the device bond0:

[root@localhost ja]#nmcli con add type bond-slave ifname eth1 master bond0
[root@localhost ja]#nmcli con add type bond-slave ifname eth2 master bond0

These commands added the eth1 and eth2 interfaces as slaves to bond0 and, at the same time, created the ifcfg-bond-slave-eth1 and ifcfg-bond-slave-eth2 files in the /etc/sysconfig/network-scripts directory.
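If NetworkManager does not activate the new connections on its own, they can be brought up explicitly (a sketch; bond-slave-eth1 and bond-slave-eth2 are the default connection names nmcli generates for the commands above):

```shell
# Activate the bond and its slave connections
nmcli con up bond0
nmcli con up bond-slave-eth1
nmcli con up bond-slave-eth2

# Verify that all three connections are attached to devices
nmcli con show
```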

Interface teaming

Interface teaming was introduced in RHEL 7/CentOS 7 and performs the same function as NIC bonding.
Teaming is the newer mechanism and handles packet flows more efficiently than bonding does.

Network teaming is implemented with a kernel driver and a user space daemon named teamd.

Small units of software called runners implement the load-balancing and failover logic.

The following runners can be used:

broadcast: transmits each packet from all ports.
roundrobin: transmits packets in a round-robin fashion from each of its ports.
activebackup: a failover runner that watches for link changes and selects an active port for data transfer.
loadbalance: monitors traffic and uses a hash function to try to reach a perfect balance when selecting ports for packet transmission.
lacp: implements the 802.3ad Link Aggregation Control Protocol. It can use the same transmit-port selection possibilities as the loadbalance runner.
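For quick experiments, teamd can also be driven directly from a JSON configuration file, bypassing the ifcfg files entirely (a sketch, assuming NetworkManager is not managing the ports; the file path and device name are my choices):

```shell
# Write a minimal activebackup team configuration (run as root)
cat > /tmp/team0.conf <<'EOF'
{
  "device": "team0",
  "runner": { "name": "activebackup" },
  "link_watch": { "name": "ethtool" },
  "ports": { "eth3": {}, "eth4": {} }
}
EOF

# Start teamd as a daemon (-d) with debug output (-g)
teamd -g -f /tmp/team0.conf -d

# Tear the team down again when finished
teamd -t team0 -k
```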

We need to load the team driver and install the teamd package:

[root@localhost ja]#modprobe team
[root@localhost ja]#yum -y install teamd

I added two more network interfaces to the VM:

[root@localhost ja]#ip addr
5: eth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:8c:33:ba brd ff:ff:ff:ff:ff:ff
6: eth4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:13:de:94 brd ff:ff:ff:ff:ff:ff

In the /etc/sysconfig/network-scripts directory, create a file named ifcfg-team0:

BOOTPROTO=static
TEAM_CONFIG='{"runner":{"name":"activebackup"},"link_watch":{"name":"ethtool"}}'
IPADDR=192.168.122.120
NETMASK=255.255.255.0
GATEWAY=192.168.122.1
NAME=team0
DEVICE=team0
ONBOOT=yes
DEVICETYPE=Team
PREFIX=24

The interface type is Team, the interface name is team0, and the IP address is set to 192.168.122.120. The runner is set to activebackup. link_watch specifies how to monitor the link status; libteam uses ethtool to watch for link-state changes, and this is the default if no other link watcher is specified.
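The ethtool watcher only checks carrier state; it cannot notice a path that is up but not forwarding traffic. teamd also offers an arp_ping link watcher that probes a target with ARP requests. A sketch of what TEAM_CONFIG could look like in that case (192.168.122.1 is this setup's gateway; the interval and missed_max values are my choices):

```shell
# Declare the team link up only while the gateway answers ARP probes
TEAM_CONFIG='{"runner":{"name":"activebackup"},"link_watch":{"name":"arp_ping","interval":1000,"missed_max":3,"target_host":"192.168.122.1"}}'
```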

Create files ifcfg-eth3 and ifcfg-eth4 for the eth3 and eth4 interfaces respectively:

TEAM_MASTER=team0
TEAM_PORT_CONFIG='{"prio":99}'
DEVICETYPE=TeamPort
NAME=eth3
HWADDR=52:54:00:8c:33:ba
DEVICE=eth3
ONBOOT=yes

TEAM_MASTER=team0
TEAM_PORT_CONFIG='{"prio":100}'
DEVICETYPE=TeamPort
NAME=eth4
HWADDR=52:54:00:13:de:94
DEVICE=eth4
ONBOOT=yes

You can set the priority numbers to values of your own; with the activebackup runner, the port with the higher prio is preferred as the active port. Both interfaces are part of team0 (TEAM_MASTER).

As with bonding, deactivate and reactivate team0, then restart the network service:

[root@localhost network-scripts]# ifdown team0;ifup team0
Device 'team0' successfully disconnected.
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/18)
[root@localhost ja]#systemctl restart network
[root@localhost ja]#ip addr show team0
team0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP 
 link/ether 52:54:00:8c:33:ba brd ff:ff:ff:ff:ff:ff
 inet 192.168.122.120/24 brd 192.168.122.255 scope global team0
 valid_lft forever preferred_lft forever
 inet6 fe80::5054:ff:fe8c:33ba/64 scope link 
 valid_lft forever preferred_lft forever

Get team details and status:

[root@localhost network-scripts]#teamnl team0 ports
 6: eth4: up 0Mbit HD 
 5: eth3: up 0Mbit HD 

[root@localhost network-scripts]# teamdctl team0 state
setup:
 runner: activebackup
ports:
 eth3
 link watches:
 link summary: up
 instance[link_watch_0]:
 name: ethtool
 link: up
 eth4
 link watches:
 link summary: up
 instance[link_watch_0]:
 name: ethtool
 link: up
runner:
 active port: eth4
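To confirm that failover actually works, take the active port down and watch the runner react (a sketch; interface names and priorities follow this setup):

```shell
# eth4 is currently the active port; take its link down (run as root)
ip link set eth4 down

# The activebackup runner should now report eth3 as the active port
teamdctl team0 state | grep "active port"

# Bring eth4 back up; with the higher prio (100) it should be
# selected as the active port again
ip link set eth4 up
```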



You can also use the nmcli tool to create a team. Add a team0 interface with the IP address 192.168.122.120 and the gateway 192.168.122.1:

[root@s2 network-scripts]#nmcli con add type team con-name team0 ifname team0 ip4 192.168.122.120/24 gw4 192.168.122.1

This command added a device named team0 and created the ifcfg-team0 file in the /etc/sysconfig/network-scripts directory.
Add the eth3 and eth4 interfaces as slaves to team0:

[root@localhost network-scripts]# nmcli con add type team-slave con-name eth3 ifname eth3 master team0
[root@localhost network-scripts]#nmcli con add type team-slave con-name eth4 ifname eth4 master team0

These commands added the eth3 and eth4 interfaces as slaves to team0 and created the ifcfg-eth3 and ifcfg-eth4 files in the /etc/sysconfig/network-scripts directory.
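As with the bond, the team connections can be activated explicitly and then checked from NetworkManager's point of view (connection names are the ones given with con-name above):

```shell
# Activate the team and its port connections
nmcli con up team0
nmcli con up eth3
nmcli con up eth4

# Show which device each connection ended up on
nmcli device status
```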

Comments
  1. Walter says:

    I have two servers: Server 1 has 2x 10Gbit ethernet connections; Server 2 has 2x 10Gbit ethernet connections. Both servers are running CentOS Linux release 7.2.1511 (kernel 3.10.0-327.18.2.el7.x86_64) with Intel X710 10 GbE adapters, latest version 17.0.12, on Dell PowerEdge R730 servers.

    Teaming LACP is UP & RUNNING, but when I perform network throughput tests, it's showing only 9Gb speed.

    The switch is a Dell 10Gb-compatible model, and the configuration on the switch seems OK.

    What is the bottleneck? Can someone help me?

    teamd-1.17-6.el7_2.x86_64

    # nmcli con show
    NAME UUID TYPE DEVICE
    em1 e9757f4f-c0b4-4bbc-bd4e-b66103553000 802-3-ethernet em1
    p5p2 5993e656-31cc-197d-8359-a7d520292c34 802-3-ethernet p5p2
    p5p1 ae980826-1d4c-660f-07b7-c4ec1025b41b 802-3-ethernet p5p1
    team0 702de3eb-2e80-897c-fd52-cd0494dd8123 team team0

    # teamdctl team0 state
    setup:
    runner: lacp
    ports:
    p5p1
    link watches:
    link summary: up
    instance[link_watch_0]:
    name: ethtool
    link: up
    down count: 0
    runner:
    aggregator ID: 6, Selected
    selected: yes
    state: current
    p5p2
    link watches:
    link summary: up
    instance[link_watch_0]:
    name: ethtool
    link: up
    down count: 0
    runner:
    aggregator ID: 6, Selected
    selected: yes
    state: current
    runner:
    active: yes
    fast rate: yes

    # teamdctl team0 config dump
    {
    "device": "team0",
    "link_watch": {
    "name": "ethtool"
    },
    "ports": {
    "p5p1": {
    "prio": 9
    },
    "p5p2": {
    "prio": 10
    }
    },
    "runner": {
    "active": true,
    "fast_rate": true,
    "name": "lacp",
    "tx_hash": [
    "eth",
    "ipv4",
    "ipv6"
    ]
    }
    }

    # teamnl team0 ports
    6: p5p1: up 10000Mbit FD
    7: p5p2: up 10000Mbit FD

    # ethtool p5p2
    Settings for p5p2:
    Supported ports: [ FIBRE ]
    Supported link modes: 10000baseT/Full
    Supported pause frame use: Symmetric
    Supports auto-negotiation: No
    Advertised link modes: Not reported
    Advertised pause frame use: No
    Advertised auto-negotiation: No
    Speed: 10000Mb/s
    Duplex: Full
    Port: Direct Attach Copper
    PHYAD: 0
    Transceiver: external
    Auto-negotiation: off
    Supports Wake-on: d
    Wake-on: d
    Current message level: 0x0000000f (15)
    drv probe link timer
    Link detected: yes
    # ethtool p5p1
    Settings for p5p1:
    Supported ports: [ FIBRE ]
    Supported link modes: 10000baseT/Full
    Supported pause frame use: Symmetric
    Supports auto-negotiation: No
    Advertised link modes: Not reported
    Advertised pause frame use: No
    Advertised auto-negotiation: No
    Speed: 10000Mb/s
    Duplex: Full
    Port: Direct Attach Copper
    PHYAD: 0
    Transceiver: external
    Auto-negotiation: off
    Supports Wake-on: g
    Wake-on: d
    Current message level: 0x0000000f (15)
    drv probe link timer
    Link detected: yes

    # iperf3

    [SUM] 0.00-6.00 sec 6.58 GBytes 9.43 Gbits/sec 0 sender
    [SUM] 0.00-6.00 sec 6.57 GBytes 9.41 Gbits/sec receiver


  2. peng says:

    Teaming LACP is UP & RUNNING but when I perform network throughput tests, it's showing only 9Gb speed.

    ****************
    Because the iperf test is only one session. If you want to see the aggregated results, you must connect to the server from multiple clients:
    server:
    iperf3 -s -p 5201
    iperf3 -s -p 5202
    clients:
    iperf3 -c 172.28.0.3 -p 5201
    iperf3 -c 172.28.0.3 -p 5202

