Introduction
Bonding (or channel bonding) is a technology, enabled by the Linux kernel and Red Hat Enterprise Linux, that allows administrators to combine two or more network interfaces into a single, logical "bonded" interface for redundancy or increased throughput. The behavior of the bonded interface depends upon the mode; generally speaking, modes provide either hot-standby or load-balancing services. Additionally, they may provide link-integrity monitoring.
This article describes how to configure bonding on Red Hat Enterprise Linux 3, Red Hat Enterprise Linux 4, and Red Hat Enterprise Linux 5.
Configuring bonded devices on Red Hat Enterprise Linux 5
Single bonded device on RHEL5
To configure the bond0 device with the network interfaces eth0 and eth1, perform the following steps:
1. Add the following line to /etc/modprobe.conf:
alias bond0 bonding
2. Create the channel bonding interface file ifcfg-bond0 in the /etc/sysconfig/network-scripts/ directory:
# cat /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
IPADDR=192.168.50.111
NETMASK=255.255.255.0
USERCTL=no
BOOTPROTO=none
ONBOOT=yes
BONDING_OPTS="mode=0 miimon=100"
Note: On Red Hat Enterprise Linux 5, specify the bonding module parameters in the BONDING_OPTS line of the ifcfg-bond<N> file, rather than in /etc/modprobe.conf.
3. Configure the Ethernet interfaces in the files /etc/sysconfig/network-scripts/ifcfg-eth<N>. Both eth0 and eth1 should look like the following example:
DEVICE=eth<N>
BOOTPROTO=none
HWADDR=54:52:00:26:90:fc
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=no
Note: Replace <N> with the number of the interface (0 or 1 in this example), and set HWADDR to the MAC address of the corresponding interface.
4. Restart the network service:
# service network restart
5. To check the bonding status, examine the following file:
# cat /proc/net/bonding/bond0
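The output should look similar to the following excerpt (the driver version, MAC addresses, and counters shown here are illustrative and will vary by system):
Ethernet Channel Bonding Driver: v3.2.4 (January 28, 2008)

Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Permanent HW addr: 54:52:00:26:90:fc

Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: 54:52:00:26:90:fd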
Multiple bonded devices on RHEL5
In Red Hat Enterprise Linux 5.3 (or after updating to initscripts-8.45.25-1.el5) and later, configuring multiple bonding channels is similar to configuring a single bonding channel. Set up the ifcfg-bond<N> and ifcfg-eth<X> files as if there were only one bonding channel. You can specify different BONDING_OPTS for different bonding channels so that they can have different modes and other settings. Refer to section 15.2.3, "Channel Bonding Interfaces," in the Red Hat Enterprise Linux 5 Deployment Guide for more information.
To configure the bond0 device with the Ethernet interfaces eth0 and eth1, and the bond1 device with the Ethernet interfaces eth2 and eth3, perform the following steps:
1. Add the following lines to /etc/modprobe.conf:
alias bond0 bonding
alias bond1 bonding
2. Create the channel bonding interface files ifcfg-bond0 and ifcfg-bond1 in the /etc/sysconfig/network-scripts/ directory:
# cat /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
IPADDR=192.168.50.111
NETMASK=255.255.255.0
USERCTL=no
BOOTPROTO=none
ONBOOT=yes
BONDING_OPTS="mode=0 miimon=100"
# cat /etc/sysconfig/network-scripts/ifcfg-bond1
DEVICE=bond1
IPADDR=192.168.30.111
NETMASK=255.255.255.0
USERCTL=no
BOOTPROTO=none
ONBOOT=yes
BONDING_OPTS="mode=1 miimon=50"
Note: bond0 and bond1 use different bonding modes. The bond0 device uses the balance-rr policy (mode=0), and the bond1 device uses the active-backup policy (mode=1). For more information about the bonding modes, refer to the section "Bonding modes on Red Hat Enterprise Linux" below.
3. Configure the Ethernet interfaces in the files /etc/sysconfig/network-scripts/ifcfg-eth<N>. eth0 and eth1 should look like the following example with MASTER=bond0; eth2 and eth3 should look the same, but with MASTER=bond1:
DEVICE=eth<N>
BOOTPROTO=none
HWADDR=54:52:00:26:90:fc
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=no
Note: Replace <N> with the number of the interface, and set HWADDR to the MAC address of the corresponding interface.
4. Restart the network service:
# service network restart
5. To check the bonding status, examine the following files:
# cat /proc/net/bonding/bond0
# cat /proc/net/bonding/bond1
Configuring bonded devices on Red Hat Enterprise Linux 4
Single bonded device on RHEL4
For a detailed guide to bonding configuration on RHEL4, refer to the Red Hat Enterprise Linux 4 Reference Guide.
To configure the bond0 device with the network interfaces eth0 and eth1, perform the following steps:
1. Add the following lines to /etc/modprobe.conf:
alias bond0 bonding
options bonding mode=1 miimon=100
Note: On Red Hat Enterprise Linux 4, the bonding module parameters (such as mode and miimon) are specified on an options line in /etc/modprobe.conf.
2. Create the channel bonding interface file ifcfg-bond0 in the /etc/sysconfig/network-scripts/ directory:
# cat /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
IPADDR=192.168.50.111
NETMASK=255.255.255.0
USERCTL=no
BOOTPROTO=none
ONBOOT=yes
3. Configure the Ethernet interfaces in the files /etc/sysconfig/network-scripts/ifcfg-eth<N>. In this example, both eth0 and eth1 should look like this:
DEVICE=eth<N>
BOOTPROTO=none
HWADDR=54:52:00:26:90:fc
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=no
Note: Replace <N> with the number of the interface (0 or 1 in this example), and set HWADDR to the MAC address of the corresponding interface.
Multiple bonded devices on RHEL4
To configure multiple bonding channels on RHEL4, first set up the ifcfg-bond<N> and ifcfg-eth<X> files as you would for a single bonding channel, shown in the previous section.
Configuring multiple channels requires a different setup for /etc/modprobe.conf. If the two bonding channels have the same bonding options, such as bonding mode, monitoring frequency, and so on, add the max_bonds option. For example:
alias bond0 bonding
alias bond1 bonding
options bonding max_bonds=2 mode=balance-rr miimon=100
If the two bonding channels have different bonding options (for example, one is using round-robin mode and one is using active-backup mode), the bonding module has to be loaded twice with different options. For example, in /etc/modprobe.conf:
install bond0 /sbin/modprobe --ignore-install bonding -o bonding0 mode=0 miimon=100 primary=eth0
install bond1 /sbin/modprobe --ignore-install bonding -o bonding1 mode=1 miimon=50 primary=eth2
If there are more bonding channels, add one install bond<N> /sbin/modprobe --ignore-install bonding -o bonding<N> <options> line per bonding channel.
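For example, a hypothetical third channel bond2, using balance-tlb (mode=5) with a 100 ms monitoring interval, would add the line below, together with the corresponding ifcfg-bond2 and slave ifcfg-eth<X> files:
install bond2 /sbin/modprobe --ignore-install bonding -o bonding2 mode=5 miimon=100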
Note: The use of -o bondingX to get different options for multiple bonds was not possible in Red Hat Enterprise Linux 4 GA and 4 Update 1.
After the file /etc/modprobe.conf is modified, restart the network service:
# service network restart
Configuring bonded devices on Red Hat Enterprise Linux 3
Single bonded device on RHEL3
To configure the bond0 device with the network interfaces eth0 and eth1, perform the following steps:
1. Add the following lines to /etc/modules.conf:
alias bond<N> bonding
options bonding mode=1 miimon=100
Note: Replace the <N> with the numerical value for the interface, such as 0 in this example.
2. Create the channel bonding interface file ifcfg-bond0 in the /etc/sysconfig/network-scripts/ directory:
# cat /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
NETMASK=255.255.255.0
IPADDR=192.168.122.225
USERCTL=no
3. Configure the Ethernet interfaces in the files /etc/sysconfig/network-scripts/ifcfg-eth<N>. Both eth0 and eth1 should look like the following example:
# cat /etc/sysconfig/network-scripts/ifcfg-eth<N>
DEVICE=eth<N>
BOOTPROTO=none
HWADDR=54:52:00:26:90:fc
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=no
Note: Replace <N> with the number of the interface (0 or 1 in this example), and set HWADDR to the MAC address of the corresponding interface.
4. Restart the network service:
# service network restart
Multiple bonded devices on RHEL3
To configure multiple bonding channels on RHEL3, first set up the ifcfg-bond<N> and ifcfg-eth<X> files as you would for a single bonding channel, shown in the previous section.
Configuring multiple channels requires a different setup for /etc/modules.conf. If the two bonding channels have different bonding options (for example, one is using round-robin mode and one is using active-backup mode), the bonding module has to be loaded twice with different options. For example, in /etc/modules.conf:
alias bond0 bonding
options bond0 -o bond0 mode=1 miimon=100
alias bond1 bonding
options bond1 -o bond1 mode=0 miimon=50
After the file /etc/modules.conf is modified, restart the network service:
# service network restart
Configuring bonded devices on Red Hat Enterprise Linux 2.1
Red Hat does not provide bonding support on Red Hat Enterprise Linux 2.1, which is outside the Red Hat Enterprise Linux life cycle. For more information, refer to the Red Hat Enterprise Linux life cycle policy.
Bonding modes on Red Hat Enterprise Linux
For information on the bonding modes supported in Red Hat Enterprise Linux, please refer to the kernel document /usr/share/doc/kernel-doc-{version}/Documentation/networking/bonding.txt
To read the kernel document, you will need to install the kernel-doc RPM package matching your kernel version.
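On Red Hat Enterprise Linux 5, for example, the package can be installed with yum and the document read with a pager (the wildcard stands in for the installed kernel version):
# yum install kernel-doc
# less /usr/share/doc/kernel-doc-*/Documentation/networking/bonding.txt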
Red Hat Enterprise Linux 5
Balance-rr (mode 0)
Round-robin policy: transmits packets in sequential order from the first available slave through the last.
Active-backup (mode 1)
Active-backup policy: only one slave in the bond is active. A different slave becomes active only if the active slave fails. The bond's MAC address is externally visible on only one port (network adapter) to avoid confusing the switch.
In bonding version 2.6.2 or later, when a failover occurs in active-backup mode, bonding will issue one or more gratuitous ARPs on the newly active slave. One gratuitous ARP is issued for the bonding master interface and each VLAN interface configured above it, assuming that the interface has at least one IP address configured. Gratuitous ARPs issued for VLAN interfaces are tagged with the appropriate VLAN id.
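As an illustrative example (the interface name is a placeholder), an active-backup bond on RHEL5 that prefers eth0 whenever its link is up could be configured in ifcfg-bond0 with:
BONDING_OPTS="mode=1 miimon=100 primary=eth0"
The optional primary parameter names the slave that should be active whenever it is available; without it, the currently active slave remains active until it fails.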
Balance-xor (mode 2)
XOR policy: transmits based on the selected transmit hash policy. The default policy is a simple ((source MAC address XOR'd with destination MAC address) modulo slave count). Alternate transmit policies may be selected via the xmit_hash_policy option, described in the kernel bonding documentation.
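For example, to hash on layer 3/4 information so that different TCP/UDP flows between the same two hosts can be spread across slaves, a balance-xor bond on RHEL5 could be configured with an illustrative line such as:
BONDING_OPTS="mode=2 miimon=100 xmit_hash_policy=layer3+4"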
Broadcast (mode 3)
Broadcast policy: transmits everything on all slave interfaces.
802.3ad (mode 4)
IEEE 802.3ad dynamic link aggregation: this mode creates aggregation groups that share the same speed and duplex settings, and uses all slaves in the active aggregator according to the 802.3ad specification. Slave selection for outgoing traffic is done according to the transmit hash policy, which may be changed from the default simple XOR policy via the xmit_hash_policy option, documented in the kernel bonding documentation. Note that not all transmit policies may be 802.3ad compliant, particularly with regard to the packet misordering requirements described in section 43.2.4 of the 802.3ad standard. Differing peer implementations will have varying tolerances for noncompliance.
Prerequisites: ethtool support in the base drivers for retrieving the speed and duplex of each slave, and a switch that supports IEEE 802.3ad dynamic link aggregation (most switches require some configuration to enable 802.3ad mode).
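As an illustrative RHEL5 example (the switch ports must also be configured for 802.3ad/LACP), an 802.3ad bond could be configured with:
BONDING_OPTS="mode=4 miimon=100 lacp_rate=slow"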
Balance-tlb (mode 5)
Adaptive transmit load balancing: channel bonding that does not require any special switch support. The outgoing traffic is distributed according to the current load (computed relative to the speed) on each slave. Incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over the MAC address of the failed receiving slave.
Prerequisite: ethtool support in the base drivers for retrieving the speed of each slave.
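Whether a base driver reports its speed can be verified with ethtool; for example (the interface name and reported speed are illustrative):
# ethtool eth0 | grep -i speed
Speed: 1000Mb/s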
Balance-alb (mode 6)
Adaptive load balancing: includes balance-tlb and receive load balancing (rlb) for IPv4 traffic, and does not require any special switch support. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP replies sent by the local system on their way out and overwrites the source hardware address with the unique hardware address of one of the slaves in the bond, such that different peers use different hardware addresses for the server.
Receive traffic from connections created by the server is also balanced. When the local system sends an ARP request the bonding driver copies and saves the peer's IP information from the ARP packet. When the ARP reply arrives from the peer, its hardware address is retrieved and the bonding driver initiates an ARP reply to this peer assigning it to one of the slaves in the bond. A problematic outcome of using ARP negotiation for balancing is that each time that an ARP request is broadcast it uses the hardware address of the bond. Hence, peers learn the hardware address of the bond and the balancing of receive traffic collapses to the current slave. This is handled by sending updates (ARP Replies) to all the peers with their individually assigned hardware address such that the traffic is redistributed. Receive traffic is also redistributed when a new slave is added to the bond and when an inactive slave is reactivated. The receive load is distributed sequentially (round-robin) among the group of highest-speed slaves in the bond.
When a link is reconnected or a new slave joins the bond, the receive traffic is redistributed among all active slaves in the bond by initiating ARP Replies with the selected MAC address to each of the clients. The updelay parameter (detailed in the kernel bonding documentation) must be set to a value equal to or greater than the switch's forwarding delay so that the ARP Replies sent to the peers will not be blocked by the switch.
Prerequisites: ethtool support in the base drivers for retrieving the speed of each slave, and base driver support for setting the hardware address of a device while it is open.
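As an illustrative RHEL5 example, a balance-alb bond with an updelay allowing for a switch forwarding delay of up to 5 seconds could be configured with the line below (updelay must be a multiple of miimon; choose a value at least as large as your switch's forwarding delay):
BONDING_OPTS="mode=6 miimon=100 updelay=5000"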
Red Hat Enterprise Linux 4
The bonding modes supported on RHEL4 are the same as the bonding modes on RHEL5. For details, refer to the kernel document /usr/share/doc/kernel-doc-2.6.9/Documentation/networking/bonding.txt.
Red Hat Enterprise Linux 3
Balance-rr (Mode 0)
Round-robin policy: transmits packets in sequential order from the first available slave through the last.
Active-backup (Mode 1)
Active-backup policy: only one slave in the bond is active. A different slave becomes active if, and only if, the active slave fails. The bond's MAC address is externally visible on only one port (network adapter) to avoid confusing the switch.
In bonding version 2.6.2 or later, when a failover occurs in active-backup mode, bonding will issue one or more gratuitous ARPs on the newly active slave. One gratuitous ARP is issued for the bonding master interface and each VLAN interface configured above it, provided that the interface has at least one IP address configured. Gratuitous ARPs issued for VLAN interfaces are tagged with the appropriate VLAN id.
Balance-xor (Mode 2)
XOR policy: transmits based on the selected transmit hash policy. The default policy is a simple ((source MAC address XOR'd with destination MAC address) modulo slave count). Alternate transmit policies may be selected via the xmit_hash_policy option, described in the kernel bonding documentation.
Bonding parameters in general
It is critical that either the miimon parameter, or the arp_interval and arp_ip_target parameters, be specified; otherwise, serious network degradation will occur during link failures. Very few devices do not support at least miimon, so there is really no reason not to use it.
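Where MII link monitoring is not suitable (for example, if the driver does not report link state reliably), ARP monitoring can be used instead. A hypothetical RHEL5 example that sends an ARP probe to the gateway 192.168.50.1 once per second:
BONDING_OPTS="mode=1 arp_interval=1000 arp_ip_target=192.168.50.1"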
General bonding parameters
max_bonds: specifies the number of bonding devices to create for this instance of the bonding driver. For example, if max_bonds is 3, and the bonding driver is not already loaded, then bond0, bond1 and bond2 will be created. The default value is 1.
ARP monitoring parameters
The arp_validate parameter controls whether the ARP monitor validates the ARP traffic it generates. For the active slave, the validation checks ARP replies to confirm that they were generated by an arp_ip_target. Since backup slaves do not typically receive these replies, the validation performed for backup slaves is on the ARP request sent out via the active slave. It is possible that some switch or network configurations may result in situations wherein the backup slaves do not receive the ARP requests; in such a situation, validation of backup slaves must be disabled.
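For example (the addresses and intervals are illustrative), validation of ARP traffic on the active slave only could be enabled with:
BONDING_OPTS="mode=1 arp_interval=1000 arp_ip_target=192.168.50.1 arp_validate=active"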