author		Mahesh Bandewar <maheshb@google.com>	2014-04-23 01:30:22 +0200
committer	David S. Miller <davem@davemloft.net>	2014-04-24 19:04:34 +0200
commit		e9f0fb88493570200b8dc1cc02d3e676412d25bc (patch)
tree		edd781d8c00772c53e0736a95cf26f0ef2e620dc /drivers/net/bonding/bond_alb.c
parent		bonding: Added bond_tlb_xmit() for tlb mode. (diff)
bonding: Add tlb_dynamic_lb parameter for tlb mode
The aggressive load balancing causes packet re-ordering as active
flows are moved from one slave to another within the group. Sometimes
this aggressive LB is not necessary if the preference is for less
re-ordering. Setting this parameter to "0" disables the dynamic flow
shuffling, minimizing packet re-ordering. The side effect is that the
setup has to live with the static load balancing that the hashing
distribution provides, but this impact is less severe if the correct
xmit-hashing-policy is used for the tlb setup.
The default value of the parameter is "1", mimicking the earlier
behavior.
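
As a usage sketch (assuming a kernel carrying this patch and a bond
named bond0 already configured in balance-tlb mode; the sysfs path and
whether the option can be changed while the bond is up may vary by
kernel version), the parameter can be set at module load time or
per-bond via sysfs:

	# at module load time
	modprobe bonding mode=balance-tlb tlb_dynamic_lb=0

	# or per-bond via sysfs
	echo 0 > /sys/class/net/bond0/bonding/tlb_dynamic_lb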
Ran the netperf test with 200 streams for 1 min between two hosts with a
4x1G trunk (xmit-lb mode with xmit-policy L3+4) before and after these
changes. The following command was used for each of the 200 instances -
netperf -t TCP_RR -l 60 -s 5 -H <host> -- -r81920,81920
Transactions per second:
Before change: 1,367.11
After change: 1,470.65
Change-Id: Ie3f75c77282cf602e83a6e833c6eb164e72a0990
Signed-off-by: Mahesh Bandewar <maheshb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'drivers/net/bonding/bond_alb.c')
-rw-r--r--	drivers/net/bonding/bond_alb.c	19
1 file changed, 15 insertions, 4 deletions
diff --git a/drivers/net/bonding/bond_alb.c b/drivers/net/bonding/bond_alb.c
index 153232ed4b3f..70de039dad2e 100644
--- a/drivers/net/bonding/bond_alb.c
+++ b/drivers/net/bonding/bond_alb.c
@@ -1356,7 +1356,8 @@ static int bond_do_alb_xmit(struct sk_buff *skb, struct bonding *bond,
 	if (!tx_slave) {
 		/* unbalanced or unassigned, send through primary */
 		tx_slave = rcu_dereference(bond->curr_active_slave);
-		bond_info->unbalanced_load += skb->len;
+		if (bond->params.tlb_dynamic_lb)
+			bond_info->unbalanced_load += skb->len;
 	}
 
 	if (tx_slave && SLAVE_IS_OK(tx_slave)) {
@@ -1369,7 +1370,7 @@ static int bond_do_alb_xmit(struct sk_buff *skb, struct bonding *bond,
 		goto out;
 	}
 
-	if (tx_slave) {
+	if (tx_slave && bond->params.tlb_dynamic_lb) {
 		_lock_tx_hashtbl(bond);
 		__tlb_clear_slave(bond, tx_slave, 0);
 		_unlock_tx_hashtbl(bond);
@@ -1399,11 +1400,21 @@ int bond_tlb_xmit(struct sk_buff *skb, struct net_device *bond_dev)
 		/* In case of IPX, it will falback to L2 hash */
 		case htons(ETH_P_IPV6):
 			hash_index = bond_xmit_hash(bond, skb);
-			tx_slave = tlb_choose_channel(bond, hash_index & 0xFF, skb->len);
+			if (bond->params.tlb_dynamic_lb) {
+				tx_slave = tlb_choose_channel(bond,
+							      hash_index & 0xFF,
+							      skb->len);
+			} else {
+				struct list_head *iter;
+				int idx = hash_index % bond->slave_cnt;
+
+				bond_for_each_slave_rcu(bond, tx_slave, iter)
+					if (--idx < 0)
+						break;
+			}
 			break;
 		}
 	}
-
 	return bond_do_alb_xmit(skb, bond, tx_slave);
 }
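
To make the new non-dynamic path easier to follow, below is a minimal
userspace C sketch of the static selection performed in the else branch
above; struct slave, the fixed array, and pick_slave() are simplified
stand-ins for the kernel's RCU-protected slave list and are not part of
the driver:

#include <stdio.h>

/* Simplified stand-in for the kernel's slave list; the driver walks a
 * struct list_head with bond_for_each_slave_rcu() instead. */
struct slave {
	const char *name;
};

/* Mirror of the non-dynamic path: reduce the xmit hash modulo the slave
 * count, then count down while walking the list, exactly like the
 * "if (--idx < 0) break;" loop in bond_tlb_xmit(). */
static const struct slave *pick_slave(const struct slave *slaves,
				      int slave_cnt, unsigned int hash_index)
{
	const struct slave *tx_slave = NULL;
	int idx = hash_index % slave_cnt;
	int i;

	for (i = 0; i < slave_cnt; i++) {
		tx_slave = &slaves[i];
		if (--idx < 0)
			break;
	}
	return tx_slave;
}

int main(void)
{
	const struct slave trunk[] = {
		{ "eth0" }, { "eth1" }, { "eth2" }, { "eth3" }
	};
	unsigned int hash;

	/* The same hash always maps to the same slave, so flows do not
	 * shuffle between slaves. */
	for (hash = 0; hash < 8; hash++)
		printf("hash %u -> %s\n", hash,
		       pick_slave(trunk, 4, hash)->name);
	return 0;
}

Because the chosen slave depends only on the hash, a given flow never
migrates between slaves; that is what removes the re-ordering, at the
cost of whatever static balance the hash distribution happens to give.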