author:    Matthew Cover <werekraken@gmail.com>    2018-11-18 08:46:00 +0100
committer: David S. Miller <davem@davemloft.net>   2018-11-19 04:05:43 +0100
commit:    8ebebcba559a1bfbaec7bbda64feb9870b9c58da
tree:      8cccb1975f0c8563935257b8abcccc2b28d66394
parent:    ipv6: Fix PMTU updates for UDP/raw sockets in presence of VRF
tuntap: fix multiqueue rx
When writing packets to a descriptor associated with a combined queue, the
packets should end up on that queue.
Before this change, all packets written to any descriptor associated with a
tap interface ended up on rx-0, even when the descriptor was associated with
a different queue.
The rx traffic can be generated by either of the following.
1. a simple tap program which spins up multiple queues and writes packets
   to each of the file descriptors (a sketch of such a program follows this
   list)
2. tx from a qemu vm with a tap multiqueue netdev
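For reference, a minimal sketch of such a tap program. The device name
"tap0", the queue count NQUEUES, and the helper tap_open_queue() are
illustrative only (not taken from this patch), and error handling is kept
to a bare minimum:

  /* Open one descriptor per queue of a multiqueue tap device and write a
   * frame to each queue.
   */
  #include <fcntl.h>
  #include <string.h>
  #include <unistd.h>
  #include <sys/ioctl.h>
  #include <linux/if.h>
  #include <linux/if_tun.h>

  #define NQUEUES 4

  static int tap_open_queue(const char *name)
  {
  	struct ifreq ifr;
  	int fd = open("/dev/net/tun", O_RDWR);

  	if (fd < 0)
  		return -1;

  	memset(&ifr, 0, sizeof(ifr));
  	ifr.ifr_flags = IFF_TAP | IFF_NO_PI | IFF_MULTI_QUEUE;
  	strncpy(ifr.ifr_name, name, IFNAMSIZ - 1);

  	/* Repeating TUNSETIFF with IFF_MULTI_QUEUE and the same name
  	 * attaches this fd as an additional queue of the device.
  	 */
  	if (ioctl(fd, TUNSETIFF, &ifr) < 0) {
  		close(fd);
  		return -1;
  	}
  	return fd;
  }

  int main(void)
  {
  	unsigned char frame[64] = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff };
  	int i, fds[NQUEUES];

  	for (i = 0; i < NQUEUES; i++)
  		fds[i] = tap_open_queue("tap0");

  	/* Each frame enters the kernel through the queue bound to fds[i];
  	 * with the fix it is also received on that rx queue.
  	 */
  	for (i = 0; i < NQUEUES; i++)
  		write(fds[i], frame, sizeof(frame));

  	return 0;
  }

The interface should be up before writing (e.g. ip link set tap0 up), or
the injected frames are simply dropped on receive.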
The queue for rx traffic can be observed by either of the following (done
on the hypervisor in the qemu case).
1. a simple netmap program which opens and reads from per-queue
   descriptors (see the sketch after this list)
2. configuring RPS and doing per-cpu captures with rxtxcpu
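A per-queue netmap reader can look roughly like the sketch below. It is an
assumption-laden illustration, not code from this patch: it uses netmap's
nm_open()/nm_nextpkt() helpers from net/netmap_user.h, and "netmap:tap0-0"
(bind to hw ring 0 only) is an illustrative name.

  /* Bind to a single rx ring of tap0 via netmap and print what arrives.
   * Changing the ring number in "netmap:tap0-N" reveals which rx queue
   * the frames written above actually landed on.
   */
  #define NETMAP_WITH_LIBS
  #include <net/netmap_user.h>
  #include <poll.h>
  #include <stdio.h>

  int main(void)
  {
  	struct nm_desc *d = nm_open("netmap:tap0-0", NULL, 0, NULL);
  	struct nm_pkthdr hdr;
  	struct pollfd pfd;
  	unsigned long npkts = 0;

  	if (!d)
  		return 1;

  	pfd.fd = NETMAP_FD(d);
  	pfd.events = POLLIN;

  	for (;;) {
  		poll(&pfd, 1, -1);
  		while (nm_nextpkt(d, &hdr))
  			printf("ring 0: pkt %lu, %u bytes\n",
  			       ++npkts, (unsigned)hdr.len);
  	}
  }

Before the fix every frame shows up on ring 0 no matter which descriptor it
was written to; afterwards each frame appears on the ring matching its
queue.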
Alternatively, if you printk() the return value of skb_get_rx_queue() just
before each instance of netif_receive_skb() in tun.c, you will get 65535
for every skb.
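That 65535 is what skb_get_rx_queue() reports when no queue has been
recorded: the helper returns skb->queue_mapping - 1 as a u16, and an unset
queue_mapping of 0 wraps to 0xffff. The diagnostic described above is just
a one-liner of this shape (illustrative, not part of the patch):

  	/* drivers/net/tun.c, immediately before an existing call */
  	printk(KERN_INFO "tun: rx queue %u\n", skb_get_rx_queue(skb));
  	netif_receive_skb(skb);

Conversely, skb_record_rx_queue(skb, i) stores i + 1 in queue_mapping,
which is why recording the queue before each netif_receive_skb() makes the
skb report the queue it was written to.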
Calling skb_record_rx_queue() to set the rx queue to the queue_index fixes
the association between descriptor and rx queue.
Signed-off-by: Matthew Cover <matthew.cover@stackpath.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
 drivers/net/tun.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/drivers/net/tun.c b/drivers/net/tun.c
index 060135ceaf0e..e244f5d7512a 100644
--- a/drivers/net/tun.c
+++ b/drivers/net/tun.c
@@ -1536,6 +1536,7 @@ static void tun_rx_batched(struct tun_struct *tun, struct tun_file *tfile,
 
 	if (!rx_batched || (!more && skb_queue_empty(queue))) {
 		local_bh_disable();
+		skb_record_rx_queue(skb, tfile->queue_index);
 		netif_receive_skb(skb);
 		local_bh_enable();
 		return;
@@ -1555,8 +1556,11 @@ static void tun_rx_batched(struct tun_struct *tun, struct tun_file *tfile,
 		struct sk_buff *nskb;
 
 		local_bh_disable();
-		while ((nskb = __skb_dequeue(&process_queue)))
+		while ((nskb = __skb_dequeue(&process_queue))) {
+			skb_record_rx_queue(nskb, tfile->queue_index);
 			netif_receive_skb(nskb);
+		}
+		skb_record_rx_queue(skb, tfile->queue_index);
 		netif_receive_skb(skb);
 		local_bh_enable();
 	}
@@ -2451,6 +2455,7 @@ build:
 	if (!rcu_dereference(tun->steering_prog))
 		rxhash = __skb_get_hash_symmetric(skb);
 
+	skb_record_rx_queue(skb, tfile->queue_index);
 	netif_receive_skb(skb);
 
 	stats = get_cpu_ptr(tun->pcpu_stats);