path: root/tools
authorToshiaki Makita <toshiaki.makita1@gmail.com>2019-06-13 11:39:59 +0200
committerDaniel Borkmann <daniel@iogearbox.net>2019-06-25 14:26:54 +0200
commit9cda7807ee1e25a3771b5357d9fb12991b2550f9 (patch)
tree747233a818a6d631686f424c52f5838dd3909655 /tools
parentxdp: Add tracepoint for bulk XDP_TX (diff)
veth: Support bulk XDP_TX
XDP_TX is similar to XDP_REDIRECT in that it essentially redirects packets to the device itself. XDP_REDIRECT has a bulk transmit mechanism to avoid the heavy cost of indirect calls, and it also reduces lock acquisition on destination devices that need locks, such as veth and tun. XDP_TX does not use indirect calls, but drivers that require locks can benefit from bulk transmit for XDP_TX as well.

This patch introduces a bulk transmit mechanism in veth using a bulk queue on the stack, and improves XDP_TX performance by about 9%.

Here are single-core/single-flow XDP_TX test results. CPU consumption figures are taken from "perf report --no-child".

- Before:

  7.26 Mpps

  _raw_spin_lock  7.83%
  veth_xdp_xmit  12.23%

- After:

  7.94 Mpps

  _raw_spin_lock  1.08%
  veth_xdp_xmit   6.10%

v2:
- Use the stack for the bulk queue instead of a global variable.

Signed-off-by: Toshiaki Makita <toshiaki.makita1@gmail.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
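The batching idea behind the patch can be illustrated outside the kernel: instead of taking the destination device's lock once per frame, frames accumulate in a small array on the caller's stack and are transmitted in one locked critical section. The following is a minimal user-space sketch of that pattern, not the driver code itself; the names bulk_queue, bq_enqueue, bq_flush and BULK_SIZE are illustrative and do not correspond to the actual symbols introduced in veth.

    #include <pthread.h>
    #include <stdio.h>

    #define BULK_SIZE 16  /* frames buffered before a flush (illustrative) */

    struct frame { int id; };

    /* Stand-in for the destination device's lock-protected TX ring. */
    static pthread_mutex_t dev_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Bulk queue lives on the stack of the polling function,
     * so each invocation gets its own private buffer (the v2 change). */
    struct bulk_queue {
            struct frame *q[BULK_SIZE];
            int count;
    };

    /* One lock acquisition transmits up to BULK_SIZE frames. */
    static void bq_flush(struct bulk_queue *bq)
    {
            if (!bq->count)
                    return;
            pthread_mutex_lock(&dev_lock);
            for (int i = 0; i < bq->count; i++)
                    printf("xmit frame %d\n", bq->q[i]->id);
            pthread_mutex_unlock(&dev_lock);
            bq->count = 0;
    }

    /* XDP_TX path: enqueue cheaply; flush only when the queue fills. */
    static void bq_enqueue(struct bulk_queue *bq, struct frame *f)
    {
            if (bq->count == BULK_SIZE)
                    bq_flush(bq);
            bq->q[bq->count++] = f;
    }

    int main(void)
    {
            struct bulk_queue bq = { .count = 0 };  /* on the stack */
            struct frame frames[40];

            for (int i = 0; i < 40; i++) {
                    frames[i].id = i;
                    bq_enqueue(&bq, &frames[i]);
            }
            bq_flush(&bq);  /* flush the remainder at end of the poll loop */
            return 0;
    }

With a batch of 16, the lock is taken roughly once per 16 frames instead of once per frame, which matches the drop in _raw_spin_lock time reported above. Keeping the queue on the stack rather than in a global variable (the v2 change) makes it per-invocation by construction, so no additional synchronization of the queue itself is needed.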
Diffstat (limited to 'tools')
0 files changed, 0 insertions, 0 deletions