author      Alexei Starovoitov <ast@fb.com>          2016-03-08 06:57:13 +0100
committer   David S. Miller <davem@davemloft.net>    2016-03-08 21:28:30 +0100
commit      b121d1e74d1f24654bdc3165d3db1ca149501356 (patch)
tree        aa0326edc95e2152a2277386b5363beb7768f7dc /kernel/trace/bpf_trace.c
parent      Merge branch 'ipv6-per-netns-gc' (diff)
bpf: prevent kprobe+bpf deadlocks
If a kprobe is placed within the update or delete hash map helpers, which
hold the bucket spinlock, and the triggered bpf program tries to grab the
spinlock for the same bucket on the same cpu, it will deadlock. Fix it by
extending the existing recursion prevention mechanism.

Note that map_lookup and the other tracing helpers don't have this
problem, since they don't hold any locks and don't modify global data.
bpf_trace_printk has its own recursion check and is fine as well.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
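Concretely, the fix reuses the bpf_prog_active per-cpu counter (whose
file-local definition is removed in the diff below so the counter can be
shared) to guard the syscall-side map update/delete paths as well. The
following is a minimal sketch of that pattern, not the verbatim commit;
map_update_guarded() is a hypothetical stand-in for the real syscall helper:

#include <linux/bpf.h>
#include <linux/percpu.h>
#include <linux/preempt.h>
#include <linux/rcupdate.h>

DECLARE_PER_CPU(int, bpf_prog_active);

static int map_update_guarded(struct bpf_map *map, void *key, void *value,
                              u64 flags)
{
        int err;

        /* Mark this cpu as being inside a bpf map operation. A kprobe
         * firing inside the helper below would enter trace_call_bpf(),
         * see bpf_prog_active != 1, and back off instead of trying to
         * take the bucket spinlock we already hold.
         */
        preempt_disable();
        __this_cpu_inc(bpf_prog_active);
        rcu_read_lock();
        err = map->ops->map_update_elem(map, key, value, flags);
        rcu_read_unlock();
        __this_cpu_dec(bpf_prog_active);
        preempt_enable();

        return err;
}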
Diffstat (limited to 'kernel/trace/bpf_trace.c')
-rw-r--r--    kernel/trace/bpf_trace.c    2
1 file changed, 0 insertions, 2 deletions
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 4b8caa392b86..3e4ffb3ace5f 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -13,8 +13,6 @@
 #include <linux/ctype.h>
 #include "trace.h"
 
-static DEFINE_PER_CPU(int, bpf_prog_active);
-
 /**
  * trace_call_bpf - invoke BPF program
  * @prog: BPF program
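For reference, the "existing recursion prevention mechanism" being extended
is the check at the top of trace_call_bpf(), whose kernel-doc header closes
the hunk above. A simplified sketch of that check, based on the function as
it stood around this commit (not the verbatim source):

unsigned int trace_call_bpf(struct bpf_prog *prog, void *ctx)
{
        unsigned int ret;

        if (in_nmi())   /* not supported, don't bother */
                return 1;

        preempt_disable();
        if (unlikely(__this_cpu_inc_return(bpf_prog_active) != 1)) {
                /* another bpf program or map helper is already active on
                 * this cpu: refuse to nest rather than deadlock on a
                 * bucket spinlock that may already be held
                 */
                ret = 0;
                goto out;
        }

        rcu_read_lock();
        ret = BPF_PROG_RUN(prog, ctx);
        rcu_read_unlock();

out:
        __this_cpu_dec(bpf_prog_active);
        preempt_enable();

        return ret;
}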