author:    Alexei Starovoitov <ast@fb.com>        2016-04-19 05:11:50 +0200
committer: David S. Miller <davem@davemloft.net>  2016-04-21 19:48:20 +0200
commit:    85b67bcb7e4a23ced05e7020bf5843b9857f6881
tree:      5cb64f3464f1beeed2f950cc3cc2f41b121de2bf /kernel/events
parent:    net: dsa: remove tag_protocol from dsa_switch
perf, bpf: minimize the size of perf_trace_*() tracepoint handler
Move trace_call_bpf() into a helper function to minimize the size
of the perf_trace_*() tracepoint handlers; a simplified sketch of the
resulting handler shape follows the size numbers below.
    text    data     bss      dec     hex filename
10541679 5526646 2945024 19013349 1221ee5 vmlinux_before
10509422 5526646 2945024 18981092 121a0e4 vmlinux_after
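The handlers being shrunk are generated per tracepoint by macro expansion
in include/trace/perf.h. The sketch below is a hand-written simplification,
not the literal macro output: the event name, the local variable names and
the elided buffer-setup steps are illustrative only. It shows the shape of
the change: the BPF check and submission code that used to be expanded into
every handler is replaced by a single call to the shared out-of-line helper
added by this patch.

/*
 * Hand-written sketch of a generated perf_trace_<event>() handler after
 * this patch; names and the elided setup are illustrative, not the real
 * macro expansion from include/trace/perf.h.
 */
static notrace void perf_trace_example_event(void *__data /* , proto */)
{
	struct trace_event_call *event_call = __data;
	struct hlist_head *head;
	struct pt_regs *__regs;
	void *entry;
	int __entry_size, rctx;

	/*
	 * ... per-event setup elided: compute __entry_size, obtain the
	 * per-CPU buffer as entry/rctx/__regs, snapshot ip/sp with
	 * perf_fetch_caller_regs(__regs), look up head, fill in the
	 * event fields ...
	 */

	/*
	 * Previously the event_call->prog check, the trace_call_bpf()
	 * invocation and the perf_tp_event() submission were expanded
	 * inline here, once per tracepoint.  Each handler now ends in a
	 * single call to the shared helper instead:
	 */
	perf_trace_run_bpf_submit(entry, __entry_size, rctx, event_call,
				  1 /* count, typically 1 per hit */,
				  __regs, head, NULL /* task */);
}

The helper itself, added below in kernel/events/core.c, then decides whether
to hand the record to the BPF program, discard it, or submit it through
perf_tp_event().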
It may seem that perf_fetch_caller_regs() could be moved into the helper
as well, but that would be incorrect: the ip/sp it records would then
describe the helper's own frame rather than the tracepoint call site.
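The reason is that perf_fetch_caller_regs() takes its ip/sp snapshot at the
point where it is expanded. The user-space analogy below is not kernel code;
the helper names and the plain GCC builtins are assumptions made only to
illustrate the effect: once the capture moves into an out-of-line function,
it records that function's frame and return address instead of the call
site the profile is supposed to describe.

/*
 * User-space analogy (hypothetical, not kernel code): capturing "who called
 * me and with what stack" must happen in the frame you care about.
 */
#include <stdio.h>

/* Expanded at the call site, like an inlined/macro capture. */
#define capture_here(ip, sp)					\
	do {							\
		*(ip) = __builtin_return_address(0);		\
		*(sp) = __builtin_frame_address(0);		\
	} while (0)

/* Out of line: records its own frame, not its caller's. */
__attribute__((noinline))
static void capture_in_helper(void **ip, void **sp)
{
	*ip = __builtin_return_address(0);
	*sp = __builtin_frame_address(0);
}

int main(void)
{
	void *ip1, *sp1, *ip2, *sp2;

	capture_here(&ip1, &sp1);	/* ip1: main()'s return address, sp1: main()'s frame */
	capture_in_helper(&ip2, &sp2);	/* ip2: this call site in main(), sp2: the helper's frame */

	printf("inline capture: ip=%p sp=%p\n", ip1, sp1);
	printf("helper capture: ip=%p sp=%p\n", ip2, sp2);
	return 0;
}

The two snapshots differ, which is why perf_fetch_caller_regs() has to stay
inside perf_trace_*() itself while only trace_call_bpf() and the submission
move into the new helper.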
bpf+tracepoint performance is not affected, since
perf_swevent_put_recursion_context() is now called from within
kernel/events/core.c and gets inlined; its EXPORT_SYMBOL_GPL can
therefore be dropped as well.

There is no measurable change for normal perf tracepoints.

Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'kernel/events')
 kernel/events/core.c | 20 +++++++++++++++++++-
 1 file changed, 19 insertions(+), 1 deletion(-)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 5056abffef27..9eb23dc27462 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -6741,7 +6741,6 @@ void perf_swevent_put_recursion_context(int rctx)
 
 	put_recursion_context(swhash->recursion, rctx);
 }
-EXPORT_SYMBOL_GPL(perf_swevent_put_recursion_context);
 
 void ___perf_sw_event(u32 event_id, u64 nr, struct pt_regs *regs, u64 addr)
 {
@@ -6998,6 +6997,25 @@ static int perf_tp_event_match(struct perf_event *event,
 	return 1;
 }
 
+void perf_trace_run_bpf_submit(void *raw_data, int size, int rctx,
+			       struct trace_event_call *call, u64 count,
+			       struct pt_regs *regs, struct hlist_head *head,
+			       struct task_struct *task)
+{
+	struct bpf_prog *prog = call->prog;
+
+	if (prog) {
+		*(struct pt_regs **)raw_data = regs;
+		if (!trace_call_bpf(prog, raw_data) || hlist_empty(head)) {
+			perf_swevent_put_recursion_context(rctx);
+			return;
+		}
+	}
+	perf_tp_event(call->event.type, count, raw_data, size, regs, head,
+		      rctx, task);
+}
+EXPORT_SYMBOL_GPL(perf_trace_run_bpf_submit);
+
 void perf_tp_event(u16 event_type, u64 count, void *record, int entry_size,
 		   struct pt_regs *regs, struct hlist_head *head, int rctx,
 		   struct task_struct *task)