author | Adrian Hunter <adrian.hunter@intel.com> | 2020-04-29 17:07:43 +0200
---|---|---
committer | Arnaldo Carvalho de Melo <acme@redhat.com> | 2020-05-05 21:35:29 +0200
commit | 86d67180b920d178ae1c2923f50a0759d6ce1a10 (patch) |
tree | 79534f81a6c90f50bf7085b0ab3f27fc7d8582f3 /tools/perf/util/thread-stack.h |
parent | perf tools: Simplify checking if SMT is active. (diff) |
download | linux-86d67180b920d178ae1c2923f50a0759d6ce1a10.tar.xz, linux-86d67180b920d178ae1c2923f50a0759d6ce1a10.zip |
perf thread-stack: Add branch stack support
Intel PT already has support for creating branch stacks for each context
(per-cpu or per-thread). In the more common per-cpu case, the branch stack
is not separated for different threads, instead being cleared in between
each sample.
That approach will not work very well for adding branch stacks to
regular events. The branch stacks really need to be accumulated
separately for each thread.
As a start to accomplishing that, this patch adds support for maintaining
a branch stack in the thread-stack. The advantages are:
1. the branches are accumulated separately for each thread
2. the branch stack is cleared only in between continuous traces
This helps pave the way for adding branch stacks to regular events, not
just synthesized events as at present.
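To make the accumulation model concrete, a minimal conceptual sketch follows (plain C, not the actual thread-stack implementation; the structure and all names are invented for illustration): each thread owns its own fixed-size ring of branch entries, and the ring is flushed only when a new trace number starts, not after every sample.

/*
 * Conceptual sketch only -- not the perf thread-stack code. It models a
 * per-thread branch ring that survives across samples and is flushed
 * only when the trace number changes.
 */
#include <stdint.h>
#include <string.h>

#define BR_RING_SZ 64

struct br_entry {
	uint64_t from;
	uint64_t to;
};

struct thread_br_ring {
	uint64_t trace_nr;	/* trace this ring belongs to */
	unsigned int pos;	/* next slot to write */
	unsigned int cnt;	/* valid entries, capped at BR_RING_SZ */
	struct br_entry entries[BR_RING_SZ];
};

static void br_ring_add(struct thread_br_ring *r, uint64_t trace_nr,
			uint64_t from, uint64_t to)
{
	if (r->trace_nr != trace_nr) {
		/* New trace: start from an empty branch stack. */
		memset(r, 0, sizeof(*r));
		r->trace_nr = trace_nr;
	}
	r->entries[r->pos].from = from;
	r->entries[r->pos].to = to;
	r->pos = (r->pos + 1) % BR_RING_SZ;
	if (r->cnt < BR_RING_SZ)
		r->cnt++;
}

Because each thread holds its own ring, a sample can be filled from the branches that thread actually executed, rather than from whatever the shared per-cpu context happened to record last.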
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lore.kernel.org/lkml/20200429150751.12570-2-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Diffstat (limited to 'tools/perf/util/thread-stack.h')
-rw-r--r-- | tools/perf/util/thread-stack.h | 5 |
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/tools/perf/util/thread-stack.h b/tools/perf/util/thread-stack.h
index 8962ddc4e1ab..c279a0cbbdd0 100644
--- a/tools/perf/util/thread-stack.h
+++ b/tools/perf/util/thread-stack.h
@@ -81,13 +81,16 @@ struct call_return_processor {
 };
 
 int thread_stack__event(struct thread *thread, int cpu, u32 flags, u64 from_ip,
-			u64 to_ip, u16 insn_len, u64 trace_nr);
+			u64 to_ip, u16 insn_len, u64 trace_nr, bool callstack,
+			unsigned int br_stack_sz, bool mispred_all);
 void thread_stack__set_trace_nr(struct thread *thread, int cpu, u64 trace_nr);
 void thread_stack__sample(struct thread *thread, int cpu, struct ip_callchain *chain,
 			  size_t sz, u64 ip, u64 kernel_start);
 void thread_stack__sample_late(struct thread *thread, int cpu,
 			       struct ip_callchain *chain, size_t sz, u64 ip,
 			       u64 kernel_start);
+void thread_stack__br_sample(struct thread *thread, int cpu,
+			     struct branch_stack *dst, unsigned int sz);
 int thread_stack__flush(struct thread *thread);
 void thread_stack__free(struct thread *thread);
 size_t thread_stack__depth(struct thread *thread, int cpu);
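For illustration, here is a hedged sketch of how a caller might drive the extended interface. Only thread_stack__event() and thread_stack__br_sample() are taken from the header above; the include paths, the BR_STACK_SZ constant, the helper names and the struct branch_stack allocation scheme are assumptions made for this example.

/* Sketch only: assumes perf's internal headers and an already
 * resolved struct thread *thread / int cpu from the calling tool. */
#include <stdlib.h>
#include "thread-stack.h"	/* assumed include path */
#include "branch.h"		/* assumed home of struct branch_stack */

#define BR_STACK_SZ 64		/* illustrative branch stack depth */

static struct branch_stack *alloc_br_stack(unsigned int sz)
{
	/* branch_stack header plus sz branch entries */
	return calloc(1, sizeof(struct branch_stack) +
			 sz * sizeof(struct branch_entry));
}

static int record_branch(struct thread *thread, int cpu, u32 flags,
			 u64 from_ip, u64 to_ip, u16 insn_len, u64 trace_nr)
{
	/* callstack=false: only the branch stack is maintained here;
	 * mispred_all=false: do not mark every branch as mispredicted. */
	return thread_stack__event(thread, cpu, flags, from_ip, to_ip,
				   insn_len, trace_nr, false,
				   BR_STACK_SZ, false);
}

static void snapshot_branches(struct thread *thread, int cpu,
			      struct branch_stack *dst)
{
	/* Copy up to BR_STACK_SZ accumulated branches for this thread. */
	thread_stack__br_sample(thread, cpu, dst, BR_STACK_SZ);
}

The mispred_all parameter appears intended for callers that have no real prediction information for synthesized branches and want every entry flagged as mispredicted instead of left unmarked.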