author      Paolo Bonzini <pbonzini@redhat.com>    2019-09-10 19:09:14 +0200
committer   Paolo Bonzini <pbonzini@redhat.com>    2019-09-10 19:09:14 +0200
commit      32d1d15c52c17a7f0368e6a7084796e16ec481d3 (patch)
tree        1ef7e045c610279617003d750501224190f12a30 /kernel/trace/trace_functions_graph.c
parent      Merge tag 'kvm-ppc-next-5.4-1' of git://git.kernel.org/pub/scm/linux/kernel/g... (diff)
parent      KVM: arm/arm64: vgic: Allow more than 256 vcpus for KVM_IRQ_LINE (diff)
download    linux-32d1d15c52c17a7f0368e6a7084796e16ec481d3.tar.xz
            linux-32d1d15c52c17a7f0368e6a7084796e16ec481d3.zip
Merge tag 'kvmarm-5.4' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into HEAD
KVM/arm updates for 5.4
- New ITS translation cache
- Allow up to 512 CPUs to be supported with GICv3 (for real this time)
- Now call kvm_arch_vcpu_blocking early in the blocking sequence
- Tidy-up device mappings in S2 when DIC is available
- Clean icache invalidation on VMID rollover
- General cleanup
Diffstat (limited to 'kernel/trace/trace_functions_graph.c')
-rw-r--r--   kernel/trace/trace_functions_graph.c   17
1 file changed, 7 insertions, 10 deletions
diff --git a/kernel/trace/trace_functions_graph.c b/kernel/trace/trace_functions_graph.c
index 69ebf3c2f1b5..78af97163147 100644
--- a/kernel/trace/trace_functions_graph.c
+++ b/kernel/trace/trace_functions_graph.c
@@ -137,6 +137,13 @@ int trace_graph_entry(struct ftrace_graph_ent *trace)
         if (trace_recursion_test(TRACE_GRAPH_NOTRACE_BIT))
                 return 0;
 
+        /*
+         * Do not trace a function if it's filtered by set_graph_notrace.
+         * Make the index of ret stack negative to indicate that it should
+         * ignore further functions. But it needs its own ret stack entry
+         * to recover the original index in order to continue tracing after
+         * returning from the function.
+         */
         if (ftrace_graph_notrace_addr(trace->func)) {
                 trace_recursion_set(TRACE_GRAPH_NOTRACE_BIT);
                 /*
@@ -156,16 +163,6 @@ int trace_graph_entry(struct ftrace_graph_ent *trace)
                 return 0;
 
         /*
-         * Do not trace a function if it's filtered by set_graph_notrace.
-         * Make the index of ret stack negative to indicate that it should
-         * ignore further functions. But it needs its own ret stack entry
-         * to recover the original index in order to continue tracing after
-         * returning from the function.
-         */
-        if (ftrace_graph_notrace_addr(trace->func))
-                return 1;
-
-        /*
          * Stop here if tracing_threshold is set. We only write function return
          * events to the ring buffer.
          */
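Not part of the commit itself, but for readers unfamiliar with the mechanism the moved comment describes: a function matching the set_graph_notrace filter (configured by writing function names to the set_graph_notrace file in tracefs) still gets its own return-stack entry, so its return event can restore normal tracing, while everything called underneath it is ignored. Below is a minimal user-space C sketch of that pattern. All names in it (toy_graph_entry, toy_graph_return, notrace_addr, ret_stack) are made up for illustration and are not the kernel's ftrace API.

/*
 * Toy model of the set_graph_notrace behaviour described above.
 * A filtered function is still pushed on the return stack (so we know
 * when it returns), but a "notrace depth" is remembered and every
 * nested entry is ignored until we pop back to that depth.
 */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define MAX_DEPTH 64

static const char *ret_stack[MAX_DEPTH];   /* functions that have not returned yet */
static int curr_depth;                     /* next free ret_stack slot */
static int notrace_depth = -1;             /* depth at which the notrace filter hit */

/* stand-in for ftrace_graph_notrace_addr(): filter by name instead of address */
static bool notrace_addr(const char *func)
{
        return strcmp(func, "helper") == 0;
}

/* returns 1 if a return-stack entry was pushed (so a return event will follow) */
static int toy_graph_entry(const char *func)
{
        if (curr_depth >= MAX_DEPTH)
                return 0;

        /* inside a filtered function: ignore nested entries entirely */
        if (notrace_depth >= 0)
                return 0;

        /*
         * Filtered function: push its own ret-stack entry so the matching
         * return event can restore normal tracing, but emit no output.
         */
        if (notrace_addr(func)) {
                notrace_depth = curr_depth;
                ret_stack[curr_depth++] = func;
                return 1;
        }

        ret_stack[curr_depth++] = func;
        printf("%*senter %s\n", 2 * curr_depth, "", func);
        return 1;
}

static void toy_graph_return(void)
{
        if (curr_depth == 0)
                return;

        curr_depth--;

        /* leaving the filtered function: resume tracing */
        if (curr_depth == notrace_depth) {
                notrace_depth = -1;
                return;
        }

        if (notrace_depth < 0)
                printf("%*sleave %s\n", 2 * (curr_depth + 1), "",
                       ret_stack[curr_depth]);
}

int main(void)
{
        /* simulated call sequence: main -> helper -> leaf, then main -> leaf */
        int t_main   = toy_graph_entry("main");
        int t_helper = toy_graph_entry("helper"); /* filtered */
        int t_leaf1  = toy_graph_entry("leaf");   /* hidden: nested in helper */

        if (t_leaf1)
                toy_graph_return();
        if (t_helper)
                toy_graph_return();               /* tracing resumes here */

        int t_leaf2 = toy_graph_entry("leaf");    /* traced again */
        if (t_leaf2)
                toy_graph_return();
        if (t_main)
                toy_graph_return();
        return 0;
}

Running the sketch prints enter/leave events only for main and the second call to leaf; helper and the leaf call nested inside it produce nothing. That mirrors why the kernel code must return 1 for the filtered function even though no event is emitted: the return hook is what clears the notrace state so tracing can continue afterwards.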