author	Steven Rostedt <srostedt@redhat.com>	2009-02-19 00:33:57 +0100
committer	Steven Rostedt <srostedt@redhat.com>	2009-02-19 04:04:01 +0100
commit	0c5119c1e655e0719a69601b1049acdd5ec1c125 (patch)
tree	e808e36e274afc7c6521f69194e6fc2597e189bd /kernel/trace
parent	tracing/function-graph-tracer: trace the idle tasks (diff)
download	linux-0c5119c1e655e0719a69601b1049acdd5ec1c125.tar.xz
	linux-0c5119c1e655e0719a69601b1049acdd5ec1c125.zip
tracing: disable tracing while testing ring buffer
Impact: fix to prevent hard lockup on self tests
If one of the tracers is broken and constantly fills the ring
buffer while the ring buffer self test is running, it will hang
the box. The reason is that the test is a consumer that will not
stop until the ring buffer is empty. But if the tracer is broken and
constantly produces input to the buffer, the test never
ends. The result is a hard lockup of the box.
This happened when KALLSYMS was not defined and the dynamic ftrace
test constantly filled the ring buffer: the filter failed, so all
functions were being traced, and something that was being called
repeatedly kept filling the buffer.
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Diffstat (limited to 'kernel/trace')
-rw-r--r--	kernel/trace/trace_selftest.c	| 9 +++++++++
1 file changed, 9 insertions(+), 0 deletions(-)
diff --git a/kernel/trace/trace_selftest.c b/kernel/trace/trace_selftest.c
index 88c8eb70f54a..a7e0ef662f9f 100644
--- a/kernel/trace/trace_selftest.c
+++ b/kernel/trace/trace_selftest.c
@@ -57,11 +57,20 @@ static int trace_test_buffer(struct trace_array *tr, unsigned long *count)
 	cnt = ring_buffer_entries(tr->buffer);
 
+	/*
+	 * The trace_test_buffer_cpu runs a while loop to consume all data.
+	 * If the calling tracer is broken, and is constantly filling
+	 * the buffer, this will run forever, and hard lock the box.
+	 * We disable the ring buffer while we do this test to prevent
+	 * a hard lock up.
+	 */
+	tracing_off();
 	for_each_possible_cpu(cpu) {
 		ret = trace_test_buffer_cpu(tr, cpu);
 		if (ret)
 			break;
 	}
+	tracing_on();
 	__raw_spin_unlock(&ftrace_max_lock);
 	local_irq_restore(flags);