| author | Steven Rostedt (Google) <rostedt@goodmis.org> | 2023-12-19 19:54:25 +0100 |
|---|---|---|
| committer | Steven Rostedt (Google) <rostedt@goodmis.org> | 2023-12-21 17:02:01 +0100 |
| commit | 353cc219372935805218e05a7754a059ab104641 (patch) | |
| tree | 37845d8906b29e9fd81e251b1bf8c6946d5de457 /kernel/trace/ring_buffer.c | |
| parent | tracing: Stop the tracing while changing the ring buffer subbuf size (diff) | |
| download | linux-353cc219372935805218e05a7754a059ab104641.tar.xz linux-353cc219372935805218e05a7754a059ab104641.zip | |
ring-buffer: Keep the same size when updating the order
The function ring_buffer_subbuf_order_set() just updated the sub-buffers
to the new size, but in doing so it also changed the total size of the
buffer, since that size is determined by nr_pages * subbuf_size. If the
subbuf_size is increased without decreasing nr_pages, the total size of
the buffer increases.
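For illustration only, a minimal userspace sketch of that arithmetic (the
page count and sub-buffer sizes here are hypothetical, not values taken
from the kernel):

```c
#include <stdio.h>

int main(void)
{
	unsigned long nr_pages = 128;		/* sub-buffers per CPU (hypothetical) */
	unsigned long subbuf_size = 4096;	/* order-0 sub-buffer size (hypothetical) */

	/* Total per-CPU buffer size is nr_pages * subbuf_size. */
	printf("old size: %lu bytes\n", nr_pages * subbuf_size);

	/* Raising the order doubles subbuf_size; nr_pages stays the same... */
	subbuf_size = 8192;

	/* ...so the whole buffer silently doubles in size. */
	printf("new size: %lu bytes\n", nr_pages * subbuf_size);

	return 0;
}
```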
This broke the latency tracers, as the snapshot buffer needs to be the
same size as the main buffer. The snapshot buffer is only expanded when
needed, and because its order is still the same, its size becomes out of
sync with the main buffer, which grew without the tracing system knowing.
Instead, calculate the nr_pages to allocate with the new subbuf_size as
buffer_size / new_subbuf_size, so that the total buffer size stays the same.
Link: https://lore.kernel.org/linux-trace-kernel/20231219185630.649397785@goodmis.org
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Tzvetomir Stoyanov <tz.stoyanov@gmail.com>
Cc: Vincent Donnefort <vdonnefort@google.com>
Cc: Kent Overstreet <kent.overstreet@gmail.com>
Fixes: f9b94daa542a ("ring-buffer: Set new size of the ring buffer sub page")
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Diffstat (limited to 'kernel/trace/ring_buffer.c')
-rw-r--r-- | kernel/trace/ring_buffer.c | 5 |
1 files changed, 4 insertions, 1 deletions
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index fdcd171b09b5..23ead7602da0 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -5897,7 +5897,10 @@ int ring_buffer_subbuf_order_set(struct trace_buffer *buffer, int order)
 		if (!cpumask_test_cpu(cpu, buffer->cpumask))
 			continue;
 
-		nr_pages = buffer->buffers[cpu]->nr_pages;
+		/* Update the number of pages to match the new size */
+		nr_pages = old_size * buffer->buffers[cpu]->nr_pages;
+		nr_pages = DIV_ROUND_UP(nr_pages, buffer->subbuf_size);
+
 		cpu_buffers[cpu] = rb_allocate_cpu_buffer(buffer, nr_pages, cpu);
 		if (!cpu_buffers[cpu]) {
 			err = -ENOMEM;
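The hunk above keeps the total size constant by deriving the new page
count from the old one. A minimal userspace sketch of the same
calculation (DIV_ROUND_UP is re-implemented here for illustration, and
the values are hypothetical, not taken from the kernel):

```c
#include <stdio.h>

/* Same rounding-up division the kernel's DIV_ROUND_UP() macro performs. */
#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

int main(void)
{
	unsigned long old_subbuf_size = 4096;	/* size before the order change */
	unsigned long new_subbuf_size = 8192;	/* size after the order change */
	unsigned long nr_pages = 128;		/* current sub-buffer count */

	/* Total bytes held by the per-CPU buffer before the change. */
	unsigned long total = old_subbuf_size * nr_pages;

	/* New page count so nr_pages * new_subbuf_size stays (about) the same. */
	nr_pages = DIV_ROUND_UP(total, new_subbuf_size);

	printf("new nr_pages = %lu, new total = %lu bytes\n",
	       nr_pages, nr_pages * new_subbuf_size);

	return 0;
}
```

Rounding up rather than down means the resized buffer can never end up
smaller than the original; at most it is slightly larger when the old
size is not a multiple of the new sub-buffer size.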