author    Steven Rostedt (VMware) <rostedt@goodmis.org>  2021-02-10 17:53:22 +0100
committer Steven Rostedt (VMware) <rostedt@goodmis.org>  2021-02-11 20:23:37 +0100
commit    b220c049d5196dd94d992dd2dc8cba1a5e6123bf (patch)
tree      33d5cd03fa6e48c31266ae3104c0f2f1bc4ca60c  /kernel/trace/trace.c
parent    tracing: Do not count ftrace events in top level enable output (diff)
tracing: Check length before giving out the filter buffer
When filters are used by trace events, a page is allocated on each CPU and used to copy the trace event fields to this page before writing to the ring buffer. The reason to use this filter buffer and not write directly into the ring buffer is that a filter may discard the event, and discarding from the ring buffer costs more than the extra copy.

The problem is that there is no check against the size being allocated when using this page. If an event asks for more than a page while being filtered, it still gets only a page, leading to the caller writing more than what was allocated.

Check the length of the request, and if it is more than PAGE_SIZE minus the header, fall back to allocating from the ring buffer directly. The ring buffer may still reject the event if it is too big, but it won't overflow.

Link: https://lore.kernel.org/ath10k/1612839593-2308-1-git-send-email-wgong@codeaurora.org/

Cc: stable@vger.kernel.org
Fixes: 0fc1b09ff1ff4 ("tracing: Use temp buffer when filtering events")
Reported-by: Wen Gong <wgong@codeaurora.org>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
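To make the overflow scenario concrete, here is a minimal standalone sketch (ordinary userspace C, not the kernel code itself) of the check this patch adds: a fixed, page-sized staging buffer is handed out only when the requested length fits under PAGE_SIZE minus the entry header, and anything larger signals the caller to reserve from the ring buffer instead. The names (reserve_event_buffer, fake_entry) and the simplified entry layout are assumptions made for illustration.

/* Minimal sketch: hand out the staging page only when the request fits. */
#include <stddef.h>
#include <stdio.h>

#define PAGE_SIZE 4096u                 /* stand-in for the kernel's PAGE_SIZE */

struct fake_entry {                     /* simplified stand-in for the entry header */
	unsigned short type;
	unsigned int   array[1];        /* array[0] records the payload length */
};

static unsigned char staging_page[PAGE_SIZE];   /* models the per-CPU filter page */

/* Return the staging buffer if the request fits, NULL to signal a fallback. */
static void *reserve_event_buffer(size_t len)
{
	if (len < PAGE_SIZE - sizeof(struct fake_entry)) {
		struct fake_entry *entry = (struct fake_entry *)staging_page;
		entry->array[0] = (unsigned int)len;
		return entry;
	}
	return NULL;    /* too big: write to the ring buffer directly instead */
}

int main(void)
{
	printf("128-byte request:  %s\n",
	       reserve_event_buffer(128)  ? "staging page" : "ring buffer fallback");
	printf("8192-byte request: %s\n",
	       reserve_event_buffer(8192) ? "staging page" : "ring buffer fallback");
	return 0;
}

Before the fix, the equivalent of that if-condition was missing, so an 8192-byte request would still have been written into the 4096-byte page.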
Diffstat
-rw-r--r--  kernel/trace/trace.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index b8a2d786b503..b5815a022ecc 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -2745,7 +2745,7 @@ trace_event_buffer_lock_reserve(struct trace_buffer **current_rb,
(entry = this_cpu_read(trace_buffered_event))) {
/* Try to use the per cpu buffer first */
val = this_cpu_inc_return(trace_buffered_event_cnt);
- if (val == 1) {
+ if ((len < (PAGE_SIZE - sizeof(*entry))) && val == 1) {
trace_event_setup(entry, type, flags, pc);
entry->array[0] = len;
return entry;
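For context, when the added length check (or the nesting check on val) fails, the function falls through: the per-CPU count is released and the event is reserved from the ring buffer directly, which may itself reject an oversized event but cannot be overflowed. A rough sketch of that fall-through path, reconstructed from the surrounding function rather than shown in this diff:

		/* Request too large or buffer already in use: give it back */
		this_cpu_dec(trace_buffered_event_cnt);
	}
	/* Fall back to reserving space in the ring buffer itself */
	entry = __trace_buffer_lock_reserve(*current_rb, type, len, flags, pc);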