author     Sebastian Andrzej Siewior <bigeasy@linutronix.de>  2014-08-04 15:31:08 +0200
committer  Ingo Molnar <mingo@kernel.org>  2014-08-13 07:51:11 +0200
commit     e708d7ad80737496870fd0b6794704d063fb0cdc (patch)
tree       df4cd87b90efc33e2162c790d07dc4df42faabfd /kernel/events
parent     perf/x86: Use extended offcore mask on Haswell (diff)
download   linux-e708d7ad80737496870fd0b6794704d063fb0cdc.tar.xz
           linux-e708d7ad80737496870fd0b6794704d063fb0cdc.zip
perf: Do poll_wait() before checking condition in perf_poll()
One should first enqueue on the waitqueue and then check the condition.
If the condition becomes true after mutex_unlock() but before poll_wait(),
we lose it and would have to wait for another wakeup.

It has been like this since v2.6.31-rc1, commit c7138f37f9
("perf_counter: fix perf_poll()"); before that it was slightly worse.
We probably get enough wakeups that missing one here does not really
matter, but it is still a bad example.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1407159068-1478-1-git-send-email-bigeasy@linutronix.de
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
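The rule the patch restores is the standard ->poll ordering: register on the
waitqueue with poll_wait() first, then test the readiness condition, so a
wakeup that fires in between is not lost. Below is a minimal sketch of that
ordering for a hypothetical driver; example_dev, data_ready() and
example_poll() are made up for illustration and are not part of this patch.

	#include <linux/fs.h>
	#include <linux/poll.h>
	#include <linux/wait.h>

	struct example_dev {
		wait_queue_head_t waitq;	/* woken when new data arrives */
		bool have_data;
	};

	static bool data_ready(struct example_dev *dev)
	{
		return READ_ONCE(dev->have_data);
	}

	static unsigned int example_poll(struct file *file, poll_table *wait)
	{
		struct example_dev *dev = file->private_data;
		unsigned int events = 0;

		/* 1. Enqueue on the waitqueue first ... */
		poll_wait(file, &dev->waitq, wait);

		/*
		 * 2. ... then test the condition.  A wakeup that fires in
		 *    between is not lost: poll/select re-evaluates after it.
		 */
		if (data_ready(dev))
			events |= POLLIN | POLLRDNORM;

		return events;
	}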
Diffstat (limited to 'kernel/events')
-rw-r--r--  kernel/events/core.c  4
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index a25460559b4f..2d7363adf678 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -3629,6 +3629,7 @@ static unsigned int perf_poll(struct file *file, poll_table *wait)
 	struct ring_buffer *rb;
 	unsigned int events = POLL_HUP;
 
+	poll_wait(file, &event->waitq, wait);
 	/*
 	 * Pin the event->rb by taking event->mmap_mutex; otherwise
 	 * perf_event_set_output() can swizzle our rb and make us miss wakeups.
@@ -3638,9 +3639,6 @@ static unsigned int perf_poll(struct file *file, poll_table *wait)
 	if (rb)
 		events = atomic_xchg(&rb->poll, 0);
 	mutex_unlock(&event->mmap_mutex);
-
-	poll_wait(file, &event->waitq, wait);
-
 	return events;
 }
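For reference, after this patch the body of perf_poll() reads roughly as
follows. This is a sketch assembled from the two hunks above; the lines
between them (taking event->mmap_mutex and loading event->rb) are assumed
from the surrounding code rather than shown in this diff.

	static unsigned int perf_poll(struct file *file, poll_table *wait)
	{
		struct perf_event *event = file->private_data;
		struct ring_buffer *rb;
		unsigned int events = POLL_HUP;

		/* Enqueue on the waitqueue before checking the condition. */
		poll_wait(file, &event->waitq, wait);

		/*
		 * Pin the event->rb by taking event->mmap_mutex; otherwise
		 * perf_event_set_output() can swizzle our rb and make us miss wakeups.
		 */
		mutex_lock(&event->mmap_mutex);
		rb = event->rb;
		if (rb)
			events = atomic_xchg(&rb->poll, 0);
		mutex_unlock(&event->mmap_mutex);

		return events;
	}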