author     Davidlohr Bueso <dave@stgolabs.net>              2019-01-04 00:27:09 +0100
committer  Linus Torvalds <torvalds@linux-foundation.org>   2019-01-04 22:13:46 +0100
commit     76699a67f3041ff4c7af6d6ee9be2bfbf1ffb671 (patch)
tree       5d42b50be630e856b4e91c3109b56d8e4612a594 /fs/eventpoll.c
parent     fs/epoll: simplify ep_send_events_proc() ready-list loop (diff)
fs/epoll: drop ovflist branch prediction
The ep->ovflist is a secondary ready-list used to temporarily store events that might occur while sproc is running without holding ep->wq.lock. This covers every time we check for ready events and also send events back to userspace; both callbacks, particularly the latter because of copy_to_user, can take a non-trivial amount of time. As such, the unlikely() check to see whether the pointer is being used seems both misleading and sub-optimal. In fact, we go to an awful lot of trouble to sync both lists, and populating the ovflist is far from an uncommon scenario.

For example, profiling a concurrent epoll_wait(2) benchmark with CONFIG_PROFILE_ANNOTATED_BRANCHES shows that for two threads a 33% incorrect prediction rate was seen, and when incrementally increasing the number of epoll instances (as used, for example, in multiple-queue load-balancing models), up to a 90% incorrect rate was seen. Similarly, deleting the prediction yielded a 3% throughput boost across increasing thread counts.

Link: http://lkml.kernel.org/r/20181108051006.18751-4-dave@stgolabs.net
Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Jason Baron <jbaron@akamai.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
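For background, the kernel's likely()/unlikely() hints compile down to GCC's __builtin_expect(), so they only bias the compiler's block layout; if the hinted branch is in fact taken often, the hint pessimizes the hot path. Below is a minimal userspace sketch of the idea, not the patched kernel code: the macros mirror the kernel definitions, while the ovflist stand-in and the queue_on_ovflist() helper are hypothetical, for illustration only.

#include <stdio.h>

/* Same shape as the kernel's hints: they bias code layout via __builtin_expect(). */
#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

/* Hypothetical stand-in for EP_UNACTIVE_PTR / ep->ovflist. */
#define EP_UNACTIVE_PTR ((void *)-1L)

static int queue_on_ovflist(void *ovflist)
{
	/*
	 * The hint dropped by this patch: if ovflist is active on a large
	 * fraction of calls, the "unlikely" annotation is simply wrong and
	 * the frequently taken path gets laid out as the cold one.
	 */
	if (unlikely(ovflist != EP_UNACTIVE_PTR))
		return 1;	/* would chain the epitem on ovflist */
	return 0;		/* would use the normal ready list */
}

int main(void)
{
	void *active = (void *)1;	/* pretend sproc is currently running */
	printf("queued on ovflist: %d\n", queue_on_ovflist(active));
	return 0;
}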
Diffstat (limited to '')
-rw-r--r--   fs/eventpoll.c   2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/eventpoll.c b/fs/eventpoll.c
index aaf614ee08e4..fb7f05f4099d 100644
--- a/fs/eventpoll.c
+++ b/fs/eventpoll.c
@@ -1153,7 +1153,7 @@ static int ep_poll_callback(wait_queue_entry_t *wait, unsigned mode, int sync, v
* semantics). All the events that happen during that period of time are
* chained in ep->ovflist and requeued later on.
*/
- if (unlikely(ep->ovflist != EP_UNACTIVE_PTR)) {
+ if (ep->ovflist != EP_UNACTIVE_PTR) {
if (epi->next == EP_UNACTIVE_PTR) {
epi->next = ep->ovflist;
ep->ovflist = epi;