path: root/kernel/sched_rt.c
author:    Gregory Haskins <ghaskins@novell.com>  2008-12-29 15:39:50 +0100
committer: Gregory Haskins <ghaskins@novell.com>  2008-12-29 15:39:50 +0100
commit:    74ab8e4f6412c0b2d730fe5de28dc21de8b92c01
tree:      c1bce6a8e23fa58677de23989fa81bc1fcfc0118 /kernel/sched_rt.c
parent:    sched: use highest_prio.curr for pull threshold
sched: use highest_prio.next to optimize pull operations
We currently take the rq->lock for every CPU in an overload state during pull_rt_task(). However, we now have enough information via the highest_prio.[curr|next] fields to determine whether there are any tasks of interest to warrant the overhead of the rq->lock before we actually take it. So we use this information to reduce lock contention during the pull for the case where the source rq doesn't have tasks that would preempt the current task.

Signed-off-by: Gregory Haskins <ghaskins@novell.com>
Diffstat (limited to 'kernel/sched_rt.c')
 kernel/sched_rt.c | 12 ++++++++++++
 1 file changed, 12 insertions(+), 0 deletions(-)
diff --git a/kernel/sched_rt.c b/kernel/sched_rt.c
index f8fb3edadcaa..d047f288c411 100644
--- a/kernel/sched_rt.c
+++ b/kernel/sched_rt.c
@@ -1218,6 +1218,18 @@ static int pull_rt_task(struct rq *this_rq)
 			continue;
 
 		src_rq = cpu_rq(cpu);
+
+		/*
+		 * Don't bother taking the src_rq->lock if the next highest
+		 * task is known to be lower-priority than our current task.
+		 * This may look racy, but if this value is about to go
+		 * logically higher, the src_rq will push this task away.
+		 * And if it's going logically lower, we do not care.
+		 */
+		if (src_rq->rt.highest_prio.next >=
+		    this_rq->rt.highest_prio.curr)
+			continue;
+
 		/*
 		 * We can potentially drop this_rq's lock in
 		 * double_lock_balance, and another CPU could