author | Peter Zijlstra <a.p.zijlstra@chello.nl> | 2009-01-14 15:36:26 +0100 |
committer | Ingo Molnar <mingo@elte.hu> | 2009-01-14 18:09:00 +0100 |
commit | 41719b03091911028116155deddc5eedf8c45e37 (patch) |
tree | 20a699807d78bc0af86b19443dc751415c0cc6f7 /include | |
parent | mutex: small cleanup (diff) | |
download | linux-41719b03091911028116155deddc5eedf8c45e37.tar.xz linux-41719b03091911028116155deddc5eedf8c45e37.zip |
mutex: preemption fixes
The problem is that dropping the spinlock right before schedule() is a voluntary
preemption point and can cause a schedule, right after which we schedule again.

Fix this inefficiency by keeping preemption disabled until we schedule; do this
by explicitly disabling preemption and providing a schedule() variant that
assumes preemption is already disabled.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
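To make the described pattern concrete, below is a minimal kernel-style sketch of a
sleeping slowpath that uses the new __schedule() variant. This is an illustration of
the idea only, not the actual kernel/mutex.c change (which is outside the 'include'
diffstat below); the function name example_sleep_slowpath and the generic
spin_lock_irqsave()/spin_unlock_irqrestore() helpers are assumptions made for the
example. The point is the ordering: preemption is disabled before the wait_lock is
dropped, so the unlock is no longer a voluntary preemption point, and __schedule()
is entered with preemption already off.

#include <linux/mutex.h>
#include <linux/preempt.h>
#include <linux/sched.h>
#include <linux/spinlock.h>

/*
 * Hypothetical slowpath illustrating the fix: keep preemption disabled
 * from before the unlock until we are back from __schedule(), so the
 * task cannot be preempted between dropping the wait_lock and sleeping.
 */
static void example_sleep_slowpath(struct mutex *lock)
{
	unsigned long flags;

	spin_lock_irqsave(&lock->wait_lock, flags);
	/* ... enqueue ourselves on lock->wait_list, set the task state ... */

	preempt_disable();	/* the unlock below is no longer a preemption point */
	spin_unlock_irqrestore(&lock->wait_lock, flags);

	__schedule();		/* new variant: assumes preemption is already disabled */

	preempt_enable();	/* running again after wake-up */
}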
Diffstat (limited to 'include')
-rw-r--r-- | include/linux/sched.h | 1 |
1 file changed, 1 insertion, 0 deletions
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 4cae9b81a1f8..9f0b372cfa6f 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -328,6 +328,7 @@ extern signed long schedule_timeout(signed long timeout);
 extern signed long schedule_timeout_interruptible(signed long timeout);
 extern signed long schedule_timeout_killable(signed long timeout);
 extern signed long schedule_timeout_uninterruptible(signed long timeout);
+asmlinkage void __schedule(void);
 asmlinkage void schedule(void);
 
 struct nsproxy;
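The hunk above only adds the declaration of __schedule(); its definition and the
accompanying mutex changes live outside the 'include' diffstat shown on this page.
As a hedged sketch of the split the commit message describes (the exact body is an
assumption, not taken from this diff), the core of the old schedule() would move
into __schedule(), with schedule() reduced to a wrapper that does the preemption
bracketing itself:

/* Sketch only: the real kernel/sched.c change is not part of this diff. */
asmlinkage void __sched __schedule(void)
{
	/*
	 * Core of the old schedule(): pick the next task and context
	 * switch, relying on the caller to have disabled preemption.
	 */
}

asmlinkage void __sched schedule(void)
{
need_resched:
	preempt_disable();
	__schedule();
	preempt_enable_no_resched();
	if (unlikely(test_thread_flag(TIF_NEED_RESCHED)))
		goto need_resched;
}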