author | Frederic Weisbecker <fweisbec@gmail.com> | 2013-02-24 12:59:30 +0100
committer | Frederic Weisbecker <fweisbec@gmail.com> | 2013-03-07 17:10:21 +0100
commit | b22366cd54c6fe05db426f20adb10f461c19ec06 (patch)
tree | 3eb42a6ae0c6b25c27d946e6ffa4787f82ac952f /kernel
parent | context_tracking: Restore correct previous context state on exception exit (diff)
download | linux-b22366cd54c6fe05db426f20adb10f461c19ec06.tar.xz  linux-b22366cd54c6fe05db426f20adb10f461c19ec06.zip
context_tracking: Restore preempted context state after preempt_schedule_irq()
From the context tracking POV, preempt_schedule_irq() behaves pretty much
like an exception: it can be called at any time and schedule another task.
But it currently doesn't restore the context tracking state of the preempted
code on return.
As a result, if preempt_schedule_irq() is called in the tiny frame between
user_enter() and the actual return to userspace, we resume userspace with
the wrong context tracking state.
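To illustrate the window, here is a stand-alone toy model (hypothetical user-space code, not the kernel's actual entry path); the names mirror the kernel hooks, but the tracked state is just a plain variable:

/* Toy model of the bug: old preempt_schedule_irq() never restores the state. */
#include <stdio.h>

enum ctx_state { IN_KERNEL, IN_USER };

static enum ctx_state context = IN_KERNEL;

static void user_enter(void) { context = IN_USER; }	/* about to resume userspace */
static void user_exit(void)  { context = IN_KERNEL; }	/* back to running kernel code */

/* Old behaviour: mark kernel context, but never put the previous state back. */
static void old_preempt_schedule_irq(void)
{
	user_exit();
	/* ... __schedule() loop elided ... */
}

int main(void)
{
	user_enter();			/* return-to-user path: state is now IN_USER */
	old_preempt_schedule_irq();	/* IRQ-driven preemption hits in that window */

	/* We now "resume userspace", but the tracked state still says IN_KERNEL. */
	printf("resuming userspace with state: %s\n",
	       context == IN_USER ? "IN_USER" : "IN_KERNEL (wrong)");
	return 0;
}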
Fix this by using exception_enter/exit(), which are a perfect fit for this
kind of issue.
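For reference, the helpers introduced by the parent commit look roughly like this; a paraphrased sketch of the CONFIG_CONTEXT_TRACKING=y variants from this era, not the exact code:

/* Rough paraphrase of include/linux/context_tracking.h (may differ in detail). */
static inline enum ctx_state exception_enter(void)
{
	enum ctx_state prev_ctx;

	prev_ctx = this_cpu_read(context_tracking.state);	/* remember what we interrupted */
	user_exit();						/* track ourselves as in-kernel */

	return prev_ctx;
}

static inline void exception_exit(enum ctx_state prev_ctx)
{
	if (prev_ctx == IN_USER)
		user_enter();	/* only re-enter user context if that is what was preempted */
}

This is why they fit here: preempt_schedule_irq() may interrupt either user or kernel context, so a plain user_exit()/user_enter() pair would wrongly mark kernel code as user context whenever the preempted code was already in the kernel.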
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Li Zhong <zhong@linux.vnet.ibm.com>
Cc: Kevin Hilman <khilman@linaro.org>
Cc: Mats Liljegren <mats.liljegren@enea.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Namhyung Kim <namhyung.kim@lge.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Diffstat (limited to 'kernel')
-rw-r--r-- | kernel/sched/core.c | 6
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 7f12624a393c..af7a8c84b797 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3082,11 +3082,13 @@ EXPORT_SYMBOL(preempt_schedule);
 asmlinkage void __sched preempt_schedule_irq(void)
 {
 	struct thread_info *ti = current_thread_info();
+	enum ctx_state prev_state;
 
 	/* Catch callers which need to be fixed */
 	BUG_ON(ti->preempt_count || !irqs_disabled());
 
-	user_exit();
+	prev_state = exception_enter();
+
 	do {
 		add_preempt_count(PREEMPT_ACTIVE);
 		local_irq_enable();
@@ -3100,6 +3102,8 @@ asmlinkage void __sched preempt_schedule_irq(void)
 	 */
 	barrier();
 	} while (need_resched());
+
+	exception_exit(prev_state);
 }
 
 #endif /* CONFIG_PREEMPT */