author		Peter Zijlstra <a.p.zijlstra@chello.nl>	2009-12-17 13:16:31 +0100
committer	Ingo Molnar <mingo@elte.hu>			2009-12-17 13:22:46 +0100
commit		077614ee1e93245a3b9a4e1213659405dbeb0ba6
parent		sched: Assert task state bits at build time
sched: Fix broken assertion
There's a preemption race in the set_task_cpu() debug check: if
we get preempted after setting task->state we'd still be on the
rq proper, but the test would fail.

Check for preempted tasks, since those are always on the RQ.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <20091217121830.137155561@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Diffstat (limited to 'kernel/sched.c')
 kernel/sched.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index 7be88a7be047..720df108a2d6 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -2041,7 +2041,8 @@ void set_task_cpu(struct task_struct *p, unsigned int new_cpu)
 	 * We should never call set_task_cpu() on a blocked task,
 	 * ttwu() will sort out the placement.
 	 */
-	WARN_ON_ONCE(p->state != TASK_RUNNING && p->state != TASK_WAKING);
+	WARN_ON_ONCE(p->state != TASK_RUNNING && p->state != TASK_WAKING &&
+			!(task_thread_info(p)->preempt_count & PREEMPT_ACTIVE));
 #endif
 	trace_sched_migrate_task(p, new_cpu);