author     Peter Zijlstra <peterz@infradead.org>     2017-05-24 08:52:02 +0200
committer  Ingo Molnar <mingo@kernel.org>            2017-05-24 09:10:00 +0200
commit     45aea321678856687927c53972321ebfab77759a
tree       7e82415526d67107739c41076bc329e63d096c44  /kernel/sched/clock.c
parent     x86/tsc: Fold set_cyc2ns_scale() into caller
sched/clock: Fix early boot preempt assumption in __set_sched_clock_stable()
The stricter early boot preemption warnings found that
__set_sched_clock_stable() was incorrectly assuming we'd still be
running on a single CPU:
BUG: using smp_processor_id() in preemptible [00000000] code: swapper/0/1
caller is debug_smp_processor_id+0x1c/0x1e
CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.12.0-rc2-00108-g1c3c5ea #1
Call Trace:
dump_stack+0x110/0x192
check_preemption_disabled+0x10c/0x128
? set_debug_rodata+0x25/0x25
debug_smp_processor_id+0x1c/0x1e
sched_clock_init_late+0x27/0x87
[...]
Fix it by disabling IRQs.
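For reference, the pattern being applied is a general one: per-CPU data such as this_scd() is only meaningful while the task cannot migrate, and because the data here is also updated from the tick interrupt, disabling preemption alone would not be enough. A minimal sketch of the idea, with hypothetical names (my_scd and snapshot_tick are illustrative only, not part of this patch):

    #include <linux/types.h>
    #include <linux/percpu.h>
    #include <linux/irqflags.h>

    /* Hypothetical per-CPU data, updated from the tick interrupt. */
    struct my_scd {
            u64 tick_raw;
            u64 tick_gtod;
    };
    static DEFINE_PER_CPU(struct my_scd, my_scd);

    static void snapshot_tick(u64 *raw, u64 *gtod)
    {
            struct my_scd *scd;

            /*
             * Disabling IRQs pins the task to this CPU, making
             * this_cpu_ptr() valid, and keeps the tick interrupt
             * from updating the fields between the two reads, so
             * the pair below is consistent.
             */
            local_irq_disable();
            scd = this_cpu_ptr(&my_scd);
            *raw = scd->tick_raw;
            *gtod = scd->tick_gtod;
            local_irq_enable();
    }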
Reported-by: kernel test robot <xiaolong.ye@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: lkp@01.org
Cc: tipbuild@zytor.com
Link: http://lkml.kernel.org/r/20170524065202.v25vyu7pvba5mhpd@hirez.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'kernel/sched/clock.c')
-rw-r--r--  kernel/sched/clock.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/clock.c b/kernel/sched/clock.c
index 1a0d389d2f2b..ca0f8fc945c6 100644
--- a/kernel/sched/clock.c
+++ b/kernel/sched/clock.c
@@ -133,12 +133,19 @@ static void __scd_stamp(struct sched_clock_data *scd)
 
 static void __set_sched_clock_stable(void)
 {
-	struct sched_clock_data *scd = this_scd();
+	struct sched_clock_data *scd;
 
 	/*
+	 * Since we're still unstable and the tick is already running, we have
+	 * to disable IRQs in order to get a consistent scd->tick* reading.
+	 */
+	local_irq_disable();
+	scd = this_scd();
+	/*
 	 * Attempt to make the (initial) unstable->stable transition continuous.
 	 */
 	__sched_clock_offset = (scd->tick_gtod + __gtod_offset) - (scd->tick_raw);
+	local_irq_enable();
 
 	printk(KERN_INFO "sched_clock: Marking stable (%lld, %lld)->(%lld, %lld)\n",
 			scd->tick_gtod, __gtod_offset,
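A note on the choice of primitive, following the comment added above: preempt_disable() would also have silenced the smp_processor_id() warning, but the tick interrupt could still fire on the local CPU and update scd->tick_raw/scd->tick_gtod between the two reads. A sketch of that insufficient variant, for contrast:

    /*
     * Insufficient (sketch): preemption off prevents migration, but the
     * local tick interrupt can still update scd->tick_* between the
     * reads, so tick_gtod and tick_raw may be mutually inconsistent.
     */
    preempt_disable();
    scd = this_scd();
    __sched_clock_offset = (scd->tick_gtod + __gtod_offset) - (scd->tick_raw);
    preempt_enable();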