author     Peter Zijlstra <peterz@infradead.org>    2018-07-20 10:09:11 +0200
committer  Thomas Gleixner <tglx@linutronix.de>     2018-07-20 11:58:00 +0200
commit     9407f5a7ee77c631d1e100436132437cf6237e45 (patch)
tree       ba1095190c509eefa30e2fceebda6528db941d22 /kernel/sched
parent     x86/tsc: Make use of tsc_calibrate_cpu_early() (diff)
sched/clock: Close a hole in sched_clock_init()
All data required for the 'unstable' sched_clock must be set up _before_
enabling it -- i.e. before setting sched_clock_running. This includes the
__gtod_offset but also a recent scd stamp.
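For orientation, a simplified sketch of how the unstable path consumes that
state; this is not the verbatim sched_clock_local() code, only the shape of
the computation, with 'scd' standing for the per-CPU struct sched_clock_data:

	u64 now, clock;
	s64 delta;

	now   = sched_clock();			/* raw, possibly unstable clock */
	delta = now - scd->tick_raw;		/* needs a recent scd stamp */
	if (delta < 0)
		delta = 0;
	clock = scd->tick_gtod + __gtod_offset + delta;	/* anchored to GTOD */

With a stale scd stamp or an unset __gtod_offset, the first readers after
sched_clock_running is set would see a skewed clock -- the hole closed here.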
Make the gtod-offset update also set the scd stamp -- it requires the
same two clock reads _anyway_. This doesn't hurt in the
sched_clock_tick_stable() case and ensures sched_clock_init() gets
everything set up before use.
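(__scd_stamp() already exists in kernel/sched/clock.c; roughly, it is nothing
more than those two reads recorded into the per-CPU data, along the lines of:

	static void __scd_stamp(struct sched_clock_data *scd)
	{
		scd->tick_gtod = ktime_get_ns();
		scd->tick_raw  = sched_clock();
	}

so calling it from __sched_clock_gtod_offset() costs the same two clock reads
the offset computation needed anyway.)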
Also switch to unconditional IRQ-disable/enable, because the static key
stuff already requires that this is not run with IRQs disabled.
Fixes: 857baa87b642 ("sched/clock: Enable sched clock early")
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Pavel Tatashin <pasha.tatashin@oracle.com>
Cc: steven.sistare@oracle.com
Cc: daniel.m.jordan@oracle.com
Cc: linux@armlinux.org.uk
Cc: schwidefsky@de.ibm.com
Cc: heiko.carstens@de.ibm.com
Cc: john.stultz@linaro.org
Cc: sboyd@codeaurora.org
Cc: hpa@zytor.com
Cc: douly.fnst@cn.fujitsu.com
Cc: prarit@redhat.com
Cc: feng.tang@intel.com
Cc: pmladek@suse.com
Cc: gnomes@lxorguk.ukuu.org.uk
Cc: linux-s390@vger.kernel.org
Cc: boris.ostrovsky@oracle.com
Cc: jgross@suse.com
Cc: pbonzini@redhat.com
Link: https://lkml.kernel.org/r/20180720080911.GM2494@hirez.programming.kicks-ass.net
Diffstat (limited to 'kernel/sched')
-rw-r--r-- | kernel/sched/clock.c | 16 |
1 file changed, 6 insertions, 10 deletions
diff --git a/kernel/sched/clock.c b/kernel/sched/clock.c
index c5c47ad3f386..811a39aca1ce 100644
--- a/kernel/sched/clock.c
+++ b/kernel/sched/clock.c
@@ -197,13 +197,14 @@ void clear_sched_clock_stable(void)

 static void __sched_clock_gtod_offset(void)
 {
-	__gtod_offset = (sched_clock() + __sched_clock_offset) - ktime_get_ns();
+	struct sched_clock_data *scd = this_scd();
+
+	__scd_stamp(scd);
+	__gtod_offset = (scd->tick_raw + __sched_clock_offset) - scd->tick_gtod;
 }

 void __init sched_clock_init(void)
 {
-	unsigned long flags;
-
 	/*
 	 * Set __gtod_offset such that once we mark sched_clock_running,
 	 * sched_clock_tick() continues where sched_clock() left off.
@@ -211,16 +212,11 @@ void __init sched_clock_init(void)
 	 * Even if TSC is buggered, we're still UP at this point so it
 	 * can't really be out of sync.
 	 */
-	local_irq_save(flags);
+	local_irq_disable();
 	__sched_clock_gtod_offset();
-	local_irq_restore(flags);
+	local_irq_enable();

 	static_branch_inc(&sched_clock_running);
-
-	/* Now that sched_clock_running is set adjust scd */
-	local_irq_save(flags);
-	sched_clock_tick();
-	local_irq_restore(flags);
 }

 /*
  * We run this as late_initcall() such that it runs after all built-in drivers,
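For reference, sched_clock_init() as it reads with this patch applied,
reconstructed from the hunks above (the bare ' *' comment line between the two
hunks is assumed):

void __init sched_clock_init(void)
{
	/*
	 * Set __gtod_offset such that once we mark sched_clock_running,
	 * sched_clock_tick() continues where sched_clock() left off.
	 *
	 * Even if TSC is buggered, we're still UP at this point so it
	 * can't really be out of sync.
	 */
	local_irq_disable();
	__sched_clock_gtod_offset();
	local_irq_enable();

	static_branch_inc(&sched_clock_running);
}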