author		Thomas Gleixner <tglx@linutronix.de>	2018-09-06 15:21:38 +0200
committer	Thomas Gleixner <tglx@linutronix.de>	2018-09-06 15:21:38 +0200
commit		69fa6eb7d6a64801ea261025cce9723d9442d773 (patch)
tree		6e6c0c8a1368d6e6e3ca5cf051161bad6450e094 /kernel
parent		cpu/hotplug: Adjust misplaced smb() in cpuhp_thread_fun() (diff)
cpu/hotplug: Prevent state corruption on error rollback
When a teardown callback fails, the CPU hotplug code brings the CPU back to
the previous state. The previous state becomes the new target state. The
rollback happens in undo_cpu_down() which increments the state
unconditionally even if the state is already the same as the target.

As a consequence the next CPU hotplug operation will start at the wrong
state. This is easily observable when __cpu_disable() fails.

Prevent the unconditional undo by checking the state vs. target before
incrementing state and fix up the consequently wrong conditional in the
unplug code which handles the failure of the final CPU take down on the
control CPU side.
Fixes: 4dddfb5faa61 ("smp/hotplug: Rewrite AP state machine core")
Reported-by: Neeraj Upadhyay <neeraju@codeaurora.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Geert Uytterhoeven <geert+renesas@glider.be>
Tested-by: Sudeep Holla <sudeep.holla@arm.com>
Tested-by: Neeraj Upadhyay <neeraju@codeaurora.org>
Cc: josh@joshtriplett.org
Cc: peterz@infradead.org
Cc: jiangshanlai@gmail.com
Cc: dzickus@redhat.com
Cc: brendan.jackman@arm.com
Cc: malat@debian.org
Cc: sramana@codeaurora.org
Cc: linux-arm-msm@vger.kernel.org
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/alpine.DEB.2.21.1809051419580.1416@nanos.tec.linutronix.de
----
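To make the failure mode concrete, here is a minimal standalone C sketch. It is not kernel code: the toy_* names, the struct and the three-value state space are invented for illustration only, and the "undo" is reduced to the single increment that matters. It shows how bumping the state unconditionally pushes it past the target when the very first teardown callback fails, while the guarded variant leaves state and target in agreement.

/*
 * Standalone toy model of the rollback bug described above. NOT kernel
 * code: toy_cpu, toy_undo_down_*() and the state values are hypothetical.
 */
#include <stdio.h>

enum toy_state { TOY_OFFLINE, TOY_TEARDOWN, TOY_ONLINE };

struct toy_cpu {
	int state;	/* state the CPU is currently in */
	int target;	/* state the hotplug machinery aims for */
};

/* Loosely mirrors the old behaviour: the state is bumped unconditionally. */
static void toy_undo_down_buggy(struct toy_cpu *cpu)
{
	cpu->state++;			/* runs even when state == target */
}

/* Loosely mirrors the patched behaviour: only undo if there is work to undo. */
static void toy_undo_down_fixed(struct toy_cpu *cpu)
{
	if (cpu->state < cpu->target)
		cpu->state++;
}

int main(void)
{
	/* The very first teardown callback failed: the state never moved. */
	struct toy_cpu cpu = { .state = TOY_ONLINE, .target = TOY_ONLINE };

	toy_undo_down_buggy(&cpu);
	printf("buggy: state=%d target=%d (state is past the target)\n",
	       cpu.state, cpu.target);

	cpu.state = TOY_ONLINE;
	toy_undo_down_fixed(&cpu);
	printf("fixed: state=%d target=%d (state and target agree)\n",
	       cpu.state, cpu.target);
	return 0;
}

Run as-is, the buggy variant reports a state one past the target, which is exactly the bogus starting point the next hotplug operation would inherit.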
Diffstat (limited to 'kernel')
 kernel/cpu.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/kernel/cpu.c b/kernel/cpu.c
index eb4041f78073..0097acec1c71 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -916,7 +916,8 @@ static int cpuhp_down_callbacks(unsigned int cpu, struct cpuhp_cpu_state *st,
 		ret = cpuhp_invoke_callback(cpu, st->state, false, NULL, NULL);
 		if (ret) {
 			st->target = prev_state;
-			undo_cpu_down(cpu, st);
+			if (st->state < prev_state)
+				undo_cpu_down(cpu, st);
 			break;
 		}
 	}
@@ -969,7 +970,7 @@ static int __ref _cpu_down(unsigned int cpu, int tasks_frozen,
 	 * to do the further cleanups.
 	 */
 	ret = cpuhp_down_callbacks(cpu, st, target);
-	if (ret && st->state > CPUHP_TEARDOWN_CPU && st->state < prev_state) {
+	if (ret && st->state == CPUHP_TEARDOWN_CPU && st->state < prev_state) {
 		cpuhp_reset_state(st, prev_state);
 		__cpuhp_kick_ap(st);
 	}
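The second hunk is the "consequently wrong conditional" the changelog mentions: once the undo is guarded, a failing takedown of the final step leaves st->state sitting exactly at CPUHP_TEARDOWN_CPU instead of one past it, so the control-CPU check has to test for equality. Below is a rough standalone sketch of that interplay. It is not kernel code: TOY_TEARDOWN_CPU and toy_down_callbacks() are hypothetical stand-ins for the real cpuhp machinery, reduced to the one assignment that decides which comparison fires.

/*
 * Toy sketch (not kernel code) of why the control-CPU check changes from
 * '>' to '=='. The names below are made up for illustration.
 */
#include <stdbool.h>
#include <stdio.h>

#define TOY_TEARDOWN_CPU	10

static int state;	/* current hotplug state of the toy CPU */

/* The takedown of the final step fails; model both undo variants. */
static int toy_down_callbacks(bool guarded_undo)
{
	state = TOY_TEARDOWN_CPU;	/* the CPU was brought down to here */
	/* Teardown of TOY_TEARDOWN_CPU itself fails (e.g. __cpu_disable()). */
	if (!guarded_undo)
		state++;		/* old unconditional undo corrupts state */
	return -1;
}

int main(void)
{
	int ret;

	ret = toy_down_callbacks(false);
	/* The old check only matched because the state was already corrupted. */
	printf("old: ret=%d, 'state > TEARDOWN' is %d\n",
	       ret, state > TOY_TEARDOWN_CPU);

	ret = toy_down_callbacks(true);
	/* With the guarded undo the state stays put, so '==' is the right test. */
	printf("new: ret=%d, 'state == TEARDOWN' is %d\n",
	       ret, state == TOY_TEARDOWN_CPU);
	return 0;
}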