author    Guo Ren <guoren@linux.alibaba.com>  2022-04-07 09:33:20 +0200
committer Will Deacon <will@kernel.org>  2022-04-08 12:43:46 +0200
commit    31a099dbd91e69fcab55eef4be15ed7a8c984918 (patch)
tree      8822babc29f92bb2327da5bb31cf2286901fa73f /arch/arm64/kernel/patching.c
parent    tlb: hugetlb: Add more sizes to tlb_remove_huge_tlb_entry (diff)
arm64: patch_text: Fixup last cpu should be master
The patch_text implementation uses the stop_machine_cpuslocked infrastructure with an atomic cpu_count. The original idea: while the master CPU patches the text, the other CPUs should wait for it. But the current implementation uses the first CPU as the master, which cannot guarantee that the remaining CPUs are already waiting. This patch makes the last CPU the master instead, closing that window.

Fixes: ae16480785de ("arm64: introduce interfaces to hotpatch kernel and module code")
Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
Signed-off-by: Guo Ren <guoren@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
Cc: <stable@vger.kernel.org>
Link: https://lore.kernel.org/r/20220407073323.743224-2-guoren@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
Diffstat
-rw-r--r--  arch/arm64/kernel/patching.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/kernel/patching.c b/arch/arm64/kernel/patching.c
index 771f543464e0..33e0fabc0b79 100644
--- a/arch/arm64/kernel/patching.c
+++ b/arch/arm64/kernel/patching.c
@@ -117,8 +117,8 @@ static int __kprobes aarch64_insn_patch_text_cb(void *arg)
int i, ret = 0;
struct aarch64_insn_patch *pp = arg;
- /* The first CPU becomes master */
- if (atomic_inc_return(&pp->cpu_count) == 1) {
+ /* The last CPU becomes master */
+ if (atomic_inc_return(&pp->cpu_count) == num_online_cpus()) {
for (i = 0; ret == 0 && i < pp->insn_cnt; i++)
ret = aarch64_insn_patch_text_nosync(pp->text_addrs[i],
pp->new_insns[i]);