author	James Morse <james.morse@arm.com>	2021-01-25 20:19:12 +0100
committer	Will Deacon <will@kernel.org>	2021-01-27 16:41:12 +0100
commit	1401bef703a48cf79c674215f702a1435362beae (patch)
tree	4cf112ae7b4d421531654a377f345aa036c4a102
parent	arm64: trans_pgd: pass NULL instead of init_mm to *_populate functions (diff)
arm64: mm: Always update TCR_EL1 from __cpu_set_tcr_t0sz()
Because only the idmap sets a non-standard T0SZ, __cpu_set_tcr_t0sz() can check for platforms that need to do this using __cpu_uses_extended_idmap() before doing its work. The idmap is only built with enough levels (and T0SZ bits) to map its single page.

To allow hibernate, and then kexec, to idmap their single-page copy routines, __cpu_set_tcr_t0sz() needs to consider additional users, who may need a different number of levels/T0SZ bits to the idmap (i.e. VA_BITS may be enough for the idmap, but not for hibernate/kexec).

Always read TCR_EL1, and check whether any work needs doing for this request. __cpu_uses_extended_idmap() remains, as it is used by KVM, whose idmap is also part of the kernel image.

This mostly affects the cpuidle path, where we now get an extra system register read.

CC: Lorenzo Pieralisi <Lorenzo.Pieralisi@arm.com>
CC: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Link: https://lore.kernel.org/r/20210125191923.1060122-8-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
 arch/arm64/include/asm/mmu_context.h | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)
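For readability, here is a sketch of how __cpu_set_tcr_t0sz() reads with this patch applied, reconstructed from the hunk below (anything following the final register write, such as a synchronisation barrier, lies outside the hunk's context and is omitted):

static inline void __cpu_set_tcr_t0sz(unsigned long t0sz)
{
	unsigned long tcr = read_sysreg(tcr_el1);

	/* Nothing to do if TCR_EL1.T0SZ already holds the requested value. */
	if ((tcr & TCR_T0SZ_MASK) >> TCR_T0SZ_OFFSET == t0sz)
		return;

	/* Otherwise read-modify-write the T0SZ field back into TCR_EL1. */
	tcr &= ~TCR_T0SZ_MASK;
	tcr |= t0sz << TCR_T0SZ_OFFSET;
	write_sysreg(tcr, tcr_el1);
}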
diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index 0b3079fd28eb..70ce8c1d2b07 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -81,16 +81,15 @@ static inline bool __cpu_uses_extended_idmap_level(void)
 }
 
 /*
- * Set TCR.T0SZ to its default value (based on VA_BITS)
+ * Ensure TCR.T0SZ is set to the provided value.
  */
 static inline void __cpu_set_tcr_t0sz(unsigned long t0sz)
 {
-	unsigned long tcr;
+	unsigned long tcr = read_sysreg(tcr_el1);
 
-	if (!__cpu_uses_extended_idmap())
+	if ((tcr & TCR_T0SZ_MASK) >> TCR_T0SZ_OFFSET == t0sz)
 		return;
 
-	tcr = read_sysreg(tcr_el1);
 	tcr &= ~TCR_T0SZ_MASK;
 	tcr |= t0sz << TCR_T0SZ_OFFSET;
 	write_sysreg(tcr, tcr_el1);
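As background (an illustration, not code from this patch): the architectural T0SZ field encodes the TTBR0 virtual address range as 64 minus the number of VA bits, which is why a user needing a larger VA range than the idmap also needs a different T0SZ value (and possibly more translation levels). A minimal, self-contained sketch of that relationship, using a hypothetical t0sz_for_va_bits() helper:

#include <stdio.h>

/* Hypothetical helper: T0SZ = 64 - number of virtual address bits. */
static unsigned long t0sz_for_va_bits(unsigned long va_bits)
{
	return 64UL - va_bits;
}

int main(void)
{
	/* Example VA sizes: a smaller address range gives a larger T0SZ. */
	printf("VA_BITS=39 -> T0SZ=%lu\n", t0sz_for_va_bits(39));
	printf("VA_BITS=48 -> T0SZ=%lu\n", t0sz_for_va_bits(48));
	printf("VA_BITS=52 -> T0SZ=%lu\n", t0sz_for_va_bits(52));
	return 0;
}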