| author | Will Deacon <will.deacon@arm.com> | 2015-10-06 19:46:24 +0200 |
|---|---|---|
| committer | Catalin Marinas <catalin.marinas@arm.com> | 2015-10-07 12:55:41 +0200 |
| commit | 5aec715d7d3122f77cabaa7578d9d25a0c1ed20e | |
| tree | 8d75ae3f1f72bfa8ee77fdea406b6c9dcfaf4e60 | |
| parent | arm64: flush: use local TLB and I-cache invalidation | |
arm64: mm: rewrite ASID allocator and MM context-switching code
Our current switch_mm implementation suffers from a number of problems:
(1) The ASID allocator relies on IPIs to synchronise the CPUs on a rollover event.
(2) Because of (1), we cannot allocate ASIDs with interrupts disabled, and therefore make use of a TIF_SWITCH_MM flag to postpone the actual switch to finish_arch_post_lock_switch.
(3) We run context switch with a reserved (invalid) TTBR0 value, even though the ASID and pgd are updated atomically.
(4) We take a global spinlock (cpu_asid_lock) during context-switch.
(5) We use h/w broadcast TLB operations when they are not required (e.g. in flush_context).
This patch addresses these problems by rewriting the ASID algorithm to
match the bitmap-based arch/arm/ implementation more closely. This in
turn allows us to remove many of the complications surrounding switch_mm,
including the ugly thread flag.
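The bitmap-based scheme referred to above can be sketched as follows. This is a minimal, single-threaded illustration only: the names (NUM_ASIDS, asid_map, new_context, check_and_switch_context) mirror the kernel's but the bodies are simplified assumptions. The real allocator is per-CPU, keeps the fast path lock-free with atomics, preserves each CPU's active ASID across a rollover, and flags every CPU for a local TLB flush instead of using IPIs or broadcast TLB operations.

```c
/* Hypothetical sketch of a generation + bitmap ASID allocator, modelled
 * loosely on the arch/arm scheme this patch ports to arm64. */
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define ASID_BITS  8                      /* toy size; hardware uses 8 or 16 */
#define NUM_ASIDS  (1u << ASID_BITS)
#define ASID_MASK  ((uint64_t)NUM_ASIDS - 1)
#define GEN_FIRST  ((uint64_t)NUM_ASIDS)  /* generation lives above the ASID bits */

static uint64_t asid_generation = GEN_FIRST;
static uint8_t  asid_map[NUM_ASIDS];      /* one byte per ASID for clarity */
static unsigned cur_idx = 1;              /* ASID 0 is reserved */

/* Rollover: bump the generation and forget all allocations. */
static void flush_context(void)
{
    asid_generation += NUM_ASIDS;
    memset(asid_map, 0, sizeof(asid_map));
    cur_idx = 1;
    /* real kernel: mark every CPU as needing a *local* TLB flush here */
}

/* Allocate a fresh ASID for an mm whose ASID is from an old generation. */
static uint64_t new_context(uint64_t old_asid)
{
    unsigned idx;

    /* Try to keep the same ASID number across a rollover if it is free. */
    if (old_asid) {
        idx = (unsigned)(old_asid & ASID_MASK);
        if (!asid_map[idx]) {
            asid_map[idx] = 1;
            return asid_generation | idx;
        }
    }
    for (idx = cur_idx; idx < NUM_ASIDS; idx++) {
        if (!asid_map[idx]) {
            asid_map[idx] = 1;
            cur_idx = idx + 1;
            return asid_generation | idx;
        }
    }
    /* Bitmap exhausted: roll over and retry from a clean map. */
    flush_context();
    idx = cur_idx++;
    asid_map[idx] = 1;
    return asid_generation | idx;
}

/* Fast path of the context switch: if the mm's ASID is from the current
 * generation it can be used directly; otherwise allocate a new one.
 * Returns the hardware ASID to program alongside the new pgd. */
static uint64_t check_and_switch_context(uint64_t *mm_asid)
{
    if ((*mm_asid & ~ASID_MASK) != asid_generation)
        *mm_asid = new_context(*mm_asid);
    return *mm_asid & ASID_MASK;
}
```

Because a stale generation is detected by a simple compare on the context-switch path, no global spinlock or deferred TIF_SWITCH_MM step is needed in the common case; the slow path only runs on rollover.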
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>