author    | Will Deacon <will.deacon@arm.com>          | 2013-12-17 19:17:54 +0100
committer | Russell King <rmk+kernel@arm.linux.org.uk> | 2013-12-29 13:46:49 +0100
commit    | 5d49750933210457ec2d5e8507823e365b2604fb (patch)
tree      | a7894fc890c1ba6ab4da678505f1a8f60fd8d033 /arch/arm/mm
parent    | ARM: 7925/1: mm: keep track of last ASID allocation to improve bitmap searching (diff)
download  | linux-5d49750933210457ec2d5e8507823e365b2604fb.tar.xz
          | linux-5d49750933210457ec2d5e8507823e365b2604fb.zip
ARM: 7926/1: mm: flesh out and fix the comments in the ASID allocator
The ASID allocator has to deal with some pretty horrible behaviours by
the CPU, so expand on some of the comments in there so I remember why
we can never allocate ASID zero to a userspace task.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
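To illustrate why ASID #0 can never be handed to a userspace task, here is a small, hedged sketch of the context-ID scheme the allocator uses: a rollover "generation" lives in the upper bits and the hardware ASID in the lower bits, with index 0 kept back as the reserved value. This is not the kernel's code; ASID_BITS is assumed to be 8 and the function names are purely illustrative.

#include <stdint.h>
#include <stdio.h>

/* Assumed 8 ASID bits, purely for illustration. */
#define ASID_BITS          8
#define ASID_MASK          ((1ULL << ASID_BITS) - 1)
#define ASID_FIRST_VERSION (1ULL << ASID_BITS)

/* Pack a rollover generation (upper bits) with a hardware ASID (lower bits). */
static uint64_t make_context_id(uint64_t generation, uint64_t asid)
{
        /* asid must be 1..ASID_MASK; 0 stays reserved for the TTBR0 switch */
        return generation | asid;
}

int main(void)
{
        uint64_t id = make_context_id(2 * ASID_FIRST_VERSION, 5);

        /* Hardware only sees the low ASID bits; the generation detects rollover. */
        printf("context id %#llx -> hw asid %llu\n",
               (unsigned long long)id, (unsigned long long)(id & ASID_MASK));
        return 0;
}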
Diffstat (limited to 'arch/arm/mm')
-rw-r--r-- | arch/arm/mm/context.c | 16
1 files changed, 10 insertions, 6 deletions
diff --git a/arch/arm/mm/context.c b/arch/arm/mm/context.c
index 52e6f13ac9c7..6eb97b3a7481 100644
--- a/arch/arm/mm/context.c
+++ b/arch/arm/mm/context.c
@@ -36,8 +36,8 @@
  * The context ID is used by debuggers and trace logic, and
  * should be unique within all running processes.
  *
- * In big endian operation, the two 32 bit words are swapped if accesed by
- * non 64-bit operations.
+ * In big endian operation, the two 32 bit words are swapped if accessed
+ * by non-64-bit operations.
  */
 #define ASID_FIRST_VERSION      (1ULL << ASID_BITS)
 #define NUM_USER_ASIDS          ASID_FIRST_VERSION
@@ -195,8 +195,11 @@ static u64 new_context(struct mm_struct *mm, unsigned int cpu)
                 * Allocate a free ASID. If we can't find one, take a
                 * note of the currently active ASIDs and mark the TLBs
                 * as requiring flushes. We always count from ASID #1,
-                * as we reserve ASID #0 to switch via TTBR0 and indicate
-                * rollover events.
+                * as we reserve ASID #0 to switch via TTBR0 and to
+                * avoid speculative page table walks from hitting in
+                * any partial walk caches, which could be populated
+                * from overlapping level-1 descriptors used to map both
+                * the module area and the userspace stack.
                 */
                asid = find_next_zero_bit(asid_map, NUM_USER_ASIDS, cur_idx);
                if (asid == NUM_USER_ASIDS) {
@@ -224,8 +227,9 @@ void check_and_switch_context(struct mm_struct *mm, struct task_struct *tsk)
                __check_vmalloc_seq(mm);
 
        /*
-        * Required during context switch to avoid speculative page table
-        * walking with the wrong TTBR.
+        * We cannot update the pgd and the ASID atomicly with classic
+        * MMU, so switch exclusively to global mappings to avoid
+        * speculative page table walking with the wrong TTBR.
        */
        cpu_set_reserved_ttbr0();
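The allocation policy the new comment describes can be summarised with a minimal, self-contained sketch: search the ASID bitmap from index 1 (index 0 is the reserved ASID), and on exhaustion bump the generation, clear the map and flush. The names, the single-CPU bitmap, and the flush_all_tlbs() placeholder are assumptions for illustration, not the kernel's actual API or locking.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define ASID_BITS      8
#define NUM_USER_ASIDS (1u << ASID_BITS)

static uint8_t  asid_map[NUM_USER_ASIDS];      /* 1 = ASID in use             */
static uint64_t generation = NUM_USER_ASIDS;   /* i.e. ASID_FIRST_VERSION     */

static void flush_all_tlbs(void)
{
        /* Placeholder: the real allocator marks all CPUs as needing a flush. */
}

static unsigned int find_free_asid(unsigned int start)
{
        for (unsigned int i = start; i < NUM_USER_ASIDS; i++)
                if (!asid_map[i])
                        return i;
        return NUM_USER_ASIDS;
}

static uint64_t new_context_sketch(void)
{
        /* Never hand out index 0: it is kept as the reserved ASID. */
        unsigned int asid = find_free_asid(1);

        if (asid == NUM_USER_ASIDS) {
                /* Rollover: bump the generation, forget old allocations, flush. */
                generation += NUM_USER_ASIDS;
                memset(asid_map, 0, sizeof(asid_map));
                flush_all_tlbs();
                asid = find_free_asid(1);
        }

        asid_map[asid] = 1;
        return generation | asid;
}

int main(void)
{
        for (int i = 0; i < 3; i++)
                printf("allocated context id %#llx\n",
                       (unsigned long long)new_context_sketch());
        return 0;
}

The sketch keeps the key property from the diff: ASID values are always drawn from [1, NUM_USER_ASIDS), so the reserved value 0 programmed around the TTBR0 switch can never collide with a userspace task's ASID.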