author		Peter Zijlstra <peterz@infradead.org>	2020-02-04 02:36:53 +0100
committer	Linus Torvalds <torvalds@linux-foundation.org>	2020-02-04 04:05:26 +0100
commit		0758cd8304942292e95a0f750c374533db378b32 (patch)
tree		fdfe709ddee4d1781db6a36fbdea1c31a60ebc01 /include/asm-generic/bugs.h
parent		mm/mmu_gather: invalidate TLB correctly on batch allocation failure and flush (diff)
download	linux-0758cd8304942292e95a0f750c374533db378b32.tar.xz
		linux-0758cd8304942292e95a0f750c374533db378b32.zip
asm-generic/tlb: avoid potential double flush
Aneesh reported that:

	tlb_flush_mmu()
	  tlb_flush_mmu_tlbonly()
	    tlb_flush()			<-- #1
	  tlb_flush_mmu_free()
	    tlb_table_flush()
	      tlb_table_invalidate()
	        tlb_flush_mmu_tlbonly()
	          tlb_flush()		<-- #2

does two TLBIs when tlb->fullmm, because __tlb_reset_range() will not
clear tlb->end in that case.

Observe that any caller to __tlb_adjust_range() also sets at least one of
the tlb->freed_tables || tlb->cleared_p* bits, and those are
unconditionally cleared by __tlb_reset_range().

Change the condition for actually issuing TLBI to having one of those
bits set, as opposed to having tlb->end != 0.

Link: http://lkml.kernel.org/r/20200116064531.483522-4-aneesh.kumar@linux.ibm.com
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Reported-by: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'include/asm-generic/bugs.h')
0 files changed, 0 insertions, 0 deletions