author      Peter Zijlstra <peterz@infradead.org>      2018-09-19 13:24:41 +0200
committer   Ingo Molnar <mingo@kernel.org>             2019-04-03 10:32:47 +0200
commit      96bc9567cbe112e9320250f01b9c060c882e8619 (patch)
tree        7948ca705fc25538b784b4f28dd62a2c5ec88b97 /mm/mmu_gather.c
parent      asm-generic/tlb, ia64: Conditionally provide tlb_migrate_finish() (diff)
asm-generic/tlb, arch: Invert CONFIG_HAVE_RCU_TABLE_INVALIDATE
Make issuing a TLB invalidate for page-table pages the normal case.
The reason is twofold:
- issuing too many invalidates is safer than issuing too few,
- most architectures use the Linux page-tables natively
  and would thus require this.
Make it an opt-out, instead of an opt-in.
No change in behavior intended.
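To make the inversion concrete, here is a toy, self-contained C sketch; it is not kernel code (the printf() bodies and the standalone main() are illustrative only), but it shows how swapping the opt-in #ifdef for an opt-out #ifndef flips the default for any architecture that selects neither Kconfig symbol:

#include <stdio.h>

/*
 * Toy model of the Kconfig inversion. Uncomment the #define below to
 * simulate an architecture that opts OUT of page-table invalidates:
 */
/* #define CONFIG_HAVE_RCU_TABLE_NO_INVALIDATE */

static void tlb_table_invalidate(void)
{
#ifndef CONFIG_HAVE_RCU_TABLE_NO_INVALIDATE
	/*
	 * New opt-out scheme: the invalidate happens by default, and only
	 * architectures selecting HAVE_RCU_TABLE_NO_INVALIDATE skip it.
	 * Under the old opt-in scheme (#ifdef CONFIG_HAVE_RCU_TABLE_INVALIDATE)
	 * it was skipped unless the architecture explicitly asked for it.
	 */
	printf("TLB invalidate issued (the default)\n");
#else
	printf("TLB invalidate skipped (architecture opted out)\n");
#endif
}

int main(void)
{
	tlb_table_invalidate();
	return 0;
}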
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@surriel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'mm/mmu_gather.c')
-rw-r--r--  mm/mmu_gather.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index 14dfc97155e4..2a5322d52b0a 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -157,7 +157,7 @@ bool __tlb_remove_page_size(struct mmu_gather *tlb, struct page *page, int page_
  */
 static inline void tlb_table_invalidate(struct mmu_gather *tlb)
 {
-#ifdef CONFIG_HAVE_RCU_TABLE_INVALIDATE
+#ifndef CONFIG_HAVE_RCU_TABLE_NO_INVALIDATE
 	/*
 	 * Invalidate page-table caches used by hardware walkers. Then we still
 	 * need to RCU-sched wait while freeing the pages because software
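Note that the hunk above is truncated mid-comment by the page extraction. For reference, a reconstruction of how the whole helper reads after this patch (based on the mm/mmu_gather.c of this era; the tlb_flush_mmu_tlbonly() call is the part the truncated context does not show, so treat it as a best-effort sketch rather than a verbatim quote):

static inline void tlb_table_invalidate(struct mmu_gather *tlb)
{
#ifndef CONFIG_HAVE_RCU_TABLE_NO_INVALIDATE
	/*
	 * Invalidate page-table caches used by hardware walkers. Then we still
	 * need to RCU-sched wait while freeing the pages because software
	 * walkers can still be in-flight.
	 */
	tlb_flush_mmu_tlbonly(tlb);
#endif
}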