author | Christophe Leroy <christophe.leroy@c-s.fr> | 2019-08-16 07:41:43 +0200 |
---|---|---|
committer | Michael Ellerman <mpe@ellerman.id.au> | 2019-08-20 13:22:14 +0200 |
commit | f49f4e2b68b683491263e92c229ff344d44759a7 (patch) | |
tree | 28107e7aa2ed7e8314f7733ea7fc5b1d6158ced2 /arch/powerpc/mm/book3s32/mmu.c | |
parent | powerpc/mm: move update_mmu_cache() into book3s hash utils. (diff) | |
download | linux-f49f4e2b68b683491263e92c229ff344d44759a7.tar.xz linux-f49f4e2b68b683491263e92c229ff344d44759a7.zip |
powerpc/mm: Simplify update_mmu_cache() on BOOK3S32
On BOOK3S32, hash_preload() uses neither is_exec nor trap,
so drop those parameters and simplify update_mmu_cache().
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/35f143c6fe29f9fd25c7f3cd4448ae401029ce3c.1565933217.git.christophe.leroy@c-s.fr
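
In caller terms, the change amounts to dropping the last two arguments of the book3s32 hash_preload() prototype. A minimal before/after sketch, taken from the hunks in the diff below (the declaration layout here is illustrative only):

```c
/* Before this commit: the caller had to classify the fault itself. */
void hash_preload(struct mm_struct *mm, unsigned long ea,
		  bool is_exec, unsigned long trap);

/* After this commit: book3s32 hash_preload() ignores is_exec/trap, so they are dropped. */
void hash_preload(struct mm_struct *mm, unsigned long ea);
```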
Diffstat (limited to 'arch/powerpc/mm/book3s32/mmu.c')
-rw-r--r-- | arch/powerpc/mm/book3s32/mmu.c | 30 |
1 file changed, 7 insertions, 23 deletions
diff --git a/arch/powerpc/mm/book3s32/mmu.c b/arch/powerpc/mm/book3s32/mmu.c
index 1e77dca8b497..5d7d35eb96fb 100644
--- a/arch/powerpc/mm/book3s32/mmu.c
+++ b/arch/powerpc/mm/book3s32/mmu.c
@@ -297,8 +297,7 @@ void __init setbat(int index, unsigned long virt, phys_addr_t phys,
 /*
  * Preload a translation in the hash table
  */
-void hash_preload(struct mm_struct *mm, unsigned long ea,
-                  bool is_exec, unsigned long trap)
+void hash_preload(struct mm_struct *mm, unsigned long ea)
 {
        pmd_t *pmd;
 
@@ -324,35 +323,20 @@ void update_mmu_cache(struct vm_area_struct *vma, unsigned long address,
         * We don't need to worry about _PAGE_PRESENT here because we are
         * called with either mm->page_table_lock held or ptl lock held
         */
-       unsigned long trap;
-       bool is_exec;
 
        /* We only want HPTEs for linux PTEs that have _PAGE_ACCESSED set */
        if (!pte_young(*ptep) || address >= TASK_SIZE)
                return;
 
-       /*
-        * We try to figure out if we are coming from an instruction
-        * access fault and pass that down to __hash_page so we avoid
-        * double-faulting on execution of fresh text. We have to test
-        * for regs NULL since init will get here first thing at boot.
-        *
-        * We also avoid filling the hash if not coming from a fault.
-        */
+       /* We have to test for regs NULL since init will get here first thing at boot */
+       if (!current->thread.regs)
+               return;
 
-       trap = current->thread.regs ? TRAP(current->thread.regs) : 0UL;
-       switch (trap) {
-       case 0x300:
-               is_exec = false;
-               break;
-       case 0x400:
-               is_exec = true;
-               break;
-       default:
+       /* We also avoid filling the hash if not coming from a fault */
+       if (TRAP(current->thread.regs) != 0x300 && TRAP(current->thread.regs) != 0x400)
                return;
-       }
 
-       hash_preload(vma->vm_mm, address, is_exec, trap);
+       hash_preload(vma->vm_mm, address);
 }
 
 /*
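
For readability, here is a minimal sketch of update_mmu_cache() as it reads once this hunk is applied, reconstructed from the context and '+' lines above. The pte_t *ptep parameter and the opening of the leading comment are assumptions (the hunk header truncates the signature and starts mid-comment), and any code above the hunk's first context line is simply omitted.

```c
/*
 * Sketch of the patched book3s32 update_mmu_cache(), reconstructed from the
 * diff. The ptep parameter is inferred from the pte_young(*ptep) check.
 */
void update_mmu_cache(struct vm_area_struct *vma, unsigned long address,
		      pte_t *ptep)
{
	/*
	 * We don't need to worry about _PAGE_PRESENT here because we are
	 * called with either mm->page_table_lock held or ptl lock held
	 */

	/* We only want HPTEs for linux PTEs that have _PAGE_ACCESSED set */
	if (!pte_young(*ptep) || address >= TASK_SIZE)
		return;

	/* We have to test for regs NULL since init will get here first thing at boot */
	if (!current->thread.regs)
		return;

	/* Only fill the hash when coming from a data (0x300) or instruction (0x400) fault */
	if (TRAP(current->thread.regs) != 0x300 && TRAP(current->thread.regs) != 0x400)
		return;

	hash_preload(vma->vm_mm, address);
}
```

Since book3s32's hash_preload() never looked at is_exec or trap, the fault classification that the old switch statement performed is reduced to a single guard that the call is coming from a storage-interrupt fault at all, and both local variables disappear.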