author		Kefeng Wang <wangkefeng.wang@huawei.com>	2024-04-03 10:38:03 +0200
committer	Palmer Dabbelt <palmer@rivosinc.com>	2024-05-23 01:12:56 +0200
commit		4c6c0020427a4547845a83f7e4d6085e16c3e24f (patch)
tree		bbdc6af233a0483e0b4b4bb160e7aa608bf1ba66
parent		riscv: uaccess: Relax the threshold for fast path (diff)
riscv: mm: accelerate pagefault when badaccess
The access_error() check on the vma has already been done under the
per-VMA lock: if it is a bad access, handle the error directly; there
is no need to retry with mmap_lock again. Since the page fault is
handled under the per-VMA lock, count it as a vma lock event with
VMA_LOCK_SUCCESS.
Reviewed-by: Suren Baghdasaryan <surenb@google.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Reviewed-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Tested-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Link: https://lore.kernel.org/r/20240403083805.1818160-6-wangkefeng.wang@huawei.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
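
For context, the per-VMA-lock fast path this patch touches is structured
roughly as follows. This is a condensed, paraphrased sketch of
arch/riscv/mm/fault.c:handle_page_fault(), not verbatim kernel source;
surrounding error handling and retry accounting are elided:

	/* Fast path: look up and read-lock only the faulting VMA under
	 * RCU, without taking mmap_lock at all. */
	vma = lock_vma_under_rcu(mm, addr);
	if (!vma)
		goto lock_mmap;		/* fall back to the slow path */

	if (unlikely(access_error(cause, vma))) {
		/* A permission error is final: retrying under mmap_lock
		 * cannot change the outcome, so report it directly
		 * (this patch). */
		vma_end_read(vma);
		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
		tsk->thread.bad_cause = cause;
		bad_area_nosemaphore(regs, SEGV_ACCERR, addr);
		return;
	}

	fault = handle_mm_fault(vma, addr, flags | FAULT_FLAG_VMA_LOCK, regs);
	/* ... VM_FAULT_RETRY / VM_FAULT_COMPLETED handling ... */

lock_mmap:
	/* Slow path: retry the fault under the full mmap_lock. */
	mmap_read_lock(mm);
	/* ... */

Before this patch, the access_error() branch ended in the goto to the
slow path, forcing a second VMA walk under mmap_lock even though the
result could not change.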
-rw-r--r--	arch/riscv/mm/fault.c	5
1 files changed, 4 insertions, 1 deletions
diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
index 5224f3733802..b3fcf7d67efb 100644
--- a/arch/riscv/mm/fault.c
+++ b/arch/riscv/mm/fault.c
@@ -293,7 +293,10 @@ void handle_page_fault(struct pt_regs *regs)
 
 	if (unlikely(access_error(cause, vma))) {
 		vma_end_read(vma);
-		goto lock_mmap;
+		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
+		tsk->thread.bad_cause = cause;
+		bad_area_nosemaphore(regs, SEGV_ACCERR, addr);
+		return;
 	}
 
 	fault = handle_mm_fault(vma, addr, flags | FAULT_FLAG_VMA_LOCK, regs);
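
Two details in the added lines are easy to conflate: tsk->thread.bad_cause
records the raw RISC-V exception cause for later debugging, while
SEGV_ACCERR is the POSIX si_code that userspace sees in siginfo. As a
quick illustration, here is a hypothetical userspace demo (not part of
the patch): writing to a read-only mapping takes exactly this
access_error() path and is delivered as SIGSEGV with si_code ==
SEGV_ACCERR.

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

static void segv_handler(int sig, siginfo_t *info, void *ucontext)
{
	/* write() is async-signal-safe; stdio is avoided on purpose. */
	const char *msg = (info->si_code == SEGV_ACCERR)
		? "SIGSEGV, si_code=SEGV_ACCERR (bad permissions)\n"
		: "SIGSEGV, si_code is not SEGV_ACCERR\n";
	write(STDERR_FILENO, msg, strlen(msg));
	_exit(0);
}

int main(void)
{
	struct sigaction sa;
	memset(&sa, 0, sizeof(sa));
	sa.sa_sigaction = segv_handler;
	sa.sa_flags = SA_SIGINFO;
	if (sigaction(SIGSEGV, &sa, NULL) != 0) {
		perror("sigaction");
		return 1;
	}

	/* The mapping exists, so this is not SEGV_MAPERR; writing to a
	 * read-only page is a permission fault, reported as SEGV_ACCERR. */
	volatile char *p = mmap(NULL, 4096, PROT_READ,
				MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	p[0] = 1;	/* store to a read-only page -> access_error() path */
	return 0;
}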