| author | Martin Schwidefsky <schwidefsky@de.ibm.com> | 2016-02-04 12:24:46 +0100 |
|---|---|---|
| committer | Martin Schwidefsky <schwidefsky@de.ibm.com> | 2016-02-23 08:56:17 +0100 |
| commit | 007ccec53da35528bd06fa0063da55b1311054c1 (patch) | |
| tree | a08ccb42cad189f7a4ca7bee843972a2227128bb /arch/s390/mm/pageattr.c | |
| parent | s390/xor: optimized xor routing using the XC instruction (diff) | |
s390/pageattr: do a single TLB flush for change_page_attr
The change of the access rights for an address range in the kernel
address space is currently done with a loop of IPTE plus a store of the
modified PTE. Between the IPTE and the store the PTE is invalid; this
intermediate state can cause problems for concurrent accesses.
Consider a change of a kernel area from read-write to read-only: a
concurrent reader of that area should be fine, but with the invalid
PTE it might get an unexpected exception.
Remove the IPTE for each PTE and do a single global TLB flush after all
PTEs have been modified, as sketched below.
Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
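
For illustration, here is a minimal sketch of the two update orders described above. It is not the kernel source: walk_page_table(), __ptep_ipte() and __tlb_flush_kernel() are the helpers named in the patch, but declarations and error handling are elided and the loop bodies are simplified.

```c
/*
 * Sketch only (simplified from the diff below): contrast of the old
 * and new ways of changing protection bits on a range of kernel PTEs.
 */

/* Old scheme: invalidate each PTE with IPTE, then store the modified
 * value. Between __ptep_ipte() and the store the PTE is invalid, so a
 * concurrent reader of a still-readable area can take an unexpected
 * fault.
 */
for (i = 0; i < numpages; i++) {
	ptep = walk_page_table(addr);
	pte = set(*ptep);          /* e.g. clear the write bit      */
	__ptep_ipte(addr, ptep);   /* PTE invalid from here ...     */
	*ptep = pte;               /* ... until this store lands    */
	addr += PAGE_SIZE;
}

/* New scheme: rewrite each PTE in place (it stays valid throughout)
 * and flush the kernel TLB once after the whole range is done.
 */
for (i = 0; i < numpages; i++) {
	ptep = walk_page_table(addr);
	*ptep = set(*ptep);
	addr += PAGE_SIZE;
}
__tlb_flush_kernel();
```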
Diffstat (limited to 'arch/s390/mm/pageattr.c')
-rw-r--r-- | arch/s390/mm/pageattr.c | 8 |
1 file changed, 3 insertions, 5 deletions
```diff
diff --git a/arch/s390/mm/pageattr.c b/arch/s390/mm/pageattr.c
index 749c98407b41..f2a5c29a97e9 100644
--- a/arch/s390/mm/pageattr.c
+++ b/arch/s390/mm/pageattr.c
@@ -65,19 +65,17 @@ static pte_t *walk_page_table(unsigned long addr)
 static void change_page_attr(unsigned long addr, int numpages,
			      pte_t (*set) (pte_t))
 {
-	pte_t *ptep, pte;
+	pte_t *ptep;
 	int i;
 
 	for (i = 0; i < numpages; i++) {
 		ptep = walk_page_table(addr);
 		if (WARN_ON_ONCE(!ptep))
 			break;
-		pte = *ptep;
-		pte = set(pte);
-		__ptep_ipte(addr, ptep);
-		*ptep = pte;
+		*ptep = set(*ptep);
 		addr += PAGE_SIZE;
 	}
+	__tlb_flush_kernel();
 }
 
 int set_memory_ro(unsigned long addr, int numpages)
```
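
As a usage note, here is a hedged sketch of how this path is typically driven: the exported wrappers pass a generic PTE helper as the set callback, so one call now results in exactly one kernel TLB flush for the whole range instead of one IPTE per page. Only set_memory_ro() is visible in the hunk context above; the wrapper bodies and the pte_wrprotect()/pte_mkwrite() callbacks are an assumption, not quoted from the patch.

```c
/* Assumed wrapper bodies for illustration; the actual definitions in
 * arch/s390/mm/pageattr.c are not part of this hunk.
 */
int set_memory_ro(unsigned long addr, int numpages)
{
	change_page_attr(addr, numpages, pte_wrprotect);
	return 0;
}

int set_memory_rw(unsigned long addr, int numpages)
{
	change_page_attr(addr, numpages, pte_mkwrite);
	return 0;
}
```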