author	Dave Hansen <dave.hansen@linux.intel.com>	2018-04-06 22:55:11 +0200
committer	Ingo Molnar <mingo@kernel.org>	2018-04-12 09:05:58 +0200
commit	1a54420aeb4da1ba5b28283aa5696898220c9a27 (patch)
tree	96aeda4d590620c7fde97f61f5067fa73fcf55a4 /arch
parent	x86/mm: Do not auto-massage page protections (diff)
x86/mm: Remove extra filtering in pageattr code
The pageattr code has a mode where it can set or clear PTE bits in
existing PTEs, so the page protections of the *new* PTEs come from
one of two places:

  1. The set/clear masks: cpa->mask_clr / cpa->mask_set
  2. The existing PTE

We filter ->mask_set/clr for supported PTE bits at entry to
__change_page_attr(), so we never need to filter them again.

The only other place permissions can come from is an existing PTE,
and those presumably already have good bits.  We do not need to
filter them again.

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Hugh Dickins <hughd@google.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Kees Cook <keescook@google.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Nadav Amit <namit@vmware.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-mm@kvack.org
Link: http://lkml.kernel.org/r/20180406205511.BC072352@viggo.jf.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
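To illustrate the mechanism the message describes, here is a minimal
user-space sketch (not kernel code; the three PTE bit values match the
x86-64 hardware layout, but compose_prot() and main() are hypothetical
stand-ins) of how a new PTE's protections derive from the existing PTE
plus the two masks:

#include <stdint.h>
#include <stdio.h>

/* Real x86-64 PTE bit positions, reduced to the three used here. */
#define _PAGE_PRESENT	(1ULL << 0)
#define _PAGE_RW	(1ULL << 1)
#define _PAGE_NX	(1ULL << 63)

/*
 * Compose new protections the way the pageattr code does: clear the
 * bits in mask_clr, then set the bits in mask_set.  Every other bit
 * is inherited from the existing PTE, which is why filtering the two
 * masks once at entry is enough.  (Hypothetical helper, not a kernel
 * function.)
 */
static uint64_t compose_prot(uint64_t old_prot, uint64_t mask_set,
			     uint64_t mask_clr)
{
	uint64_t prot = old_prot;

	prot &= ~mask_clr;
	prot |= mask_set;
	return prot;
}

int main(void)
{
	uint64_t old = _PAGE_PRESENT | _PAGE_RW;	/* present, writable */
	/* Make the page read-only and no-execute: */
	uint64_t new = compose_prot(old, _PAGE_NX, _PAGE_RW);

	printf("old=%#llx new=%#llx\n",
	       (unsigned long long)old, (unsigned long long)new);
	return 0;
}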
Diffstat (limited to 'arch')
 arch/x86/mm/pageattr.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)
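For reference, the canon_pgprot() calls deleted in the diff below mask
a present mapping's protection bits against __supported_pte_mask, i.e.
they drop bits (such as _PAGE_NX) that the CPU does not support.  A
rough, self-contained paraphrase of what it did around this kernel
release (not the verbatim definition; the typedefs and the mask
variable are stand-ins for the kernel's own) looks like:

#include <stdint.h>

typedef uint64_t pgprotval_t;
typedef struct { pgprotval_t pgprot; } pgprot_t;
#define pgprot_val(x)	((x).pgprot)
#define __pgprot(x)	((pgprot_t){ (x) })
#define _PAGE_PRESENT	(1ULL << 0)

/* Stand-in for the kernel global that advertises supported PTE bits. */
static pgprotval_t __supported_pte_mask = ~0ULL;

/*
 * Rough paraphrase of canon_pgprot(): for a present mapping, keep
 * only the protection bits the hardware supports.  The patch removes
 * calls to this on paths whose bits are already known-good.
 */
static inline pgprot_t canon_pgprot(pgprot_t prot)
{
	pgprotval_t val = pgprot_val(prot);

	if (val & _PAGE_PRESENT)
		val &= __supported_pte_mask;

	return __pgprot(val);
}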
diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index d3442dfdfced..968f51a2e39b 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -598,7 +598,6 @@ try_preserve_large_page(pte_t *kpte, unsigned long address,
 	req_prot = pgprot_clear_protnone_bits(req_prot);
 	if (pgprot_val(req_prot) & _PAGE_PRESENT)
 		pgprot_val(req_prot) |= _PAGE_PSE;
-	req_prot = canon_pgprot(req_prot);
 
 	/*
 	 * old_pfn points to the large page base pfn. So we need
@@ -718,7 +717,7 @@ __split_large_page(struct cpa_data *cpa, pte_t *kpte, unsigned long address,
 	 */
 	pfn = ref_pfn;
 	for (i = 0; i < PTRS_PER_PTE; i++, pfn += pfninc)
-		set_pte(&pbase[i], pfn_pte(pfn, canon_pgprot(ref_prot)));
+		set_pte(&pbase[i], pfn_pte(pfn, ref_prot));
 
 	if (virt_addr_valid(address)) {
 		unsigned long pfn = PFN_DOWN(__pa(address));
@@ -935,7 +934,6 @@ static void populate_pte(struct cpa_data *cpa,
 	pte = pte_offset_kernel(pmd, start);
 
 	pgprot = pgprot_clear_protnone_bits(pgprot);
-	pgprot = canon_pgprot(pgprot);
 
 	while (num_pages-- && start < end) {
 		set_pte(pte, pfn_pte(cpa->pfn, pgprot));
@@ -1234,7 +1232,7 @@ repeat:
 		 * after all we're only going to change it's attributes
 		 * not the memory it points to
 		 */
-		new_pte = pfn_pte(pfn, canon_pgprot(new_prot));
+		new_pte = pfn_pte(pfn, new_prot);
 		cpa->pfn = pfn;
 		/*
 		 * Do we really change anything ?