author		Borislav Petkov <bp@suse.de>	2017-09-07 11:38:37 +0200
committer	Ingo Molnar <mingo@kernel.org>	2017-09-07 11:53:11 +0200
commit		21d9bb4a05bac50fb4f850517af4030baecd00f6 (patch)
tree		729adf81c36b0c7d2745226f225b6b64fc655eb9 /arch
parent		x86/mm: Document how CR4.PCIDE restore works (diff)
download	linux-21d9bb4a05bac50fb4f850517af4030baecd00f6.tar.xz
		linux-21d9bb4a05bac50fb4f850517af4030baecd00f6.zip
x86/mm: Make the SME mask a u64
The SME encryption mask is for masking 64-bit pagetable entries. Being
an unsigned long, it works fine on X86_64, but on 32-bit builds it
truncates bits, leading to Xen guests crashing very early.
And regardless, the whole SME mask handling shouldn't have leaked into
32-bit code, because SME is an X86_64-only feature. So, first, make the
mask a u64. Then add trivial 32-bit versions of the __sme_* macros so
that nothing happens there.
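
[Editor's note] The truncation described above is easy to demonstrate
outside the kernel. Below is a minimal userspace sketch (illustrative
names, not kernel code; compile with gcc -m32 so that unsigned long is
32 bits wide): ANDing a 64-bit PAE pagetable entry with the complement
of a 32-bit mask zero-extends the mask and silently clears the entry's
upper half, which is how an unsigned long sme_me_mask could break a
32-bit Xen guest very early in boot.

#include <stdint.h>
#include <stdio.h>

/* Illustrative stand-ins for the kernel's mask: the pre-fix definition
 * was 0UL (32 bits wide on a 32-bit build), the fixed one is 0ULL (u64).
 */
#define sme_me_mask_old	0x0UL
#define sme_me_mask_new	0x0ULL

/* Modeled on __sme_clr(x): strip the encryption mask from a value. */
#define sme_clr_old(x)	((x) & ~sme_me_mask_old)
#define sme_clr_new(x)	((x) & ~sme_me_mask_new)

int main(void)
{
	/* A 64-bit PAE pagetable entry with the NX bit (bit 63) set. */
	uint64_t pte = (1ULL << 63) | 0x1067;

	/* With -m32, ~0UL is only 0xffffffff; it zero-extends to 64 bits,
	 * so the AND wipes the NX bit and the upper page-frame bits.
	 */
	printf("old mask: %#llx\n", (unsigned long long)sme_clr_old(pte));

	/* ~0ULL keeps all 64 bits, so the entry survives intact. */
	printf("new mask: %#llx\n", (unsigned long long)sme_clr_new(pte));
	return 0;
}

On an x86-64 build both variants behave identically, which is why the
breakage surfaced only on 32-bit.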
Reported-and-tested-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Tested-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Tom Lendacky <Thomas.Lendacky@amd.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas <Thomas.Lendacky@amd.com>
Fixes: 21729f81ce8a ("x86/mm: Provide general kernel support for memory encryption")
Link: http://lkml.kernel.org/r/20170907093837.76zojtkgebwtqc74@pd.tnic
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'arch')
-rw-r--r--	arch/x86/include/asm/mem_encrypt.h	4 ++--
-rw-r--r--	arch/x86/mm/mem_encrypt.c	2 +-
2 files changed, 3 insertions, 3 deletions
diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index 8e618fcf1f7c..6a77c63540f7 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -21,7 +21,7 @@
 
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 
-extern unsigned long sme_me_mask;
+extern u64 sme_me_mask;
 
 void sme_encrypt_execute(unsigned long encrypted_kernel_vaddr,
 			 unsigned long decrypted_kernel_vaddr,
@@ -49,7 +49,7 @@ void swiotlb_set_mem_attributes(void *vaddr, unsigned long size);
 
 #else	/* !CONFIG_AMD_MEM_ENCRYPT */
 
-#define sme_me_mask	0UL
+#define sme_me_mask	0ULL
 
 static inline void __init sme_early_encrypt(resource_size_t paddr,
 					    unsigned long size) { }
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 0fbd09269757..3fcc8e01683b 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -37,7 +37,7 @@ static char sme_cmdline_off[] __initdata = "off";
  * reside in the .data section so as not to be zeroed out when the .bss
  * section is later cleared.
  */
-unsigned long sme_me_mask __section(.data) = 0;
+u64 sme_me_mask __section(.data) = 0;
 EXPORT_SYMBOL_GPL(sme_me_mask);
 
 /* Buffer used for early in-place encryption by BSP, no locking needed */
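
[Editor's note] The "trivial 32-bit versions of the __sme_* macros"
mentioned in the message live in include/linux/mem_encrypt.h and
therefore do not appear in the arch/-limited diffstat above. A sketch
of their shape, assuming the usual CONFIG_AMD_MEM_ENCRYPT guard (an
illustration, not the verbatim hunk):

#ifdef CONFIG_AMD_MEM_ENCRYPT
/* With SME available, apply/remove the encryption mask from a value
 * (e.g. a pagetable entry).
 */
#define __sme_set(x)		((x) | sme_me_mask)
#define __sme_clr(x)		((x) & ~sme_me_mask)
#else
/* Without SME (which includes all 32-bit builds) these are no-ops,
 * so nothing there ever touches sme_me_mask.
 */
#define __sme_set(x)		(x)
#define __sme_clr(x)		(x)
#endif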