path: root/arch/powerpc/mm/pgtable-book3s64.c
author     Reza Arbab <arbab@linux.vnet.ibm.com>      2017-01-16 20:07:45 +0100
committer  Michael Ellerman <mpe@ellerman.id.au>      2017-01-31 03:54:19 +0100
commit     4b5d62ca17a1cd2ffc8399e1d1c3ebbabf16e78f (patch)
tree       a24e47859c6703fc4092001cf8c98f0d074e97c7 /arch/powerpc/mm/pgtable-book3s64.c
parent     powerpc/mm: add radix__create_section_mapping() (diff)
powerpc/mm: add radix__remove_section_mapping()
Tear down and free the four-level page tables of physical mappings during memory hot-remove.

Borrow the basic structure of remove_pagetable() and friends from the identically-named x86 functions.

Reduce the frequency of TLB flushes and page_table_lock spinlocks by only doing them in the outermost function. There was some question as to whether the locking is needed at all; leave it for now, but we could consider dropping it.

Memory must be offline to be removed, and thus not in use, so there shouldn't be the sort of concurrent page-walking activity here that might prompt us to use RCU.

Signed-off-by: Reza Arbab <arbab@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
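For context, the structure the message describes might look roughly like the sketch below. It is illustrative, not the literal patch body: the inner helpers (remove_pud_table() and friends, mirroring the x86 names the message references) are assumed, and the single lock/flush in the outermost function is the point being made.

static void remove_pagetable(unsigned long start, unsigned long end)
{
	unsigned long addr, next;
	pud_t *pud_base;
	pgd_t *pgd;

	/* Take the spinlock once, in the outermost function ... */
	spin_lock(&init_mm.page_table_lock);

	for (addr = start; addr < end; addr = next) {
		next = pgd_addr_end(addr, end);

		pgd = pgd_offset_k(addr);
		if (!pgd_present(*pgd))
			continue;

		/*
		 * Descend and free the pud/pmd/pte levels; the recursion
		 * follows the shape of x86's identically-named helpers.
		 */
		pud_base = (pud_t *)pgd_page_vaddr(*pgd);
		remove_pud_table(pud_base, addr, next);
	}

	spin_unlock(&init_mm.page_table_lock);

	/* ... and flush the TLB once for the whole range. */
	radix__flush_tlb_kernel_range(start, end);
}

int radix__remove_section_mapping(unsigned long start, unsigned long end)
{
	remove_pagetable(start, end);
	return 0;
}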
Diffstat (limited to 'arch/powerpc/mm/pgtable-book3s64.c')
-rw-r--r--  arch/powerpc/mm/pgtable-book3s64.c  2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/mm/pgtable-book3s64.c b/arch/powerpc/mm/pgtable-book3s64.c
index 2b13f6b87e25..b798ff674fab 100644
--- a/arch/powerpc/mm/pgtable-book3s64.c
+++ b/arch/powerpc/mm/pgtable-book3s64.c
@@ -139,7 +139,7 @@ int create_section_mapping(unsigned long start, unsigned long end)
int remove_section_mapping(unsigned long start, unsigned long end)
{
if (radix_enabled())
- return -ENODEV;
+ return radix__remove_section_mapping(start, end);
return hash__remove_section_mapping(start, end);
}