author	Mike Rapoport <rppt@linux.ibm.com>	2020-06-04 00:57:06 +0200
committer	Linus Torvalds <torvalds@linux-foundation.org>	2020-06-04 05:09:43 +0200
commit	fa3354e4ea39e97af906c05551a36396541d70b4 (patch)
tree	ad86764d19b4524fee6cd892ee4c7c5ed1bb76b9 /arch/um
parent	mm: remove CONFIG_HAVE_MEMBLOCK_NODE_MAP option (diff)
mm: free_area_init: use maximal zone PFNs rather than zone sizes
Currently, architectures that use free_area_init() to initialize memory
map and node and zone structures need to calculate zone and hole sizes.
We can use free_area_init_nodes() instead and let it detect the zone
boundaries while the architectures will only have to supply the possible
limits for the zones.

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Tested-by: Hoan Tran <hoan@os.amperecomputing.com>	[arm64]
Reviewed-by: Baoquan He <bhe@redhat.com>
Cc: Brian Cain <bcain@codeaurora.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Greentime Hu <green.hu@gmail.com>
Cc: Greg Ungerer <gerg@linux-m68k.org>
Cc: Guan Xuetao <gxt@pku.edu.cn>
Cc: Guo Ren <guoren@kernel.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Helge Deller <deller@gmx.de>
Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Ley Foon Tan <ley.foon.tan@intel.com>
Cc: Mark Salter <msalter@redhat.com>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Nick Hu <nickhu@andestech.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Richard Weinberger <richard@nod.at>
Cc: Rich Felker <dalias@libc.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Stafford Horne <shorne@gmail.com>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Link: http://lkml.kernel.org/r/20200412194859.12663-5-rppt@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
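For context, the sketch below (not part of the patch itself) shows the general before/after shape of the conversion this series applies to each architecture's paging_init(). The symbols arch_start_pfn and arch_end_pfn are placeholders for whatever lowest/highest managed page frame numbers a given architecture tracks; in the UML hunk below these roles are played by uml_physmem >> PAGE_SHIFT and end_iomem >> PAGE_SHIFT.

/*
 * Illustrative sketch only, not taken from this patch.
 * arch_start_pfn and arch_end_pfn are placeholder values for the
 * architecture's lowest and highest managed page frame numbers.
 */

/* Before: the architecture passes per-zone *sizes* (in pages). */
static void __init paging_init_with_sizes(void)
{
	unsigned long zones_size[MAX_NR_ZONES] = { 0 };

	zones_size[ZONE_NORMAL] = arch_end_pfn - arch_start_pfn;
	free_area_init(zones_size);		/* old calling convention */
}

/* After: the architecture passes only the maximal PFN of each zone;
 * free_area_init() derives zone boundaries and hole sizes itself. */
static void __init paging_init_with_max_pfns(void)
{
	unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0 };

	max_zone_pfn[ZONE_NORMAL] = arch_end_pfn;
	free_area_init(max_zone_pfn);		/* new calling convention */
}

The win is that per-architecture code no longer has to compute sizes and holes by hand; it only reports the upper limit of each zone.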
Diffstat (limited to 'arch/um')
-rw-r--r--	arch/um/kernel/mem.c	12
1 file changed, 4 insertions(+), 8 deletions(-)
diff --git a/arch/um/kernel/mem.c b/arch/um/kernel/mem.c
index 30885d0b94ac..401b22f14743 100644
--- a/arch/um/kernel/mem.c
+++ b/arch/um/kernel/mem.c
@@ -158,8 +158,8 @@ static void __init fixaddr_user_init( void)
 
 void __init paging_init(void)
 {
-	unsigned long zones_size[MAX_NR_ZONES], vaddr;
-	int i;
+	unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0 };
+	unsigned long vaddr;
 
 	empty_zero_page = (unsigned long *) memblock_alloc_low(PAGE_SIZE,
 							       PAGE_SIZE);
@@ -167,12 +167,8 @@ void __init paging_init(void)
 		panic("%s: Failed to allocate %lu bytes align=%lx\n",
 		      __func__, PAGE_SIZE, PAGE_SIZE);
 
-	for (i = 0; i < ARRAY_SIZE(zones_size); i++)
-		zones_size[i] = 0;
-
-	zones_size[ZONE_NORMAL] = (end_iomem >> PAGE_SHIFT) -
-				  (uml_physmem >> PAGE_SHIFT);
-	free_area_init(zones_size);
+	max_zone_pfn[ZONE_NORMAL] = end_iomem >> PAGE_SHIFT;
+	free_area_init(max_zone_pfn);
 
 	/*
 	 * Fixed mappings, only the page table structure has to be