author     Lin Feng <linf@wangsu.com>  2020-12-15 04:11:19 +0100
committer  Linus Torvalds <torvalds@linux-foundation.org>  2020-12-15 21:13:44 +0100
commit     ba8f3587f55667c688acd7c5103c870983e294dd (patch)
tree       9a7913caeed27034df2e6019dcf20655aa0b714c /init
parent     mm/page_alloc: clear all pages in post_alloc_hook() with init_on_alloc=1 (diff)
init/main: fix broken buffer_init when DEFERRED_STRUCT_PAGE_INIT set
In the booting phase, if CONFIG_DEFERRED_STRUCT_PAGE_INIT is set,
we have the following call chain:
start_kernel
...
  mm_init
    mem_init
      memblock_free_all
        reset_all_zones_managed_pages
        free_low_memory_core_early
...
  buffer_init
    nr_free_buffer_pages
      zone->managed_pages
...
  rest_init
    kernel_init
      kernel_init_freeable
        page_alloc_init_late
          kthread_run(deferred_init_memmap, NODE_DATA(nid), "pgdatinit%d", nid);
          wait_for_completion(&pgdat_init_all_done_comp);
...
  files_maxfiles_init
It's clear that buffer_init depends on zone->managed_pages, but
zone->managed_pages is reset in reset_all_zones_managed_pages and pages
are only re-added to it afterwards; when buffer_init runs, this re-adding
is only half done, and most pages are not added back until
deferred_init_memmap has finished.  On boxes with large memory, the count
from nr_free_buffer_pages therefore drifts too much, and also varies from
one boot to the next on the same hardware.
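For illustration only, here is a minimal userspace sketch of the drift
(struct zone_sim and all numbers are hypothetical stand-ins; the real
nr_free_buffer_pages() in mm/page_alloc.c walks the zonelists and sums
per-zone managed pages above the watermarks):

#include <stdio.h>

/* Hypothetical stand-in for the kernel's per-zone page accounting. */
struct zone_sim {
	const char *name;
	unsigned long managed_pages;
};

/* Rough analogue of nr_free_buffer_pages(): sum managed pages over zones. */
static unsigned long nr_free_buffer_pages_sim(const struct zone_sim *zones,
					      int n)
{
	unsigned long sum = 0;
	for (int i = 0; i < n; i++)
		sum += zones[i].managed_pages;
	return sum;
}

int main(void)
{
	/* Example numbers for a 64GB box with 4KB pages (16777216 total). */
	struct zone_sim early[] = {	/* reset, only partially re-added */
		{ "DMA32", 1048576 }, { "Normal", 2097152 },
	};
	struct zone_sim late[] = {	/* after deferred_init_memmap */
		{ "DMA32", 1048576 }, { "Normal", 15728640 },
	};

	printf("before deferred init: %lu pages\n",
	       nr_free_buffer_pages_sim(early, 2));
	printf("after  deferred init: %lu pages\n",
	       nr_free_buffer_pages_sim(late, 2));
	return 0;
}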
The fix is simple: delay buffer_init until deferred_init_memmap is all
done.
But as corrected by this patch, max_buffer_heads becomes very large,
roughly 4 times totalram_pages, per the formula:

max_buffer_heads = nrpages * 10% * (PAGE_SIZE / sizeof(struct buffer_head))
Say a 64GB memory box has 16777216 pages; max_buffer_heads then turns out
to be roughly 67,108,864.  In common cases a buffer_head is mapped to one
page/block (4KB), so the number of buffer_heads in use never exceeds
totalram_pages and thus never reaches max_buffer_heads.  IMO this is
likely to make the buffer_heads_over_limit bool always false, making the
'if (buffer_heads_over_limit)' tests in vmscan unnecessary.
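A sketch of the arithmetic, assuming a PAGE_SIZE of 4096 and a
sizeof(struct buffer_head) of roughly 104 bytes on 64-bit (both values
are illustrative, not taken from a specific kernel build):

#include <stdio.h>

#define PAGE_SIZE 4096UL
/* Approximate sizeof(struct buffer_head) on 64-bit; illustrative only. */
#define BH_SIZE   104UL

int main(void)
{
	unsigned long totalram_pages = 16777216UL;	/* 64GB / 4KB */
	/* Mirrors the formula above: 10% of pages, in buffer_head units. */
	unsigned long nrpages = totalram_pages * 10 / 100;
	unsigned long max_buffer_heads = nrpages * (PAGE_SIZE / BH_SIZE);

	printf("totalram_pages   = %lu\n", totalram_pages);
	printf("max_buffer_heads = %lu (~%.1fx totalram_pages)\n",
	       max_buffer_heads,
	       (double)max_buffer_heads / totalram_pages);
	/* One buffer_head per 4KB page/block means in-use bhs stay at or
	 * below totalram_pages, far short of max_buffer_heads. */
	return 0;
}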
So this patch changes the original behavior around buffer_heads_over_limit
in vmscan, since we previously used a half-done value of
zone->managed_pages; alternatively, we could use a smaller factor (<10%)
in the formula above.
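For context, a minimal sketch of how the flag is maintained, loosely
modeled on recalc_bh_state() in fs/buffer.c (the real code batches the
count per-cpu; that is omitted here, and the numbers are the illustrative
64GB-box figures from above):

#include <stdbool.h>
#include <stdio.h>

static unsigned long max_buffer_heads;	/* set once at buffer_init() time */
static unsigned long nr_buffer_heads;	/* live count of allocated bhs */
static bool buffer_heads_over_limit;

/* Loosely modeled on recalc_bh_state(): flip the flag when the live
 * count crosses the boot-time ceiling. */
static void recalc_bh_state_sim(void)
{
	buffer_heads_over_limit = nr_buffer_heads > max_buffer_heads;
}

int main(void)
{
	max_buffer_heads = 67108864;	/* ~4x totalram_pages (64GB box) */
	nr_buffer_heads  = 16777216;	/* one bh per 4KB page, at most */
	recalc_bh_state_sim();
	printf("buffer_heads_over_limit = %d\n", buffer_heads_over_limit);
	return 0;
}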
akpm: I think this is OK - the max_buffer_heads code is only needed on
highmem machines, to prevent ZONE_NORMAL from being consumed by large
amounts of buffer_heads attached to highmem pagecache. This problem will
not occur on 64-bit machines, so this feature's non-functionality on such
machines is a feature, not a bug.
Link: https://lkml.kernel.org/r/20201123110500.103523-1-linf@wangsu.com
Signed-off-by: Lin Feng <linf@wangsu.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Mel Gorman <mgorman@techsingularity.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'init')
-rw-r--r--  init/main.c | 2 --
1 file changed, 0 insertions, 2 deletions
diff --git a/init/main.c b/init/main.c
index 82ae0d1345a0..404366e1a64d 100644
--- a/init/main.c
+++ b/init/main.c
@@ -58,7 +58,6 @@
 #include <linux/rmap.h>
 #include <linux/mempolicy.h>
 #include <linux/key.h>
-#include <linux/buffer_head.h>
 #include <linux/page_ext.h>
 #include <linux/debug_locks.h>
 #include <linux/debugobjects.h>
@@ -1036,7 +1035,6 @@ asmlinkage __visible void __init __no_sanitize_address start_kernel(void)
 	fork_init();
 	proc_caches_init();
 	uts_ns_init();
-	buffer_init();
 	key_init();
 	security_init();
 	dbg_late_init();