author: Joonsoo Kim <js1304@gmail.com> (2015-09-05 00:45:54 +0200)
committer: Linus Torvalds <torvalds@linux-foundation.org> (2015-09-05 01:54:41 +0200)
commit: 45eb00cd3a034b8448f52fd9074e9b2b11d857c1
tree: c694e55934c3548f2b22350b52473ef3ba32aada /mm
parent: mm/slub: fix slab double-free in case of duplicate sysfs filename
mm/slub: don't wait for high-order page allocation
Description is almost copied from commit fb05e7a89f50 ("net: don't wait
for order-3 page allocation").
I saw excessive direct memory reclaim/compaction triggered by SLUB. This
causes performance issues and adds latency. SLUB uses high-order
allocations to reduce internal fragmentation and management overhead, but
direct memory reclaim/compaction is expensive and the benefit of the
high-order allocation cannot compensate for the cost of that work.
This patch makes the opportunistic high-order allocation atomic. If there
is no memory pressure and memory isn't fragmented, the allocation will
still succeed, so we don't sacrifice the benefit of high-order allocation
here. If the atomic allocation fails, direct memory reclaim/compaction is
not triggered and the allocation falls back to low-order immediately, so
the overhead of direct memory reclaim/compaction is avoided. In the
failure case, kswapd is woken up and tries to create high-order free
pages, so the allocation may succeed next time.
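As a rough illustration of this fallback scheme, below is a minimal
userspace C sketch. The flag values and the fake_alloc()/try_alloc_slab()
helpers are hypothetical stand-ins invented for illustration only; the
real change is the two-line diff to allocate_slab() at the bottom of this
page.

/* Minimal userspace sketch of the fallback scheme described above.
 * The flag values and helpers are illustrative stand-ins, not the
 * kernel's implementation. */
#include <stdio.h>
#include <stdbool.h>

#define GFP_WAIT        0x01u   /* may sleep / trigger direct reclaim */
#define GFP_NOWARN      0x02u   /* suppress allocation-failure warnings */
#define GFP_NORETRY     0x04u   /* give up instead of retrying hard */
#define GFP_NOMEMALLOC  0x08u   /* don't dip into emergency reserves */

/* Pretend high-order allocations only succeed when reclaim is allowed. */
static bool fake_alloc(unsigned int order, unsigned int gfp)
{
	if (order > 0 && !(gfp & GFP_WAIT))
		return false;	/* atomic high-order attempt fails */
	return true;		/* low-order (or waiting) attempt succeeds */
}

/* Try the preferred high order first, then fall back to the minimum order. */
static void try_alloc_slab(unsigned int order, unsigned int min_order,
			   unsigned int flags)
{
	unsigned int gfp = flags | GFP_NOWARN | GFP_NORETRY;

	/* The patched behaviour: make the opportunistic high-order attempt
	 * atomic so it can never enter direct reclaim/compaction. */
	if ((gfp & GFP_WAIT) && order > min_order)
		gfp = (gfp | GFP_NOMEMALLOC) & ~GFP_WAIT;

	if (fake_alloc(order, gfp)) {
		printf("order-%u allocation succeeded\n", order);
		return;
	}

	/* Immediate fallback: retry at the minimum order with the caller's
	 * original flags, which may sleep and reclaim as usual. */
	printf("order-%u failed, falling back to order-%u\n", order, min_order);
	if (fake_alloc(min_order, flags))
		printf("order-%u allocation succeeded\n", min_order);
}

int main(void)
{
	/* Caller may wait; preferred order 3, minimum order 0. */
	try_alloc_slab(3, 0, GFP_WAIT);
	return 0;
}

Running this prints the order-3 attempt failing and immediately falling
back to order-0, mirroring how the patched kernel gives up on the
opportunistic high-order page instead of stalling in reclaim/compaction.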
The following test measures the effect of this patch.
System: QEMU, 8 CPUs, 512 MB RAM
Mem: 25% of memory is allocated at random positions to create fragmentation.
A memory hogger occupies 150 MB of memory.
Workload: hackbench -g 20 -l 1000
Average result over 10 runs (Base vs Patched):
elapsed_time(s): 4.3468 vs 2.9838
compact_stall: 461.7 vs 73.6
pgmigrate_success: 28315.9 vs 7256.1
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Acked-by: David Rientjes <rientjes@google.com>
Cc: Shaohua Li <shli@fb.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Eric Dumazet <edumazet@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm')
-rw-r--r-- | mm/slub.c | 2 |
1 file changed, 2 insertions, 0 deletions
diff --git a/mm/slub.c b/mm/slub.c
index 7e9e508263fb..084184e706c6 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1362,6 +1362,8 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
 	 * so we fall-back to the minimum order allocation.
 	 */
 	alloc_gfp = (flags | __GFP_NOWARN | __GFP_NORETRY) & ~__GFP_NOFAIL;
+	if ((alloc_gfp & __GFP_WAIT) && oo_order(oo) > oo_order(s->min))
+		alloc_gfp = (alloc_gfp | __GFP_NOMEMALLOC) & ~__GFP_WAIT;
 	page = alloc_slab_page(s, alloc_gfp, node, oo);
 	if (unlikely(!page)) {