author    Bharata B Rao <bharata@linux.ibm.com>  2020-12-15 04:04:40 +0100
committer Linus Torvalds <torvalds@linux-foundation.org>  2020-12-15 21:13:38 +0100
commit    045ab8c9487ba099eade6578621e2af4a0d5ba0c (patch)
tree      54b9cc39a843c5c5d6566ba6d2714822b1a16973 /mm/slub.c
parent    mm, slub: use kmem_cache_debug_flags() in deactivate_slab() (diff)
mm/slub: let number of online CPUs determine the slub page order
The page order of the slab that gets chosen for a given slab cache depends on the number of objects that can fit in the slab while meeting other requirements. We start with a value of minimum objects based on nr_cpu_ids, which is driven by the possible number of CPUs and hence could be higher than the actual number of CPUs present in the system. This leads to calculate_order() choosing a page order that is on the higher side, resulting in increased slab memory consumption on systems that have bigger page sizes. Hence rely on the number of online CPUs when determining the minimum objects, thereby increasing the chances of choosing a lower, more conservative page order for the slab.

Vlastimil said:
 "Ideally, we would react to hotplug events and update existing caches
  accordingly. But for that, recalculation of order for existing caches
  would have to be made safe, while not affecting hot paths. We have
  removed the sysfs interface with 32a6f409b693 ("mm, slub: remove
  runtime allocation order changes") as it didn't seem easy and worth
  the trouble.

  In case somebody wants to start with a large order right from the
  boot because they know they will hotplug lots of cpus later, they can
  use slub_min_objects= boot param to override this heuristic. So in
  case this change regresses somebody's performance, there's a way
  around it and thus the risk is low IMHO"

Link: https://lkml.kernel.org/r/20201118082759.1413056-1-bharata@linux.ibm.com
Signed-off-by: Bharata B Rao <bharata@linux.ibm.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Roman Gushchin <guro@fb.com>
Acked-by: David Rientjes <rientjes@google.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
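To make the heuristic concrete, here is a minimal userspace sketch (not part of the patch) of the min_objects calculation before and after the change. The CPU counts are hypothetical, and fls() is reimplemented here because the kernel helper is not available in userspace:

#include <stdio.h>

/* Userspace stand-in for the kernel's fls(): 1-based index of the
 * most significant set bit; returns 0 for an input of 0. */
static int fls(unsigned int x)
{
	int r = 0;

	while (x) {
		x >>= 1;
		r++;
	}
	return r;
}

int main(void)
{
	/* Hypothetical machine: 16 CPUs online, but firmware reports
	 * 1024 possible CPUs, so nr_cpu_ids == 1024. */
	unsigned int nr_cpu_ids = 1024;
	unsigned int online_cpus = 16;

	/* Old heuristic: 4 * (fls(1024) + 1) = 4 * 12 = 48 */
	printf("old min_objects = %d\n", 4 * (fls(nr_cpu_ids) + 1));
	/* New heuristic: 4 * (fls(16) + 1) = 4 * 6 = 24 */
	printf("new min_objects = %d\n", 4 * (fls(online_cpus) + 1));
	return 0;
}

Halving min_objects this way gives calculate_order() room to settle on a smaller page order, which is where the memory saving on large-page systems comes from.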
Diffstat (limited to 'mm/slub.c')
-rw-r--r--  mm/slub.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/slub.c b/mm/slub.c
index 79afc8a38ebf..6326b98c2164 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3431,7 +3431,7 @@ static inline int calculate_order(unsigned int size)
 	 */
 	min_objects = slub_min_objects;
 	if (!min_objects)
-		min_objects = 4 * (fls(nr_cpu_ids) + 1);
+		min_objects = 4 * (fls(num_online_cpus()) + 1);
 	max_objects = order_objects(slub_max_order, size);
 	min_objects = min(min_objects, max_objects);
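As the commit message notes, anyone who wants the old, larger starting order (for example because lots of CPUs will be hotplugged later) can override this heuristic at boot with the slub_min_objects= kernel command-line parameter; for instance, slub_min_objects=48 (an illustrative value matching what the old heuristic would pick for 1024 possible CPUs) restores the pre-patch sizing on the hypothetical machine sketched above.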