| author | Christoph Lameter <clameter@sgi.com> | 2008-04-14 18:13:29 +0200 |
|---|---|---|
| committer | Pekka Enberg <penberg@cs.helsinki.fi> | 2008-04-27 17:28:40 +0200 |
| commit | c124f5b54f879e5870befcc076addbd5d614663f | |
| tree | bedc4ce7ae68ea6de4c8aa6696b30801fedb15f6 /mm/slub.c | |
| parent | slub: Calculate min_objects based on number of processors. | |
slub: pack objects denser
Since we now have more orders available, use a denser packing.
Increase the slab order if more than 1/16th of a slab would be wasted.
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
Diffstat (limited to 'mm/slub.c')
| -rw-r--r-- | mm/slub.c | 4 |
|---|---|---|

1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index e2e6ba7a5172..d821ce6fff39 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1818,7 +1818,7 @@ static int slub_nomerge;
  * system components. Generally order 0 allocations should be preferred since
  * order 0 does not cause fragmentation in the page allocator. Larger objects
  * be problematic to put into order 0 slabs because there may be too much
- * unused space left. We go to a higher order if more than 1/8th of the slab
+ * unused space left. We go to a higher order if more than 1/16th of the slab
  * would be wasted.
  *
  * In order to reach satisfactory performance we must ensure that a minimum
@@ -1883,7 +1883,7 @@ static inline int calculate_order(int size)
 	if (!min_objects)
 		min_objects = 4 * (fls(nr_cpu_ids) + 1);
 	while (min_objects > 1) {
-		fraction = 8;
+		fraction = 16;
 		while (fraction >= 4) {
 			order = slab_order(size, min_objects,
 				slub_max_order, fraction);