author     Chengming Zhou <zhouchengming@bytedance.com>    2023-11-02 04:23:25 +0100
committer  Vlastimil Babka <vbabka@suse.cz>                2023-12-04 17:54:53 +0100
commit     422e7d54375889484b66962d1dcbc392a6bd9e7a
tree       aa9ec34a4aa0af50269864d2e5558e8498e4d1be /mm/slub.c
parent     slub: Keep track of whether slub is on the per-node partial list
slub: Prepare __slab_free() for unfrozen partial slab out of node partial list
Now a partially empty slab is frozen when it is taken off the node partial
list, so __slab_free() can tell from "was_frozen" that such a slab is not
on the node partial list and must be the cpu slab or a cpu partial slab
of some cpu.
But this is about to change: partial slabs will leave the node partial list
in an unfrozen state, so __slab_free() needs to use the newly introduced
slab_test_node_partial() to detect that case instead.
Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
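For context, the slab_test_node_partial() helper mentioned above comes from
the parent commit ("slub: Keep track of whether slub is on the per-node
partial list"), which records the on-list state in the otherwise-unused
PG_workingset folio flag. A condensed sketch of those helpers, close to but
not verbatim the parent commit's code:

/*
 * Sketch of the parent commit's helpers: since slab pages never use
 * PG_workingset, the flag is free to record whether a slab sits on a
 * per-node partial list. The flag is only set/cleared under the node's
 * list_lock, which is why __slab_free() tests it after taking the lock.
 */
static inline bool slab_test_node_partial(struct slab *slab)
{
	return folio_test_workingset(slab_folio(slab));
}

static inline void slab_set_node_partial(struct slab *slab)
{
	set_bit(PG_workingset, folio_flags(slab_folio(slab), 0));
}

static inline void slab_clear_node_partial(struct slab *slab)
{
	clear_bit(PG_workingset, folio_flags(slab_folio(slab), 0));
}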
Diffstat (limited to 'mm/slub.c')
-rw-r--r--  mm/slub.c | 11 +++++++++++
1 file changed, 11 insertions(+), 0 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index 6efcbf79fd2d..18f18fbbd97e 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3631,6 +3631,7 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
 	unsigned long counters;
 	struct kmem_cache_node *n = NULL;
 	unsigned long flags;
+	bool on_node_partial;
 
 	stat(s, FREE_SLOWPATH);
 
@@ -3678,6 +3679,7 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
 				 */
 				spin_lock_irqsave(&n->list_lock, flags);
 
+				on_node_partial = slab_test_node_partial(slab);
 			}
 		}
 
@@ -3706,6 +3708,15 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
 		return;
 	}
 
+	/*
+	 * This slab was partially empty but not on the per-node partial list,
+	 * in which case we shouldn't manipulate its list, just return.
+	 */
+	if (prior && !on_node_partial) {
+		spin_unlock_irqrestore(&n->list_lock, flags);
+		return;
+	}
+
 	if (unlikely(!new.inuse && n->nr_partial >= s->min_partial))
 		goto slab_empty;
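Taken together, the hunks above leave __slab_free() distinguishing a few
cases once it has computed "was_frozen" and "on_node_partial". The following
is an illustrative, self-contained condensation of that decision (a
hypothetical helper for exposition only, not kernel code; the real function
interleaves this logic with its cmpxchg retry loop):

#include <stdbool.h>	/* standalone build; the kernel gets bool from <linux/types.h> */

/* Hypothetical condensation of the slow-path outcomes after this patch. */
enum free_outcome {
	NO_LIST_WORK,		/* cpu slab: list_lock was never taken */
	LEAVE_LIST_ALONE,	/* the new early return added by this patch */
	DISCARD_SLAB,		/* empty slab, enough partial slabs cached */
	LIST_MAINTENANCE,	/* normal per-node partial list handling */
};

static enum free_outcome slow_free_outcome(bool was_frozen, bool had_objects,
					   bool on_node_partial, bool now_empty,
					   unsigned long nr_partial,
					   unsigned long min_partial)
{
	if (was_frozen)
		return NO_LIST_WORK;
	if (had_objects && !on_node_partial)	/* "prior && !on_node_partial" */
		return LEAVE_LIST_ALONE;
	if (now_empty && nr_partial >= min_partial)
		return DISCARD_SLAB;
	return LIST_MAINTENANCE;
}

The second outcome is the one this patch prepares for: once partial slabs
can sit off the node partial list while unfrozen, __slab_free() must not
perform list maintenance on them.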