author     Jann Horn <jannh@google.com>       2024-08-09 17:36:56 +0200
committer  Vlastimil Babka <vbabka@suse.cz>   2024-08-27 14:12:51 +0200
commit     b8c8ba73c68bb3c3e9dad22f488b86c540c839f9
tree       f2deac9d7c013e58efa585debb67409ed5ec20bb /mm/slab_common.c
parent     kasan: catch invalid free before SLUB reinitializes the object
slub: Introduce CONFIG_SLUB_RCU_DEBUG
Currently, KASAN is unable to catch use-after-free in SLAB_TYPESAFE_BY_RCU
slabs because use-after-free is allowed within the RCU grace period by
design.
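For context, the access pattern that makes such a use-after-free legal looks roughly like the following (an illustrative sketch, not code from this patch; struct foo and foo_find_raw() are hypothetical):

struct foo {
	spinlock_t lock;
	int key;
	bool alive;
};

static struct foo *foo_find_raw(int key);	/* hypothetical lockless lookup */

/* Returns the object locked, or NULL. */
static struct foo *foo_lookup(int key)
{
	struct foo *obj;

	rcu_read_lock();
	obj = foo_find_raw(key);
	/*
	 * A concurrent writer may already have called kmem_cache_free()
	 * on obj. SLAB_TYPESAFE_BY_RCU only guarantees that, until the
	 * grace period ends, the memory still holds *some* struct foo,
	 * so the reader must lock and revalidate before trusting it.
	 */
	if (obj) {
		spin_lock(&obj->lock);
		if (!obj->alive || obj->key != key) {
			spin_unlock(&obj->lock);
			obj = NULL;
		}
	}
	rcu_read_unlock();
	return obj;
}

This is exactly the access that KASAN must not report, even though the object may have been freed (and reallocated) in the meantime.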
Add a SLUB debugging feature which RCU-delays every individual
kmem_cache_free() before either actually freeing the object or handing it
off to KASAN, and change KASAN to poison freed objects as normal when this
option is enabled.
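Conceptually, the deferral works like the following sketch (names such as rcu_delayed_free and do_real_free() are illustrative, not the actual mm/slub.c implementation):

struct rcu_delayed_free {
	struct rcu_head head;
	struct kmem_cache *cache;
	void *object;
};

static void rcu_delayed_free_fn(struct rcu_head *head)
{
	struct rcu_delayed_free *df =
		container_of(head, struct rcu_delayed_free, head);

	/* The grace period has elapsed; KASAN may now poison the object. */
	do_real_free(df->cache, df->object);	/* hypothetical helper */
	kfree(df);
}

static void rcu_debug_free(struct kmem_cache *s, void *object)
{
	struct rcu_delayed_free *df = kmalloc(sizeof(*df), GFP_NOWAIT);

	if (!df) {
		/* Out of memory: fall back to freeing immediately. */
		do_real_free(s, object);
		return;
	}
	df->cache = s;
	df->object = object;
	call_rcu(&df->head, rcu_delayed_free_fn);
}

Because the object is only poisoned and made available for reuse once the grace period has ended, accesses within the grace period remain legal, while later accesses are genuine bugs that KASAN can now report.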
For now I've configured Kconfig.debug to default-enable this feature in the
KASAN GENERIC and SW_TAGS modes; I'm not enabling it by default in HW_TAGS
mode because I'm not sure whether it would cause unwanted performance
degradation there.
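The Kconfig.debug entry has roughly the following shape (a paraphrased sketch, not the verbatim option text or help text):

config SLUB_RCU_DEBUG
	bool "Enable UAF detection in TYPESAFE_BY_RCU caches (for KASAN)"
	depends on SLUB_DEBUG
	default KASAN_GENERIC || KASAN_SW_TAGS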
Note that this is mostly useful with KASAN in the quarantine-based GENERIC
mode; SLAB_TYPESAFE_BY_RCU slabs basically always have a ->ctor, and
KASAN's assign_tag() currently has to assign fixed tags for such slabs,
reducing the effectiveness of the SW_TAGS/HW_TAGS modes.
(A possible future extension of this work would be to also let SLUB call
the ->ctor() on every allocation instead of only when the slab page is
allocated; then tag-based modes would be able to assign new tags on every
reallocation.)
Tested-by: syzbot+263726e59eab6b442723@syzkaller.appspotmail.com
Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
Acked-by: Marco Elver <elver@google.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz> #slab
Signed-off-by: Jann Horn <jannh@google.com>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Diffstat (limited to 'mm/slab_common.c')
-rw-r--r--   mm/slab_common.c | 16 ++++++++++++++++
1 file changed, 16 insertions(+), 0 deletions(-)
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 1a2873293f5d..884e8e70a56d 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -511,6 +511,22 @@ void kmem_cache_destroy(struct kmem_cache *s)
 	/* in-flight kfree_rcu()'s may include objects from our cache */
 	kvfree_rcu_barrier();
 
+	if (IS_ENABLED(CONFIG_SLUB_RCU_DEBUG) &&
+	    (s->flags & SLAB_TYPESAFE_BY_RCU)) {
+		/*
+		 * Under CONFIG_SLUB_RCU_DEBUG, when objects in a
+		 * SLAB_TYPESAFE_BY_RCU slab are freed, SLUB will internally
+		 * defer their freeing with call_rcu().
+		 * Wait for such call_rcu() invocations here before actually
+		 * destroying the cache.
+		 *
+		 * It doesn't matter that we haven't looked at the slab refcount
+		 * yet - slabs with SLAB_TYPESAFE_BY_RCU can't be merged, so
+		 * the refcount should be 1 here.
+		 */
+		rcu_barrier();
+	}
+
 	cpus_read_lock();
 	mutex_lock(&slab_mutex);
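For illustration, this is the kind of teardown the barrier protects (an illustrative module sketch, reusing the hypothetical struct foo from the lookup example above):

static struct kmem_cache *foo_cache;

static int __init foo_init(void)
{
	foo_cache = kmem_cache_create("foo", sizeof(struct foo), 0,
				      SLAB_TYPESAFE_BY_RCU, NULL);
	return foo_cache ? 0 : -ENOMEM;
}

static void __exit foo_exit(void)
{
	/*
	 * Objects freed with kmem_cache_free() may still be parked in
	 * SLUB's call_rcu() callbacks at this point. Without the
	 * rcu_barrier() added above, kmem_cache_destroy() could tear
	 * the cache down while those callbacks still reference it.
	 */
	kmem_cache_destroy(foo_cache);
}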