author		Marco Elver <elver@google.com>	2019-07-12 05:54:18 +0200
committer	Linus Torvalds <torvalds@linux-foundation.org>	2019-07-12 20:05:42 +0200
commit		0d4ca4c9bab397b525c9a4f875d31410ce4bc738 (patch)
tree		7b7cc749aaa40c923e14870def61b57d56564732 /mm/slab_common.c
parent		mm/slab: refactor common ksize KASAN logic into slab_common.c (diff)
download	linux-0d4ca4c9bab397b525c9a4f875d31410ce4bc738.tar.xz / linux-0d4ca4c9bab397b525c9a4f875d31410ce4bc738.zip
mm/kasan: add object validation in ksize()
ksize() has been unconditionally unpoisoning the whole shadow memory
region associated with an allocation. This can lead to various undetected
bugs, for example, double-kzfree().
Specifically, kzfree() uses ksize() to determine the actual allocation
size, and subsequently zeroes the memory. Since ksize() used to just
unpoison the whole shadow memory region, no invalid free was detected.
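As a minimal sketch of the pattern that previously went undetected (the buffer name and size are illustrative, not taken from this patch):

	void *buf = kmalloc(64, GFP_KERNEL);

	kzfree(buf);	/* ksize() unpoisons the region, then memset + kfree */
	kzfree(buf);	/* before this patch: ksize() unpoisoned the already-freed
			 * region again, so neither the memset into freed memory
			 * nor the second free was flagged by KASAN */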
This patch addresses the issue as follows:
1. Add a validity check in ksize(), and only then unpoison the memory region.
2. Preserve kasan_unpoison_slab() semantics by explicitly unpoisoning
   the shadow memory region using the size obtained from __ksize() (see the
   sketch below).
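For point 2, the intended semantics can be sketched as follows; this is an illustration of the kasan_unpoison_slab() behaviour described above, not a verbatim hunk (that header change is outside the mm/slab_common.c diffstat shown below):

	/* Sketch: unpoison the whole allocation using the size from __ksize(),
	 * which does not perform the new validity check added to ksize(). */
	static inline void kasan_unpoison_slab(const void *ptr)
	{
		kasan_unpoison_shadow(ptr, __ksize(ptr));
	}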
Tested:
1. With the SLAB allocator: a) normal boot without warnings; b) verified
   that the added double-kzfree() is detected.
2. With the SLUB allocator: a) normal boot without warnings; b) verified
   that the added double-kzfree() is detected.
[elver@google.com: s/BUG_ON/WARN_ON_ONCE/, per Kees]
Link: http://lkml.kernel.org/r/20190627094445.216365-6-elver@google.com
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=199359
Link: http://lkml.kernel.org/r/20190626142014.141844-6-elver@google.com
Signed-off-by: Marco Elver <elver@google.com>
Acked-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Andrey Konovalov <andreyknvl@google.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/slab_common.c')
-rw-r--r--	mm/slab_common.c | 22
1 file changed, 21 insertions(+), 1 deletion(-)
diff --git a/mm/slab_common.c b/mm/slab_common.c
index b7c6a40e436a..a09bb10aa026 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -1613,7 +1613,27 @@ EXPORT_SYMBOL(kzfree);
  */
 size_t ksize(const void *objp)
 {
-	size_t size = __ksize(objp);
+	size_t size;
+
+	if (WARN_ON_ONCE(!objp))
+		return 0;
+	/*
+	 * We need to check that the pointed to object is valid, and only then
+	 * unpoison the shadow memory below. We use __kasan_check_read(), to
+	 * generate a more useful report at the time ksize() is called (rather
+	 * than later where behaviour is undefined due to potential
+	 * use-after-free or double-free).
+	 *
+	 * If the pointed to memory is invalid we return 0, to avoid users of
+	 * ksize() writing to and potentially corrupting the memory region.
+	 *
+	 * We want to perform the check before __ksize(), to avoid potentially
+	 * crashing in __ksize() due to accessing invalid metadata.
+	 */
+	if (unlikely(objp == ZERO_SIZE_PTR) || !__kasan_check_read(objp, 1))
+		return 0;
+
+	size = __ksize(objp);
 	/*
 	 * We assume that ksize callers could use whole allocated area,
 	 * so we need to unpoison this area.
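As a usage note, callers such as kzfree() tolerate the new 0 return naturally, because zeroing zero bytes touches nothing (a simplified sketch of the existing caller pattern, not part of this diff):

	void kzfree(const void *p)
	{
		size_t ks;
		void *mem = (void *)p;

		if (unlikely(ZERO_OR_NULL_PTR(mem)))
			return;
		ks = ksize(mem);	/* now 0 if the object is invalid */
		memset(mem, 0, ks);	/* a 0-length memset corrupts nothing */
		kfree(mem);
	}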