author     Patrick Wang <patrick.wang.shcn@gmail.com>   2022-07-05 13:31:58 +0200
committer  akpm <akpm@linux-foundation.org>             2022-07-18 02:14:47 +0200
commit     a317ebccaa3609917a2c021af870cf3fa607ab0c
tree       611a57a7aedb9881148552c53187f9269ff2f595 /mm/percpu.c
parent     mm/mprotect: remove the redundant initialization for error
mm: percpu: use kmemleak_ignore_phys() instead of kmemleak_free()
Kmemleak recently added an rbtree to store objects allocated with a
physical address. Those objects cannot be freed with kmemleak_free().
According to the comments, percpu allocations are tracked by kmemleak
separately; kmemleak_free() was only used to avoid that duplicate
tracking. When kmemleak_free() fails, those objects are still scanned by
kmemleak, which is unnecessary but should not lead to other side effects.

Use kmemleak_ignore_phys() instead of kmemleak_free() for those
objects.
Link: https://lkml.kernel.org/r/20220705113158.127600-1-patrick.wang.shcn@gmail.com
Fixes: 0c24e061196c ("mm: kmemleak: add rbtree and store physical address for objects allocated with PA")
Signed-off-by: Patrick Wang <patrick.wang.shcn@gmail.com>
Cc: Dennis Zhou <dennis@kernel.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: Christoph Lameter <cl@linux.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Diffstat (limited to 'mm/percpu.c')
-rw-r--r-- | mm/percpu.c | 6 |
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/mm/percpu.c b/mm/percpu.c
index 3633eeefaa0d..27697b2429c2 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -3104,7 +3104,7 @@ int __init pcpu_embed_first_chunk(size_t reserved_size, size_t dyn_size,
 			goto out_free_areas;
 		}
 		/* kmemleak tracks the percpu allocations separately */
-		kmemleak_free(ptr);
+		kmemleak_ignore_phys(__pa(ptr));
 		areas[group] = ptr;
 
 		base = min(ptr, base);
@@ -3304,7 +3304,7 @@ int __init pcpu_page_first_chunk(size_t reserved_size, pcpu_fc_cpu_to_node_fn_t
 				goto enomem;
 			}
 			/* kmemleak tracks the percpu allocations separately */
-			kmemleak_free(ptr);
+			kmemleak_ignore_phys(__pa(ptr));
 			pages[j++] = virt_to_page(ptr);
 		}
 	}
@@ -3417,7 +3417,7 @@ void __init setup_per_cpu_areas(void)
 	if (!ai || !fc)
 		panic("Failed to allocate memory for percpu areas.");
 	/* kmemleak tracks the percpu allocations separately */
-	kmemleak_free(fc);
+	kmemleak_ignore_phys(__pa(fc));
 
 	ai->dyn_size = unit_size;
 	ai->unit_size = unit_size;
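
For readers unfamiliar with the pattern being changed, the sketch below shows it in isolation. It is a hypothetical example, not code from the patch: the helper name example_percpu_backing_alloc and its parameters are invented, while memblock_alloc(), kmemleak_ignore_phys(), __pa() and panic() are the real kernel interfaces the patched paths rely on. After commit 0c24e061196c, memory returned by memblock is recorded in kmemleak's physical-address rbtree, so excluding it from scanning requires kmemleak_ignore_phys() on the physical address rather than kmemleak_free() on the virtual one.

#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/kmemleak.h>
#include <linux/memblock.h>

/* Hypothetical helper, for illustration only. */
static void __init example_percpu_backing_alloc(size_t size, size_t align)
{
	void *ptr;

	/* Early-boot allocation; memblock returns a virtual address. */
	ptr = memblock_alloc(size, align);
	if (!ptr)
		panic("Failed to allocate percpu backing memory\n");

	/*
	 * kmemleak tracks percpu allocations separately (see the in-tree
	 * comment), so ask it to ignore this backing block to avoid
	 * duplicate tracking.  kmemleak_ignore_phys() takes a physical
	 * address, hence the __pa() conversion.
	 */
	kmemleak_ignore_phys(__pa(ptr));

	/* ... hand ptr to the percpu first-chunk setup ... */
}

As the changelog notes, a failing kmemleak_free() only resulted in redundant scanning, so the change mainly keeps kmemleak's bookkeeping consistent with the physical-address tracking introduced by 0c24e061196c.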