author     Fabio De Francesco <fabio.maria.de.francesco@linux.intel.com>	2023-11-20 15:15:27 +0100
committer  Andrew Morton <akpm@linux-foundation.org>	2023-12-11 01:51:49 +0100
commit     2f7537620f383de121eaeb25f3e073a27831d086 (patch)
tree       f0a44ad4d4a190512bcac29138d8accbfb35b9f5 /mm/util.c
parent     mm: use vmem_altmap code without CONFIG_ZONE_DEVICE (diff)
mm/util: use kmap_local_page() in memcmp_pages()
kmap_atomic() has been deprecated in favor of kmap_local_page().
Therefore, replace kmap_atomic() with kmap_local_page() in memcmp_pages().
kmap_atomic() is implemented like kmap_local_page() except that it also
disables page faults and preemption (the latter only in !PREEMPT_RT kernels).
The kernel virtual addresses returned by these two APIs are only valid in the
context of the callers (i.e., they cannot be handed to other threads).
With kmap_local_page() the mappings are per-thread and CPU-local, as with
kmap_atomic(); however, they can handle page faults and can be called from
any context (including interrupts). The tasks that call kmap_local_page()
can be preempted and, when they are scheduled to run again, the kernel
virtual addresses are restored and are still valid.
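For illustration only (not part of this patch), a typical kmap_local_page()
usage pattern looks like the hypothetical helper below. Local mappings are
stack-based, so nested mappings are dropped in reverse (LIFO) order, just as
the hunk at the end of this page does.

	#include <linux/highmem.h>
	#include <linux/string.h>

	/* Hypothetical helper, for illustration only. */
	static void copy_page_contents(struct page *dst, struct page *src)
	{
		char *to = kmap_local_page(dst);
		char *from = kmap_local_page(src);

		memcpy(to, from, PAGE_SIZE);

		/* Local mappings nest: unmap the most recent one first. */
		kunmap_local(from);
		kunmap_local(to);
	}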
In memcmp_pages(), the block of code between the mapping and unmapping
does not depend on the above-mentioned side effects of kmap_atomic(), so
a mere replacement of the old API with the new one is all that is
required (i.e., there is no need to explicitly call pagefault_disable()
and/or preempt_disable()).
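For contrast, a hypothetical caller that does depend on those side effects
(e.g. an atomic copy from user space) would have to restore them explicitly
around the local mapping, roughly as sketched below; none of this applies to
memcmp_pages().

	#include <linux/highmem.h>
	#include <linux/uaccess.h>

	/* Hypothetical example, not from this patch. */
	static unsigned long copy_user_page_atomic(struct page *page,
						   const void __user *src)
	{
		char *addr = kmap_local_page(page);
		unsigned long left;

		/* kmap_atomic() used to disable page faults implicitly. */
		pagefault_disable();
		left = __copy_from_user_inatomic(addr, src, PAGE_SIZE);
		pagefault_enable();

		kunmap_local(addr);
		return left;
	}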
Link: https://lkml.kernel.org/r/20231120141554.6612-1-fmdefrancesco@gmail.com
Signed-off-by: Fabio M. De Francesco <fabio.maria.de.francesco@linux.intel.com>
Cc: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Diffstat (limited to 'mm/util.c')
-rw-r--r--	mm/util.c	8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/mm/util.c b/mm/util.c
index 744b4d7e3fae..5a6a9802583b 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -1047,11 +1047,11 @@ int __weak memcmp_pages(struct page *page1, struct page *page2)
 	char *addr1, *addr2;
 	int ret;
 
-	addr1 = kmap_atomic(page1);
-	addr2 = kmap_atomic(page2);
+	addr1 = kmap_local_page(page1);
+	addr2 = kmap_local_page(page2);
 	ret = memcmp(addr1, addr2, PAGE_SIZE);
-	kunmap_atomic(addr2);
-	kunmap_atomic(addr1);
+	kunmap_local(addr2);
+	kunmap_local(addr1);
 	return ret;
 }