author    Petr Tesarik <petr.tesarik.ext@huawei.com>  2023-06-26 15:01:04 +0200
committer Christoph Hellwig <hch@lst.de>  2023-06-29 07:10:28 +0200
commit    8ac04063354a01a484d2e55d20ed1958aa0d3392 (patch)
tree      d3746823d2d24d64d9a4deb3679110e1c751c8aa /kernel/resource_kunit.c
parent    swiotlb: always set the number of areas before allocating the pool (diff)
download  linux-8ac04063354a01a484d2e55d20ed1958aa0d3392.tar.xz
          linux-8ac04063354a01a484d2e55d20ed1958aa0d3392.zip
swiotlb: reduce the number of areas to match actual memory pool size
Although the desired size of the SWIOTLB memory pool is increased in swiotlb_adjust_nareas() to match the number of areas, the actual allocation may be smaller, which may require reducing the number of areas. For example, Xen uses swiotlb_init_late(), which in turn uses the page allocator. On x86, page size is 4 KiB and MAX_ORDER is 10 (1024 pages), resulting in a maximum memory pool size of 4 MiB. This corresponds to 2048 slots of 2 KiB each. The minimum area size is 128 (IO_TLB_SEGSIZE), allowing at most 2048 / 128 = 16 areas.

If num_possible_cpus() is greater than the maximum number of areas, areas are smaller than IO_TLB_SEGSIZE and contiguous groups of free slots will span multiple areas. When allocating and freeing slots, only one area will be properly locked, causing race conditions on the unlocked slots and ultimately data corruption, kernel hangs and crashes.

Fixes: 20347fca71a3 ("swiotlb: split up the global swiotlb lock")
Signed-off-by: Petr Tesarik <petr.tesarik.ext@huawei.com>
Reviewed-by: Roberto Sassu <roberto.sassu@huawei.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
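The fix amounts to clamping the number of areas to what the allocated pool can actually hold. Below is a minimal standalone sketch of that clamp; the helper name limit_nareas() and the demo around it are illustrative rather than a copy of the commit's diff, while the IO_TLB_SHIFT and IO_TLB_SEGSIZE values match the kernel's 2 KiB slots and 128-slot segments described above.

    #include <stdio.h>

    #define IO_TLB_SHIFT    11      /* 2 KiB per slot */
    #define IO_TLB_SEGSIZE  128     /* minimum slots per area */

    /* Hypothetical helper: reduce the desired number of areas so that
     * every area covers at least IO_TLB_SEGSIZE slots; otherwise a
     * contiguous group of free slots could span two areas while only
     * one area's lock is held. */
    static unsigned int limit_nareas(unsigned int nareas, unsigned long nslots)
    {
            if (nslots < (unsigned long)nareas * IO_TLB_SEGSIZE)
                    return nslots / IO_TLB_SEGSIZE;
            return nareas;
    }

    int main(void)
    {
            /* Worked example from the commit message: the x86 page
             * allocator caps the pool at 4 MiB (4 KiB pages, MAX_ORDER
             * 10), i.e. 4 MiB / 2 KiB = 2048 slots. */
            unsigned long nslots = (4UL << 20) >> IO_TLB_SHIFT;

            /* With, say, 64 possible CPUs, the desired 64 areas must
             * shrink to 2048 / 128 = 16. */
            printf("%u areas\n", limit_nareas(64, nslots));
            return 0;
    }

Running this prints "16 areas": any request for more areas than the pool can hold is cut back to the 16-area maximum computed in the message above, while requests at or below the maximum pass through unchanged.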
Diffstat (limited to 'kernel/resource_kunit.c')
0 files changed, 0 insertions, 0 deletions