author    Christoph Hellwig <hch@lst.de>	2023-12-04 18:34:19 +0100
committer Jens Axboe <axboe@kernel.dk>	2023-12-15 15:34:27 +0100
commit    6ef02df154a245a4a7c0a66daa5a353daa788dba (patch)
tree      d4572019dcc1ee48f3ef9ae0dd84c510c9157b32 /block
parent    block: prevent an integer overflow in bvec_try_merge_hw_page (diff)
block: support adding less than len in bio_add_hw_page
bio_add_hw_page currently always fails or succeeds. This is fine for the
existing callers, which always add PAGE_SIZE worth of data, given that
max_segment_size and max_sectors must always allow at least a page worth
of data. But when we want to add larger amounts of data, the call can
also fail partway through building a bio, and creating a fallback for
that becomes really annoying in the callers.

Make use of the existing API design that allows returning a smaller
length than the one passed in, and add up to max_segment_size worth of
data from a larger input. All the existing callers are fine with this -
not because they handle the shorter return correctly, but because they
never pass in more than a page.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Link: https://lore.kernel.org/r/20231204173419.782378-3-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
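For context, a minimal sketch of how a caller could consume the new
partial-add behaviour. The wrapper below is illustrative only and not
part of this patch: the function name add_data_to_bio and its -EIO
error convention are hypothetical. It shows the intended pattern of
retrying with the remaining length whenever bio_add_hw_page adds fewer
bytes than were requested.

#include <linux/bio.h>
#include <linux/blkdev.h>

/*
 * Hypothetical helper, not part of this patch: repeatedly call
 * bio_add_hw_page() until the requested length has been added or
 * the bio cannot take any more data.
 */
static int add_data_to_bio(struct request_queue *q, struct bio *bio,
		struct page *page, unsigned int len, unsigned int offset,
		unsigned int max_sectors)
{
	while (len) {
		bool same_page = false;
		int added;

		/*
		 * With this patch the return value may be any amount in
		 * [0, len], clamped to the queue's max_segment_size and
		 * to the remaining max_sectors budget of the bio.
		 */
		added = bio_add_hw_page(q, bio, page, len, offset,
					max_sectors, &same_page);
		if (!added)
			return -EIO;	/* bio is full; caller must submit */

		len -= added;
		offset += added;
	}
	return 0;
}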
Diffstat (limited to 'block')
-rw-r--r--	block/bio.c	5
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/block/bio.c b/block/bio.c
index 270f6b99926e..b9642a41f286 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -966,10 +966,13 @@ int bio_add_hw_page(struct request_queue *q, struct bio *bio,
 		struct page *page, unsigned int len, unsigned int offset,
 		unsigned int max_sectors, bool *same_page)
 {
+	unsigned int max_size = max_sectors << SECTOR_SHIFT;
+
 	if (WARN_ON_ONCE(bio_flagged(bio, BIO_CLONED)))
 		return 0;
 
-	if (((bio->bi_iter.bi_size + len) >> SECTOR_SHIFT) > max_sectors)
+	len = min3(len, max_size, queue_max_segment_size(q));
+	if (len > max_size - bio->bi_iter.bi_size)
 		return 0;
 
 	if (bio->bi_vcnt > 0) {