author     Laibin Qiu <qiulaibin@huawei.com>    2022-01-27 11:00:47 +0100
committer  Jens Axboe <axboe@kernel.dk>         2022-01-27 18:15:32 +0100
commit     10825410b956dc1ed8c5fbc8bbedaffdadde7f20 (patch)
tree       815182f8e069faa0ade6c37dd86beca376d825c6 /lib
parent     Merge tag 'nvme-5.17-2022-01-27' of git://git.infradead.org/nvme into block-5.17 (diff)
blk-mq: Fix wrong wakeup batch configuration which will cause hang
Commit 180dccb0dba4f ("blk-mq: fix tag_get wait task can't be
awakened") recalculates wake_batch when incrementing or decrementing
active_queues, to avoid wake_batch > hctx_max_depth. At the same time,
to affect performance as little as possible, the minimum wakeup batch
was set to 4. But when the QD is small (such as QD=1), recalculating on
inc or dec of active_queues can leave the wakeup batch larger than the
queue depth itself: with QD=1 and wake_batch=4, there are never enough
freed tags to complete a batch, so waiting tasks are never woken and the
queue hangs.
Fix this problem with the following strategies:
QD          : >= 32 | < 32
---------------------------------
wakeup batch:   8~4 | 3~1
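
For illustration only, here is a minimal userspace sketch of the new clamp
logic (a sketch, not kernel code; it assumes SBQ_WAIT_QUEUES == 8 and
SBQ_WAKE_BATCH == 8 as defined in include/linux/sbitmap.h, a single active
user, and a local clamp_val() helper standing in for the kernel macro).
Running it reproduces the ranges in the table above:

/*
 * Illustrative userspace sketch (not part of the patch) of the new
 * wake_batch calculation. SBQ_WAIT_QUEUES and SBQ_WAKE_BATCH mirror
 * include/linux/sbitmap.h; users is the number of active queues
 * sharing the tag set (assumed 1 here).
 */
#include <stdio.h>

#define SBQ_WAIT_QUEUES 8
#define SBQ_WAKE_BATCH  8

/* Local stand-in for the kernel's clamp_val() macro. */
static unsigned int clamp_val(unsigned int val, unsigned int lo, unsigned int hi)
{
	return val < lo ? lo : (val > hi ? hi : val);
}

static unsigned int new_wake_batch(unsigned int sb_depth, unsigned int users)
{
	unsigned int depth = (sb_depth + users - 1) / users;
	/* Queues shallower than 4 * SBQ_WAIT_QUEUES (= 32) may batch below 4. */
	unsigned int min_batch = sb_depth >= (4 * SBQ_WAIT_QUEUES) ? 4 : 1;

	return clamp_val(depth / SBQ_WAIT_QUEUES, min_batch, SBQ_WAKE_BATCH);
}

int main(void)
{
	unsigned int depths[] = { 1, 8, 31, 32, 64, 128 };
	unsigned int i;

	/* Prints 1, 1, 3, 4, 8, 8: i.e. 3~1 below QD 32 and 8~4 at or above it. */
	for (i = 0; i < sizeof(depths) / sizeof(depths[0]); i++)
		printf("QD=%3u -> wake_batch=%u\n", depths[i],
		       new_wake_batch(depths[i], 1));
	return 0;
}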
Fixes: 180dccb0dba4f ("blk-mq: fix tag_get wait task can't be awakened")
Link: https://lore.kernel.org/linux-block/78cafe94-a787-e006-8851-69906f0c2128@huawei.com/T/#t
Reported-by: Alex Xu (Hello71) <alex_y_xu@yahoo.ca>
Signed-off-by: Laibin Qiu <qiulaibin@huawei.com>
Tested-by: Alex Xu (Hello71) <alex_y_xu@yahoo.ca>
Link: https://lore.kernel.org/r/20220127100047.1763746-1-qiulaibin@huawei.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Diffstat (limited to 'lib')
 lib/sbitmap.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/lib/sbitmap.c b/lib/sbitmap.c
index 6220fa67fb7e..09d293c30fd2 100644
--- a/lib/sbitmap.c
+++ b/lib/sbitmap.c
@@ -488,9 +488,13 @@ void sbitmap_queue_recalculate_wake_batch(struct sbitmap_queue *sbq,
 					    unsigned int users)
 {
 	unsigned int wake_batch;
+	unsigned int min_batch;
+	unsigned int depth = (sbq->sb.depth + users - 1) / users;
 
-	wake_batch = clamp_val((sbq->sb.depth + users - 1) /
-			users, 4, SBQ_WAKE_BATCH);
+	min_batch = sbq->sb.depth >= (4 * SBQ_WAIT_QUEUES) ? 4 : 1;
+
+	wake_batch = clamp_val(depth / SBQ_WAIT_QUEUES,
+			min_batch, SBQ_WAKE_BATCH);
 	__sbitmap_queue_update_wake_batch(sbq, wake_batch);
 }
 EXPORT_SYMBOL_GPL(sbitmap_queue_recalculate_wake_batch);
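
To make the QD=1 hang case from the commit message concrete, the following
standalone sketch (again assuming users == 1 and the same header constants;
not part of the patch) contrasts the removed formula with the new one: the
old clamp floors wake_batch at 4, which exceeds a depth of 1, while the new
clamp yields 1.

/* Sketch contrasting the old and new wake_batch formulas for QD = 1. */
#include <stdio.h>

#define SBQ_WAIT_QUEUES 8
#define SBQ_WAKE_BATCH  8

static unsigned int clamp_val(unsigned int val, unsigned int lo, unsigned int hi)
{
	return val < lo ? lo : (val > hi ? hi : val);
}

int main(void)
{
	unsigned int depth = 1, users = 1;	/* QD = 1, one active queue */
	unsigned int per_user = (depth + users - 1) / users;

	/* Old formula: hard minimum of 4 -> wake_batch (4) > depth (1). */
	unsigned int old_batch = clamp_val(per_user, 4, SBQ_WAKE_BATCH);

	/* New formula: minimum drops to 1 for depths below 4 * SBQ_WAIT_QUEUES. */
	unsigned int min_batch = depth >= (4 * SBQ_WAIT_QUEUES) ? 4 : 1;
	unsigned int new_batch = clamp_val(per_user / SBQ_WAIT_QUEUES,
					   min_batch, SBQ_WAKE_BATCH);

	/* Prints: QD=1: old wake_batch=4, new wake_batch=1 */
	printf("QD=%u: old wake_batch=%u, new wake_batch=%u\n",
	       depth, old_batch, new_batch);
	return 0;
}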