author     Jens Axboe <axboe@fb.com>    2014-10-07 16:39:20 +0200
committer  Jens Axboe <axboe@fb.com>    2014-10-07 16:39:20 +0200
commit     abab13b5c4fd1fec4f9a61622548012d93dc2831 (patch)
tree       e73fa24015b0a494fbe9dab8a1e9b3460c935b87 /block
parent     block: add bioset_create_nobvec() (diff)
download   linux-abab13b5c4fd1fec4f9a61622548012d93dc2831.tar.xz
           linux-abab13b5c4fd1fec4f9a61622548012d93dc2831.zip
blk-mq: fix potential hang if rolling wakeup depth is too high
We currently divide the queue depth by 4 as our batch wakeup count, but we split the wakeups over BT_WAIT_QUEUES number of wait queues. This defaults to 8. If the product of the resulting batch wake count and BT_WAIT_QUEUES is higher than the device queue depth, we can get into a situation where a task goes to sleep waiting for a request, but never gets woken up.

Reported-by: Bart Van Assche <bvanassche@acm.org>
Fixes: 4bb659b156996
Cc: stable@kernel.org
Signed-off-by: Jens Axboe <axboe@fb.com>
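To make the clamping arithmetic concrete, the stand-alone C sketch below (not kernel code) compares the old and new limits for a hypothetical queue depth of 32, assuming BT_WAIT_QUEUES = 8 and BT_WAIT_BATCH = 8, the values this commit message and blk-mq-tag.h of the era suggest:

#include <stdio.h>

/* Assumed values, per the commit message and block/blk-mq-tag.h of this era. */
#define BT_WAIT_QUEUES  8
#define BT_WAIT_BATCH   8

static unsigned int max_u(unsigned int a, unsigned int b)
{
        return a > b ? a : b;
}

int main(void)
{
        unsigned int depth = 32;        /* hypothetical device queue depth */

        /* Old clamp: divide by a fixed 4, ignoring the number of wait queues. */
        unsigned int old_cnt = BT_WAIT_BATCH;
        if (old_cnt > depth / 4)
                old_cnt = max_u(1U, depth / 4);

        /* New clamp: divide by BT_WAIT_QUEUES so the summed batch counts
         * over all wait queues can never exceed the queue depth. */
        unsigned int new_cnt = BT_WAIT_BATCH;
        if (new_cnt > depth / BT_WAIT_QUEUES)
                new_cnt = max_u(1U, depth / BT_WAIT_QUEUES);

        /* old: 8 * 8 = 64 > 32, new: 4 * 8 = 32 <= 32 */
        printf("old wake_cnt=%u, total over all queues=%u (depth %u)\n",
               old_cnt, old_cnt * BT_WAIT_QUEUES, depth);
        printf("new wake_cnt=%u, total over all queues=%u (depth %u)\n",
               new_cnt, new_cnt * BT_WAIT_QUEUES, depth);
        return 0;
}

With the old clamp the per-queue wake batches sum to twice the depth, so every outstanding tag can be freed without any single wait queue reaching its wake threshold; the new clamp bounds that sum by the depth itself.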
Diffstat (limited to 'block')
-rw-r--r--  block/blk-mq-tag.c  4
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index b08788086414..146fd02659ec 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -455,8 +455,8 @@ static void bt_update_count(struct blk_mq_bitmap_tags *bt,
 	}
 
 	bt->wake_cnt = BT_WAIT_BATCH;
-	if (bt->wake_cnt > depth / 4)
-		bt->wake_cnt = max(1U, depth / 4);
+	if (bt->wake_cnt > depth / BT_WAIT_QUEUES)
+		bt->wake_cnt = max(1U, depth / BT_WAIT_QUEUES);
 
 	bt->depth = depth;
 }
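Why an oversized total can hang a task: in the rolling-wakeup scheme, a sleeper on one of the BT_WAIT_QUEUES wait queues is only woken once wake_cnt tag frees have been charged against that queue. The stand-alone sketch below models that idea (a simplification under assumptions, not the kernel's actual blk-mq-tag code) by spreading depth frees across the queues and counting how many wakeups result:

#include <stdio.h>

#define BT_WAIT_QUEUES  8       /* number of rolling wait queues, per the commit message */

/* Simulate freeing 'depth' tags, charged round-robin to the wait queues.
 * A queue wakes its sleeper only when 'wake_cnt' frees have hit it. */
static unsigned int wakeups(unsigned int depth, unsigned int wake_cnt)
{
        unsigned int cnt[BT_WAIT_QUEUES];
        unsigned int woken = 0;

        for (int i = 0; i < BT_WAIT_QUEUES; i++)
                cnt[i] = wake_cnt;

        for (unsigned int f = 0; f < depth; f++) {
                int q = f % BT_WAIT_QUEUES;
                if (--cnt[q] == 0) {
                        woken++;
                        cnt[q] = wake_cnt;      /* re-arm for the next batch */
                }
        }
        return woken;
}

int main(void)
{
        unsigned int depth = 32;        /* hypothetical device queue depth */

        /* old clamp at depth 32: wake_cnt stays 8 -> 0 wakeups after all frees */
        printf("wake_cnt=8: %u wakeups\n", wakeups(depth, 8));
        /* new clamp at depth 32: wake_cnt = 4 -> every queue reaches its threshold */
        printf("wake_cnt=4: %u wakeups\n", wakeups(depth, 4));
        return 0;
}

In this model, with wake_cnt = 8 no queue ever reaches its threshold even after every tag has been freed, matching the hang described in the commit message; with wake_cnt = 4 each queue wakes exactly once.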