author	Omar Sandoval <osandov@fb.com>	2018-02-28 01:56:42 +0100
committer	Jens Axboe <axboe@kernel.dk>	2018-02-28 20:23:35 +0100
commit	e9a99a638800af25c7ed006c96fd1dabb99254b7 (patch)
tree	d23b4989804cd48bb248142f9fa6370bd2700a2c /block
parent	blk-mq-debugfs: Show zone locking information
block: clear ctx pending bit under ctx lock
When we insert a request, we set the software queue pending bit while
holding the software queue lock. However, we clear it outside of the
lock, so it's possible that a concurrent insert could set the bit after
we clear it but before we empty the request list. Afterwards, the bit
would still be set but the software queue wouldn't have any requests in
it, leading us to do a spurious run in the future. This is mostly a
benign/theoretical issue, but it makes the following change easier to
justify.

Signed-off-by: Omar Sandoval <osandov@fb.com>
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Diffstat (limited to 'block')
 block/blk-mq.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 16e83e6df404..9594a0e9f65b 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -986,9 +986,9 @@ static bool flush_busy_ctx(struct sbitmap *sb, unsigned int bitnr, void *data)
 	struct blk_mq_hw_ctx *hctx = flush_data->hctx;
 	struct blk_mq_ctx *ctx = hctx->ctxs[bitnr];
 
-	sbitmap_clear_bit(sb, bitnr);
 	spin_lock(&ctx->lock);
 	list_splice_tail_init(&ctx->rq_list, flush_data->list);
+	sbitmap_clear_bit(sb, bitnr);
 	spin_unlock(&ctx->lock);
 	return true;
 }