author	Ming Lei <tom.leiming@gmail.com>	2017-03-27 14:06:58 +0200
committer	Jens Axboe <axboe@fb.com>	2017-03-29 16:03:42 +0200
commit	d3cfb2a0ac0b8487d28a1ee207c29617bf6e6820 (patch)
tree	85c3948ceda4296641bebd97fe1fbd177481a14d /block/blk-core.c
parent	block: rename blk_mq_freeze_queue_start() (diff)
block: block new I/O just after queue is set as dying
Before commit 780db2071a ("blk-mq: decouble blk-mq freezing from generic bypassing"), the dying flag was checked before entering the queue; Tejun converted that check into one on .mq_freeze_depth, assuming the counter is increased just after the dying flag is set. Unfortunately we don't do that in blk_set_queue_dying().

This patch calls blk_freeze_queue_start() in blk_set_queue_dying(), so that new I/O is blocked as soon as the queue is set as dying. Given that blk_set_queue_dying() is always called in the remove path of a block device and the queue will be cleaned up later, we don't need to worry about undoing the counter.

Cc: Tejun Heo <tj@kernel.org>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Reviewed-by: Bart Van Assche <bart.vanassche@sandisk.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
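For illustration only, the write-side ordering this patch establishes can be modelled in userspace C11: the dying flag is published, then blk_freeze_queue_start() raises the freeze depth and marks the percpu ref dead, so any later entry attempt either takes the fast path or sees the queue frozen/dying. This is a hedged sketch of the logic, not kernel code; struct queue_model, set_queue_dying() and freeze_queue_start() below are hypothetical stand-ins for the blk-core structures and helpers.

/* Userspace model (assumption, not kernel code) of the ordering that
 * blk_set_queue_dying() + blk_freeze_queue_start() rely on after this
 * patch. All names here are hypothetical. */
#include <stdatomic.h>
#include <stdbool.h>

struct queue_model {
        atomic_bool dying;              /* models QUEUE_FLAG_DYING */
        atomic_int  mq_freeze_depth;    /* models q->mq_freeze_depth */
        atomic_bool ref_dead;           /* models __PERCPU_REF_DEAD of q->q_usage_counter */
};

static void freeze_queue_start(struct queue_model *q)
{
        /* Raise the freeze depth first, then mark the ref dead with
         * release semantics, so a reader that sees ref_dead == true is
         * guaranteed to also see the raised depth, given the matching
         * read barrier on the reader side (see the sketch after the
         * diff below). */
        atomic_fetch_add_explicit(&q->mq_freeze_depth, 1, memory_order_relaxed);
        atomic_store_explicit(&q->ref_dead, true, memory_order_release);
}

static void set_queue_dying(struct queue_model *q)
{
        /* Publish the dying flag ... */
        atomic_store_explicit(&q->dying, true, memory_order_release);
        /* ... and immediately block new I/O; before this patch nothing
         * raised the freeze depth here, so new requests could still
         * slip through blk_queue_enter() after the flag was set. */
        freeze_queue_start(q);
}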
Diffstat (limited to 'block/blk-core.c')
-rw-r--r--	block/blk-core.c	13
1 file changed, 10 insertions(+), 3 deletions(-)
diff --git a/block/blk-core.c b/block/blk-core.c
index 7b66f76f9cff..43b7d06ced69 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -500,6 +500,13 @@ void blk_set_queue_dying(struct request_queue *q)
 	queue_flag_set(QUEUE_FLAG_DYING, q);
 	spin_unlock_irq(q->queue_lock);
 
+	/*
+	 * When queue DYING flag is set, we need to block new req
+	 * entering queue, so we call blk_freeze_queue_start() to
+	 * prevent I/O from crossing blk_queue_enter().
+	 */
+	blk_freeze_queue_start(q);
+
 	if (q->mq_ops)
 		blk_mq_wake_waiters(q);
 	else {
@@ -672,9 +679,9 @@ int blk_queue_enter(struct request_queue *q, bool nowait)
 		/*
 		 * read pair of barrier in blk_freeze_queue_start(),
 		 * we need to order reading __PERCPU_REF_DEAD flag of
-		 * .q_usage_counter and reading .mq_freeze_depth,
-		 * otherwise the following wait may never return if the
-		 * two reads are reordered.
+		 * .q_usage_counter and reading .mq_freeze_depth or
+		 * queue dying flag, otherwise the following wait may
+		 * never return if the two reads are reordered.
 		 */
 		smp_rmb();
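The comment updated above is the read side of that pairing: blk_queue_enter() first observes that the percpu counter has gone dead, and only then reads .mq_freeze_depth and the dying flag, with smp_rmb() keeping those reads ordered. Continuing the hypothetical userspace model sketched after the commit message (again an assumption, not the kernel implementation), with smp_rmb() approximated by an acquire fence:

/* Sketch of the reader side, reusing the hypothetical struct queue_model. */
static bool queue_enter_blocked(struct queue_model *q)
{
        /* First read: is the ref still live? If so, enter the queue. */
        if (!atomic_load_explicit(&q->ref_dead, memory_order_relaxed))
                return false;

        /* Models smp_rmb(): pairs with the release store in
         * freeze_queue_start(). Without it the loads below could be
         * satisfied before the load of ref_dead above, the caller could
         * miss both the raised freeze depth and the dying flag, and the
         * subsequent wait might never be woken. */
        atomic_thread_fence(memory_order_acquire);

        /* Second reads: freeze in progress or queue dying. */
        return atomic_load_explicit(&q->mq_freeze_depth, memory_order_relaxed) > 0 ||
               atomic_load_explicit(&q->dying, memory_order_relaxed);
}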