| author | Ming Lei <ming.lei@redhat.com> | 2021-11-09 08:11:41 +0100 |
|---|---|---|
| committer | Jens Axboe <axboe@kernel.dk> | 2021-11-09 16:14:27 +0100 |
| commit | 9ef4d0209cbadb63656a7aa29fde49c27ab2b9bf | |
| tree | e65be820c16cf45837298aeed69984f86cf05aaa /block | |
| parent | blk-mq: don't free tags if the tag_set is used by other device in queue initi... | |
blk-mq: add one API for waiting until quiesce is done
Some drivers (NVMe, SCSI) need to call quiesce and unquiesce in pairs, but it
is hard to switch to this style, so these drivers need an atomic flag to
help balance quiesce and unquiesce.
When a quiesce is in progress, the driver still needs to wait until
the quiesce is done, so add a blk_mq_wait_quiesce_done() API for
these drivers.
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Link: https://lore.kernel.org/r/20211109071144.181581-2-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
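For context, here is a minimal driver-side sketch of the pattern the commit message describes: a driver-private atomic flag keeps quiesce and unquiesce balanced, and the new blk_mq_wait_quiesce_done() lets every stop path wait for an in-progress quiesce to finish. The struct, flag, and helper names (my_ctrl, MY_CTRL_QUIESCED, my_ctrl_stop_queue(), my_ctrl_start_queue()) are hypothetical illustrations, not part of this patch; only the blk-mq calls are real kernel API.

```c
/* Hypothetical driver-side sketch; not part of this patch. */
#include <linux/bitops.h>
#include <linux/blk-mq.h>

struct my_ctrl {
	struct request_queue	*queue;		/* hypothetical per-device queue */
	unsigned long		flags;
#define MY_CTRL_QUIESCED	0		/* hypothetical flag bit */
};

static void my_ctrl_stop_queue(struct my_ctrl *ctrl)
{
	/* Only the first caller actually starts the quiesce... */
	if (!test_and_set_bit(MY_CTRL_QUIESCED, &ctrl->flags))
		blk_mq_quiesce_queue_nowait(ctrl->queue);
	/* ...but every caller waits until in-flight dispatch has drained. */
	blk_mq_wait_quiesce_done(ctrl->queue);
}

static void my_ctrl_start_queue(struct my_ctrl *ctrl)
{
	/* Unquiesce exactly once, keeping quiesce/unquiesce balanced. */
	if (test_and_clear_bit(MY_CTRL_QUIESCED, &ctrl->flags))
		blk_mq_unquiesce_queue(ctrl->queue);
}
```

With this split, a driver can call my_ctrl_stop_queue() from several paths (error handling, reset, teardown) without unbalancing the quiesce/unquiesce pairing.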
Diffstat (limited to 'block')
-rw-r--r-- | block/blk-mq.c | 28 |
1 file changed, 20 insertions, 8 deletions
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 5a9cd9fe8da3..d3e5fcbc943b 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -251,22 +251,18 @@ void blk_mq_quiesce_queue_nowait(struct request_queue *q)
 EXPORT_SYMBOL_GPL(blk_mq_quiesce_queue_nowait);
 
 /**
- * blk_mq_quiesce_queue() - wait until all ongoing dispatches have finished
+ * blk_mq_wait_quiesce_done() - wait until in-progress quiesce is done
  * @q: request queue.
  *
- * Note: this function does not prevent that the struct request end_io()
- * callback function is invoked. Once this function is returned, we make
- * sure no dispatch can happen until the queue is unquiesced via
- * blk_mq_unquiesce_queue().
+ * Note: it is driver's responsibility for making sure that quiesce has
+ * been started.
  */
-void blk_mq_quiesce_queue(struct request_queue *q)
+void blk_mq_wait_quiesce_done(struct request_queue *q)
 {
 	struct blk_mq_hw_ctx *hctx;
 	unsigned int i;
 	bool rcu = false;
 
-	blk_mq_quiesce_queue_nowait(q);
-
 	queue_for_each_hw_ctx(q, hctx, i) {
 		if (hctx->flags & BLK_MQ_F_BLOCKING)
 			synchronize_srcu(hctx->srcu);
@@ -276,6 +272,22 @@ void blk_mq_quiesce_queue(struct request_queue *q)
 	if (rcu)
 		synchronize_rcu();
 }
+EXPORT_SYMBOL_GPL(blk_mq_wait_quiesce_done);
+
+/**
+ * blk_mq_quiesce_queue() - wait until all ongoing dispatches have finished
+ * @q: request queue.
+ *
+ * Note: this function does not prevent that the struct request end_io()
+ * callback function is invoked. Once this function is returned, we make
+ * sure no dispatch can happen until the queue is unquiesced via
+ * blk_mq_unquiesce_queue().
+ */
+void blk_mq_quiesce_queue(struct request_queue *q)
+{
+	blk_mq_quiesce_queue_nowait(q);
+	blk_mq_wait_quiesce_done(q);
+}
 EXPORT_SYMBOL_GPL(blk_mq_quiesce_queue);
 
 /*