author		Ming Lei <ming.lei@redhat.com>	2019-04-30 03:52:24 +0200
committer	Jens Axboe <axboe@kernel.dk>	2019-05-04 15:24:04 +0200
commit		fbc2a15e3433058582e5635aabe48a3011a644a8 (patch)
tree		0d65a92b4719bfb308bc0b52744296c089a906ef /block/blk-mq.c
parent		blk-mq: grab .q_usage_counter when queuing request from plug code path (diff)
blk-mq: move cancel of requeue_work into blk_mq_release
While holding the queue's kobject refcount, it is safe for the driver to schedule a requeue. However, blk_mq_kick_requeue_list() may be called after blk_sync_queue() has completed because of concurrent requeue activity, so the requeue work may not yet have finished when the queue is freed, and a kernel oops is triggered. So move the cancel of requeue_work into blk_mq_release() to avoid the race between requeue and freeing the queue.

Cc: Dongli Zhang <dongli.zhang@oracle.com>
Cc: James Smart <james.smart@broadcom.com>
Cc: Bart Van Assche <bart.vanassche@wdc.com>
Cc: linux-scsi@vger.kernel.org
Cc: Martin K. Petersen <martin.petersen@oracle.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: James E. J. Bottomley <jejb@linux.vnet.ibm.com>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: James Smart <james.smart@broadcom.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
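The race above boils down to a delayed work item that can still be kicked after blk_sync_queue() has returned, so the only safe place to cancel it is the final release path. The sketch below is not part of the patch; it is a minimal, self-contained kernel-module illustration of that pattern, with hypothetical names (demo_queue, demo_requeue_fn) standing in for the request queue and its requeue_work.

/*
 * Minimal sketch (assumed names, not from the patch): delayed work that
 * may still be scheduled late must be cancelled with
 * cancel_delayed_work_sync() in the final teardown/release path, so it
 * cannot run against an object that is being freed.
 */
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/workqueue.h>
#include <linux/jiffies.h>

struct demo_queue {
	struct delayed_work requeue_work;	/* stands in for q->requeue_work */
};

static struct demo_queue *demo_q;

static void demo_requeue_fn(struct work_struct *work)
{
	struct demo_queue *dq = container_of(to_delayed_work(work),
					     struct demo_queue, requeue_work);

	/* a real driver would drain its requeue list here */
	(void)dq;
}

static int __init demo_init(void)
{
	demo_q = kzalloc(sizeof(*demo_q), GFP_KERNEL);
	if (!demo_q)
		return -ENOMEM;

	INIT_DELAYED_WORK(&demo_q->requeue_work, demo_requeue_fn);

	/* a "kick", analogous to blk_mq_kick_requeue_list() */
	schedule_delayed_work(&demo_q->requeue_work, msecs_to_jiffies(100));
	return 0;
}

static void __exit demo_exit(void)
{
	/*
	 * Cancel in the final release path, once no new kicks are possible,
	 * so the work cannot touch demo_q after it is freed.
	 */
	cancel_delayed_work_sync(&demo_q->requeue_work);
	kfree(demo_q);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Sketch: cancel delayed work in the release/teardown path");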
Diffstat (limited to 'block/blk-mq.c')
-rw-r--r--	block/blk-mq.c	2
1 file changed, 2 insertions(+), 0 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index c9bf9b92d2db..741cf8d55e9c 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2635,6 +2635,8 @@ void blk_mq_release(struct request_queue *q)
 	struct blk_mq_hw_ctx *hctx;
 	unsigned int i;
 
+	cancel_delayed_work_sync(&q->requeue_work);
+
 	/* hctx kobj stays in hctx */
 	queue_for_each_hw_ctx(q, hctx, i) {
 		if (!hctx)