author | Jianchao Wang <jianchao.w.wang@oracle.com> | 2018-08-09 16:34:17 +0200 |
---|---|---|
committer | Jens Axboe <axboe@kernel.dk> | 2018-08-09 16:34:17 +0200 |
commit | d263ed9926823c462f99a7679e18f0c9e5b8550d (patch) | |
tree | cffbdef4fa64044fe0bdb356d2412062ab9160e6 /block/blk-mq.c | |
parent | block: bvec_nr_vecs() returns value for wrong slab (diff) | |
blk-mq: count the hctx as active before allocating tag
Currently, we count the hctx as active only after a driver tag has been
allocated successfully. If a previously inactive hctx tries to get a
tag for the first time, the attempt may fail and the hctx has to wait.
However, due to the stale tags->active_queues count, the other
shared-tags users are still able to occupy all of the driver tags while
someone is waiting for one. Consequently, even when the previously
inactive hctx is woken up, it may still fail to get a tag and can be
starved.

To fix this, count the hctx as active before trying to allocate a
driver tag; then, while it is waiting for a tag, the other shared-tags
users will reserve budget for it.
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Diffstat (limited to 'block/blk-mq.c')
-rw-r--r-- | block/blk-mq.c | 8 |
1 file changed, 6 insertions, 2 deletions
```diff
diff --git a/block/blk-mq.c b/block/blk-mq.c
index e13bdc2707ce..5efd789910e2 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -285,7 +285,7 @@ static struct request *blk_mq_rq_ctx_init(struct blk_mq_alloc_data *data,
 		rq->tag = -1;
 		rq->internal_tag = tag;
 	} else {
-		if (blk_mq_tag_busy(data->hctx)) {
+		if (data->hctx->flags & BLK_MQ_F_TAG_SHARED) {
 			rq_flags = RQF_MQ_INFLIGHT;
 			atomic_inc(&data->hctx->nr_active);
 		}
@@ -367,6 +367,8 @@ static struct request *blk_mq_get_request(struct request_queue *q,
 		if (!op_is_flush(op) && e->type->ops.mq.limit_depth &&
 		    !(data->flags & BLK_MQ_REQ_RESERVED))
 			e->type->ops.mq.limit_depth(op, data);
+	} else {
+		blk_mq_tag_busy(data->hctx);
 	}
 
 	tag = blk_mq_get_tag(data);
@@ -971,6 +973,7 @@ bool blk_mq_get_driver_tag(struct request *rq)
 		.hctx = blk_mq_map_queue(rq->q, rq->mq_ctx->cpu),
 		.flags = BLK_MQ_REQ_NOWAIT,
 	};
+	bool shared;
 
 	if (rq->tag != -1)
 		goto done;
@@ -978,9 +981,10 @@ bool blk_mq_get_driver_tag(struct request *rq)
 	if (blk_mq_tag_is_reserved(data.hctx->sched_tags, rq->internal_tag))
 		data.flags |= BLK_MQ_REQ_RESERVED;
 
+	shared = blk_mq_tag_busy(data.hctx);
 	rq->tag = blk_mq_get_tag(&data);
 	if (rq->tag >= 0) {
-		if (blk_mq_tag_busy(data.hctx)) {
+		if (shared) {
 			rq->rq_flags |= RQF_MQ_INFLIGHT;
 			atomic_inc(&data.hctx->nr_active);
 		}
```
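To make the commit message's "reserve budget" effect concrete, here is a minimal userspace sketch of the fairness logic, under the assumption that a shared tag set limits each active queue to roughly depth / active_queues tags (the check that hctx_may_queue() performs in the kernel). The types and helpers below (tag_set, hctx, tag_busy(), may_queue(), get_tag(), put_tag()) are hypothetical simplifications for illustration, not the kernel's own API:

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-ins for struct blk_mq_tags / blk_mq_hw_ctx;
 * the names and fields are illustrative, not the kernel's. */
struct tag_set {
	unsigned int depth;         /* total driver tags in the shared set */
	unsigned int active_queues; /* queues counted as active            */
	unsigned int in_use;        /* tags handed out across all queues   */
};

struct hctx {
	bool shared;                /* stands in for BLK_MQ_F_TAG_SHARED   */
	bool active;                /* already counted in active_queues?   */
	unsigned int nr_active;     /* tags this queue currently holds     */
	struct tag_set *tags;
};

/* Models the spirit of blk_mq_tag_busy(): mark the hctx active (once)
 * and report whether the tag set is shared at all. */
static bool tag_busy(struct hctx *hctx)
{
	if (!hctx->shared)
		return false;
	if (!hctx->active) {
		hctx->active = true;
		hctx->tags->active_queues++;
	}
	return true;
}

/* Rough model of the fair-share check (akin to hctx_may_queue()):
 * each active queue gets about depth / active_queues tags. */
static bool may_queue(const struct hctx *hctx)
{
	const struct tag_set *t = hctx->tags;
	unsigned int users = t->active_queues ? t->active_queues : 1;

	return hctx->nr_active < t->depth / users;
}

static bool get_tag(struct hctx *hctx)
{
	struct tag_set *t = hctx->tags;

	/* Patched ordering: become active *before* the allocation
	 * attempt, so even a failing attempt shrinks the budget of
	 * the other shared-tags users. */
	tag_busy(hctx);

	if (!may_queue(hctx) || t->in_use == t->depth)
		return false;
	hctx->nr_active++;
	t->in_use++;
	return true;
}

static void put_tag(struct hctx *hctx)
{
	hctx->nr_active--;
	hctx->tags->in_use--;
}

int main(void)
{
	struct tag_set t = { .depth = 32 };
	struct hctx a = { .shared = true, .tags = &t };
	struct hctx b = { .shared = true, .tags = &t };

	/* A runs alone: active_queues == 1, so it may take all 32 tags. */
	while (get_tag(&a))
		;
	printf("A holds %u/%u tags\n", a.nr_active, t.depth);

	/* B's first attempt fails (no free tags), but with the patched
	 * ordering B is already counted in active_queues. */
	printf("B got a tag: %s, active_queues = %u\n",
	       get_tag(&b) ? "yes" : "no", t.active_queues);

	/* A completes one request and greedily retries, as it would
	 * while B sleeps on the tag waitqueue. */
	put_tag(&a);
	printf("A re-allocates: %s\n", get_tag(&a) ? "yes" : "no");

	/* A is over its halved share, so the freed tag reaches B. With
	 * the old ordering, active_queues would still be 1 here, A
	 * would win the retry, and B could starve indefinitely. */
	printf("B got a tag: %s\n", get_tag(&b) ? "yes" : "no");
	return 0;
}
```

Run as-is, the sketch shows that queue B's first, failing attempt already bumps active_queues, so queue A's retry after completing a request is rejected and the freed tag drains toward B. Moving tag_busy() back to after a successful allocation (the pre-patch ordering) reproduces the starvation the commit message describes.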