author     Jens Axboe <axboe@fb.com>    2017-01-27 09:00:47 +0100
committer  Jens Axboe <axboe@fb.com>    2017-01-27 17:03:14 +0100
commit     bd6737f1ae92e2f1c6e8362efe96dbe7f18fa07d (patch)
tree       ffed03cc3bd01143a8e43d6daca2288836a4a9e3 /block/blk-mq.h
parent     block: add a op_is_flush helper (diff)
blk-mq-sched: add flush insertion into blk_mq_sched_insert_request()
Instead of letting the caller check whether a request is a flush and
handle the details of inserting it, put that logic in the scheduler
insertion function. This fixes direct flush insertion outside of the
usual make_request_fn path, such as from dm via
blk_insert_cloned_request().
Signed-off-by: Jens Axboe <axboe@fb.com>
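The diffstat below covers only block/blk-mq.h, so the insertion-path change the
commit message describes is not shown on this page. As a rough illustration, a
minimal sketch of folding the flush check into the scheduler insertion function
might look like the following. The exact parameter list and the
__blk_mq_sched_insert_flush() helper name are assumptions made for this sketch;
only blk_mq_get_driver_tag(), blk_insert_flush(), op_is_flush() and
blk_mq_run_hw_queue() are taken from the kernel sources referenced here.

	#include <linux/blk-mq.h>

	#include "blk.h"	/* blk_insert_flush() */
	#include "blk-mq.h"	/* blk_mq_get_driver_tag(), blk_mq_map_queue() */

	/* Illustrative helper name; not necessarily what the patch calls it. */
	static void __blk_mq_sched_insert_flush(struct blk_mq_hw_ctx *hctx,
						struct request *rq, bool can_block)
	{
		/*
		 * A flush bypasses the I/O scheduler, but it still needs a
		 * driver tag before it can enter the flush state machine.
		 */
		if (blk_mq_get_driver_tag(rq, &hctx, can_block)) {
			blk_insert_flush(rq);
			blk_mq_run_hw_queue(hctx, true);
		}
		/* A real implementation must also handle tag exhaustion (requeue). */
	}

	void blk_mq_sched_insert_request(struct request *rq, bool at_head,
					 bool run_queue, bool async, bool can_block)
	{
		struct request_queue *q = rq->q;
		struct blk_mq_ctx *ctx = rq->mq_ctx;
		struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(q, ctx->cpu);

		/*
		 * Divert flush/FUA requests here so that callers such as
		 * blk_insert_cloned_request() no longer have to special-case them.
		 */
		if (op_is_flush(rq->cmd_flags)) {
			__blk_mq_sched_insert_flush(hctx, rq, can_block);
			return;
		}

		/* ... normal scheduler / sw-queue insertion continues here ... */
	}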
Diffstat (limited to 'block/blk-mq.h')
-rw-r--r--   block/blk-mq.h   2
1 file changed, 2 insertions, 0 deletions
diff --git a/block/blk-mq.h b/block/blk-mq.h
index 077a4003f1fd..57cdbf6c0cee 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -34,6 +34,8 @@ void blk_mq_wake_waiters(struct request_queue *q);
 bool blk_mq_dispatch_rq_list(struct blk_mq_hw_ctx *, struct list_head *);
 void blk_mq_flush_busy_ctxs(struct blk_mq_hw_ctx *hctx, struct list_head *list);
 bool blk_mq_hctx_has_pending(struct blk_mq_hw_ctx *hctx);
+bool blk_mq_get_driver_tag(struct request *rq, struct blk_mq_hw_ctx **hctx,
+		bool wait);
 
 /*
  * Internal helpers for allocating/freeing the request map
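The only hunk in block/blk-mq.h is the declaration above: blk_mq_get_driver_tag()
becomes visible outside blk-mq.c (presumably it was a file-local helper before)
so that the scheduler insertion path in blk-mq-sched.c can allocate a driver tag
for a flush request before handing it to blk_insert_flush(), as in the sketch
following the commit message.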