author		Yu Kuai <yukuai3@huawei.com>	2023-06-06 03:14:38 +0200
committer	Jens Axboe <axboe@kernel.dk>	2023-06-07 15:51:00 +0200
commit		a7cfa0af0c88353b4eb59db5a2a0fbe35329b3f9 (patch)
tree		9f9aa517dd1d92a702ecd776732fc5551ca6cbd4 /block/blk-ioc.c
parent		nbd: Add the maximum limit of allocated index in nbd_dev_add (diff)
blk-ioc: fix recursive spin_lock/unlock_irq() in ioc_clear_queue()
Recursive spin_lock/unlock_irq() is not safe, because spin_unlock_irq()
will enable irqs unconditionally:

spin_lock_irq queue_lock -> disable irq
spin_lock_irq ioc->lock
spin_unlock_irq ioc->lock -> enable irq
/*
 * An AA deadlock will be triggered if the current context is preempted
 * by an irq here, and the irq handler tries to take queue_lock again.
 */
spin_unlock_irq queue_lock
Fix this problem by using spin_lock/unlock() directly for 'ioc->lock'.
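
To illustrate the pattern outside the patch, here is a minimal sketch with
hypothetical locks (outer_lock standing in for q->queue_lock, inner_lock for
icq->ioc->lock); it is not the kernel code itself:

	#include <linux/spinlock.h>

	static DEFINE_SPINLOCK(outer_lock);	/* stands in for q->queue_lock */
	static DEFINE_SPINLOCK(inner_lock);	/* stands in for icq->ioc->lock */

	static void broken_nesting(void)
	{
		spin_lock_irq(&outer_lock);	/* irqs disabled */
		spin_lock_irq(&inner_lock);
		spin_unlock_irq(&inner_lock);	/* BUG: irqs re-enabled here */
		/* an irq firing now can try to take outer_lock -> AA deadlock */
		spin_unlock_irq(&outer_lock);
	}

	static void fixed_nesting(void)
	{
		spin_lock_irq(&outer_lock);	/* irqs disabled for the whole region */
		spin_lock(&inner_lock);		/* plain lock: irq state untouched */
		spin_unlock(&inner_lock);
		spin_unlock_irq(&outer_lock);	/* irqs re-enabled exactly once */
	}

When the caller's irq state is not known, spin_lock_irqsave()/
spin_unlock_irqrestore() is the usual alternative; here the outer
spin_lock_irq() guarantees irqs are already disabled, so the plain variants
suffice.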
Fixes: 5a0ac57c48aa ("blk-ioc: protect ioc_destroy_icq() by 'queue_lock'")
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20230606011438.3743440-1-yukuai1@huaweicloud.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Diffstat (limited to 'block/blk-ioc.c')
-rw-r--r--	block/blk-ioc.c	4
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/block/blk-ioc.c b/block/blk-ioc.c
index d5db92e62c43..25dd4db11121 100644
--- a/block/blk-ioc.c
+++ b/block/blk-ioc.c
@@ -179,9 +179,9 @@ void ioc_clear_queue(struct request_queue *q)
 		 * Other context won't hold ioc lock to wait for queue_lock, see
 		 * details in ioc_release_fn().
 		 */
-		spin_lock_irq(&icq->ioc->lock);
+		spin_lock(&icq->ioc->lock);
 		ioc_destroy_icq(icq);
-		spin_unlock_irq(&icq->ioc->lock);
+		spin_unlock(&icq->ioc->lock);
 	}
 	spin_unlock_irq(&q->queue_lock);
 }
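
For reference, a reconstruction of ioc_clear_queue() as it reads with the
hunk applied; the lines above the comment fall outside the hunk and are
assumed from context:

	void ioc_clear_queue(struct request_queue *q)
	{
		spin_lock_irq(&q->queue_lock);
		while (!list_empty(&q->icq_list)) {
			struct io_cq *icq =
				list_first_entry(&q->icq_list, struct io_cq, q_node);

			/*
			 * Other context won't hold ioc lock to wait for queue_lock, see
			 * details in ioc_release_fn().
			 */
			spin_lock(&icq->ioc->lock);	/* irqs already off via queue_lock */
			ioc_destroy_icq(icq);
			spin_unlock(&icq->ioc->lock);
		}
		spin_unlock_irq(&q->queue_lock);
	}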