author     Maurizio Lombardi <mlombard@redhat.com>    2024-10-15 13:21:00 +0200
committer  Keith Busch <kbusch@kernel.org>            2024-10-17 18:55:22 +0200
commit     26bc0a81f64ce00fc4342c38eeb2eddaad084dd2
tree       0be152e82a7802d9a49bfda24fd9c2a1799b0cc2 /drivers/nvme/host/pci.c
parent     nvme-multipath: defer partition scanning
nvme-pci: fix race condition between reset and nvme_dev_disable()
nvme_dev_disable() modifies the dev->online_queues field;
nvme_pci_update_nr_queues() must therefore avoid racing against it,
otherwise it could end up passing an invalid queue count to
blk_mq_update_nr_hw_queues().
WARNING: CPU: 39 PID: 61303 at drivers/pci/msi/api.c:347
pci_irq_get_affinity+0x187/0x210
Workqueue: nvme-reset-wq nvme_reset_work [nvme]
RIP: 0010:pci_irq_get_affinity+0x187/0x210
Call Trace:
<TASK>
? blk_mq_pci_map_queues+0x87/0x3c0
? pci_irq_get_affinity+0x187/0x210
blk_mq_pci_map_queues+0x87/0x3c0
nvme_pci_map_queues+0x189/0x460 [nvme]
blk_mq_update_nr_hw_queues+0x2a/0x40
nvme_reset_work+0x1be/0x2a0 [nvme]
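
To see the race in isolation, here is a minimal userspace sketch using
pthreads as stand-ins for the kernel primitives. All names and values are
illustrative only, not the driver's code; build with: cc -pthread race.c

/* the race before the fix, in miniature */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t shutdown_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned int online_queues = 8;

/* stands in for nvme_dev_disable(): clears the count under the lock */
static void *disable_thread(void *arg)
{
	pthread_mutex_lock(&shutdown_lock);
	online_queues = 0;
	pthread_mutex_unlock(&shutdown_lock);
	return NULL;
}

/* stands in for the old nvme_pci_update_nr_queues(): no locking at all */
static void *reset_thread(void *arg)
{
	unsigned int nr = online_queues - 1;

	/* if disable_thread won the race, nr is 0 - 1 == UINT_MAX */
	printf("update_nr_hw_queues(%u)\n", nr);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, disable_thread, NULL);
	pthread_create(&b, NULL, reset_thread, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}
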
Fix the bug by locking the shutdown_lock mutex before using
dev->online_queues, and give up if nvme_dev_disable() is running or has
already been executed.
Fixes: 949928c1c731 ("NVMe: Fix possible queue use after freed")
Tested-by: Yi Zhang <yi.zhang@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Maurizio Lombardi <mlombard@redhat.com>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Diffstat
 drivers/nvme/host/pci.c | 19 ++++++++++++++++---
 1 file changed, 16 insertions(+), 3 deletions(-)
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 7990c3f22ecf..4b9fda0b1d9a 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -2506,17 +2506,29 @@ static unsigned int nvme_pci_nr_maps(struct nvme_dev *dev)
 	return 1;
 }
 
-static void nvme_pci_update_nr_queues(struct nvme_dev *dev)
+static bool nvme_pci_update_nr_queues(struct nvme_dev *dev)
 {
 	if (!dev->ctrl.tagset) {
 		nvme_alloc_io_tag_set(&dev->ctrl, &dev->tagset, &nvme_mq_ops,
 				nvme_pci_nr_maps(dev), sizeof(struct nvme_iod));
-		return;
+		return true;
+	}
+
+	/* Give up if we are racing with nvme_dev_disable() */
+	if (!mutex_trylock(&dev->shutdown_lock))
+		return false;
+
+	/* Check if nvme_dev_disable() has been executed already */
+	if (!dev->online_queues) {
+		mutex_unlock(&dev->shutdown_lock);
+		return false;
 	}
 
 	blk_mq_update_nr_hw_queues(&dev->tagset, dev->online_queues - 1);
 	/* free previously allocated queues that are no longer usable */
 	nvme_free_queues(dev, dev->online_queues);
+	mutex_unlock(&dev->shutdown_lock);
+	return true;
 }
 
 static int nvme_pci_enable(struct nvme_dev *dev)
@@ -2797,7 +2809,8 @@ static void nvme_reset_work(struct work_struct *work)
 		nvme_dbbuf_set(dev);
 		nvme_unquiesce_io_queues(&dev->ctrl);
 		nvme_wait_freeze(&dev->ctrl);
-		nvme_pci_update_nr_queues(dev);
+		if (!nvme_pci_update_nr_queues(dev))
+			goto out;
 		nvme_unfreeze(&dev->ctrl);
 	} else {
 		dev_warn(dev->ctrl.device, "IO queues lost\n");
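
The key design point is to give up rather than wait: mutex_trylock()
means the reset worker never sleeps behind a shutdown whose outcome would
invalidate the queue count anyway. Below is the earlier userspace sketch
with the patch's guard applied (again pthreads stand-ins with illustrative
names, not the driver's code):

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t shutdown_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned int online_queues = 8;

static bool update_nr_queues(void)
{
	/* give up if we are racing with the disable path */
	if (pthread_mutex_trylock(&shutdown_lock) != 0)
		return false;

	/* give up if the disable path has already run */
	if (!online_queues) {
		pthread_mutex_unlock(&shutdown_lock);
		return false;
	}

	/* the count is now stable for the duration of the update */
	printf("update_nr_hw_queues(%u)\n", online_queues - 1);
	pthread_mutex_unlock(&shutdown_lock);
	return true;
}

int main(void)
{
	if (!update_nr_queues())
		fprintf(stderr, "reset aborted\n"); /* the caller's "goto out" */
	return 0;
}

Returning false maps onto the second hunk of the patch: nvme_reset_work()
now treats a failed update as a reason to abandon the reset via its
existing error path instead of continuing with stale queue state.
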