| author | Jon Derrick <jonathan.derrick@intel.com> | 2015-07-21 23:08:13 +0200 |
|---|---|---|
| committer | Jens Axboe <axboe@fb.com> | 2015-07-21 23:36:24 +0200 |
| commit | c45f5c9943ce0b16b299b543c2aae12408039027 | |
| tree | 93839ab1a736cad935db2646e314d7260b07d6fe | |
| parent | NVMe: Use CMB for the IO SQes if available | |
nvme: Fixes u64 division which breaks i386 builds
Uses div_u64 for u64 division and round_down, a bitwise operation,
instead of rounddown, which uses a modulus.
Signed-off-by: Jon Derrick <jonathan.derrick@intel.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
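Why the open-coded expression breaks 32-bit builds: `dev->cmb_size` is a `u64`, and on i386 gcc typically lowers an open-coded 64-bit `/` or `%` to a libgcc helper call (`__udivdi3`/`__umoddi3`) that the kernel does not link against, so the build fails at link time. The sketch below is a standalone userspace illustration of the replacement sequence, not kernel code; the helper definitions and the example sizes are assumptions made for the sketch.

```c
#include <stdint.h>
#include <stdio.h>

typedef uint64_t u64;
typedef uint32_t u32;

/* Simplified stand-in for the kernel's div_u64(): in the kernel this helper
 * performs the 64-bit division without emitting a __udivdi3 call on i386. */
static inline u64 div_u64(u64 dividend, u32 divisor)
{
	return dividend / divisor;
}

/* Simplified round_down(): mask off the low bits; only valid when the
 * alignment is a power of two, which dev->page_size always is. */
#define round_down(x, align)	((x) & ~((u64)(align) - 1))

int main(void)
{
	/* Hypothetical sizes, chosen only to exercise the arithmetic. */
	u64 cmb_size = 4ULL * 1024 * 1024;	/* 4 MiB controller memory buffer */
	u32 nr_io_queues = 6;
	u32 page_size = 4096;
	u32 entry_size = 64;			/* one submission queue entry */

	/* Same sequence as the patched nvme_cmb_qdepth() in the diff below. */
	u64 mem_per_q = div_u64(cmb_size, nr_io_queues);
	mem_per_q = round_down(mem_per_q, page_size);
	u64 q_depth = div_u64(mem_per_q, entry_size);

	printf("queue depth limited to %llu entries per queue\n",
	       (unsigned long long)q_depth);
	return 0;
}
```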
-rw-r--r-- | drivers/block/nvme-core.c | 5 |
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/block/nvme-core.c b/drivers/block/nvme-core.c
index 82b4ffb6eefa..666e994fd622 100644
--- a/drivers/block/nvme-core.c
+++ b/drivers/block/nvme-core.c
@@ -1454,8 +1454,9 @@ static int nvme_cmb_qdepth(struct nvme_dev *dev, int nr_io_queues,
 	unsigned q_size_aligned = roundup(q_depth * entry_size, dev->page_size);
 
 	if (q_size_aligned * nr_io_queues > dev->cmb_size) {
-		q_depth = rounddown(dev->cmb_size / nr_io_queues,
-				    dev->page_size) / entry_size;
+		u64 mem_per_q = div_u64(dev->cmb_size, nr_io_queues);
+		mem_per_q = round_down(mem_per_q, dev->page_size);
+		q_depth = div_u64(mem_per_q, entry_size);
 
 		/*
 		 * Ensure the reduced q_depth is above some threshold where it
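For context on the rounddown() → round_down() swap in the hunk above, here is a simplified comparison of the two helpers (paraphrased for illustration, not the exact macros from the kernel headers):

```c
/* rounddown(): works for any alignment, but the '%' on a u64 operand becomes
 * a __umoddi3 libgcc call on 32-bit x86, which the kernel cannot link. */
#define rounddown(x, y)		((x) - ((x) % (y)))

/* round_down(): requires a power-of-two alignment (dev->page_size is one) and
 * compiles to a single AND with a mask -- no division or modulus at all. */
#define round_down(x, y)	((x) & ~((__typeof__(x))(y) - 1))
```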