author		Mike Christie <michael.christie@oracle.com>	2023-08-17 21:29:02 +0200
committer	Martin K. Petersen <martin.petersen@oracle.com>	2023-08-21 23:20:48 +0200
commit		84c073fd89de22d5cb09edffb1f692a1964fd584 (patch)
tree		747a7abdff001d5a0ff0b65584a2cad5e7193ee2 /drivers/target
parent		scsi: lpfc: Do not abuse UUID APIs and LPFC_COMPRESS_VMID_SIZE (diff)
download	linux-84c073fd89de22d5cb09edffb1f692a1964fd584.tar.xz
		linux-84c073fd89de22d5cb09edffb1f692a1964fd584.zip
scsi: target: Fix write perf due to unneeded throttling
The write back throttling (WBT) code checks if REQ_SYNC | REQ_IDLE is set to
determine if a write is O_DIRECT vs buffered. If the bits are not set, it
assumes the write is buffered and will throttle LIO when certain metrics are
hit. LIO itself is not using the buffer cache and is doing direct I/O, so set
the direct bits so we are not throttled.

When the initiator application is doing direct I/O this can greatly improve
performance. It depends on the backend device, but we have seen the WBT code
throttle writes to only 20K IOPS with 4K I/Os when the device can support
100K+.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
Link: https://lore.kernel.org/r/20230817192902.346791-1-michael.christie@oracle.com
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
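For context (not part of the commit): the WBT check the message refers to lives
in block/blk-wbt.c. Paraphrased roughly, it skips throttling only when a write
bio carries both REQ_SYNC and REQ_IDLE, i.e. when the write looks like an
O_DIRECT submission, which is why setting those bits in the patch below exempts
LIO's writes. The sketch is approximate, not a verbatim copy of the kernel
source; the function name here is made up for illustration.

/*
 * Paraphrased sketch of the WBT throttling decision (approximate, not a
 * verbatim copy of block/blk-wbt.c). A write bio is exempt from buffered
 * write-back throttling only when both REQ_SYNC and REQ_IDLE are set.
 */
#include <linux/bio.h>
#include <linux/blk_types.h>

static bool example_wbt_should_throttle(struct bio *bio)
{
	switch (bio_op(bio)) {
	case REQ_OP_WRITE:
		/* Don't throttle O_DIRECT-style writes (REQ_SYNC | REQ_IDLE). */
		if ((bio->bi_opf & (REQ_SYNC | REQ_IDLE)) ==
		    (REQ_SYNC | REQ_IDLE))
			return false;
		fallthrough;
	case REQ_OP_DISCARD:
		return true;
	default:
		return false;
	}
}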
Diffstat (limited to 'drivers/target')
-rw-r--r--	drivers/target/target_core_iblock.c	7
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/drivers/target/target_core_iblock.c b/drivers/target/target_core_iblock.c
index 3d1b511ea284..5937a7ed6989 100644
--- a/drivers/target/target_core_iblock.c
+++ b/drivers/target/target_core_iblock.c
@@ -740,11 +740,16 @@ iblock_execute_rw(struct se_cmd *cmd, struct scatterlist *sgl, u32 sgl_nents,
 	if (data_direction == DMA_TO_DEVICE) {
 		struct iblock_dev *ib_dev = IBLOCK_DEV(dev);
+
+		/*
+		 * Set bits to indicate WRITE_ODIRECT so we are not throttled
+		 * by WBT.
+		 */
+		opf = REQ_OP_WRITE | REQ_SYNC | REQ_IDLE;
 		/*
 		 * Force writethrough using REQ_FUA if a volatile write cache
 		 * is not enabled, or if initiator set the Force Unit Access bit.
 		 */
-		opf = REQ_OP_WRITE;
 		miter_dir = SG_MITER_TO_SG;
 		if (bdev_fua(ib_dev->ibd_bd)) {
 			if (cmd->se_cmd_flags & SCF_FUA)