| author | Guangguan Wang <guangguan.wang@linux.alibaba.com> | 2022-05-16 07:51:37 +0200 |
|---|---|---|
| committer | Jakub Kicinski <kuba@kernel.org> | 2022-05-18 02:34:12 +0200 |
| commit | 793a7df63071eb09e5b88addf2a569d7bfd3c973 | |
| tree | c630bccadfc8439ab45543201cf3af316857a7fa /net/smc/smc_ism.c | |
| parent | net/smc: send cdc msg inline if qp has sufficient inline space | |
net/smc: rdma write inline if qp has sufficient inline space
Issuing RDMA writes with the inline flag when sending small packets,
whose length is shorter than the qp's max_inline_data, can
help reduce latency.
In my test environment, two VMs running on the same
physical host whose NICs (ConnectX-4 Lx) are working in
SR-IOV mode, qperf shows a 0.5us-0.7us improvement in latency.
Test command:
server: smc_run taskset -c 1 qperf
client: smc_run taskset -c 1 qperf <server ip> -oo \
msg_size:1:2K:*2 -t 30 -vu tcp_lat
The results are shown below:
msgsize before after
1B 11.2 us 10.6 us (-0.6 us)
2B 11.2 us 10.7 us (-0.5 us)
4B 11.3 us 10.7 us (-0.6 us)
8B 11.2 us 10.6 us (-0.6 us)
16B 11.3 us 10.7 us (-0.6 us)
32B 11.3 us 10.6 us (-0.7 us)
64B 11.2 us 11.2 us (0 us)
128B 11.2 us 11.2 us (0 us)
256B 11.2 us 11.2 us (0 us)
512B 11.4 us 11.3 us (-0.1 us)
1KB 11.4 us 11.5 us (+0.1 us)
2KB 11.5 us 11.5 us (0 us)
Signed-off-by: Guangguan Wang <guangguan.wang@linux.alibaba.com>
Reviewed-by: Tony Lu <tonylu@linux.alibaba.com>
Tested-by: kernel test robot <lkp@intel.com>
Acked-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>