path: root/drivers/infiniband
* Merge tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma (Linus Torvalds, 2023-04-30; 98 files, -5382/+7742)

    Pull rdma updates from Jason Gunthorpe:
     "Usual wide collection of unrelated items in drivers:

      - Driver bug fixes and treewide cleanups in hfi1, siw, qib, mlx5,
        rxe, usnic, bnxt_re, ocrdma, iser:
          - remove unnecessary NULL checks
          - kmap obsolescence
          - pci_enable_pcie_error_reporting() obsolescence
          - unused variables and macros
          - trace event related warnings
          - casting warnings

      - Code cleanups for irdma and erdma

      - EFA reporting of 128 byte PCIe TLP support

      - mlx5 more aggressively uses the out of order HW feature

      - Big rework of how state machines and tasks work in rxe

      - Fix a syzkaller-found netdev refcount leak crash in siw

      - bnxt_re revises their HW description header

      - Congestion control for bnxt_re

      - Use mmu_notifiers more safely in hfi1

      - mlx5 gets better support for PCIe relaxed ordering inside VMs"

    * tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma: (81 commits)
        RDMA/efa: Add rdma write capability to device caps
        RDMA/mlx5: Use correct device num_ports when modify DC
        RDMA/irdma: Drop spurious WQ_UNBOUND from alloc_ordered_workqueue() call
        RDMA/rxe: Fix spinlock recursion deadlock on requester
        RDMA/mlx5: Fix flow counter query via DEVX
        RDMA/rxe: Protect QP state with qp->state_lock
        RDMA/rxe: Move code to check if drained to subroutine
        RDMA/rxe: Remove qp->req.state
        RDMA/rxe: Remove qp->comp.state
        RDMA/rxe: Remove qp->resp.state
        RDMA/mlx5: Allow relaxed ordering read in VFs and VMs
        net/mlx5: Update relaxed ordering read HCA capabilities
        RDMA/mlx5: Check pcie_relaxed_ordering_enabled() in UMR
        RDMA/mlx5: Remove pcie_relaxed_ordering_enabled() check for RO write
        RDMA: Add ib_virt_dma_to_page()
        RDMA/rxe: Fix the error "trying to register non-static key in rxe_cleanup_task"
        RDMA/irdma: Slightly optimize irdma_form_ah_cm_frame()
        RDMA/rxe: Fix incorrect TASKLET_STATE_SCHED check in rxe_task.c
        IB/hfi1: Place struct mmu_rb_handler on cache line start
        IB/hfi1: Fix bugs with non-PAGE_SIZE-end multi-iovec user SDMA requests
        ...

* RDMA/efa: Add rdma write capability to device caps (Yonatan Nachum, 2023-04-22; 3 files, -17/+43)

    Add rdma write capability that is propagated from the device to
    rdma-core. Enable MR creation with remote write permissions according
    to this device capability.

    Link: https://lore.kernel.org/r/20230404154313.35194-1-ynachum@amazon.com
    Reviewed-by: Firas Jahjah <firasj@amazon.com>
    Reviewed-by: Michael Margolin <mrgolin@amazon.com>
    Signed-off-by: Yonatan Nachum <ynachum@amazon.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

* RDMA/mlx5: Use correct device num_ports when modify DC (Mark Zhang, 2023-04-21; 1 file, -1/+1)

    Just like other QP types, when modifying DC the port_num should be
    compared with dev->num_ports instead of HCA_CAP.num_ports. Otherwise
    multi-port vHCA on DC may not work.

    Fixes: 776a3906b692 ("IB/mlx5: Add support for DC target QP")
    Link: https://lore.kernel.org/r/20230420013906.1244185-1-markzhang@nvidia.com
    Signed-off-by: Mark Zhang <markzhang@nvidia.com>
    Reviewed-by: Maor Gottlieb <maorg@nvidia.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

* RDMA/irdma: Drop spurious WQ_UNBOUND from alloc_ordered_workqueue() call (Tejun Heo, 2023-04-21; 1 file, -2/+2)

    Workqueue is in the process of cleaning up the distinction between
    unbound workqueues w/ @nr_active==1 and ordered workqueues. Explicit
    WQ_UNBOUND isn't needed for alloc_ordered_workqueue() and will trigger
    a warning in the future. Let's remove it. This doesn't cause any
    functional changes.

    Link: https://lore.kernel.org/r/ZEGW-IcFReR1juVM@slm.duckdns.org
    Signed-off-by: Tejun Heo <tj@kernel.org>
    Acked-by: Shiraz Saleem <shiraz.saleem@intel.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

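    A short sketch of the change shape, with an illustrative workqueue
    name and flag set (not the exact irdma call site):

        #include <linux/workqueue.h>

        static struct workqueue_struct *create_wq_sketch(void)
        {
                /* before: alloc_ordered_workqueue("cleanup", WQ_UNBOUND | WQ_MEM_RECLAIM);
                 * alloc_ordered_workqueue() already implies an unbound,
                 * ordered workqueue, so the explicit WQ_UNBOUND flag is
                 * redundant and will warn in the future.
                 */
                return alloc_ordered_workqueue("cleanup", WQ_MEM_RECLAIM);
        }
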
* RDMA/rxe: Fix spinlock recursion deadlock on requester (Daisuke Matsuda, 2023-04-21; 1 file, -3/+3)

    The following deadlock is observed:

        Call Trace:
         <IRQ>
         _raw_spin_lock_bh+0x29/0x30
         check_type_state.constprop.0+0x4e/0xc0 [rdma_rxe]
         rxe_rcv+0x173/0x3d0 [rdma_rxe]
         rxe_udp_encap_recv+0x69/0xd0 [rdma_rxe]
         ? __pfx_rxe_udp_encap_recv+0x10/0x10 [rdma_rxe]
         udp_queue_rcv_one_skb+0x258/0x520
         udp_unicast_rcv_skb+0x75/0x90
         __udp4_lib_rcv+0x364/0x5c0
         ip_protocol_deliver_rcu+0xa7/0x160
         ip_local_deliver_finish+0x73/0xa0
         ip_sublist_rcv_finish+0x80/0x90
         ip_sublist_rcv+0x191/0x220
         ip_list_rcv+0x132/0x160
         __netif_receive_skb_list_core+0x297/0x2c0
         netif_receive_skb_list_internal+0x1c5/0x300
         napi_complete_done+0x6f/0x1b0
         virtnet_poll+0x1f4/0x2d0 [virtio_net]
         __napi_poll+0x2c/0x1b0
         net_rx_action+0x293/0x350
         ? __napi_schedule+0x79/0x90
         __do_softirq+0xcb/0x2ab
         __irq_exit_rcu+0xb9/0xf0
         common_interrupt+0x80/0xa0
         </IRQ>
         <TASK>
         asm_common_interrupt+0x22/0x40
         RIP: 0010:_raw_spin_lock+0x17/0x30
         rxe_requester+0xe4/0x8f0 [rdma_rxe]
         ? xas_load+0x9/0xa0
         ? xa_load+0x70/0xb0
         do_task+0x64/0x1f0 [rdma_rxe]
         rxe_post_send+0x54/0x110 [rdma_rxe]
         ib_uverbs_post_send+0x5f8/0x680 [ib_uverbs]
         ? netif_receive_skb_list_internal+0x1e3/0x300
         ib_uverbs_write+0x3c8/0x500 [ib_uverbs]
         vfs_write+0xc5/0x3b0
         ksys_write+0xab/0xe0
         ? syscall_trace_enter.constprop.0+0x126/0x1a0
         do_syscall_64+0x3b/0x90
         entry_SYSCALL_64_after_hwframe+0x72/0xdc
         </TASK>

    The deadlock is easily reproducible with perftest. Fix it by disabling
    softirq when acquiring the lock in process context.

    Fixes: f605f26ea196 ("RDMA/rxe: Protect QP state with qp->state_lock")
    Link: https://lore.kernel.org/r/20230418090642.1849358-1-matsuda-daisuke@fujitsu.com
    Signed-off-by: Daisuke Matsuda <matsuda-daisuke@fujitsu.com>
    Acked-by: Zhu Yanjun <zyjzyj2000@gmail.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

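    A minimal sketch of the fix pattern, with simplified names: the same
    lock is also taken from softirq context in the receive path, so
    process-context users must disable bottom halves while holding it to
    avoid self-deadlock:

        #include <linux/spinlock.h>

        static void requester_locked_section_sketch(spinlock_t *state_lock)
        {
                spin_lock_bh(state_lock);       /* before: spin_lock(state_lock) */
                /* ... inspect and update QP state ... */
                spin_unlock_bh(state_lock);
        }
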
* RDMA/mlx5: Fix flow counter query via DEVX (Mark Bloch, 2023-04-18; 1 file, -5/+26)

    The commit cited in the "fixes" tag added bulk support for flow
    counters, but it didn't account for the fact that it is also possible
    to query a counter using a non-base id if the counter was allocated as
    bulk. When a user performs a query, validate that the flow counter id
    given in the mailbox is inside the valid range, taking the bulk value
    into account.

    Fixes: 208d70f562e5 ("IB/mlx5: Support flow counters offset for bulk counters")
    Signed-off-by: Mark Bloch <mbloch@nvidia.com>
    Reviewed-by: Maor Gottlieb <maorg@nvidia.com>
    Link: https://lore.kernel.org/r/79d7fbe291690128e44672418934256254d93115.1681377114.git.leon@kernel.org
    Signed-off-by: Leon Romanovsky <leon@kernel.org>

* RDMA/rxe: Protect QP state with qp->state_lock (Bob Pearson, 2023-04-17; 7 files, -218/+317)

    Currently the rxe driver makes little effort to make the changes to qp
    state (which includes qp->attr.qp_state, qp->attr.sq_draining and
    qp->valid) atomic between different client threads and IO threads. In
    particular a common template is for an RDMA application to call
    ib_modify_qp() to move a qp to ERR state and then wait until all the
    packet and work queues have drained before calling ib_destroy_qp().
    None of these state changes are protected by locks to assure that the
    changes are executed atomically and that memory barriers are included.
    This has been observed to lead to incorrect behavior around qp cleanup.

    This patch continues the work of the previous patches in this series
    and adds locking code around qp state changes and lookups.

    Link: https://lore.kernel.org/r/20230405042611.6467-5-rpearsonhpe@gmail.com
    Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

* RDMA/rxe: Move code to check if drained to subroutine (Bob Pearson, 2023-04-17; 2 files, -29/+38)

    Move two blocks of code in rxe_comp.c and rxe_req.c to subroutines
    that check if draining is complete in the SQD state and, if so,
    generate a SQ_DRAINED event.

    Link: https://lore.kernel.org/r/20230405042611.6467-4-rpearsonhpe@gmail.com
    Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

* RDMA/rxe: Remove qp->req.state (Bob Pearson, 2023-04-17; 7 files, -66/+34)

    The rxe driver has four different QP state variables,
    qp->attr.qp_state, qp->req.state, qp->comp.state, and qp->resp.state.
    All of these basically carry the same information. This patch replaces
    uses of qp->req.state by qp->attr.qp_state and enum rxe_qp_state. This
    is the third of three patches which will remove all but the
    qp->attr.qp_state variable. This will bring the driver closer to the
    IBA description.

    Link: https://lore.kernel.org/r/20230405042611.6467-3-rpearsonhpe@gmail.com
    Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

* RDMA/rxe: Remove qp->comp.state (Bob Pearson, 2023-04-17; 3 files, -10/+4)

    The rxe driver has four different QP state variables,
    qp->attr.qp_state, qp->req.state, qp->comp.state, and qp->resp.state.
    All of these basically carry the same information. This patch replaces
    uses of qp->comp.state by qp->attr.qp_state. This is the second of
    three patches which will remove all but the qp->attr.qp_state
    variable. This will bring the driver closer to the IBA description.

    Link: https://lore.kernel.org/r/20230405042611.6467-2-rpearsonhpe@gmail.com
    Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

* RDMA/rxe: Remove qp->resp.state (Bob Pearson, 2023-04-17; 6 files, -14/+8)

    The rxe driver has four different QP state variables,
    qp->attr.qp_state, qp->req.state, qp->comp.state, and qp->resp.state.
    All of these basically carry the same information. This patch replaces
    uses of qp->resp.state by qp->attr.qp_state. This is the first of
    three patches which will remove all but the qp->attr.qp_state
    variable. This will bring the driver closer to the IBA description.

    Link: https://lore.kernel.org/r/20230405042611.6467-1-rpearsonhpe@gmail.com
    Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

* RDMA/mlx5: Allow relaxed ordering read in VFs and VMs (Avihai Horon, 2023-04-16; 3 files, -6/+11)

    According to the PCIe spec, the Enable Relaxed Ordering value in the
    VF's PCI config space is wired to 0 and the PF relaxed ordering (RO)
    setting should be applied to the VF. In QEMU (and maybe others), when
    assigning VFs, the RO bit in PCI config space is not emulated properly
    and is always set to 0. Therefore, pcie_relaxed_ordering_enabled()
    always returns 0 for VFs and VMs, and thus MKeys can't be created with
    RO read even if the PF supports it.

    The pcie_relaxed_ordering_enabled() check was added to avoid a
    syndrome when creating an MKey with relaxed ordering (RO) enabled when
    the driver's relaxed_ordering_read_pci_enabled HCA capability is out
    of sync with FW. With the new relaxed_ordering_read capability this
    can't happen, as it is set regardless of the RO value in PCI config
    space and thus can't change during runtime.

    Hence, to allow RO read in VFs and VMs, use the new HCA capability
    relaxed_ordering_read without checking
    pcie_relaxed_ordering_enabled(). The old capability checks are kept
    for backward compatibility with older FWs.

    Allowing RO in VFs and VMs is valuable since it can greatly improve
    performance on some setups. For example, testing throughput of a VF on
    an AMD EPYC 7763 and ConnectX-6 Dx setup showed roughly 60%
    performance improvement.

    Signed-off-by: Avihai Horon <avihaih@nvidia.com>
    Reviewed-by: Shay Drory <shayd@nvidia.com>
    Reviewed-by: Aya Levin <ayal@nvidia.com>
    Link: https://lore.kernel.org/r/e7048640d66c341a8fa0465e099926e7989184bc.1681131553.git.leon@kernel.org
    Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
    Signed-off-by: Leon Romanovsky <leon@kernel.org>

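    A hedged sketch of the resulting decision, with an illustrative helper
    name (the capability names come from this series; the exact mlx5 call
    sites differ): the new FW capability reflects device support
    regardless of the PCI config-space RO bit, while the old PCI-gated
    capability remains a fallback for older firmware:

        #include <linux/mlx5/driver.h>
        #include <linux/pci.h>

        static bool ro_read_supported_sketch(struct mlx5_core_dev *mdev,
                                             struct pci_dev *pdev)
        {
                if (MLX5_CAP_GEN(mdev, relaxed_ordering_read))
                        return true;    /* set whenever the device supports RO read */

                /* older FW: capability is gated on RO in PCI config space */
                return MLX5_CAP_GEN(mdev, relaxed_ordering_read_pci_enabled) &&
                       pcie_relaxed_ordering_enabled(pdev);
        }
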
* net/mlx5: Update relaxed ordering read HCA capabilities (Avihai Horon, 2023-04-16; 2 files, -3/+4)

    Rename existing HCA capability relaxed_ordering_read to
    relaxed_ordering_read_pci_enabled. This is in accordance with recent
    PRM change to better describe the capability, as it's set only if both
    the device supports relaxed ordering (RO) read and RO is enabled in
    PCI config space.

    In addition, add new HCA capability relaxed_ordering_read which is set
    if the device supports RO read, regardless of RO in PCI config space.
    This will be used in the following patch to allow RO in VFs and VMs.

    Signed-off-by: Avihai Horon <avihaih@nvidia.com>
    Reviewed-by: Shay Drory <shayd@nvidia.com>
    Link: https://lore.kernel.org/r/caa0002fd8135086357dfcc368e2f5cc73b08480.1681131553.git.leon@kernel.org
    Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
    Signed-off-by: Leon Romanovsky <leon@kernel.org>

* RDMA/mlx5: Check pcie_relaxed_ordering_enabled() in UMR (Avihai Horon, 2023-04-16; 1 file, -2/+4)

    The relaxed_ordering_read HCA capability is set if both the device
    supports relaxed ordering (RO) read and RO is set in PCI config space.

    RO in PCI config space can change during runtime. This will change the
    value of relaxed_ordering_read HCA capability in FW, but the driver
    will not see it since it queries the capabilities only once. This can
    lead to the following scenario:

    1. RO in PCI config space is enabled.
    2. User creates MKey without RO.
    3. RO in PCI config space is disabled. As a result,
       relaxed_ordering_read HCA capability is turned off in FW but
       remains on in driver copy of the capabilities.
    4. User requests to reconfig the MKey with RO via UMR.
    5. Driver will try to reconfig the MKey with RO read although it
       shouldn't (as relaxed_ordering_read HCA capability is really off).

    To fix this, check pcie_relaxed_ordering_enabled() before setting RO
    read in UMR.

    Fixes: 896ec9735336 ("RDMA/mlx5: Set mkey relaxed ordering by UMR with ConnectX-7")
    Signed-off-by: Avihai Horon <avihaih@nvidia.com>
    Reviewed-by: Shay Drory <shayd@nvidia.com>
    Link: https://lore.kernel.org/r/8d39eb8317e7bed1a354311a20ae707788fd94ed.1681131553.git.leon@kernel.org
    Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
    Signed-off-by: Leon Romanovsky <leon@kernel.org>

* RDMA/mlx5: Remove pcie_relaxed_ordering_enabled() check for RO write (Avihai Horon, 2023-04-16; 1 file, -3/+3)

    The pcie_relaxed_ordering_enabled() check was added to avoid a
    syndrome when creating an MKey with relaxed ordering (RO) enabled when
    the driver's relaxed_ordering_{read,write} HCA capabilities are out of
    sync with FW. While this can happen with relaxed_ordering_read, it
    can't happen with relaxed_ordering_write as it's set if the device
    supports RO write, regardless of RO in PCI config space, and thus
    can't change during runtime.

    Therefore, drop the pcie_relaxed_ordering_enabled() check for
    relaxed_ordering_write while keeping it for relaxed_ordering_read.
    Doing so will also allow the usage of RO write in VFs and VMs (where
    RO in PCI config space is not reported/emulated properly).

    Signed-off-by: Avihai Horon <avihaih@nvidia.com>
    Reviewed-by: Shay Drory <shayd@nvidia.com>
    Link: https://lore.kernel.org/r/7e8f55e31572c1702d69cae015a395d3a824a38a.1681131553.git.leon@kernel.org
    Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
    Signed-off-by: Leon Romanovsky <leon@kernel.org>

* RDMA: Add ib_virt_dma_to_page() (Jason Gunthorpe, 2023-04-16; 5 files, -27/+20)

    Make it clearer what is going on by adding a function to go back from
    the "virtual" dma_addr to a kva and another to a struct page. This is
    used in the ib_uses_virt_dma() style drivers (siw, rxe, hfi, qib).

    Call them instead of a naked casting and virt_to_page() when working
    with dma_addr values encoded by the various ib_map functions.

    This also fixes the virt_to_page() casting problem Linus Walleij has
    been chasing.

    Cc: Linus Walleij <linus.walleij@linaro.org>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
    Link: https://lore.kernel.org/r/0-v2-05ea785520ed+10-ib_virt_page_jgg@nvidia.com
    Signed-off-by: Leon Romanovsky <leon@kernel.org>

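    A minimal sketch of the helpers' shape under the ib_uses_virt_dma()
    convention, where the "dma" address is really a kernel virtual address
    stored in a u64; see include/rdma/ib_verbs.h for the real
    ib_virt_dma_to_page():

        #include <linux/mm.h>
        #include <linux/types.h>

        static inline void *virt_dma_to_ptr_sketch(u64 dma_addr)
        {
                /* undo the cast done by the ib_map functions */
                return (void *)(uintptr_t)dma_addr;
        }

        static inline struct page *virt_dma_to_page_sketch(u64 dma_addr)
        {
                return virt_to_page(virt_dma_to_ptr_sketch(dma_addr));
        }
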
* RDMA/rxe: Fix the error "trying to register non-static key in rxe_cleanup_task" (Zhu Yanjun, 2023-04-16; 1 file, -3/+8)

    In rxe_create_qp(), rxe_qp_from_init() is called to initialize the qp;
    internally, things like rxe_init_task are not set up until
    rxe_qp_init_req(). If an error occurs before this point, the unwind
    will call rxe_cleanup() and eventually reach
    rxe_qp_do_cleanup()/rxe_cleanup_task(), which will oops when trying to
    access the uninitialized spinlock. With this fix, if rxe_init_task was
    not executed, rxe_cleanup_task will not be called.

    Reported-by: syzbot+cfcc1a3c85be15a40cba@syzkaller.appspotmail.com
    Link: https://syzkaller.appspot.com/bug?id=fd85757b74b3eb59f904138486f755f71e090df8
    Fixes: 8700e3e7c485 ("Soft RoCE driver")
    Fixes: 2d4b21e0a291 ("IB/rxe: Prevent from completer to operate on non valid QP")
    Signed-off-by: Zhu Yanjun <yanjun.zhu@linux.dev>
    Link: https://lore.kernel.org/r/20230413101115.1366068-1-yanjun.zhu@intel.com
    Signed-off-by: Leon Romanovsky <leon@kernel.org>

* RDMA/irdma: Slightly optimize irdma_form_ah_cm_frame() (Christophe JAILLET, 2023-04-13; 1 file, -1/+1)

    There is no need to zero 'pktsize' bytes of 'buf'; only the header
    needs to be cleared, to be safe. All the other bytes are already
    written by some memcpy() at the end of the function.

    Doing so also gives the compiler the opportunity to avoid the memset()
    call. It can be inlined now that the length is known at compile time.

    Link: https://lore.kernel.org/r/098e3c397be0436f1867899245ecfe656c472110.1675369386.git.christophe.jaillet@wanadoo.fr
    Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
    Reviewed-by: Shiraz Saleem <shiraz.saleem@intel.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

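    A hedged sketch of the optimization, with illustrative struct and
    length names (not irdma's real types): zero only the header, since
    every payload byte is overwritten by the trailing memcpy() anyway, and
    a compile-time length lets the compiler inline the memset():

        #include <linux/string.h>
        #include <linux/types.h>

        struct cm_hdr_sketch {  /* stand-in for the real frame header */
                u8 raw[64];
        };

        static void form_frame_sketch(u8 *buf, const void *payload, size_t plen)
        {
                /* before: memset(buf, 0, pktsize);  -- length known only at runtime */
                memset(buf, 0, sizeof(struct cm_hdr_sketch));
                memcpy(buf + sizeof(struct cm_hdr_sketch), payload, plen);
        }
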
* RDMA/rxe: Fix incorrect TASKLET_STATE_SCHED check in rxe_task.c (Bob Pearson, 2023-04-12; 1 file, -2/+2)

    In a previous patch TASKLET_STATE_SCHED was used as a mask but it is a
    bit position instead. Add the missing shift.

    Link: https://lore.kernel.org/r/20230329193308.7489-1-rpearsonhpe@gmail.com
    Reported-by: Dan Carpenter <error27@gmail.com>
    Link: https://lore.kernel.org/linux-rdma/8a054b78-6d50-4bc6-8d8a-83f85fbdb82f@kili.mountain/
    Fixes: d94671632572 ("RDMA/rxe: Rewrite rxe_task.c")
    Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

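    A short sketch of the bug class, not rxe's exact code:
    TASKLET_STATE_SCHED is a bit position (it is 0), so it must be shifted
    (or tested with test_bit()) before being ANDed against the tasklet
    state:

        #include <linux/bits.h>
        #include <linux/interrupt.h>

        static bool tasklet_is_scheduled_sketch(const struct tasklet_struct *t)
        {
                /* buggy: return t->state & TASKLET_STATE_SCHED;  -- always false */
                return t->state & BIT(TASKLET_STATE_SCHED);
        }
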
* IB/hfi1: Place struct mmu_rb_handler on cache line start (Patrick Kelsey, 2023-04-09; 2 files, -6/+19)

    Place struct mmu_rb_handler on cache line start like so:

        struct mmu_rb_handler *h;
        void *free_ptr;
        int ret;

        free_ptr = kzalloc(sizeof(*h) + cache_line_size() - 1, GFP_KERNEL);
        if (!free_ptr)
                return -ENOMEM;

        h = PTR_ALIGN(free_ptr, cache_line_size());

    Additionally, move struct mmu_rb_handler fields "root" and "ops_args"
    to start after the next cacheline using the
    "____cacheline_aligned_in_smp" annotation.

    Allocating an additional cache_line_size() - 1 bytes to place struct
    mmu_rb_handler on a cache line start does increase memory consumption.
    However, few struct mmu_rb_handler are created when hfi1 is in use. As
    mmu_rb_handler->root and mmu_rb_handler->ops_args are accessed
    frequently, the advantage of having them both within a cache line is
    expected to outweigh the disadvantage of the additional memory
    consumption per struct mmu_rb_handler.

    Signed-off-by: Brendan Cunningham <bcunningham@cornelisnetworks.com>
    Signed-off-by: Patrick Kelsey <pat.kelsey@cornelisnetworks.com>
    Signed-off-by: Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com>
    Link: https://lore.kernel.org/r/168088636963.3027109.16959757980497822530.stgit@252.162.96.66.static.eigbox.net
    Signed-off-by: Leon Romanovsky <leon@kernel.org>

* IB/hfi1: Fix bugs with non-PAGE_SIZE-end multi-iovec user SDMA requests (Patrick Kelsey, 2023-04-09; 11 files, -304/+423)

    hfi1 user SDMA request processing has two bugs that can cause data
    corruption for user SDMA requests that have multiple payload iovecs
    where an iovec other than the tail iovec does not run up to the page
    boundary for the buffer pointed to by that iovec.

    Here are the specific bugs:

    1. user_sdma_txadd() does not use struct user_sdma_iovec->iov.iov_len.
       Rather, user_sdma_txadd() will add up to PAGE_SIZE bytes from iovec
       to the packet, even if some of those bytes are past
       iovec->iov.iov_len and are thus not intended to be in the packet.

    2. user_sdma_txadd() and user_sdma_send_pkts() fail to advance to the
       next iovec in user_sdma_request->iovs when the current iovec is not
       PAGE_SIZE and does not contain enough data to complete the packet.
       The transmitted packet will contain the wrong data from the iovec
       pages.

    This has not been an issue with SDMA packets from hfi1 Verbs or PSM2
    because they only produce iovecs that end short of PAGE_SIZE as the
    tail iovec of an SDMA request.

    Fixing these bugs exposes other bugs with the SDMA pin cache (struct
    mmu_rb_handler) that get in the way of supporting user SDMA requests
    with multiple payload iovecs whose buffers do not end at PAGE_SIZE. So
    this commit fixes those issues as well.

    Here are the mmu_rb_handler bugs that non-PAGE_SIZE-end multi-iovec
    payload user SDMA requests can hit:

    1. Overlapping memory ranges in mmu_rb_handler will result in
       duplicate pinnings.

    2. When extending an existing mmu_rb_handler entry (struct
       mmu_rb_node), the mmu_rb code (1) removes the existing entry under
       a lock, (2) releases that lock, pins the new pages, (3) then
       reacquires the lock to insert the extended mmu_rb_node. If someone
       else comes in and inserts an overlapping entry between (2) and (3),
       insert in (3) will fail. The failure path code in this case unpins
       _all_ pages in either the original mmu_rb_node or the new
       mmu_rb_node that was inserted between (2) and (3).

    3. In hfi1_mmu_rb_remove_unless_exact(), mmu_rb_node->refcount is
       incremented outside of mmu_rb_handler->lock. As a result,
       mmu_rb_node could be evicted by another thread that gets
       mmu_rb_handler->lock and checks mmu_rb_node->refcount before
       mmu_rb_node->refcount is incremented.

    4. Related to #2 above, the SDMA request submission failure path does
       not check mmu_rb_node->refcount before freeing the mmu_rb_node
       object. If there are other SDMA requests in progress whose iovecs
       have pointers to the now-freed mmu_rb_node(s), those pointers will
       be dereferenced when those SDMA requests complete.

    Fixes: 7be85676f1d1 ("IB/hfi1: Don't remove RB entry when not needed.")
    Fixes: 7724105686e7 ("IB/hfi1: add driver files")
    Signed-off-by: Brendan Cunningham <bcunningham@cornelisnetworks.com>
    Signed-off-by: Patrick Kelsey <pat.kelsey@cornelisnetworks.com>
    Signed-off-by: Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com>
    Link: https://lore.kernel.org/r/168088636445.3027109.10054635277810177889.stgit@252.162.96.66.static.eigbox.net
    Signed-off-by: Leon Romanovsky <leon@kernel.org>

* IB/hfi1: Fix SDMA mmu_rb_node not being evicted in LRU order (Patrick Kelsey, 2023-04-09; 1 file, -7/+6)

    hfi1_mmu_rb_remove_unless_exact() did not move mmu_rb_node objects in
    mmu_rb_handler->lru_list after getting a cache hit on an mmu_rb_node.
    As a result, hfi1_mmu_rb_evict() was not guaranteed to evict truly
    least-recently used nodes.

    This could be a performance issue for an application when that
    application:

    - Uses some long-lived buffers frequently.
    - Uses a large number of buffers once.
    - Hits the mmu_rb_handler cache size or pinned-page limits, forcing
      mmu_rb_handler cache entries to be evicted.

    In this case, the one-time use buffers cause the long-lived buffer
    entries to eventually filter to the end of the LRU list where
    hfi1_mmu_rb_evict() will consider evicting a frequently-used
    long-lived entry instead of evicting one of the one-time use entries.

    Fix this by inserting new mmu_rb_node at the tail of
    mmu_rb_handler->lru_list and moving an mmu_rb_node to the tail of
    mmu_rb_handler->lru_list when the mmu_rb_node is a hit in
    hfi1_mmu_rb_remove_unless_exact(). Change hfi1_mmu_rb_evict() to evict
    from the head of mmu_rb_handler->lru_list instead of the tail.

    Fixes: 0636e9ab8355 ("IB/hfi1: Add cache evict LRU list")
    Signed-off-by: Brendan Cunningham <bcunningham@cornelisnetworks.com>
    Signed-off-by: Patrick Kelsey <pat.kelsey@cornelisnetworks.com>
    Signed-off-by: Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com>
    Link: https://lore.kernel.org/r/168088635931.3027109.10423156330761536044.stgit@252.162.96.66.static.eigbox.net
    Signed-off-by: Leon Romanovsky <leon@kernel.org>

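    A hedged sketch of the LRU discipline described above, with
    illustrative names: insertions and cache hits go to the tail, eviction
    takes from the head, so the head is always the least recently used
    node:

        #include <linux/list.h>

        static void lru_touch_sketch(struct list_head *lru, struct list_head *node)
        {
                list_move_tail(node, lru);      /* on insert or cache hit */
        }

        static struct list_head *lru_victim_sketch(struct list_head *lru)
        {
                /* head is guaranteed to be least recently used */
                return list_empty(lru) ? NULL : lru->next;
        }
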
* IB/hfi1: Suppress useless compiler warnings (Ehab Ababneh, 2023-04-09; 1 file, -0/+7)

    These warnings can cause build failure:

        In file included from ./include/trace/define_trace.h:102,
                         from drivers/infiniband/hw/hfi1/trace_dbg.h:111,
                         from drivers/infiniband/hw/hfi1/trace.h:15,
                         from drivers/infiniband/hw/hfi1/trace.c:6:
        drivers/infiniband/hw/hfi1/./trace_dbg.h: In function 'trace_event_get_offsets_hfi1_trace_template':
        ./include/trace/trace_events.h:261:9: warning: function 'trace_event_get_offsets_hfi1_trace_template' might be a candidate for 'gnu_printf' format attribute [-Wsuggest-attribute=format]
          struct trace_event_raw_##call __maybe_unused *entry; \
                 ^~~~~~~~~~~~~~~~
        drivers/infiniband/hw/hfi1/./trace_dbg.h:25:1: note: in expansion of macro 'DECLARE_EVENT_CLASS'
         DECLARE_EVENT_CLASS(hfi1_trace_template,
         ^~~~~~~~~~~~~~~~~~~

        drivers/infiniband/hw/hfi1/./trace_dbg.h: In function 'trace_event_raw_event_hfi1_trace_template':
        ./include/trace/trace_events.h:386:9: warning: function 'trace_event_raw_event_hfi1_trace_template' might be a candidate for 'gnu_printf' format attribute [-Wsuggest-attribute=format]
          struct trace_event_raw_##call *entry; \
                 ^~~~~~~~~~~~~~~~
        drivers/infiniband/hw/hfi1/./trace_dbg.h:25:1: note: in expansion of macro 'DECLARE_EVENT_CLASS'
         DECLARE_EVENT_CLASS(hfi1_trace_template,
         ^~~~~~~~~~~~~~~~~~~

        drivers/infiniband/hw/hfi1/./trace_dbg.h: In function 'perf_trace_hfi1_trace_template':
        ./include/trace/perf.h:70:9: warning: function 'perf_trace_hfi1_trace_template' might be a candidate for 'gnu_printf' format attribute [-Wsuggest-attribute=format]
          struct hlist_head *head; \
                 ^~~~~~~~~~
        drivers/infiniband/hw/hfi1/./trace_dbg.h:25:1: note: in expansion of macro 'DECLARE_EVENT_CLASS'
         DECLARE_EVENT_CLASS(hfi1_trace_template,
         ^~~~~~~~~~~~~~~~~~~

    The solution adopted here is similar to the one in fbbc95a49d5b0.

    Signed-off-by: Ehab Ababneh <ehab.ababneh@cornelisnetworks.com>
    Signed-off-by: Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com>
    Link: https://lore.kernel.org/r/168088635415.3027109.5711716700328939402.stgit@252.162.96.66.static.eigbox.net
    Signed-off-by: Leon Romanovsky <leon@kernel.org>

* IB/hfi1: Remove trace newlines (Dean Luick, 2023-04-09; 5 files, -18/+18)

    The hfi1_cdbg trace mechanism appends a newline. Remove trailing
    newlines from all format strings.

    Signed-off-by: Dean Luick <dean.luick@cornelisnetworks.com>
    Signed-off-by: Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com>
    Link: https://lore.kernel.org/r/168088634897.3027109.10401662436950683555.stgit@252.162.96.66.static.eigbox.net
    Signed-off-by: Leon Romanovsky <leon@kernel.org>

* RDMA/srpt: Add a check for valid 'mad_agent' pointer (Saravanan Vajravel, 2023-04-09; 1 file, -10/+13)

    When unregistering the MAD agent, the srpt module has a non-NULL check
    for the 'mad_agent' pointer before invoking
    ib_unregister_mad_agent(). This check can pass if the 'mad_agent'
    variable holds an error value. The 'mad_agent' can have an error value
    for a short window when srpt_add_one() and srpt_remove_one() are
    executed simultaneously.

    In the srpt module, add a valid-pointer check for 'sport->mad_agent'
    before unregistering the MAD agent. This issue can hit when a RoCE
    driver unregisters an ib_device.

    Stack Trace:
    ------------

        BUG: kernel NULL pointer dereference, address: 000000000000004d
        PGD 145003067 P4D 145003067 PUD 2324fe067 PMD 0
        Oops: 0002 [#1] PREEMPT SMP NOPTI
        CPU: 10 PID: 4459 Comm: kworker/u80:0 Kdump: loaded Tainted: P
        Hardware name: Dell Inc. PowerEdge R640/06NR82, BIOS 2.5.4 01/13/2020
        Workqueue: bnxt_re bnxt_re_task [bnxt_re]
        RIP: 0010:_raw_spin_lock_irqsave+0x19/0x40
        Call Trace:
         ib_unregister_mad_agent+0x46/0x2f0 [ib_core]
         IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready
         ? __schedule+0x20b/0x560
         srpt_unregister_mad_agent+0x93/0xd0 [ib_srpt]
         srpt_remove_one+0x20/0x150 [ib_srpt]
         remove_client_context+0x88/0xd0 [ib_core]
         bond0: (slave p2p1): link status definitely up, 100000 Mbps full duplex
         disable_device+0x8a/0x160 [ib_core]
         bond0: active interface up!
         ? kernfs_name_hash+0x12/0x80
         (NULL device *): Bonding Info Received: rdev: 000000006c0b8247
         __ib_unregister_device+0x42/0xb0 [ib_core]
         (NULL device *): Master: mode: 4 num_slaves:2
         ib_unregister_device+0x22/0x30 [ib_core]
         (NULL device *): Slave: id: 105069936 name:p2p1 link:0 state:0
         bnxt_re_stopqps_and_ib_uninit+0x83/0x90 [bnxt_re]
         bnxt_re_alloc_lag+0x12e/0x4e0 [bnxt_re]

    Fixes: a42d985bd5b2 ("ib_srpt: Initial SRP Target merge for v3.3-rc1")
    Reviewed-by: Selvin Xavier <selvin.xavier@broadcom.com>
    Reviewed-by: Kashyap Desai <kashyap.desai@broadcom.com>
    Signed-off-by: Saravanan Vajravel <saravanan.vajravel@broadcom.com>
    Link: https://lore.kernel.org/r/20230406042549.507328-1-saravanan.vajravel@broadcom.com
    Reviewed-by: Bart Van Assche <bvanassche@acm.org>
    Signed-off-by: Leon Romanovsky <leon@kernel.org>

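    A minimal sketch of the fix pattern, assuming a simplified unregister
    path: because the stored pointer may hold an ERR_PTR value for a short
    window, teardown must use IS_ERR_OR_NULL() rather than a plain NULL
    check:

        #include <linux/err.h>
        #include <rdma/ib_mad.h>

        static void unregister_mad_agent_sketch(struct ib_mad_agent *agent)
        {
                /* before: if (agent) ib_unregister_mad_agent(agent); */
                if (!IS_ERR_OR_NULL(agent))
                        ib_unregister_mad_agent(agent);
        }
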
* RDMA/cm: Trace icm_send_rej event before the cm state is reset (Mark Zhang, 2023-04-09; 1 file, -1/+2)

    Trace the icm_send_rej event before the cm state is reset to idle, so
    that the correct cm state will be logged. For example, when an
    incoming request is rejected, the old trace log was:

        icm_send_rej: local_id=961102742 remote_id=3829151631 state=IDLE reason=REJ_CONSUMER_DEFINED

    With this patch:

        icm_send_rej: local_id=312971016 remote_id=3778819983 state=MRA_REQ_SENT reason=REJ_CONSUMER_DEFINED

    Fixes: 8dc105befe16 ("RDMA/cm: Add tracepoints to track MAD send operations")
    Signed-off-by: Mark Zhang <markzhang@nvidia.com>
    Link: https://lore.kernel.org/r/20230330072351.481200-1-markzhang@nvidia.com
    Signed-off-by: Leon Romanovsky <leon@kernel.org>

* RDMA/bnxt_re: Enable congestion control by default (Selvin Xavier, 2023-04-04; 5 files, -13/+222)

    Enable congestion control (CC) by default. Issue the FW command to
    enable CC during driver load and disable it during unload.

    Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
    Link: https://lore.kernel.org/r/1680169540-10029-8-git-send-email-selvin.xavier@broadcom.com
    Signed-off-by: Leon Romanovsky <leon@kernel.org>

* RDMA/bnxt_re: Use tlv apis while processing the slow path commands (Selvin Xavier, 2023-04-04; 1 file, -11/+11)

    Use the new TLV APIs for existing slow path commands. The TLV APIs
    will be used to populate extended headers for some of the Firmware
    commands, which will be introduced in the patches that follow.

    Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
    Link: https://lore.kernel.org/r/1680169540-10029-7-git-send-email-selvin.xavier@broadcom.com
    Signed-off-by: Leon Romanovsky <leon@kernel.org>

* RDMA/bnxt_re: RoCE slow path TLV support (Selvin Xavier, 2023-04-04; 1 file, -0/+162)

    Add a header file to support TLV encapsulated commands. These
    functions will be used by the driver in the follow up patches.

    Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
    Link: https://lore.kernel.org/r/1680169540-10029-6-git-send-email-selvin.xavier@broadcom.com
    Signed-off-by: Leon Romanovsky <leon@kernel.org>

* RDMA/bnxt_re: Reduce number of arguments to control path command APIs (Selvin Xavier, 2023-04-04; 4 files, -138/+199)

    Reduce the number of arguments to bnxt_qplib_rcfw_send_message by
    enclosing all its arguments into a command message structure. Use the
    same struct while passing the command information to send_message.

    Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
    Link: https://lore.kernel.org/r/1680169540-10029-5-git-send-email-selvin.xavier@broadcom.com
    Signed-off-by: Leon Romanovsky <leon@kernel.org>

* RDMA/bnxt_re: Convert RCFW_CMD_PREP macro to static inline function (Selvin Xavier, 2023-04-04; 4 files, -69/+98)

    Convert the RCFW_CMD_PREP macro to a static inline function. Also,
    remove the cmd_flags passed, as none of the functions are using it.

    Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
    Link: https://lore.kernel.org/r/1680169540-10029-4-git-send-email-selvin.xavier@broadcom.com
    Signed-off-by: Leon Romanovsky <leon@kernel.org>

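    A hedged sketch of the macro-to-inline conversion pattern, with
    simplified stand-ins for the real bnxt_re types: a static inline gains
    type checking and evaluates its arguments exactly once:

        #include <linux/string.h>
        #include <linux/types.h>

        struct cmdq_base_sketch {       /* stand-in for the request header */
                u8 opcode;
        };

        /* before (illustrative):
         * #define RCFW_CMD_PREP(req, CMD, cmd_flags) do {           \
         *         memset(&(req), 0, sizeof(req));                   \
         *         (req).opcode = CMDQ_BASE_OPCODE_##CMD;            \
         * } while (0)
         */
        static inline void cmd_prep_sketch(void *req, size_t size, u8 opcode)
        {
                struct cmdq_base_sketch *base = req;

                memset(req, 0, size);
                base->opcode = opcode;
        }
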
* RDMA/bnxt_re: Remove HW queue mapping from RoCE Driver (Selvin Xavier, 2023-04-04; 3 files, -93/+0)

    The bnxt_en driver does the queue mapping for RoCE traffic, so remove
    the queue mapping from the RoCE driver.

    Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
    Link: https://lore.kernel.org/r/1680169540-10029-3-git-send-email-selvin.xavier@broadcom.com
    Signed-off-by: Leon Romanovsky <leon@kernel.org>

* RDMA/bnxt_re: Update HW interface headers (Selvin Xavier, 2023-04-04; 2 files, -2957/+4228)

    Update the HW structures to the latest version. This is copied from
    the code maintained internally. No functionality changes in this
    patch. Code is re-organized to match the file maintained in the
    internal tree. Also, new HW interface structures are added, which will
    be used by the drivers in future.

    CC: Michael Chan <michael.chan@broadcom.com>
    Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
    Link: https://lore.kernel.org/r/1680169540-10029-2-git-send-email-selvin.xavier@broadcom.com
    Signed-off-by: Leon Romanovsky <leon@kernel.org>

* RDMA/siw: Remove namespace check from siw_netdev_event() (Tetsuo Handa, 2023-04-03; 1 file, -3/+0)

    syzbot is reporting that siw_netdev_event(NETDEV_UNREGISTER) cannot
    destroy a siw_device created after unshare(CLONE_NEWNET), due to the
    net namespace check. It seems that this check was there by error and
    should be removed.

    Reported-by: syzbot <syzbot+5e70d01ee8985ae62a3b@syzkaller.appspotmail.com>
    Link: https://syzkaller.appspot.com/bug?extid=5e70d01ee8985ae62a3b
    Suggested-by: Jason Gunthorpe <jgg@ziepe.ca>
    Suggested-by: Leon Romanovsky <leon@kernel.org>
    Fixes: bdcf26bf9b3a ("rdma/siw: network and RDMA core interface")
    Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
    Link: https://lore.kernel.org/r/a44e9ac5-44e2-d575-9e30-02483cc7ffd1@I-love.SAKURA.ne.jp
    Reviewed-by: Bernard Metzler <bmt@zurich.ibm.com>
    Signed-off-by: Leon Romanovsky <leon@kernel.org>

* RDMA/cma: Remove NULL check before dev_{put, hold} (Yang Li, 2023-04-03; 1 file, -4/+2)

    dev_{put, hold} call netdev_{put, hold}, which already check for NULL,
    so there is no need to check before using dev_{put, hold}. Remove the
    checks to silence the warnings:

        ./drivers/infiniband/core/cma.c:713:2-9: WARNING: NULL check before dev_{put, hold} functions is not needed.
        ./drivers/infiniband/core/cma.c:2433:2-9: WARNING: NULL check before dev_{put, hold} functions is not needed.

    Reported-by: Abaci Robot <abaci@linux.alibaba.com>
    Link: https://bugzilla.openanolis.cn/show_bug.cgi?id=4668
    Signed-off-by: Yang Li <yang.lee@linux.alibaba.com>
    Link: https://lore.kernel.org/r/20230331010633.63261-1-yang.lee@linux.alibaba.com
    Signed-off-by: Leon Romanovsky <leon@kernel.org>

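    A sketch of the pattern being cleaned up; dev_put() forwards to
    netdev_put(), which already tolerates a NULL device:

        #include <linux/netdevice.h>

        static void release_ndev_sketch(struct net_device *ndev)
        {
                /* before: if (ndev) dev_put(ndev); */
                dev_put(ndev);  /* NULL-safe */
        }
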
* IB/qib: Remove unused cnt variable (Tom Rix, 2023-04-03; 1 file, -5/+4)

    clang with W=1 reports:

        drivers/infiniband/hw/qib/qib_file_ops.c:487:20: error: variable 'cnt' set but not used [-Werror,-Wunused-but-set-variable]
                u32 tid, ctxttid, cnt, limit, tidcnt;
                                  ^
        drivers/infiniband/hw/qib/qib_file_ops.c:1771:9: error: variable 'cnt' set but not used [-Werror,-Wunused-but-set-variable]
                int i, cnt = 0, maxtid = ctxt_tidbase + dd->rcvtidcnt;
                       ^

    This variable is not used so remove it.

    Signed-off-by: Tom Rix <trix@redhat.com>
    Link: https://lore.kernel.org/r/20230330235800.1845815-1-trix@redhat.com
    Signed-off-by: Leon Romanovsky <leon@kernel.org>

* RDMA/mlx5: Remove unused num_alloc_xa_entries variable (Tom Rix, 2023-04-03; 1 file, -2/+0)

    clang with W=1 reports:

        drivers/infiniband/hw/mlx5/devx.c:1996:6: error: variable 'num_alloc_xa_entries' set but not used [-Werror,-Wunused-but-set-variable]
                int num_alloc_xa_entries = 0;
                    ^

    This variable is not used so remove it.

    Signed-off-by: Tom Rix <trix@redhat.com>
    Link: https://lore.kernel.org/r/20230330153607.1838750-1-trix@redhat.com
    Signed-off-by: Leon Romanovsky <leon@kernel.org>

* IB/iser: remove redundant new line (Max Gurtovoy, 2023-04-03; 1 file, -1/+0)

    This commit doesn't change any logic.

    Reviewed-by: Sergey Gorenko <sergeygo@nvidia.com>
    Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
    Link: https://lore.kernel.org/r/20230330131333.37900-3-mgurtovoy@nvidia.com
    Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
    Signed-off-by: Leon Romanovsky <leon@kernel.org>

* IB/iser: centralize setting desc type and done callback (Max Gurtovoy, 2023-04-03; 1 file, -7/+9)

    Move this common logic into iser_create_send_desc instead of
    duplicating the code.

    Reviewed-by: Sergey Gorenko <sergeygo@nvidia.com>
    Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
    Link: https://lore.kernel.org/r/20230330131333.37900-2-mgurtovoy@nvidia.com
    Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
    Signed-off-by: Leon Romanovsky <leon@kernel.org>

* IB/iser: remove unused macros (Max Gurtovoy, 2023-04-03; 1 file, -6/+0)

    The removed macros are old leftovers.

    Reviewed-by: Sergey Gorenko <sergeygo@nvidia.com>
    Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
    Link: https://lore.kernel.org/r/20230330131333.37900-1-mgurtovoy@nvidia.com
    Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
    Signed-off-by: Leon Romanovsky <leon@kernel.org>

* RDMA/rxe: Clean kzalloc failure paths (Leon Romanovsky, 2023-03-30; 2 files, -23/+9)

    There is no need to print any debug messages after failure to allocate
    memory, because the kernel will print OOM dumps anyway. Together with
    removal of these messages, remove useless goto jumps.

    Fixes: 5bf944f24129 ("RDMA/rxe: Add error messages")
    Reported-by: Dan Carpenter <error27@gmail.com>
    Link: https://lore.kernel.org/all/ea43486f-43dd-4054-b1d5-3a0d202be621@kili.mountain
    Link: https://lore.kernel.org/r/d3cedf723b84e73e8062a67b7489d33802bafba2.1680113597.git.leon@kernel.org
    Reviewed-by: Bob Pearson <rpearsonhpe@gmail.com>
    Signed-off-by: Leon Romanovsky <leonro@nvidia.com>

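    A minimal sketch of the cleanup pattern, at an illustrative allocation
    site: kzalloc() failures already produce an OOM report, so the extra
    debug print and goto indirection add nothing:

        #include <linux/slab.h>

        static void *alloc_obj_sketch(size_t size)
        {
                /* before:
                 *      obj = kzalloc(size, GFP_KERNEL);
                 *      if (!obj) {
                 *              rxe_dbg(rxe, "out of memory");
                 *              goto err_out;
                 *      }
                 */
                return kzalloc(size, GFP_KERNEL);       /* caller just checks for NULL */
        }
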
* RDMA/rxe: Remove tasklet call from rxe_cq.c (Bob Pearson, 2023-03-29; 3 files, -33/+3)

    Remove the tasklet call in rxe_cq.c and also the is_dying flag in the
    cq struct. There is no reason for the rxe driver to defer the call to
    the cq completion handler by scheduling a tasklet; rxe_cq_post() is
    not called in a hard irq context.

    The rxe driver currently is incorrect because the tasklet call is made
    without protecting the cq pointer with a reference, so the underlying
    memory can be freed before the deferred routine is called. Executing
    the comp_handler inline fixes this problem.

    Fixes: 8700e3e7c485 ("Soft RoCE driver")
    Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
    Link: https://lore.kernel.org/r/20230327215643.10410-1-rpearsonhpe@gmail.com
    Acked-by: Zhu Yanjun <zyjzyj2000@gmail.com>
    Signed-off-by: Leon Romanovsky <leon@kernel.org>

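    A hedged sketch of the resulting shape, with simplified names: the
    completion handler is invoked directly, so no deferred routine is left
    holding an unreferenced cq pointer:

        #include <rdma/ib_verbs.h>

        static void cq_notify_sketch(struct ib_cq *ibcq)
        {
                /* before: tasklet_schedule(&cq->comp_task); */
                if (ibcq->comp_handler)
                        ibcq->comp_handler(ibcq, ibcq->cq_context);
        }
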
* RDMA/ocrdma: remove unused discard_cnt variable (Tom Rix, 2023-03-29; 1 file, -2/+0)

    clang with W=1 reports:

        drivers/infiniband/hw/ocrdma/ocrdma_verbs.c:1592:6: error: variable 'discard_cnt' set but not used [-Werror,-Wunused-but-set-variable]
                int discard_cnt = 0;
                    ^

    This variable is not used so remove it.

    Signed-off-by: Tom Rix <trix@redhat.com>
    Link: https://lore.kernel.org/r/20230326120959.1351948-1-trix@redhat.com
    Signed-off-by: Leon Romanovsky <leon@kernel.org>

* RDMA/bnxt_re: remove unused num_srqne_processed and num_cqne_processed variables (Tom Rix, 2023-03-29; 1 file, -10/+4)

    clang with W=1 reports:

        drivers/infiniband/hw/bnxt_re/qplib_fp.c:303:6: error: variable 'num_srqne_processed' set but not used [-Werror,-Wunused-but-set-variable]
                int num_srqne_processed = 0;
                    ^
        drivers/infiniband/hw/bnxt_re/qplib_fp.c:304:6: error: variable 'num_cqne_processed' set but not used [-Werror,-Wunused-but-set-variable]
                int num_cqne_processed = 0;
                    ^

    These variables are not used so remove them.

    Signed-off-by: Tom Rix <trix@redhat.com>
    Link: https://lore.kernel.org/r/20230325140559.1336056-1-trix@redhat.com
    Acked-by: Selvin Xavier <selvin.xavier@broadcom.com>
    Signed-off-by: Leon Romanovsky <leon@kernel.org>

* RDMA/usnic: Remove redundant pci_clear_master (Cai Huoqing, 2023-03-29; 1 file, -2/+0)

    Remove pci_clear_master to simplify the code; bus-mastering is also
    cleared in do_pci_disable_device, like this:

        ./drivers/pci/pci.c:2197
        static void do_pci_disable_device(struct pci_dev *dev)
        {
                u16 pci_command;

                pci_read_config_word(dev, PCI_COMMAND, &pci_command);
                if (pci_command & PCI_COMMAND_MASTER) {
                        pci_command &= ~PCI_COMMAND_MASTER;
                        pci_write_config_word(dev, PCI_COMMAND, pci_command);
                }

                pcibios_disable_device(dev);
        }

    And dev->is_busmaster is set to 0 in pci_disable_device.

    Signed-off-by: Cai Huoqing <cai.huoqing@linux.dev>
    Link: https://lore.kernel.org/r/20230323115742.13836-1-cai.huoqing@linux.dev
    Signed-off-by: Leon Romanovsky <leon@kernel.org>

* RDMA/mlx5: Expand switchdev Q-counters to expose representor statistics (Patrisious Haddad, 2023-03-29; 1 file, -29/+142)

    Previously, for switchdev only per-device counters were supported. Now
    we allocate counters for switchdev per port, which also includes the
    ports that belong to VF representors, in order to expose them to users
    through the rdma tool, allowing the host to track the VFs statistics
    through their representors counters.

    Signed-off-by: Patrisious Haddad <phaddad@nvidia.com>
    Link: https://lore.kernel.org/r/ea31e1103c125cd27931ba213f307cde30d2eaed.1679566038.git.leon@kernel.org
    Signed-off-by: Leon Romanovsky <leon@kernel.org>

* RDMA/bnxt_re: Add resize_cq support (Selvin Xavier, 2023-03-29; 5 files, -0/+163)

    Add resize_cq verb support for user space CQs. The resize operation
    for kernel CQs is not supported for now.

    The driver should free the current CQ only after the user library
    polls for all the completions and switches to the new CQ. So after
    resize_cq returns from the driver, the user library polls for existing
    completions and stores them as temporary data. Once the library reaps
    all completions in the current CQ, it invokes the ibv_cmd_poll_cq to
    inform the driver about the resize_cq completion. Add a check for user
    CQs in the driver's poll_cq and complete the resize operation for user
    CQs. Update uverbs_cmd_mask with poll_cq to support this.

    Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
    Link: https://lore.kernel.org/r/1678868215-23626-1-git-send-email-selvin.xavier@broadcom.com
    Signed-off-by: Leon Romanovsky <leon@kernel.org>

* RDMA/erdma: Use fixed hardware page size (Cheng Xu, 2023-03-24; 2 files, -8/+13)

    The hardware's page size is 4096, but the kernel's page size may vary.
    The driver should use the hardware's page size when communicating with
    the hardware.

    Fixes: 155055771704 ("RDMA/erdma: Add verbs implementation")
    Link: https://lore.kernel.org/r/20230307102924.70577-2-chengyou@linux.alibaba.com
    Signed-off-by: Cheng Xu <chengyou@linux.alibaba.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

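    A hedged sketch of the idea, with illustrative macro names: the device
    counts in fixed 4096-byte pages even when the kernel runs with 16K or
    64K pages:

        #include <linux/math.h>
        #include <linux/types.h>

        #define HW_PAGE_SHIFT_SKETCH    12      /* device pages are always 4 KiB */
        #define HW_PAGE_SIZE_SKETCH     (1UL << HW_PAGE_SHIFT_SKETCH)

        static inline u32 hw_page_count_sketch(u64 len)
        {
                /* number of device pages needed, independent of PAGE_SIZE */
                return DIV_ROUND_UP_ULL(len, HW_PAGE_SIZE_SKETCH);
        }
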
* RDMA/rxe: Rewrite rxe_task.c (Bob Pearson, 2023-03-24; 2 files, -56/+218)

    This patch is a major rewrite of the tasklet routines in rxe_task.c.
    The main motivation for this is the realization that the code violates
    the safety of the qp pointer by not keeping correct reference counts.
    When a tasklet is scheduled from a verbs API, the calling thread has a
    valid reference to the qp and schedules the tasklet to run at a later
    time carrying a pointer to the qp. Once the calling code returns,
    however, the qp can be destroyed at any time. In order to correct this
    a reference to the qp must be taken when the task is scheduled and
    held until it finishes running. This is complicated by the tasklet
    library not always running a task that is scheduled, depending on
    whether someone else has scheduled it.

    This patch moves the logic for deciding whether to run or schedule a
    task outside of do_task() and guarantees that there is only one copy
    of the task scheduled or running at a time.

    Secondly, the separate flags controlling teardown and draining of the
    task are included in the task state machine and all references to the
    state are protected by spinlocks to avoid consistency and memory
    barrier issues.

    Link: https://lore.kernel.org/r/20230304174533.11296-9-rpearsonhpe@gmail.com
    Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

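    A minimal sketch of the lifetime rule described above, with
    illustrative names; the actual rxe_task.c uses a spinlock-protected
    state machine rather than this simplified flag bit:

        #include <linux/bitops.h>
        #include <linux/interrupt.h>
        #include <linux/kref.h>

        #define TASK_SCHED_BIT  0       /* illustrative pending-run flag */

        struct task_sketch {
                struct tasklet_struct tasklet;
                unsigned long state;
                struct kref *owner_ref; /* e.g. the owning qp's refcount */
        };

        /* Take exactly one owner reference per pending run so the qp
         * cannot be destroyed while the deferred task may still
         * dereference it; the tasklet body clears the bit and drops the
         * reference when it finishes.
         */
        static void task_sched_sketch(struct task_sketch *t)
        {
                if (!test_and_set_bit(TASK_SCHED_BIT, &t->state)) {
                        kref_get(t->owner_ref);
                        tasklet_schedule(&t->tasklet);
                }
        }
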
* RDMA/rxe: Make tasks schedule each other (Bob Pearson, 2023-03-24; 2 files, -6/+6)

    Replace rxe_run_task() by rxe_sched_task() when tasks call each other.
    These are not performance critical and mainly involve error paths but
    they run the risk of causing deadlocks.

    Link: https://lore.kernel.org/r/20230304174533.11296-8-rpearsonhpe@gmail.com
    Signed-off-by: Ian Ziemba <ian.ziemba@hpe.com>
    Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>