path: root/drivers/infiniband/hw
* Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net (Jakub Kicinski, 2020-11-28; 13 files, -93/+98)

    Trivial conflict in CAN; keep the net-next side plus the byteswap
    wrapper.

    Conflicts:
        drivers/net/can/usb/gs_usb.c

    Signed-off-by: Jakub Kicinski <kuba@kernel.org>

| * RDMA/hns: Bugfix for memory window mtpt configuration (Yixian Liu, 2020-11-26; 1 file, -0/+1)

    When a memory window is bound to a memory region, local write access
    should be set for its mtpt table.

    Fixes: c7c28191408b ("RDMA/hns: Add MW support for hip08")
    Link: https://lore.kernel.org/r/1606386372-21094-1-git-send-email-liweihang@huawei.com
    Signed-off-by: Yixian Liu <liuyixian@huawei.com>
    Signed-off-by: Weihang Li <liweihang@huawei.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

| * RDMA/hns: Fix retry_cnt and rnr_cnt when querying QP (Wenpeng Liang, 2020-11-26; 1 file, -4/+4)

    The maximum number of retransmissions should be returned when querying
    a QP, not the current value of the retransmission counter.

    Fixes: 99fcf82521d9 ("RDMA/hns: Fix the wrong value of rnr_retry when querying qp")
    Fixes: 926a01dc000d ("RDMA/hns: Add QP operations support for hip08 SoC")
    Link: https://lore.kernel.org/r/1606382977-21431-1-git-send-email-liweihang@huawei.com
    Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
    Signed-off-by: Weihang Li <liweihang@huawei.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

| * RDMA/hns: Fix wrong field of SRQ number the device supports (Wenpeng Liang, 2020-11-26; 1 file, -1/+1)

    The SRQ capacity is read from the firmware, and the field holding it
    ends at bit 19.

    Fixes: ba6bb7e97421 ("RDMA/hns: Add interfaces to get pf capabilities from firmware")
    Link: https://lore.kernel.org/r/1606382812-23636-1-git-send-email-liweihang@huawei.com
    Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
    Signed-off-by: Weihang Li <liweihang@huawei.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

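    A minimal sketch of the bug class, using illustrative mask names
    rather than the actual hns macros: reading a field that ends at bit 19
    with a mask one bit too wide folds the neighbouring field's low bit
    into the value.

        #include <linux/bits.h>

        #define EX_SRQ_CNT_MASK GENMASK(19, 0)  /* field is bits [19:0] */

        static u32 ex_parse_srq_cnt(__le32 resp_word)
        {
                /* a GENMASK(20, 0) here would corrupt the SRQ count by
                 * pulling in the first bit of the next field */
                return le32_to_cpu(resp_word) & EX_SRQ_CNT_MASK;
        }
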
| * IB/hfi1: Ensure correct mm is used at all times (Dennis Dalessandro, 2020-11-26; 8 files, -49/+79)

    Two earlier bug fixes have created a security problem in the hfi1
    driver. One fix aimed to solve an issue where current->mm was not
    valid when closing the hfi1 cdev. It attempted to do this by saving a
    cached value of the current->mm pointer at file open time. This is a
    problem if another process with access to the FD calls in via write()
    or ioctl() to pin pages via the hfi driver. The other fix tried to
    solve a use after free by taking a reference on the mm.

    To fix this correctly, use the existing cached value of the mm in the
    mmu notifier. Now the insert, evict, etc. routines can check that
    current->mm matches what the notifier was registered for, and deny
    access if it does not.

    The registration of the mmu notifier saves the mm pointer. Since in
    do_exit() the exit_mm() is called before exit_files(), which would
    call our close routine, a reference is needed on the mm. We rely on
    the mmgrab done by the registration of the notifier, whereas before it
    was explicit. The mmu notifier deregistration happens when the user
    context that triggered the registration is torn down.

    Also of note: no explicit work is done to protect the interval tree
    notifier. That does not seem to be needed, since we aren't actually
    doing anything with current->mm there. The interval tree notifier
    still has a FIXME noted from a previous commit that will be addressed
    in a follow-on patch.

    Cc: <stable@vger.kernel.org>
    Fixes: e0cf75deab81 ("IB/hfi1: Fix mm_struct use after free")
    Fixes: 3faa3d9a308e ("IB/hfi1: Make use of mm consistent")
    Link: https://lore.kernel.org/r/20201125210112.104301.51331.stgit@awfm-01.aw.intel.com
    Suggested-by: Jann Horn <jannh@google.com>
    Reported-by: Jason Gunthorpe <jgg@nvidia.com>
    Reviewed-by: Ira Weiny <ira.weiny@intel.com>
    Reviewed-by: Mike Marciniszyn <mike.marciniszyn@cornelisnetworks.com>
    Signed-off-by: Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

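    A heavily simplified sketch of the rule this fix enforces (struct and
    helper names are hypothetical, not the hfi1 code): any page-pinning
    path refuses to act unless the calling task's mm is the one the mmu
    notifier was registered against.

        #include <linux/sched.h>
        #include <linux/mmu_notifier.h>

        struct ex_handler {
                struct mmu_notifier mn; /* mn.mm recorded at register time */
        };

        /* gate insert/evict/pin work on the caller's address space */
        static bool ex_mm_allowed(struct ex_handler *handler)
        {
                return current->mm == handler->mn.mm;
        }
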
| * RDMA/i40iw: Address an mmap handler exploit in i40iw (Shiraz Saleem, 2020-11-25; 2 files, -35/+7)

    i40iw_mmap manipulates vma->vm_pgoff to differentiate a push page mmap
    from a doorbell mmap, and uses it to compute the pfn passed to
    remap_pfn_range() without any validation. This is vulnerable to an
    mmap exploit as described in:
    https://lore.kernel.org/r/20201119093523.7588-1-zhudi21@huawei.com

    The push feature is currently disabled in the driver, so no push mmaps
    are issued from user space; the feature does not work as expected on
    the x722 product. Remove the push module parameter and all VMA
    attribute manipulations for this feature in i40iw_mmap. Update
    i40iw_mmap to only allow doorbell (DB) user mappings at offset 0:
    check that vm_pgoff is zero and that the mapping is bound to a single
    page.

    Cc: <stable@kernel.org>
    Fixes: d37498417947 ("i40iw: add files for iwarp interface")
    Link: https://lore.kernel.org/r/20201125005616.1800-2-shiraz.saleem@intel.com
    Reported-by: Di Zhu <zhudi21@huawei.com>
    Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

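    A sketch of the hardened handler shape (struct and field names are
    hypothetical, not the exact i40iw code): with only a single-page
    doorbell mapping accepted at offset 0, vm_pgoff can no longer steer
    remap_pfn_range() to an arbitrary pfn.

        static int ex_mmap(struct ex_ucontext *ucontext,
                           struct vm_area_struct *vma)
        {
                /* only the doorbell page, only at offset 0, only one page */
                if (vma->vm_pgoff || vma->vm_end - vma->vm_start != PAGE_SIZE)
                        return -EINVAL;

                vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
                return remap_pfn_range(vma, vma->vm_start,
                                       ucontext->db_phys >> PAGE_SHIFT,
                                       PAGE_SIZE, vma->vm_page_prot);
        }
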
| * IB/mthca: fix return value of error branch in mthca_init_cq() (Xiongfeng Wang, 2020-11-23; 1 file, -4/+6)

    We return 'err' in the error branch, but this variable may still hold
    zero from the preceding code. Fix it by setting 'err' to a negative
    errno before jumping to the error label.

    Fixes: 74c2174e7be5 ("IB uverbs: add mthca user CQ support")
    Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
    Link: https://lore.kernel.org/r/1605837422-42724-1-git-send-email-wangxiongfeng2@huawei.com
    Reported-by: Hulk Robot <hulkci@huawei.com>
    Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

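    The bug class, sketched with hypothetical helpers: 'err' was last
    assigned 0 by a successful call, so a later failure that jumps to the
    unwind label without resetting it reports success to the caller.

        static int ex_init_cq(struct ex_dev *dev)
        {
                void *buf;
                int err;

                err = ex_setup_hw(dev);         /* 0 on success */
                if (err)
                        return err;

                buf = ex_alloc_table(dev);
                if (!buf) {
                        err = -ENOMEM;          /* the fix: set err first */
                        goto err_teardown;
                }

                return 0;

        err_teardown:
                ex_teardown_hw(dev);
                return err;
        }
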
* | Merge https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net (Jakub Kicinski, 2020-11-20; 2 files, -2/+3)

    Signed-off-by: Jakub Kicinski <kuba@kernel.org>

| * IB/hfi1: Fix error return code in hfi1_init_dd() (Zhang Changzhong, 2020-11-13; 1 file, -1/+2)

    Return a negative error code from the error handling path instead of
    0, as done elsewhere in this function.

    Fixes: 4730f4a6c6b2 ("IB/hfi1: Activate the dummy netdev")
    Link: https://lore.kernel.org/r/1605249747-17942-1-git-send-email-zhangchangzhong@huawei.com
    Reported-by: Hulk Robot <hulkci@huawei.com>
    Signed-off-by: Zhang Changzhong <zhangchangzhong@huawei.com>
    Acked-by: Mike Marciniszyn <mike.marciniszyn@cornelisnetworks.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

| * RDMA/pvrdma: Fix missing kfree() in pvrdma_register_device() (Qinglang Miao, 2020-11-12; 1 file, -1/+1)

    Add the missing kfree() in pvrdma_register_device() on the failure
    path of ib_device_set_netdev().

    Fixes: 4b38da75e089 ("RDMA/drivers: Convert easy drivers to use ib_device_set_netdev()")
    Link: https://lore.kernel.org/r/20201111032202.17925-1-miaoqinglang@huawei.com
    Reported-by: Hulk Robot <hulkci@huawei.com>
    Signed-off-by: Qinglang Miao <miaoqinglang@huawei.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

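    The shape of the fix, sketched with a hypothetical unwind ladder: once
    a table has been allocated, every later failure has to exit through a
    label that frees it.

        static int ex_register(struct ib_device *ibdev,
                               struct net_device *netdev, u32 **tblp)
        {
                u32 *tbl;
                int ret;

                tbl = kcalloc(16, sizeof(*tbl), GFP_KERNEL);
                if (!tbl)
                        return -ENOMEM;

                ret = ib_device_set_netdev(ibdev, netdev, 1);
                if (ret)
                        goto err_free_tbl; /* was a label past the kfree */

                *tblp = tbl;
                return 0;

        err_free_tbl:
                kfree(tbl);
                return ret;
        }
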
* | IB/hfi1: switch to core handling of rx/tx byte/packet counters (Heiner Kallweit, 2020-11-12; 4 files, -43/+5)

    Use netdev->tstats instead of a member of hfi1_ipoib_dev_priv for
    storing a pointer to the per-cpu counters. This allows us to use core
    functionality for statistics handling.

    Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
    Acked-by: Jason Gunthorpe <jgg@nvidia.com>
    Signed-off-by: Jakub Kicinski <kuba@kernel.org>

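    The core pattern this switches to, sketched for a generic netdev with
    the ipoib-specific wiring omitted (dev_get_tstats64 is assumed to be
    the core helper available in this tree): allocate netdev->tstats and
    let the core fill ndo_get_stats64 from it.

        static int ex_ndev_init(struct net_device *dev)
        {
                dev->tstats = netdev_alloc_pcpu_stats(struct pcpu_sw_netstats);
                return dev->tstats ? 0 : -ENOMEM;
        }

        static void ex_ndev_uninit(struct net_device *dev)
        {
                free_percpu(dev->tstats);
        }

        static const struct net_device_ops ex_netdev_ops = {
                .ndo_init        = ex_ndev_init,
                .ndo_uninit      = ex_ndev_uninit,
                .ndo_get_stats64 = dev_get_tstats64, /* core helper */
        };
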
* RDMA/vmw_pvrdma: Fix the active_speed and phys_state value (Adit Ranadive, 2020-11-03; 1 file, -1/+1)

    The pvrdma_port_attr structure is ABI toward the hypervisor; changing
    it breaks the ability to report the speed properly. Revert the change
    to u16.

    Fixes: 376ceb31ff87 ("RDMA: Fix link active_speed size")
    Link: https://lore.kernel.org/r/20201102225437.26557-1-aditr@vmware.com
    Reviewed-by: Vishnu Dasa <vdasa@vmware.com>
    Signed-off-by: Adit Ranadive <aditr@vmware.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

* RDMA/qedr: Fix memory leak in iWARP CM (Alok Prasad, 2020-10-28; 1 file, -0/+1)

    Fix a memory leak in the iWARP connection manager.

    Fixes: e411e0587e0d ("RDMA/qedr: Add iWARP connection management functions")
    Link: https://lore.kernel.org/r/20201021115008.28138-1-palok@marvell.com
    Signed-off-by: Michal Kalderon <michal.kalderon@marvell.com>
    Signed-off-by: Igor Russkikh <irusskikh@marvell.com>
    Signed-off-by: Alok Prasad <palok@marvell.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

* RDMA/mlx5: Fix devlink deadlock on net namespace deletion (Parav Pandit, 2020-10-26; 1 file, -2/+4)

    When an mlx5 core devlink instance is reloaded in a different net
    namespace, its associated IB device is deleted and recreated. An
    example sequence is:

        $ ip netns add foo
        $ devlink dev reload pci/0000:00:08.0 netns foo
        $ ip netns del foo

    The mlx5 IB device needs to attach and detach the netdevice to it
    through the netdev notifier chain during load and unload. Below is a
    call graph of the unload flow:

        cleanup_net()
          down_read(&pernet_ops_rwsem); <- first lock acquired
          ops_pre_exit_list()
            pre_exit()
              devlink_pernet_pre_exit()
                devlink_reload()
                  mlx5_devlink_reload_down()
                    mlx5_unload_one()
                    [...]
                      mlx5_ib_remove()
                        mlx5_ib_unbind_slave_port()
                          mlx5_remove_netdev_notifier()
                            unregister_netdevice_notifier()
                              down_write(&pernet_ops_rwsem); <- recursive lock

    Hence, when the net namespace is deleted, an mlx5 reload results in a
    deadlock. When the deadlock occurs, the devlink mutex is also held.
    This not only deadlocks the mlx5 device under reload, but also any
    process that attempts to access an unrelated devlink device.

    Fix this by having the mlx5 IB driver register a per-net netdev
    notifier instead of a global one; the per-net variant operates on the
    net namespace without holding pernet_ops_rwsem.

    Fixes: 4383cfcc65e7 ("net/mlx5: Add devlink reload")
    Link: https://lore.kernel.org/r/20201026134359.23150-1-parav@nvidia.com
    Signed-off-by: Parav Pandit <parav@nvidia.com>
    Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

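    A sketch of the per-net registration (the driver struct and
    ex_netdev_event handler are hypothetical): binding the notifier to the
    netdev's own namespace means the unregister path no longer takes
    pernet_ops_rwsem, so cleanup_net() cannot recurse onto a lock it
    already holds.

        #include <linux/netdevice.h>

        static int ex_bind_netdev(struct ex_ibdev *dev, struct net_device *ndev)
        {
                dev->nb.notifier_call = ex_netdev_event;
                return register_netdevice_notifier_net(dev_net(ndev), &dev->nb);
        }

        static void ex_unbind_netdev(struct ex_ibdev *dev, struct net *net)
        {
                unregister_netdevice_notifier_net(net, &dev->nb);
        }
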
* Merge tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma (Linus Torvalds, 2020-10-17; 104 files, -1877/+2624)

    Pull rdma updates from Jason Gunthorpe:
    "A usual cycle for RDMA with a typical mix of driver and core
    subsystem updates:

      - Driver minor changes and bug fixes for mlx5, efa, rxe, vmw_pvrdma,
        hns, usnic, qib, qedr, cxgb4, hns, bnxt_re

      - Various rtrs fixes and updates

      - Bug fix for mlx4 CM emulation for virtualization scenarios where
        MRA wasn't working right

      - Use tracepoints instead of pr_debug in the CM code

      - Scrub the locking in ucma and cma to close more syzkaller bugs

      - Use tasklet_setup in the subsystem

      - Revert the idea that 'destroy' operations are not allowed to fail
        at the driver level. This proved unworkable from a HW perspective.

      - Revise how the umem API works so drivers make fewer mistakes
        using it

      - XRC support for qedr

      - Convert uverbs objects RWQ and MW to the new allocation scheme

      - Large queue entry sizes for hns

      - Use hmm_range_fault() for mlx5 On Demand Paging

      - uverbs APIs to inspect the GID table instead of sysfs

      - Move some of the RDMA code for building large page SGLs into
        lib/scatterlist"

    * tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma: (191 commits)
      RDMA/ucma: Fix use after free in destroy id flow
      RDMA/rxe: Handle skb_clone() failure in rxe_recv.c
      RDMA/rxe: Move the definitions for rxe_av.network_type to uAPI
      RDMA: Explicitly pass in the dma_device to ib_register_device
      lib/scatterlist: Do not limit max_segment to PAGE_ALIGNED values
      IB/mlx4: Convert rej_tmout radix-tree to XArray
      RDMA/rxe: Fix bug rejecting all multicast packets
      RDMA/rxe: Fix skb lifetime in rxe_rcv_mcast_pkt()
      RDMA/rxe: Remove duplicate entries in struct rxe_mr
      IB/hfi,rdmavt,qib,opa_vnic: Update MAINTAINERS
      IB/rdmavt: Fix sizeof mismatch
      MAINTAINERS: CISCO VIC LOW LATENCY NIC DRIVER
      RDMA/bnxt_re: Fix sizeof mismatch for allocation of pbl_tbl.
      RDMA/bnxt_re: Use rdma_umem_for_each_dma_block()
      RDMA/umem: Move to allocate SG table from pages
      lib/scatterlist: Add support in dynamic allocation of SG table from pages
      tools/testing/scatterlist: Show errors in human readable form
      tools/testing/scatterlist: Rejuvenate bit-rotten test
      RDMA/ipoib: Set rtnl_link_ops for ipoib interfaces
      RDMA/uverbs: Expose the new GID query API to user space
      ...

| * RDMA: Explicitly pass in the dma_device to ib_register_device (Jason Gunthorpe, 2020-10-16; 12 files, -14/+24)

    The code in setup_dma_device has become rather convoluted; move all of
    this to the drivers. Drivers now pass in a DMA-capable struct device
    which will be used to set up DMA, or must fully configure the ibdev
    for DMA themselves and pass in NULL.

    Other than setting the masks in rvt, all drivers were doing this
    already anyhow. mthca, mlx4 and mlx5 were already setting up the
    maximum DMA segment size based on their hardware limits in:

        __mthca_init_one()
          dma_set_max_seg_size (1G)
        __mlx4_init_one()
          dma_set_max_seg_size (1G)
        mlx5_pci_init()
          set_dma_caps()
            dma_set_max_seg_size (2G)

    Other non-software drivers (except usnic) were extended to UINT_MAX
    [1, 2] instead of the previous 2G.

    [1] https://lore.kernel.org/linux-rdma/20200924114940.GE9475@nvidia.com/
    [2] https://lore.kernel.org/linux-rdma/20200924114940.GE9475@nvidia.com/

    Link: https://lore.kernel.org/r/20201008082752.275846-1-leon@kernel.org
    Link: https://lore.kernel.org/r/6b2ed339933d066622d5715903870676d8cc523a.1602590106.git.mchehab+huawei@kernel.org
    Suggested-by: Christoph Hellwig <hch@infradead.org>
    Signed-off-by: Parav Pandit <parav@nvidia.com>
    Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
    Reviewed-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

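    What a caller looks like after this change, assuming the new
    three-argument signature (device names hypothetical): a PCI-backed
    driver hands over its DMA-capable device, while a software driver
    configures DMA itself and passes NULL.

        /* PCI-backed driver */
        dma_set_max_seg_size(&pdev->dev, UINT_MAX);
        ret = ib_register_device(&dev->ibdev, "exdev%d", &pdev->dev);

        /* software driver (rxe/siw style) */
        ret = ib_register_device(&dev->ibdev, "exdev%d", NULL);
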
| * IB/mlx4: Convert rej_tmout radix-tree to XArray (Håkon Bugge, 2020-10-09; 2 files, -49/+51)

    This was missed during the initial review of the patch below.

    Fixes: 227a0e142e37 ("IB/mlx4: Add support for REJ due to timeout")
    Link: https://lore.kernel.org/r/1602253482-6718-1-git-send-email-haakon.bugge@oracle.com
    Signed-off-by: Håkon Bugge <haakon.bugge@oracle.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

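    The generic shape of a radix-tree to XArray conversion (entry type and
    names illustrative): the XArray carries its own internal lock, so the
    open-coded spinlock around the old radix-tree calls can be dropped.

        #include <linux/xarray.h>

        static DEFINE_XARRAY(ex_rej_tmout);

        static int ex_store(unsigned long pv_id, struct ex_entry *item)
        {
                return xa_err(xa_store(&ex_rej_tmout, pv_id, item, GFP_KERNEL));
        }

        static struct ex_entry *ex_lookup(unsigned long pv_id)
        {
                return xa_load(&ex_rej_tmout, pv_id);
        }

        static void ex_remove(unsigned long pv_id)
        {
                xa_erase(&ex_rej_tmout, pv_id);
        }
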
| * RDMA/bnxt_re: Fix sizeof mismatch for allocation of pbl_tbl. (Colin Ian King, 2020-10-06; 1 file, -1/+1)

    An incorrect sizeof is being used: u64 * is not correct, it should be
    just u64 for a table of umem_pgs u64 items in pbl_tbl. Use the idiom
    sizeof(*pbl_tbl) to get the object type without needing to name u64
    explicitly.

    Link: https://lore.kernel.org/r/20201006114700.537916-1-colin.king@canonical.com
    Addresses-Coverity: ("Sizeof not portable (SIZEOF_MISMATCH)")
    Fixes: 1ac5a4047975 ("RDMA/bnxt_re: Add bnxt_re RoCE driver")
    Signed-off-by: Colin Ian King <colin.king@canonical.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

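    The two spellings side by side (the allocator call is illustrative,
    not the exact bnxt_re line): on 64-bit both expressions are 8 bytes,
    which is why this was a Coverity portability finding rather than a
    crash, but on 32-bit sizeof(u64 *) under-allocates by half.

        u64 *pbl_tbl;

        pbl_tbl = kcalloc(umem_pgs, sizeof(u64 *), GFP_KERNEL);    /* before */
        pbl_tbl = kcalloc(umem_pgs, sizeof(*pbl_tbl), GFP_KERNEL); /* after  */
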
| * RDMA/bnxt_re: Use rdma_umem_for_each_dma_block() (Jason Gunthorpe, 2020-10-06; 3 files, -29/+22)

    This driver is taking the SGL out of the umem and passing it through a
    struct bnxt_qplib_sg_info. Instead of passing the SGL, pass the umem
    and then use rdma_umem_for_each_dma_block() directly.

    Move the calls of ib_umem_num_dma_blocks() closer to their actual
    point of use; npages is only set for non-umem pbl flows.

    Link: https://lore.kernel.org/r/0-v1-b37437a73f35+49c-bnxt_re_dma_block_jgg@nvidia.com
    Acked-by: Selvin Xavier <selvin.xavier@broadcom.com>
    Tested-by: Selvin Xavier <selvin.xavier@broadcom.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

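    The iterator pattern the driver moves to, sketched (the destination
    buffer is hypothetical): walk the umem in device-page-sized DMA blocks
    instead of open-coding SGL arithmetic.

        struct ib_block_iter biter;
        u64 *pbl = pbl_tbl;

        rdma_umem_for_each_dma_block(umem, &biter, page_size)
                *pbl++ = rdma_block_iter_dma_address(&biter);
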
| * RDMA/core: Modify enum ib_gid_type and enum rdma_network_type (Avihai Horon, 2020-10-02; 3 files, -4/+6)

    Separate IB_GID_TYPE_IB and IB_GID_TYPE_ROCE into two different
    values, so that enum ib_gid_type will match the gid types of the new
    query GID table API introduced in the following patches.

    This change in enum ib_gid_type also requires changing enum
    rdma_network_type, by separating the RDMA_NETWORK_IB and
    RDMA_NETWORK_ROCE_V1 values.

    Link: https://lore.kernel.org/r/20200923165015.2491894-3-leon@kernel.org
    Signed-off-by: Avihai Horon <avihaih@nvidia.com>
    Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

| * RDMA/qedr: Endianness warnings cleanup (Alok Prasad, 2020-10-02; 1 file, -2/+2)

    Fix the following sparse warnings reported by the kbuild bot:

        CHECK drivers/infiniband/hw/qedr/verbs.c
        drivers/infiniband/hw/qedr/verbs.c:3872:59: warning: incorrect type in assignment (different base types)
        drivers/infiniband/hw/qedr/verbs.c:3872:59:    expected restricted __le32 [usertype] sge_prod
        drivers/infiniband/hw/qedr/verbs.c:3872:59:    got unsigned int [usertype] sge_prod
        drivers/infiniband/hw/qedr/verbs.c:3875:59: warning: incorrect type in assignment (different base types)
        drivers/infiniband/hw/qedr/verbs.c:3875:59:    expected restricted __le32 [usertype] wqe_prod
        drivers/infiniband/hw/qedr/verbs.c:3875:59:    got unsigned int [usertype] wqe_prod

    Link: https://lore.kernel.org/r/20201001100959.19940-1-palok@marvell.com
    Reported-by: kbuild test robot <lkp@intel.com>
    Fixes: acca72e2b031 ("RDMA/qedr: SRQ's bug fixes")
    Signed-off-by: Igor Russkikh <irusskikh@marvell.com>
    Signed-off-by: Michal Kalderon <mkalderon@marvell.com>
    Signed-off-by: Alok Prasad <palok@marvell.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

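    The pattern behind the warnings, sketched (struct name illustrative):
    the producer pair is a device-visible little-endian structure, so
    CPU-order values must be converted explicitly on assignment.

        struct ex_srq_prod {
                __le32 sge_prod;
                __le32 wqe_prod;
        };

        prod->sge_prod = cpu_to_le32(sge_prod);  /* was a plain u32 store */
        prod->wqe_prod = cpu_to_le32(wqe_prod);
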
| * RDMA/mlx5: Sync device with CPU pages upon ODP MR registration (Yishai Hadas, 2020-10-01; 3 files, -6/+32)

    Sync the device with CPU pages upon ODP MR registration. mlx5 already
    has to zero the HW's version of the PAS list; it may as well deliver a
    PAS list that matches the current CPU page table configuration.

    Link: https://lore.kernel.org/r/20200930163828.1336747-5-leon@kernel.org
    Signed-off-by: Yishai Hadas <yishaih@nvidia.com>
    Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

| * RDMA/mlx5: Extend advice MR to support non faulting mode (Yishai Hadas, 2020-10-01; 2 files, -2/+8)

    Extend the advice MR flow to support a non-faulting mode; this can
    improve performance by increasing the number of populated page tables
    in the device.

    Link: https://lore.kernel.org/r/20200930163828.1336747-4-leon@kernel.org
    Signed-off-by: Yishai Hadas <yishaih@nvidia.com>
    Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

| * IB/core: Enable ODP sync without faulting (Yishai Hadas, 2020-10-01; 1 file, -1/+1)

    Enable ODP sync without faulting; this improves performance by
    reducing the number of page faults in the system. The gain is that the
    device page table can be aligned with the pages present in the CPU
    page table without triggering page faults. This drops the data-path
    overhead of the hardware raising a fault that ends up calling back
    into the driver to bring in the pages.

    Link: https://lore.kernel.org/r/20200930163828.1336747-3-leon@kernel.org
    Signed-off-by: Yishai Hadas <yishaih@nvidia.com>
    Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

| * IB/core: Improve ODP to use hmm_range_fault() (Yishai Hadas, 2020-10-01; 1 file, -17/+7)

    Move to hmm_range_fault() instead of get_user_pages_remote() to
    improve performance in a few aspects:

    - No need to allocate and free memory to hold its output
    - No more need to use put_page() to unpin the pages
    - Contiguous pages are detected from the returned order, with no need
      to evaluate page by page

    In addition, moving to hmm_range_fault() makes it possible to reduce
    page faults in the system with its snapshot mode; this will be
    introduced in the next patches of this series.

    As part of this, clean up some flows and use the data structures
    required to work with hmm_range_fault().

    Link: https://lore.kernel.org/r/20200930163828.1336747-2-leon@kernel.org
    Signed-off-by: Yishai Hadas <yishaih@nvidia.com>
    Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

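    A condensed sketch of the hmm_range_fault() loop this moves to (the
    mmu_interval_read_retry() re-check and the DMA mapping of the returned
    pfns are omitted):

        #include <linux/hmm.h>
        #include <linux/mmap_lock.h>

        static int ex_snapshot(struct mmu_interval_notifier *notifier,
                               struct mm_struct *mm, unsigned long start,
                               unsigned long end, unsigned long *pfns,
                               bool fault)
        {
                struct hmm_range range = {
                        .notifier       = notifier,
                        .start          = start,
                        .end            = end,
                        .hmm_pfns       = pfns,
                        .default_flags  = fault ? HMM_PFN_REQ_FAULT : 0,
                };
                int ret;

                do {
                        range.notifier_seq = mmu_interval_read_begin(notifier);
                        mmap_read_lock(mm);
                        ret = hmm_range_fault(&range);
                        mmap_read_unlock(mm);
                } while (ret == -EBUSY);

                return ret;
        }
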
| * RDMA/hns: Remove unused variables and definitions (Lang Cheng, 2020-09-29; 2 files, -10/+0)

    Some code was removed but its variables were left behind, and some
    parameters are now queried from firmware, so their definitions are no
    longer needed.

    Fixes: 2a3d923f8730 ("RDMA/hns: Replace magic numbers with #defines")
    Fixes: 82e620d9c3a0 ("RDMA/hns: Modify the data structure of hns_roce_av")
    Fixes: 82547469782a ("IB/hns: Implement the add_gid/del_gid and optimize the GIDs management")
    Fixes: 21b97f538765 ("RDMA/hns: Fixup qp release bug")
    Link: https://lore.kernel.org/r/1601371934-40003-1-git-send-email-liweihang@huawei.com
    Signed-off-by: Lang Cheng <chenglang@huawei.com>
    Signed-off-by: Weihang Li <liweihang@huawei.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

| * RDMA/i40iw: Remove intermediate pointer that points to the same struct (Leon Romanovsky, 2020-09-29; 1 file, -6/+3)

    There is no real need for an intermediate pointer to the same struct;
    remove it and use the struct directly.

    Link: https://lore.kernel.org/r/20200926102450.2966017-11-leon@kernel.org
    Acked-by: Shiraz Saleem <shiraz.saleem@intel.com>
    Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

| * RDMA/mthca: Combine special QP struct with mthca QP (Leon Romanovsky, 2020-09-29; 4 files, -58/+59)

    As preparation for the removal of QP allocation logic, we need to
    ensure that ib_core allocates the right amount of memory before a call
    to the driver's create_qp(). This requires the driver to use the same
    struct for all types of QPs.

    Link: https://lore.kernel.org/r/20200926102450.2966017-10-leon@kernel.org
    Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

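    The embedding pattern this series converges on, in schematic form
    (names illustrative): the special-QP state becomes a member of the one
    QP struct, so ib_core can size the allocation up front and callers
    recover the driver QP with container_of().

        struct ex_qp {
                struct ib_qp    ibqp;
                struct ex_sqp   sqp;    /* special-QP state, embedded */
        };

        static struct ex_qp *to_exqp(struct ib_qp *ibqp)
        {
                return container_of(ibqp, struct ex_qp, ibqp);
        }
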
| * RDMA/drivers: Remove udata check from special QP (Leon Romanovsky, 2020-09-29; 6 files, -68/+19)

    A GSI QP can't be created from user space, hence the udata check is
    always false (udata == NULL). Remove that check and simplify the flow.

    Link: https://lore.kernel.org/r/20200926102450.2966017-9-leon@kernel.org
    Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
    Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

| * RDMA/mlx4: Prepare QP allocation to remove from the driver (Leon Romanovsky, 2020-09-29; 1 file, -94/+63)

    Since all mlx4 QPs have the same storage type, move the QP allocation
    into one place. This change prepares for the removal of such
    allocation from the driver.

    Link: https://lore.kernel.org/r/20200926102450.2966017-7-leon@kernel.org
    Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
    Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

| * RDMA/mlx4: Embed GSI QP into general mlx4_ib QP (Leon Romanovsky, 2020-09-29; 2 files, -91/+100)

    Refactor the storage struct of the mlx4 GSI QP to be embedded in the
    mlx4_ib QP. This allows removing the internal memory allocation of the
    QP struct hidden inside the mlx4_ib_create_qp() flow.

    Link: https://lore.kernel.org/r/20200926102450.2966017-6-leon@kernel.org
    Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
    Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

| * RDMA/mlx5: Delete not needed GSI QP signal QP type (Leon Romanovsky, 2020-09-29; 2 files, -8/+1)

    The GSI QP doesn't need a signal QP type: it is statically initialized
    to zero, which is IB_SIGNAL_ALL_WR, and wr->send_flags isn't set
    either. This means the GSI QP signal QP type can be removed.

    Link: https://lore.kernel.org/r/20200926102450.2966017-5-leon@kernel.org
    Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
    Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

| * RDMA/mlx5: Change GSI QP to have same creation flow like other QPs (Leon Romanovsky, 2020-09-29; 3 files, -46/+38)

    There is no reason to have a separate create flow for the GSI QP while
    the general create_qp routine has all the needed checks and the
    ability to allocate and free the proper struct mlx5_ib_qp.

    Link: https://lore.kernel.org/r/20200926102450.2966017-4-leon@kernel.org
    Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
    Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

| * RDMA/mlx5: Reuse existing fields in parent QP storage object (Leon Romanovsky, 2020-09-29; 3 files, -45/+31)

    Remove the duplication of mlx5_ib_qp and mlx5_ib_gsi_qp fields. This
    change returns the memory footprint of the mlx5_ib QP to what it was
    before embedding the GSI QP.

    Link: https://lore.kernel.org/r/20200926102450.2966017-3-leon@kernel.org
    Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
    Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

| * RDMA/mlx5: Embed GSI QP into general mlx5_ib QP (Leon Romanovsky, 2020-09-29; 3 files, -33/+32)

    The GSI QPs have a different create flow from the regular QPs, but it
    is not really needed. Update the code to use mlx5_ib_qp as the storage
    class for all calls outside of the GSI code.

    Link: https://lore.kernel.org/r/20200926102450.2966017-2-leon@kernel.org
    Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
    Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

| * RDMA/mlx5: Fix type warning of sizeof in __mlx5_ib_alloc_counters() (Liu Shixin, 2020-09-25; 1 file, -2/+2)

    sizeof(), when applied to a pointer-typed expression, should give the
    size of the pointed-to data, even if that data is itself a pointer.

    Fixes: e1f24a79f424 ("IB/mlx5: Support congestion related counters")
    Link: https://lore.kernel.org/r/20200917081354.2083293-1-liushixin2@huawei.com
    Signed-off-by: Liu Shixin <liushixin2@huawei.com>
    Acked-by: Leon Romanovsky <leonro@nvidia.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

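    The same idiom as the bnxt_re fix above, but for an element that is
    itself a pointer (names illustrative): sizeof(names) and
    sizeof(*names) are both pointer-sized, so the bug is silent on any
    architecture, yet only the dereferenced form stays correct if the
    element type ever changes.

        const char **names;

        names = kcalloc(num_counters, sizeof(names), GFP_KERNEL);  /* wrong */
        names = kcalloc(num_counters, sizeof(*names), GFP_KERNEL); /* right */
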
| * RDMA/hns: Support inline data in extended sge space for RC (Weihang Li, 2020-09-24; 4 files, -52/+162)

    HIP08 supports RC inline only up to 32 bytes, and all the data has to
    be placed in the SQWQE. For HIP09 this capability is extended to 1024
    bytes: if the data is longer than 32 bytes, it is filled into the
    extended sge space.

    Link: https://lore.kernel.org/r/1599744069-9968-1-git-send-email-liweihang@huawei.com
    Signed-off-by: Weihang Li <liweihang@huawei.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

| * RDMA/hns: Fix missing sq_sig_type when querying QP (Weihang Li, 2020-09-24; 1 file, -0/+1)

    The sq_sig_type field should be filled in when querying a QP,
    otherwise users may get a wrong value.

    Fixes: 926a01dc000d ("RDMA/hns: Add QP operations support for hip08 SoC")
    Link: https://lore.kernel.org/r/1600509802-44382-9-git-send-email-liweihang@huawei.com
    Signed-off-by: Weihang Li <liweihang@huawei.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

| * RDMA/hns: Fix configuration of ack_req_freq in QPC (Weihang Li, 2020-09-24; 1 file, -6/+12)

    The hardware adds the AckReq flag to the BTH header according to the
    value of ack_req_freq, to request an ACK from the responder for the
    packets carrying this flag. ack_req_freq should be greater than or
    equal to lp_pktn_ini instead of being a fixed value.

    Fixes: 7b9bd73ed13d ("RDMA/hns: Fix wrong assignment of lp_pktn_ini in QPC")
    Link: https://lore.kernel.org/r/1600509802-44382-8-git-send-email-liweihang@huawei.com
    Signed-off-by: Weihang Li <liweihang@huawei.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

| * RDMA/hns: Fix the wrong value of rnr_retry when querying qp (Wenpeng Liang, 2020-09-24; 1 file, -1/+3)

    The rnr_retry value returned to the user is not correct; it should be
    read from a different field in the QPC.

    Fixes: bfe860351e31 ("RDMA/hns: Fix cast from or to restricted __le32 for driver")
    Link: https://lore.kernel.org/r/1600509802-44382-7-git-send-email-liweihang@huawei.com
    Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
    Signed-off-by: Weihang Li <liweihang@huawei.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

| * RDMA/hns: Solve the overflow of the calc_pg_sz() (Jiaran Zhang, 2020-09-24; 1 file, -3/+3)

    calc_pg_sz() may overflow during its calculation if PAGE_SIZE is 64 KB
    and hop_num is 2, because all variables involved in the calculation
    are defined as int. Change the type of bt_chunk_size, buf_chunk_size
    and obj_per_chunk_default to u64.

    Fixes: ba6bb7e97421 ("RDMA/hns: Add interfaces to get pf capabilities from firmware")
    Link: https://lore.kernel.org/r/1600509802-44382-6-git-send-email-liweihang@huawei.com
    Signed-off-by: Jiaran Zhang <zhangjiaran@huawei.com>
    Signed-off-by: Weihang Li <liweihang@huawei.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

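    The arithmetic, with illustrative numbers (64 KiB chunks, 16-byte
    objects, two hops; 8 is assumed as the byte width of a base-address
    entry): three factors of 2^13, 2^13 and 2^12 give 2^38, which wraps in
    a 32-bit int but is exact in u64.

        u64 bt_chunk_size = SZ_64K;
        u64 buf_chunk_size = SZ_64K;
        u64 obj_per_chunk_default = buf_chunk_size / 16;

        u64 obj_per_chunk = (bt_chunk_size / 8) *
                            (bt_chunk_size / 8) *
                            obj_per_chunk_default;  /* 2^38, needs u64 */
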
| * RDMA/hns: Add check for the validity of sl configuration (Jiaran Zhang, 2020-09-24; 2 files, -2/+12)

    According to the RoCE v1 specification, sl (service level) values 0-7
    are mapped directly to priorities 0-7, and sl 8-15 are reserved. The
    driver should verify whether the value of sl is larger than 7; if so,
    an error should be returned.

    Fixes: 926a01dc000d ("RDMA/hns: Add QP operations support for hip08 SoC")
    Link: https://lore.kernel.org/r/1600509802-44382-5-git-send-email-liweihang@huawei.com
    Signed-off-by: Jiaran Zhang <zhangjiaran@huawei.com>
    Signed-off-by: Weihang Li <liweihang@huawei.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

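    The added validation, sketched (the message wording and the bare
    literal 7 are illustrative; the driver likely wraps it in a macro):

        if (ah_attr->sl > 7) {
                ibdev_err(ibdev, "invalid sl %u, SL 8-15 are reserved\n",
                          ah_attr->sl);
                return -EINVAL;
        }
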
| * RDMA/hns: Correct typo of hns_roce_create_cq() (Lang Cheng, 2020-09-24; 1 file, -1/+1)

    Change "initialze" to "initialize".

    Fixes: 8f3e9f3ea087 ("IB/hns: Add code for refreshing CQ CI using TPTR")
    Link: https://lore.kernel.org/r/1600509802-44382-4-git-send-email-liweihang@huawei.com
    Signed-off-by: Lang Cheng <chenglang@huawei.com>
    Signed-off-by: Weihang Li <liweihang@huawei.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

| * RDMA/hns: Add interception for resizing SRQs (Yangyang Li, 2020-09-24; 1 file, -0/+4)

    HIP08 doesn't support modifying the maximum number of outstanding WRs
    in an SRQ. However, the driver doesn't return a failure, so users may
    mistakenly think the resize executed successfully. The driver needs to
    intercept this operation.

    Fixes: ffb1308b88b6 ("RDMA/hns: Move SRQ code to the reasonable place")
    Link: https://lore.kernel.org/r/1600509802-44382-3-git-send-email-liweihang@huawei.com
    Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
    Signed-off-by: Weihang Li <liweihang@huawei.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

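    The interception, sketched as a modify_srq handler (function name
    illustrative; the rest of the modify flow is elided):

        static int ex_modify_srq(struct ib_srq *ibsrq, struct ib_srq_attr *attr,
                                 enum ib_srq_attr_mask mask,
                                 struct ib_udata *udata)
        {
                /* HIP08 cannot resize an SRQ: fail instead of silently
                 * ignoring the request */
                if (mask & IB_SRQ_MAX_WR)
                        return -EOPNOTSUPP;

                return 0;
        }
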
| * RDMA/hns: Refactor process about opcode in post_send() (Weihang Li, 2020-09-24; 1 file, -55/+76)

    According to the IB specification, the verbs should return an
    immediate error when the user sets an unsupported opcode. Furthermore,
    refactor the opcode handling in the post_send process to make the
    difference between opcodes clearer.

    Link: https://lore.kernel.org/r/1600509802-44382-2-git-send-email-liweihang@huawei.com
    Signed-off-by: Weihang Li <liweihang@huawei.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

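    The shape of the opcode dispatch after such a refactor, sketched (the
    WQE-building details are elided; opcode coverage is illustrative):

        switch (wr->opcode) {
        case IB_WR_SEND:
        case IB_WR_RDMA_WRITE:
        case IB_WR_RDMA_READ:
                /* build the matching WQE */
                break;
        default:
                /* IB spec: unsupported opcodes fail immediately */
                return -EINVAL;
        }
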
| * RDMA/hns: Add support for SCCC in size of 64 Bytes (Yangyang Li, 2020-09-24; 6 files, -12/+38)

    For HIP09, the size of the SCCC (Soft Congestion Control Context) is
    increased from 32 bytes to 64 bytes. The hardware gets the SCCC
    configuration from the driver instead of using a fixed value.

    Link: https://lore.kernel.org/r/1600245806-56321-5-git-send-email-liweihang@huawei.com
    Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
    Signed-off-by: Weihang Li <liweihang@huawei.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

| * RDMA/hns: Add support for QPC in size of 512 Bytes (Wenpeng Liang, 2020-09-24; 6 files, -18/+75)

    The new version of the RoCEE supports QPCs of 256 B or 512 B, so that
    HIP09 can support new congestion control algorithms by using the
    larger QPC size.

    Link: https://lore.kernel.org/r/1600245806-56321-4-git-send-email-liweihang@huawei.com
    Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
    Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
    Signed-off-by: Weihang Li <liweihang@huawei.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

| * RDMA/hns: Add support for CQE in size of 64 Bytes (Wenpeng Liang, 2020-09-24; 7 files, -16/+48)

    The new version of the RoCEE supports CQEs of 32 B or 64 B. Bus
    performance can be improved by using the larger CQE size.

    Link: https://lore.kernel.org/r/1600245806-56321-3-git-send-email-liweihang@huawei.com
    Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
    Signed-off-by: Weihang Li <liweihang@huawei.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

| * RDMA/hns: Add support for EQE in size of 64 Bytes (Wenpeng Liang, 2020-09-24; 4 files, -20/+44)

    The new version of the RoCEE supports CEQEs of 4 B or 64 B and AEQEs
    of 16 B or 64 B. Bus performance can be improved by using the larger
    EQE sizes.

    Link: https://lore.kernel.org/r/1600245806-56321-2-git-send-email-liweihang@huawei.com
    Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
    Signed-off-by: Weihang Li <liweihang@huawei.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

| * RDMA/efa: Drop double zeroing for sg_init_table() (Julia Lawall, 2020-09-23; 1 file, -1/+1)

    sg_init_table() zeroes its first argument, so the allocation of that
    argument doesn't have to.

    The semantic patch that makes this change is as follows
    (http://coccinelle.lip6.fr/):

        // <smpl>
        @@
        expression x,n,flags;
        @@

        x =
        -	kcalloc
        +	kmalloc_array
         	(n,sizeof(*x),flags)
        ...
        sg_init_table(x,n)
        // </smpl>

    Link: https://lore.kernel.org/r/1600601186-7420-6-git-send-email-Julia.Lawall@inria.fr
    Signed-off-by: Julia Lawall <Julia.Lawall@inria.fr>
    Acked-by: Gal Pressman <galpress@amazon.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

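    In C terms, the transformation is:

        /* kcalloc()'s zeroing is redundant: sg_init_table() memsets the
         * table before setting the end marker */
        sgl = kmalloc_array(n, sizeof(*sgl), GFP_KERNEL);
        if (!sgl)
                return -ENOMEM;
        sg_init_table(sgl, n);
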