* IB/mlx5: Add HW counter called rx_dct_connect (Shetu Ayalew, 2023-07-31, 1 file, -0/+2)

  The rx_dct_connect counter shows the number of received connection
  requests for the associated DCTs.

  Signed-off-by: Shetu Ayalew <shetu@nvidia.com>
  Reviewed-by: Maor Gottlieb <maorg@nvidia.com>
  Link: https://lore.kernel.org/r/01cd24cd7f591734741309921fdc01fc770d84a8.1690121941.git.leon@kernel.org
  Signed-off-by: Leon Romanovsky <leonro@nvidia.com>

* RDMA/mthca: Remove unnecessary NULL assignments (Ruan Jinjie, 2023-07-31, 1 file, -10/+10)

  Many pointers are assigned before they are ever read, so they need not
  be initialized; remove the unnecessary NULL assignments.

  Signed-off-by: Ruan Jinjie <ruanjinjie@huawei.com>
  Link: https://lore.kernel.org/r/20230731065543.2285928-1-ruanjinjie@huawei.com
  Signed-off-by: Leon Romanovsky <leon@kernel.org>

* RDMA/irdma: Fix one kernel-doc comment (Yang Li, 2023-07-31, 1 file, -1/+0)

  Remove the description of @free_hwcqp in irdma_destroy_cqp() to silence
  the warning:

    drivers/infiniband/hw/irdma/hw.c:580: warning: Excess function parameter
    'free_hwcqp' description in 'irdma_destroy_cqp'

  Reported-by: Abaci Robot <abaci@linux.alibaba.com>
  Closes: https://bugzilla.openanolis.cn/show_bug.cgi?id=6028
  Signed-off-by: Yang Li <yang.lee@linux.alibaba.com>
  Link: https://lore.kernel.org/r/20230731015915.34867-1-yang.lee@linux.alibaba.com
  Reviewed-by: Randy Dunlap <rdunlap@infradead.org>
  Signed-off-by: Leon Romanovsky <leon@kernel.org>

* RDMA/siw: Fix tx thread initialization. (Bernard Metzler, 2023-07-31, 3 files, -44/+43)

  Removing the siw module immediately after insertion may crash in
  siw_stop_tx_thread() if the corresponding thread has not yet had a
  chance to initialize its wait queue and siw_stop_tx_thread() tries to
  wake that thread up. Initializing the thread's state before spawning
  it fixes this.

  Reported-by: Guoqing Jiang <guoqing.jiang@linux.dev>
  Signed-off-by: Bernard Metzler <bmt@zurich.ibm.com>
  Link: https://lore.kernel.org/r/20230728114418.124328-1-bmt@zurich.ibm.com
  Tested-by: Guoqing Jiang <guoqing.jiang@linux.dev>
  Signed-off-by: Leon Romanovsky <leon@kernel.org>

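
The race above generalizes beyond siw: if a worker thread initializes its own
wait queue only after being spawned, a stop path that runs early can operate on
uninitialized state. A minimal sketch of the safe ordering, with purely
illustrative names (this is not the siw code):

#include <linux/err.h>
#include <linux/kthread.h>
#include <linux/wait.h>

struct tx_task_state {
	wait_queue_head_t waiting;	/* what the stop path may wake up */
	struct task_struct *task;
};

static int tx_thread_fn(void *arg)
{
	struct tx_task_state *st = arg;

	while (!kthread_should_stop())
		wait_event_interruptible(st->waiting, kthread_should_stop());
	return 0;
}

static int start_tx_thread(struct tx_task_state *st, int cpu)
{
	/* Initialize everything the stop path may touch *before* the
	 * thread exists, so an early stop/wake-up is safe. */
	init_waitqueue_head(&st->waiting);

	st->task = kthread_run(tx_thread_fn, st, "tx_thread/%d", cpu);
	return PTR_ERR_OR_ZERO(st->task);
}

Everything the stop path may touch is set up before kthread_run(), so an
immediate module unload only ever sees initialized state.
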
* RDMA/mlx: Remove unnecessary variable initializations (Ruan Jinjie, 2023-07-31, 2 files, -42/+42)

  Remove unnecessary variable initializations.

  Signed-off-by: Ruan Jinjie <ruanjinjie@huawei.com>
  Link: https://lore.kernel.org/r/20230728065139.3411703-1-ruanjinjie@huawei.com
  Signed-off-by: Leon Romanovsky <leon@kernel.org>

* RDMA/irdma: Use HW specific minimum WQ size (Sindhu Devale, 2023-07-30, 9 files, -5/+19)

  HW GEN1 and GEN2 have different minimum WQ sizes, but they are currently
  set to the same value. Use a gen-specific attribute min_hw_wq_size and
  extend the ABI to pass it to user-space.

  Signed-off-by: Sindhu Devale <sindhu.devale@intel.com>
  Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
  Link: https://lore.kernel.org/r/20230725155525.1081-3-shiraz.saleem@intel.com
  Signed-off-by: Leon Romanovsky <leon@kernel.org>

* RDMA/irdma: Allow accurate reporting on QP max send/recv WR (Sindhu Devale, 2023-07-30, 5 files, -87/+203)

  Currently the cap.max_send_wr and cap.max_recv_wr attributes sent from
  user-space during create QP are the provider-computed SQ/RQ depths
  rather than the raw values passed by the application. This prevents the
  kernel from computing accurate max_send_wr and max_recv_wr values for
  this QP that match the values returned in user create QP. These
  capabilities also need to be reported by the driver in query QP.

  Add support by extending the ABI to allow the raw cap.max_send_wr and
  cap.max_recv_wr to be passed from user-space, while keeping
  compatibility with the older scheme. The internal HW depth and shift
  needed for the WQs now have to be computed for both kernel- and
  user-mode QPs. Add new helpers to assist with this:
  irdma_uk_calc_depth_shift_sq, irdma_uk_calc_depth_shift_rq and
  irdma_uk_calc_depth_shift_wq.

  Consolidate all the user-mode QP setup into a new function
  irdma_setup_umode_qp, which keeps it alongside its counterpart
  irdma_setup_kmode_qp.

  Signed-off-by: Youvaraj Sagar <youvaraj.sagar@intel.com>
  Signed-off-by: Sindhu Devale <sindhu.devale@intel.com>
  Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
  Link: https://lore.kernel.org/r/20230725155525.1081-2-shiraz.saleem@intel.com
  Signed-off-by: Leon Romanovsky <leon@kernel.org>

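
For readers unfamiliar with these providers, such depth/shift helpers boil down
to rounding the requested WR count up to what the hardware ring can actually
hold. A generic sketch (not the irdma implementation; names and the
minimum-size clamp are illustrative):

#include <linux/log2.h>
#include <linux/minmax.h>
#include <linux/types.h>

/* Illustrative only: derive a HW queue depth and shift from user input. */
static void calc_depth_shift(u32 requested_wrs, u32 wqe_quanta,
			     u32 min_hw_wq_size, u32 *depth, u8 *shift)
{
	/* shift: ring slots (quanta) consumed per WR, as a power of two */
	*shift = ilog2(roundup_pow_of_two(wqe_quanta));

	/* depth: requested WRs scaled by the shift, rounded up to a power
	 * of two and clamped to the generation-specific HW minimum */
	*depth = roundup_pow_of_two(max(requested_wrs << *shift,
					min_hw_wq_size));
}

The computed depth is what create QP actually provisions, while the raw
max_send_wr/max_recv_wr values from the user can be stored and reported back
unchanged by query QP.
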
* RDMA/core: Get IB width and speed from netdev (Haoyue Xu, 2023-07-30, 1 file, -21/+79)

  Previously there was no way to query the number of lanes of a network
  card, so a given netdev_speed always mapped to one fixed pair of width
  and speed. As network card specifications become more diverse, such a
  fixed mapping is no longer suitable, so a method is needed to obtain
  the correct width and speed based on the number of lanes. This patch
  retrieves the netdev lane count and speed from net_device and
  translates them to IB width and speed.

  Signed-off-by: Haoyue Xu <xuhaoyue1@hisilicon.com>
  Signed-off-by: Luoyouming <luoyouming@huawei.com>
  Signed-off-by: Junxian Huang <huangjunxian6@hisilicon.com>
  Link: https://lore.kernel.org/r/20230721092052.2090449-1-huangjunxian6@hisilicon.com
  Signed-off-by: Leon Romanovsky <leon@kernel.org>

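
A sketch of the kind of translation this enables, assuming the lane count is
available from ethtool (illustrative only; the real mapping lives in ib_core
and covers more cases):

#include <rdma/ib_verbs.h>

/* Illustrative: map an ethtool lane count to an IB port width. */
static enum ib_port_width lanes_to_ib_width(u32 lanes)
{
	switch (lanes) {
	case 1:  return IB_WIDTH_1X;
	case 2:  return IB_WIDTH_2X;
	case 4:  return IB_WIDTH_4X;
	case 8:  return IB_WIDTH_8X;
	case 12: return IB_WIDTH_12X;
	default: return IB_WIDTH_1X;	/* conservative fallback */
	}
}

/* Illustrative: pick an IB speed from the per-lane rate in Mb/s. */
static enum ib_port_speed lane_speed_to_ib_speed(u32 lane_mbps)
{
	if (lane_mbps >= 100000) return IB_SPEED_NDR;	/* ~100G per lane */
	if (lane_mbps >= 50000)  return IB_SPEED_HDR;	/* ~50G per lane  */
	if (lane_mbps >= 25000)  return IB_SPEED_EDR;	/* ~25G per lane  */
	if (lane_mbps >= 14000)  return IB_SPEED_FDR;
	if (lane_mbps >= 10000)  return IB_SPEED_QDR;
	return IB_SPEED_SDR;
}

With the lane count known, width and per-lane speed can be derived
independently instead of being guessed together from the aggregate link speed.
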
* bnxt_re: Update the debug counters for doorbell pacing (Chandramohan Akula, 2023-07-30, 3 files, -0/+32)

  Add debug counters to track the Doorbell pacing events and report the
  doorbell pacing debug stats.

  Signed-off-by: Chandramohan Akula <chandramohan.akula@broadcom.com>
  Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
  Link: https://lore.kernel.org/r/1690383081-15033-5-git-send-email-selvin.xavier@broadcom.com
  Signed-off-by: Leon Romanovsky <leon@kernel.org>

* bnxt_re: Expose the missing hw counters (Chandramohan Akula, 2023-07-30, 3 files, -2/+39)

  Add code to expose some of the HW counters related to tx/rx data and
  congestion control.

  Signed-off-by: Chandramohan Akula <chandramohan.akula@broadcom.com>
  Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
  Link: https://lore.kernel.org/r/1690383081-15033-4-git-send-email-selvin.xavier@broadcom.com
  Signed-off-by: Leon Romanovsky <leon@kernel.org>

* bnxt_re: Update the hw counters for resource stats (Chandramohan Akula, 2023-07-30, 3 files, -9/+94)

  Report additional resource counters, which enable better debugging.
  These include active RC/UD QPs, watermarks of the resources, and a count
  of resize CQ operations after driver load.

  Signed-off-by: Chandramohan Akula <chandramohan.akula@broadcom.com>
  Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
  Link: https://lore.kernel.org/r/1690383081-15033-3-git-send-email-selvin.xavier@broadcom.com
  Signed-off-by: Leon Romanovsky <leon@kernel.org>

* bnxt_re: Reorganize the resource stats (Chandramohan Akula, 2023-07-30, 5 files, -42/+47)

  Move the resource stats to a separate stats structure.

  Signed-off-by: Chandramohan Akula <chandramohan.akula@broadcom.com>
  Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
  Link: https://lore.kernel.org/r/1690383081-15033-2-git-send-email-selvin.xavier@broadcom.com
  Signed-off-by: Leon Romanovsky <leon@kernel.org>

* RDMA/irdma: Cleanup and rename irdma_netdev_vlan_ipv6() (Mustafa Ismail, 2023-07-26, 3 files, -15/+12)

  The return value of irdma_netdev_vlan_ipv6() is not used. Rename the
  function and change it to return void.

  Signed-off-by: Mustafa Ismail <mustafa.ismail@intel.com>
  Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
  Link: https://lore.kernel.org/r/20230725155505.1069-5-shiraz.saleem@intel.com
  Signed-off-by: Leon Romanovsky <leon@kernel.org>

* RDMA/irdma: Add table based lookup for CQ pointer during an event (Krzysztof Czurylo, 2023-07-26, 5 files, -8/+57)

  Add a CQ table based lookup to allow a quick search for the CQ pointer
  by CQ ID when handling a CQ-related asynchronous event. The table is
  implemented in a similar fashion to the QP table.

  Also add a reference counter for the CQ, to prevent destroying a CQ
  while an asynchronous event is being processed.

  The memory resource table is sized larger with this update, and it does
  not need to be physically contiguous, so use vzalloc() instead of
  kzalloc() to allocate the table.

  Signed-off-by: Krzysztof Czurylo <krzysztof.czurylo@intel.com>
  Signed-off-by: Sindhu Devale <sindhu.devale@intel.com>
  Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
  Link: https://lore.kernel.org/r/20230725155505.1069-4-shiraz.saleem@intel.com
  Signed-off-by: Leon Romanovsky <leon@kernel.org>

* RDMA/irdma: Refactor error handling in create CQP (Sindhu Devale, 2023-07-26, 1 file, -14/+21)

  In case of a failure in irdma_create_cqp, do not call irdma_destroy_cqp;
  instead clean up all the allocated resources in reverse order. Drop the
  extra argument of irdma_destroy_cqp as it is no longer needed.

  Signed-off-by: Krzysztof Czurylo <krzysztof.czurylo@intel.com>
  Signed-off-by: Sindhu Devale <sindhu.devale@intel.com>
  Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
  Link: https://lore.kernel.org/r/20230725155505.1069-3-shiraz.saleem@intel.com
  Signed-off-by: Leon Romanovsky <leon@kernel.org>

* RDMA/irdma: Drop a local in irdma_sc_get_next_aeqe (Sindhu Devale, 2023-07-26, 1 file, -4/+1)

  Drop the local wqe_idx in irdma_sc_get_next_aeqe and instead store the
  wqe_idx in the info structure for all asynchronous events (AE) received.
  There is no reason it should be tied to a specific AE source.

  Signed-off-by: Sindhu Devale <sindhu.devale@intel.com>
  Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
  Link: https://lore.kernel.org/r/20230725155505.1069-2-shiraz.saleem@intel.com
  Signed-off-by: Leon Romanovsky <leon@kernel.org>

* IB/hfi1: Use struct_size() (Christophe JAILLET, 2023-07-24, 1 file, -6/+3)

  Use struct_size() instead of hand-writing it, when allocating a
  structure with a flex array. This is less verbose, more robust and more
  informative.

  Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Link: https://lore.kernel.org/r/f4618a67d5ae0a30eb3f2b4558c8cc790feed79a.1690044376.git.christophe.jaillet@wanadoo.fr
  Signed-off-by: Leon Romanovsky <leon@kernel.org>

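
For reference, struct_size() from <linux/overflow.h> computes the size of a
structure plus a trailing flexible array and saturates instead of wrapping on
overflow. A minimal usage sketch with a made-up structure:

#include <linux/overflow.h>
#include <linux/slab.h>
#include <linux/types.h>

struct item_list {
	u32 count;
	u64 items[];		/* flexible array member */
};

static struct item_list *alloc_items(u32 n)
{
	/* Instead of kzalloc(sizeof(*list) + n * sizeof(u64), GFP_KERNEL),
	 * whose multiplication can overflow, let struct_size() do the
	 * checked math. */
	struct item_list *list = kzalloc(struct_size(list, items, n),
					 GFP_KERNEL);

	if (list)
		list->count = n;
	return list;
}
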
* RDMA/hns: Remove VF extend configuration (Junxian Huang, 2023-07-24, 3 files, -85/+10)

  Remove the VF extend configuration since the related registers are now
  configured in firmware.

  Signed-off-by: Junxian Huang <huangjunxian6@hisilicon.com>
  Link: https://lore.kernel.org/r/20230721025146.450831-3-huangjunxian6@hisilicon.com
  Signed-off-by: Leon Romanovsky <leon@kernel.org>

* RDMA/hns: Support get XRCD number from firmware (Luoyouming, 2023-07-24, 2 files, -4/+4)

  Support getting the number of XRCDs from firmware in the driver.

  Signed-off-by: Luoyouming <luoyouming@huawei.com>
  Signed-off-by: Junxian Huang <huangjunxian6@hisilicon.com>
  Link: https://lore.kernel.org/r/20230721025146.450831-2-huangjunxian6@hisilicon.com
  Signed-off-by: Leon Romanovsky <leon@kernel.org>

* RDMA/qedr: Remove duplicate assignments of va (Minjie Du, 2023-07-23, 1 file, -1/+0)

  Avoid double assignment of iwqp->ietf_mem.va.

  Signed-off-by: Minjie Du <duminjie@vivo.com>
  Link: https://lore.kernel.org/r/20230705031849.2443-1-duminjie@vivo.com
  Signed-off-by: Leon Romanovsky <leon@kernel.org>

* RDMA/qedr: Remove a duplicate assignment in qedr_create_gsi_qp() (Minjie Du, 2023-07-23, 1 file, -1/+0)

  Delete a duplicate statement from this function implementation.

  Signed-off-by: Minjie Du <duminjie@vivo.com>
  Link: https://lore.kernel.org/r/20230705103950.15225-1-duminjie@vivo.com
  Acked-by: Alok Prasad <palok@marvell.com>
  Signed-off-by: Leon Romanovsky <leon@kernel.org>

* RDMA/bnxt_re: Add a new uapi for driver notification (Chandramohan Akula, 2023-07-21, 2 files, -0/+19)

  Add a driver-notify uapi through which an application can notify the
  driver about doorbell FIFO congestion.

  Link: https://lore.kernel.org/r/1689742977-9128-8-git-send-email-selvin.xavier@broadcom.com
  Signed-off-by: Chandramohan Akula <chandramohan.akula@broadcom.com>
  Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
  Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

* RDMA/bnxt_re: Implement doorbell pacing algorithm (Chandramohan Akula, 2023-07-21, 2 files, -0/+129)

  User applications alert the driver when the doorbell FIFO reaches the
  alarm threshold. The driver then updates the pacing parameters in the
  shared page so that applications apply maximum pacing until the DB FIFO
  congestion drops to the pacing threshold. The driver keeps checking the
  DB FIFO depth at the pacing interval and gradually adjusts the pacing
  level. Once the pacing level is back at its default value (no congestion
  in the FIFO), pacing is complete.

  Link: https://lore.kernel.org/r/1689742977-9128-7-git-send-email-selvin.xavier@broadcom.com
  Signed-off-by: Chandramohan Akula <chandramohan.akula@broadcom.com>
  Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
  Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

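
Expressed as a sketch, the pacing loop described above looks roughly like the
following. This is not the bnxt_re implementation; the structure, field names
and the halving step are illustrative:

#include <linux/minmax.h>
#include <linux/types.h>

/* Illustrative sketch of the described pacing behaviour; all names and
 * fields are made up. */
struct pacing_ctx {
	u32 *do_pacing;		/* pacing level in the page shared with apps */
	u32 max_pacing;		/* applied while the FIFO is past the alarm mark */
	u32 default_pacing;	/* level meaning "no pacing needed" */
	u32 pacing_threshold;	/* FIFO depth below which pacing may relax */
};

/* Called at each pacing interval; returns true while pacing must continue. */
static bool pacing_step(struct pacing_ctx *ctx, u32 fifo_depth)
{
	if (fifo_depth > ctx->pacing_threshold) {
		*ctx->do_pacing = ctx->max_pacing;	/* still congested */
		return true;
	}
	if (*ctx->do_pacing > ctx->default_pacing) {
		/* Congestion easing: relax the pacing level gradually. */
		*ctx->do_pacing = max(*ctx->do_pacing / 2,
				      ctx->default_pacing);
		return true;
	}
	return false;	/* back at the default level: pacing is complete */
}

A driver would call such a step from delayed work at the pacing interval until
it reports that pacing is complete.
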
* RDMA/bnxt_re: Update alloc_page uapi for pacing (Chandramohan Akula, 2023-07-21, 3 files, -3/+36)

  Update the alloc_page uapi functionality to handle mapping of the
  doorbell pacing shared page and BAR address.

  Link: https://lore.kernel.org/r/1689742977-9128-6-git-send-email-selvin.xavier@broadcom.com
  Signed-off-by: Chandramohan Akula <chandramohan.akula@broadcom.com>
  Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
  Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

* RDMA/bnxt_re: Enable pacing support for the user apps (Chandramohan Akula, 2023-07-21, 2 files, -0/+3)

  Report the pacing capability to the user applications.

  Link: https://lore.kernel.org/r/1689742977-9128-5-git-send-email-selvin.xavier@broadcom.com
  Signed-off-by: Chandramohan Akula <chandramohan.akula@broadcom.com>
  Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
  Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

* RDMA/bnxt_re: Initialize Doorbell pacing feature (Chandramohan Akula, 2023-07-21, 3 files, -0/+137)

  Check for the pacing feature capability and get the doorbell pacing
  configuration using FW commands. Allocate a page and initialize the
  pacing parameters for the applications. Clean up the page and
  de-initialize the pacing during device removal.

  Link: https://lore.kernel.org/r/1689742977-9128-4-git-send-email-selvin.xavier@broadcom.com
  Signed-off-by: Chandramohan Akula <chandramohan.akula@broadcom.com>
  Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
  Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

* bnxt_en: Share the bar0 address with the RoCE driver (Chandramohan Akula, 2023-07-21, 2 files, -1/+2)

  Add a parameter to the bnxt_en_dev structure to share the bar0 address
  with the RoCE driver.

  Link: https://lore.kernel.org/r/1689742977-9128-3-git-send-email-selvin.xavier@broadcom.com
  CC: Michael Chan <michael.chan@broadcom.com>
  Signed-off-by: Chandramohan Akula <chandramohan.akula@broadcom.com>
  Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
  Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

* bnxt_en: Update HW interface headers (Chandramohan Akula, 2023-07-21, 1 file, -0/+54)

  Update the HW structures with the doorbell pacing related information.
  The newly added interface structures will be used in follow-up patches.

  Link: https://lore.kernel.org/r/1689742977-9128-2-git-send-email-selvin.xavier@broadcom.com
  CC: Michael Chan <michael.chan@broadcom.com>
  Signed-off-by: Chandramohan Akula <chandramohan.akula@broadcom.com>
  Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
  Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

* RDMA/cma: Avoid GID lookups on iWARP devices (Chuck Lever, 2023-07-21, 1 file, -0/+21)

  We would like to enable the use of siw on top of a VPN that is
  constructed and managed via a tun device. That hasn't worked up until
  now because ARPHRD_NONE devices (such as tun devices) have no GID for
  the RDMA/core to look up.

  But it turns out that the egress device has already been picked for
  us -- no GID is necessary. addr_handler() just has to do the right
  thing with it.

  Link: https://lore.kernel.org/r/168960675257.3007.4737911174148394395.stgit@manet.1015granger.net
  Suggested-by: Jason Gunthorpe <jgg@nvidia.com>
  Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
  Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

* RDMA/cma: Deduplicate error flow in cma_validate_port() (Chuck Lever, 2023-07-21, 1 file, -5/+6)

  Clean up to prepare for the addition of new logic.

  Link: https://lore.kernel.org/r/168960674597.3007.6128252077812202526.stgit@manet.1015granger.net
  Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
  Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

* RDMA/core: Set gid_attr.ndev for iWARP devices (Chuck Lever, 2023-07-21, 1 file, -0/+11)

  Have the iWARP side properly set the ndev in the device's sgid_attrs so
  that address resolution can treat it more like a RoCE device.

  Link: https://lore.kernel.org/r/168960673933.3007.8043081822081877578.stgit@manet.1015granger.net
  Suggested-by: Jason Gunthorpe <jgg@nvidia.com>
  Reviewed-by: Tom Talpey <tom@talpey.com>
  Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
  Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

* RDMA/siw: Fabricate a GID on tun and loopback devices (Chuck Lever, 2023-07-21, 3 files, -16/+11)

  LOOPBACK and NONE (tunnel) devices have all-zero MAC addresses.
  Currently, siw_device_create() falls back to copying the IB device's
  name in those cases, because an all-zero MAC address breaks the RDMA
  core address resolution mechanism.

  However, at the point when siw_device_create() constructs a GID, the
  ib_device::name field is uninitialized, leaving the MAC address to
  remain in an all-zero state.

  Fabricate a random artificial GID for such devices, and ensure this
  artificial GID is returned for all device query operations.

  Link: https://lore.kernel.org/r/168960673260.3007.12378736853793339110.stgit@manet.1015granger.net
  Reported-by: Tom Talpey <tom@talpey.com>
  Fixes: a2d36b02c15d ("RDMA/siw: Enable siw on tunnel devices")
  Reviewed-by: Bernard Metzler <bmt@zurich.ibm.com>
  Reviewed-by: Tom Talpey <tom@talpey.com>
  Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
  Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

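
A sketch of the fabrication idea (illustrative, not the exact siw code): device
types without a usable MAC address get a random GID generated once at
device-create time, and that same GID is then reported by every query
operation.

#include <linux/etherdevice.h>
#include <linux/if_arp.h>
#include <linux/netdevice.h>
#include <linux/random.h>
#include <rdma/ib_verbs.h>

/* Illustrative: derive a stable GID for a netdev at device-create time. */
static void make_raw_gid(const struct net_device *ndev, union ib_gid *gid)
{
	memset(gid, 0, sizeof(*gid));

	if (ndev->type == ARPHRD_LOOPBACK || ndev->type == ARPHRD_NONE) {
		/* No usable MAC: fabricate a random per-device GID once. */
		get_random_bytes(gid->raw, ETH_ALEN);
	} else {
		/* Normal case: embed the MAC address in the GID. */
		memcpy(gid->raw, ndev->dev_addr, ETH_ALEN);
	}
}
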
* RDMA/bnxt_re: use vmalloc_array and vcalloc (Julia Lawall, 2023-07-21, 1 file, -2/+2)

  Use vmalloc_array and vcalloc to protect against multiplication
  overflows. The changes were done using the following Coccinelle
  semantic patch:

// <smpl>
@initialize:ocaml@
@@
let rename alloc =
  match alloc with
    "vmalloc" -> "vmalloc_array"
  | "vzalloc" -> "vcalloc"
  | _ -> failwith "unknown"

@@
size_t e1,e2;
constant C1, C2;
expression E1, E2, COUNT, x1, x2, x3;
typedef u8;
typedef __u8;
type t = {u8,__u8,char,unsigned char};
identifier alloc = {vmalloc,vzalloc};
fresh identifier realloc = script:ocaml(alloc) { rename alloc };
@@

(
      alloc(x1*x2*x3)
|
      alloc(C1 * C2)
|
      alloc((sizeof(t)) * (COUNT), ...)
|
-     alloc((e1) * (e2))
+     realloc(e1, e2)
|
-     alloc((e1) * (COUNT))
+     realloc(COUNT, e1)
|
-     alloc((E1) * (E2))
+     realloc(E1, E2)
)
// </smpl>

  Link: https://lore.kernel.org/r/20230627144339.144478-20-Julia.Lawall@inria.fr
  Signed-off-by: Julia Lawall <Julia.Lawall@inria.fr>
  Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

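
The effect on an individual call site looks like this (illustrative types and
names):

#include <linux/types.h>
#include <linux/vmalloc.h>

struct entry {
	u64 key;
	u64 val;
};

/* Illustrative before/after for one call site. */
static struct entry *alloc_table(size_t nr_entries)
{
	/* Before: vmalloc(nr_entries * sizeof(struct entry)) - the
	 * multiplication itself can overflow before vmalloc() sees it. */

	/* After: the helper checks the multiplication for overflow
	 * (use vcalloc() for the zeroed variant). */
	return vmalloc_array(nr_entries, sizeof(struct entry));
}
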
* RDMA/siw: use vmalloc_array and vcalloc (Julia Lawall, 2023-07-21, 2 files, -5/+5)

  Use vmalloc_array and vcalloc to protect against multiplication
  overflows. The changes were done using the same Coccinelle semantic
  patch as above:

// <smpl>
@initialize:ocaml@
@@
let rename alloc =
  match alloc with
    "vmalloc" -> "vmalloc_array"
  | "vzalloc" -> "vcalloc"
  | _ -> failwith "unknown"

@@
size_t e1,e2;
constant C1, C2;
expression E1, E2, COUNT, x1, x2, x3;
typedef u8;
typedef __u8;
type t = {u8,__u8,char,unsigned char};
identifier alloc = {vmalloc,vzalloc};
fresh identifier realloc = script:ocaml(alloc) { rename alloc };
@@

(
      alloc(x1*x2*x3)
|
      alloc(C1 * C2)
|
      alloc((sizeof(t)) * (COUNT), ...)
|
-     alloc((e1) * (e2))
+     realloc(e1, e2)
|
-     alloc((e1) * (COUNT))
+     realloc(COUNT, e1)
|
-     alloc((E1) * (E2))
+     realloc(E1, E2)
)
// </smpl>

  Link: https://lore.kernel.org/r/20230627144339.144478-15-Julia.Lawall@inria.fr
  Signed-off-by: Julia Lawall <Julia.Lawall@inria.fr>
  Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

* RDMA/erdma: use vmalloc_array and vcalloc (Julia Lawall, 2023-07-21, 1 file, -2/+2)

  Use vmalloc_array and vcalloc to protect against multiplication
  overflows. The changes were done using the same Coccinelle semantic
  patch as in the bnxt_re and siw patches above.

  Link: https://lore.kernel.org/r/20230627144339.144478-6-Julia.Lawall@inria.fr
  Signed-off-by: Julia Lawall <Julia.Lawall@inria.fr>
  Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

* RDMA/irdma: Fix building without IPv6 (Arnd Bergmann, 2023-07-19, 1 file, -1/+1)

  The new irdma_iw_get_vlan_prio() function requires IPv6 support to
  build:

    x86_64-linux-ld: drivers/infiniband/hw/irdma/cm.o: in function `irdma_iw_get_vlan_prio':
    cm.c:(.text+0x2832): undefined reference to `ipv6_chk_addr'

  Add a compile-time check in the same way as elsewhere in this file to
  avoid this by conditionally leaving out the ipv6 specific bits.

  Fixes: f877f22ac1e9b ("RDMA/irdma: Implement egress VLAN priority")
  Signed-off-by: Arnd Bergmann <arnd@arndb.de>
  Link: https://lore.kernel.org/r/20230718193835.3546684-1-arnd@kernel.org
  Acked-by: Shiraz Saleem <shiraz.saleem@intel.com>
  Signed-off-by: Leon Romanovsky <leon@kernel.org>

* RDMA/irdma: Implement egress VLAN priority (Mustafa Ismail, 2023-07-12, 2 files, -12/+99)

  When a VLAN interface is in use, get and use the VLAN egress mapping.

  Signed-off-by: Mustafa Ismail <mustafa.ismail@intel.com>
  Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
  Link: https://lore.kernel.org/r/20230711175318.1301-1-shiraz.saleem@intel.com
  Signed-off-by: Leon Romanovsky <leon@kernel.org>

* RDMA/qedr: Remove a duplicate assignment in irdma_query_ah() (Minjie Du, 2023-07-12, 1 file, -1/+0)

  Delete a duplicate statement from this function implementation.

  Fixes: b48c24c2d710 ("RDMA/irdma: Implement device supported verb APIs")
  Signed-off-by: Minjie Du <duminjie@vivo.com>
  Acked-by: Alok Prasad <palok@marvell.com>
  Link: https://lore.kernel.org/r/20230706022704.1260-1-duminjie@vivo.com
  Signed-off-by: Leon Romanovsky <leon@kernel.org>

* RDMA/efa: Add RDMA write HW statistics counters (Michael Margolin, 2023-07-12, 4 files, -2/+47)

  Update the device API and request RDMA write counters if RDMA write is
  supported by the device. Expose the newly added counters through the ib
  core counters mechanism.

  Reviewed-by: Daniel Kranzdorf <dkkranzd@amazon.com>
  Reviewed-by: Yonatan Nachum <ynachum@amazon.com>
  Signed-off-by: Michael Margolin <mrgolin@amazon.com>
  Link: https://lore.kernel.org/r/20230703153404.30877-1-mrgolin@amazon.com
  Reviewed-by: Gal Pressman <gal.pressman@linux.dev>
  Signed-off-by: Leon Romanovsky <leon@kernel.org>

* RDMA/mlx5: align MR mem allocation size to power-of-two (Yuanyuan Zhong, 2023-07-12, 1 file, -0/+5)

  The MR memory allocation requests extra bytes to guarantee that there
  is enough space to find the memory aligned to MLX5_UMR_ALIGN.

  For power-of-two sizes, the alignment can be guaranteed by kmalloc()
  according to commit 59bb47985c1d ("mm, sl[aou]b: guarantee natural
  alignment for kmalloc(power-of-two)").

  So if the target alignment is power-of-two and adding the extra bytes
  crosses a power-of-two boundary, use the next power-of-two as the
  allocation size.

  Signed-off-by: Yuanyuan Zhong <yzhong@purestorage.com>
  Link: https://lore.kernel.org/r/20230629213248.3184245-2-yzhong@purestorage.com
  Signed-off-by: Leon Romanovsky <leon@kernel.org>

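
A sketch of the sizing decision described above (illustrative; not the exact
mlx5 code, and it assumes the rounded-up size is at least as large as the
required alignment):

#include <linux/log2.h>
#include <linux/types.h>

/*
 * Illustrative: pick an allocation size that still allows aligning the
 * result to 'align' inside the buffer.
 */
static size_t mr_alloc_size(size_t size, size_t align)
{
	size_t padded = size + align - 1;	/* room to align within the buffer */

	/*
	 * If the padding pushes the request past the next power-of-two
	 * boundary anyway, allocate exactly that power of two instead:
	 * kmalloc() naturally aligns power-of-two sizes, so no padding
	 * is needed at all.
	 */
	if (is_power_of_2(align) && roundup_pow_of_two(size) < padded)
		return roundup_pow_of_two(size);

	return padded;
}
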
* Linux 6.5-rc1 (tag: v6.5-rc1) (Linus Torvalds, 2023-07-09, 1 file, -2/+2)

* MAINTAINERS 2: Electric Boogaloo (Linus Torvalds, 2023-07-09, 1 file, -46/+46)

  We just sorted the entries and fields last release, so just out of a
  perverse sense of curiosity, I decided to see if we can keep things
  ordered for even just one release.

  The answer is "No. No we cannot".

  I suggest that all kernel developers will need weekly training sessions,
  involving a lot of Big Bird and Sesame Street. And at the yearly
  maintainer summit, we will all sing the alphabet song together.

  I doubt I will keep doing this. At some point "perverse sense of
  curiosity" turns into just a cold dark place filled with sadness and
  despair.

  Repeats: 80e62bc8487b ("MAINTAINERS: re-sort all entries and fields")
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

* Merge tag 'dma-mapping-6.5-2023-07-09' of git://git.infradead.org/users/hch/dma-mapping (Linus Torvalds, 2023-07-09, 1 file, -11/+35)

  Pull dma-mapping fixes from Christoph Hellwig:

   - swiotlb area sizing fixes (Petr Tesarik)

  * tag 'dma-mapping-6.5-2023-07-09' of git://git.infradead.org/users/hch/dma-mapping:
    swiotlb: reduce the number of areas to match actual memory pool size
    swiotlb: always set the number of areas before allocating the pool

| * swiotlb: reduce the number of areas to match actual memory pool size (Petr Tesarik, 2023-06-29, 1 file, -3/+24)

    Although the desired size of the SWIOTLB memory pool is increased in
    swiotlb_adjust_nareas() to match the number of areas, the actual
    allocation may be smaller, which may require reducing the number of
    areas.

    For example, Xen uses swiotlb_init_late(), which in turn uses the page
    allocator. On x86, page size is 4 KiB and MAX_ORDER is 10 (1024 pages),
    resulting in a maximum memory pool size of 4 MiB. This corresponds to
    2048 slots of 2 KiB each. The minimum area size is 128 (IO_TLB_SEGSIZE),
    allowing at most 2048 / 128 = 16 areas.

    If num_possible_cpus() is greater than the maximum number of areas,
    areas are smaller than IO_TLB_SEGSIZE and contiguous groups of free
    slots will span multiple areas. When allocating and freeing slots, only
    one area will be properly locked, causing race conditions on the
    unlocked slots and ultimately data corruption, kernel hangs and crashes.

    Fixes: 20347fca71a3 ("swiotlb: split up the global swiotlb lock")
    Signed-off-by: Petr Tesarik <petr.tesarik.ext@huawei.com>
    Reviewed-by: Roberto Sassu <roberto.sassu@huawei.com>
    Signed-off-by: Christoph Hellwig <hch@lst.de>

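
The arithmetic in the description reduces to a simple clamp; a sketch
(illustrative, not the swiotlb code):

#include <linux/log2.h>
#include <linux/minmax.h>
#include <linux/swiotlb.h>	/* IO_TLB_SHIFT (2 KiB slots), IO_TLB_SEGSIZE (128) */

/* Illustrative: the largest area count a pool of 'bytes' can support. */
static unsigned int max_areas_for_pool(unsigned long bytes, unsigned int want)
{
	unsigned long nslabs = bytes >> IO_TLB_SHIFT;		/* 4 MiB -> 2048 slots */
	unsigned long max_areas = nslabs / IO_TLB_SEGSIZE;	/* 2048 / 128 = 16   */

	return min_t(unsigned long, want, max_areas);
}
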
| * swiotlb: always set the number of areas before allocating the pool (Petr Tesarik, 2023-06-29, 1 file, -8/+11)

    The number of areas defaults to the number of possible CPUs. However,
    the total number of slots may have to be increased after adjusting the
    number of areas. Consequently, the number of areas must be determined
    before allocating the memory pool. This is even explained with a
    comment in swiotlb_init_remap(), but swiotlb_init_late() adjusts the
    number of areas after slots are already allocated. The areas may end
    up being smaller than IO_TLB_SEGSIZE, which breaks per-area locking.

    While fixing swiotlb_init_late(), move all relevant comments before the
    definition of swiotlb_adjust_nareas() and convert them to kernel-doc.

    Fixes: 20347fca71a3 ("swiotlb: split up the global swiotlb lock")
    Signed-off-by: Petr Tesarik <petr.tesarik.ext@huawei.com>
    Reviewed-by: Roberto Sassu <roberto.sassu@huawei.com>
    Signed-off-by: Christoph Hellwig <hch@lst.de>

* | Merge tag 'irq_urgent_for_v6.5_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip (Linus Torvalds, 2023-07-09, 1 file, -3/+1)

  Pull irq update from Borislav Petkov:

   - Optimize IRQ domain's name assignment

  * tag 'irq_urgent_for_v6.5_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
    irqdomain: Use return value of strreplace()

| * | irqdomain: Use return value of strreplace() (Andy Shevchenko, 2023-06-30, 1 file, -3/+1)

      Since strreplace() returns the pointer to the string itself, use it
      directly.

      Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Link: https://lore.kernel.org/r/20230628150251.17832-1-andriy.shevchenko@linux.intel.com

* | | Merge tag 'x86_urgent_for_v6.5_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip (Linus Torvalds, 2023-07-09, 1 file, -0/+1)

  Pull x86 fpu fix from Borislav Petkov:

   - Do FPU AP initialization on Xen PV too which got missed by the
     recent boot reordering work

  * tag 'x86_urgent_for_v6.5_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
    x86/xen: Fix secondary processors' FPU initialization

| * | | x86/xen: Fix secondary processors' FPU initialization (Juergen Gross, 2023-07-05, 1 file, -0/+1)

        Moving the call of fpu__init_cpu() from cpu_init() to
        start_secondary() broke Xen PV guests, as those don't call
        start_secondary() for APs.

        Call fpu__init_cpu() in Xen's cpu_bringup(), which is the Xen PV
        replacement of start_secondary().

        Fixes: b81fac906a8f ("x86/fpu: Move FPU initialization into arch_cpu_finalize_init()")
        Signed-off-by: Juergen Gross <jgross@suse.com>
        Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
        Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
        Acked-by: Thomas Gleixner <tglx@linutronix.de>
        Link: https://lore.kernel.org/r/20230703130032.22916-1-jgross@suse.com

* | | | Merge tag 'x86-core-2023-07-09' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip (Linus Torvalds, 2023-07-09, 1 file, -0/+8)

  Pull x86 fix from Thomas Gleixner:
   "A single fix for the mechanism to park CPUs with an INIT IPI.

    On shutdown or kexec, the kernel tries to park the non-boot CPUs with
    an INIT IPI. But the same code path is also used by the crash utility.
    If the CPU which panics is not the boot CPU then it sends an INIT IPI
    to the boot CPU which resets the machine.

    Prevent this by validating that the CPU which runs the stop mechanism
    is the boot CPU. If not, leave the other CPUs in HLT"

  * tag 'x86-core-2023-07-09' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
    x86/smp: Don't send INIT to boot CPU