path: root/drivers
Commit message (Author, Date, Files, Lines)
* RDMA/mlx5: Fix MR cache memory leak (Maor Gottlieb, 2020-12-14, 1 file, -0/+1)

    If the MR cache entry invalidation failed, then we detach this entry
    from the cache, therefore we must free the memory as well.

    Allocation backtrace for the leaked memory:

    [<00000000d8e423b0>] alloc_cache_mr+0x23/0xc0 [mlx5_ib]
    [<000000001f21304c>] create_cache_mr+0x3f/0xf0 [mlx5_ib]
    [<000000009d6b45dc>] mlx5_ib_alloc_implicit_mr+0x41/0x210 [mlx5_ib]
    [<00000000879d0d68>] mlx5_ib_reg_user_mr+0x9e/0x6e0 [mlx5_ib]
    [<00000000be74bf89>] create_qp+0x2fc/0xf00 [ib_uverbs]
    [<000000001a532d22>] ib_uverbs_handler_UVERBS_METHOD_COUNTERS_READ+0x1d9/0x230 [ib_uverbs]
    [<0000000070f46001>] rdma_alloc_commit_uobject+0xb5/0x120 [ib_uverbs]
    [<000000006d8a0b38>] uverbs_alloc+0x2b/0xf0 [ib_uverbs]
    [<00000000075217c9>] ksys_ioctl+0x234/0x7d0
    [<00000000eb5c120b>] __x64_sys_ioctl+0x16/0x20
    [<00000000db135b48>] do_syscall_64+0x59/0x2e0

    Fixes: 1769c4c57548 ("RDMA/mlx5: Always remove MRs from the cache before destroying them")
    Link: https://lore.kernel.org/r/20201213132940.345554-2-leon@kernel.org
    Signed-off-by: Maor Gottlieb <maorg@nvidia.com>
    Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
* RDMA/rxe: Use acquire/release for memory ordering (Bob Pearson, 2020-12-12, 3 files, -50/+60)

    Change work and completion queues to use smp_load_acquire() and
    smp_store_release() to synchronize between driver and users. This
    commit goes with a matching series of commits in the rxe user space
    provider.

    Link: https://lore.kernel.org/r/20201210174258.5234-1-rpearson@hpe.com
    Signed-off-by: Bob Pearson <rpearson@hpe.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
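    A minimal sketch of the acquire/release pairing described above,
    using hypothetical ring indices rather than the actual rxe queue
    layout:

        struct ring {
                u32 prod;       /* written by producer only */
                u32 cons;       /* written by consumer only */
        };

        /* Producer: order the stores that filled the entry before
         * the index update becomes visible to the consumer. */
        static void ring_post(struct ring *r)
        {
                smp_store_release(&r->prod, r->prod + 1);
        }

        /* Consumer: pairs with the release above; if the new index
         * is seen, the entry it covers is seen as well. */
        static bool ring_has_work(struct ring *r)
        {
                return smp_load_acquire(&r->prod) != r->cons;
        }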
* RDMA/hns: Simplify AEQE process for different types of queue (Yixian Liu, 2020-12-11, 3 files, -62/+23)

    There is no need to get the queue number repeatedly for different
    queues from an AEQE entity, as they are the same. Furthermore,
    redefine the AEQE structure to make the code more readable. In
    addition, HNS_ROCE_EVENT_TYPE_CEQ_OVERFLOW is removed because the
    hardware never reports this event.

    Link: https://lore.kernel.org/r/1607650657-35992-12-git-send-email-liweihang@huawei.com
    Signed-off-by: Yixian Liu <liuyixian@huawei.com>
    Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
    Signed-off-by: Weihang Li <liweihang@huawei.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
* RDMA/hns: Fix inaccurate prints (Yixing Liu, 2020-12-11, 8 files, -90/+107)

    Some %d in print format strings should be %u, and some prints are
    missing the useful errno or are in a nonstandard format. Fix the
    above issues.

    Link: https://lore.kernel.org/r/1607650657-35992-11-git-send-email-liweihang@huawei.com
    Signed-off-by: Yixing Liu <liuyixing1@huawei.com>
    Signed-off-by: Weihang Li <liweihang@huawei.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
* RDMA/hns: Fix incorrect symbol types (Wenpeng Liang, 2020-12-11, 10 files, -73/+73)

    Types of some fields, variables and parameters of some functions
    should be unsigned.

    Link: https://lore.kernel.org/r/1607650657-35992-10-git-send-email-liweihang@huawei.com
    Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
    Signed-off-by: Weihang Li <liweihang@huawei.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
* RDMA/hns: Clear redundant variable initialization (Xinhao Liu, 2020-12-11, 2 files, -5/+5)

    There is no need to initialize some variables because they will be
    assigned values later.

    Link: https://lore.kernel.org/r/1607650657-35992-9-git-send-email-liweihang@huawei.com
    Signed-off-by: Xinhao Liu <liuxinhao5@hisilicon.com>
    Signed-off-by: Weihang Li <liweihang@huawei.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
* RDMA/hns: Fix coding style issues (Lang Cheng, 2020-12-11, 13 files, -51/+43)

    Just format the code without modifying anything: fix redundant and
    missing blanks and spaces, and change the variable definition order.

    Link: https://lore.kernel.org/r/1607650657-35992-8-git-send-email-liweihang@huawei.com
    Signed-off-by: Lang Cheng <chenglang@huawei.com>
    Signed-off-by: Weihang Li <liweihang@huawei.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
* RDMA/hns: Remove unnecessary access right set during INIT2INIT (Yixian Liu, 2020-12-11, 1 file, -46/+0)

    As the QP access rights are checked and set in the common function
    hns_roce_v2_set_opt_fields(), there is no need to set them again for
    the special INIT2INIT case.

    Fixes: 926a01dc000d ("RDMA/hns: Add QP operations support for hip08 SoC")
    Fixes: 7db82697b8bf ("RDMA/hns: Add support for extended atomic in userspace")
    Link: https://lore.kernel.org/r/1607650657-35992-7-git-send-email-liweihang@huawei.com
    Signed-off-by: Yixian Liu <liuyixian@huawei.com>
    Signed-off-by: Weihang Li <liweihang@huawei.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
* RDMA/hns: WARN_ON if get a reserved sl from users (Weihang Li, 2020-12-11, 1 file, -0/+4)

    According to the RoCE v1 specification, sl (service level) values
    0-7 are mapped directly to priorities 0-7 respectively, while sl
    8-15 are reserved. The driver should verify whether the value of sl
    is larger than 7; if so, an error should be returned.

    Link: https://lore.kernel.org/r/1607650657-35992-6-git-send-email-liweihang@huawei.com
    Signed-off-by: Weihang Li <liweihang@huawei.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
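    A minimal sketch of such a check (variable names are hypothetical,
    not the actual hns code):

        /* sl 8-15 are reserved in RoCE v1; reject them loudly. */
        if (WARN_ON(sl > 7))
                return -EINVAL;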
* RDMA/hns: Avoid filling sl in high 3 bits of vlan_id (Weihang Li, 2020-12-11, 1 file, -10/+1)

    Only the low 12 bits of vlan_id are valid, and the service level has
    already been filled in the Address Vector, so there is no need to
    fill sl into the high bits of vlan_id in the Address Vector.

    Fixes: 7406c0036f85 ("RDMA/hns: Only record vlan info for HIP08")
    Link: https://lore.kernel.org/r/1607650657-35992-5-git-send-email-liweihang@huawei.com
    Signed-off-by: Weihang Li <liweihang@huawei.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
* RDMA/hns: Do shift on traffic class when using RoCEv2 (Weihang Li, 2020-12-11, 3 files, -8/+12)

    The high 6 bits of the traffic class in the GRH are the DSCP
    (Differentiated Services Code Point), so the driver should shift the
    traffic class before the hardware gets it when using RoCEv2.

    Fixes: 606bf89e98ef ("RDMA/hns: Refactor for hns_roce_v2_modify_qp function")
    Fixes: fba429fcf9a5 ("RDMA/hns: Fix missing fields in address vector")
    Link: https://lore.kernel.org/r/1607650657-35992-4-git-send-email-liweihang@huawei.com
    Signed-off-by: Weihang Li <liweihang@huawei.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
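    The traffic class byte is laid out as DSCP (high 6 bits) plus ECN
    (low 2 bits), so hardware expecting a raw DSCP value needs a shift.
    A sketch, not the actual hns code:

        /* traffic_class = DSCP:6 | ECN:2, so DSCP = value >> 2 */
        u8 dscp = grh->traffic_class >> 2;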
* RDMA/hns: Normalization the judgment of some features (Wenpeng Liang, 2020-12-11, 3 files, -7/+7)

    Whether these features are enabled should depend on the enable
    flags, not on the values of the related fields.

    Fixes: 5c1f167af112 ("RDMA/hns: Init SRQ table for hip08")
    Fixes: 3cb2c996c9dc ("RDMA/hns: Add support for SCCC in size of 64 Bytes")
    Link: https://lore.kernel.org/r/1607650657-35992-3-git-send-email-liweihang@huawei.com
    Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
    Signed-off-by: Weihang Li <liweihang@huawei.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
* RDMA/hns: Limit the length of data copied between kernel and userspace (Wenpeng Liang, 2020-12-11, 5 files, -16/+22)

    For ib_copy_from_user(), the length of udata may not be the same as
    that of cmd. For ib_copy_to_user(), the length of udata may not be
    the same as that of resp. So limit the length to prevent
    out-of-bounds read and write operations from ib_copy_from_user()
    and ib_copy_to_user().

    Fixes: de77503a5940 ("RDMA/hns: RDMA/hns: Assign rq head pointer when enable rq record db")
    Fixes: 633fb4d9fdaa ("RDMA/hns: Use structs to describe the uABI instead of opencoding")
    Fixes: ae85bf92effc ("RDMA/hns: Optimize qp param setup flow")
    Fixes: 6fd610c5733d ("RDMA/hns: Support 0 hop addressing for SRQ buffer")
    Fixes: 9d9d4ff78884 ("RDMA/hns: Update the kernel header file of hns")
    Link: https://lore.kernel.org/r/1607650657-35992-2-git-send-email-liweihang@huawei.com
    Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
    Signed-off-by: Weihang Li <liweihang@huawei.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
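    The usual pattern is to clamp the copy to both the kernel structure
    size and the user buffer length; a sketch using the in-kernel
    ib_copy_from_udata() helper, with a hypothetical command struct:

        struct hypothetical_ucmd ucmd = {};
        int ret;

        /* Never read past the user buffer or the kernel struct. */
        ret = ib_copy_from_udata(&ucmd, udata,
                                 min(udata->inlen, sizeof(ucmd)));
        if (ret)
                return ret;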
* RDMA/mlx4: Remove bogus dev_base_lock usage (Vladimir Oltean, 2020-12-10, 1 file, -3/+0)

    It is not clear what this lock protects. If the authors wanted to
    ensure that "dev" does not disappear, that is impossible, given the
    following code path:

    mlx4_ib_netdev_event (under RTNL mutex)
    -> mlx4_ib_scan_netdevs
       -> mlx4_ib_update_qps

    Also, the dev_base_lock does not protect dev->dev_addr either, so it
    serves no purpose here. Remove it.

    Link: https://lore.kernel.org/r/20201208193928.1500893-1-vladimir.oltean@nxp.com
    Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
    Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
* RDMA/uverbs: Fix incorrect variable type (Avihai Horon, 2020-12-10, 1 file, -9/+5)

    Fix the incorrect type of max_entries in
    UVERBS_METHOD_QUERY_GID_TABLE: max_entries is of type size_t
    although it can take negative values. The following static check
    revealed it:

    drivers/infiniband/core/uverbs_std_types_device.c:338
    ib_uverbs_handler_UVERBS_METHOD_QUERY_GID_TABLE()
    warn: 'max_entries' unsigned <= 0

    Fixes: 9f85cbe50aa0 ("RDMA/uverbs: Expose the new GID query API to user space")
    Link: https://lore.kernel.org/r/20201208073545.9723-4-leon@kernel.org
    Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
    Signed-off-by: Avihai Horon <avihaih@nvidia.com>
    Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
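    The underlying pitfall: a possibly-negative return value stored in
    an unsigned type makes the error check unreachable. An illustrative
    sketch (the helper name is hypothetical):

        ssize_t max_entries;    /* was size_t: '<= 0' never triggered */

        max_entries = get_user_array_size(attrs); /* may be -EINVAL */
        if (max_entries <= 0)
                return -EINVAL;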
* RDMA/core: Do not indicate device ready when device enablement fails (Jack Morgenstein, 2020-12-10, 1 file, -3/+4)

    In procedure ib_register_device, procedure kobject_uevent is called
    (advertising that the device is ready for userspace usage) even when
    device_enable_and_get() returned an error. As a result, various RDMA
    modules attempted to register for the device even while the device
    driver was preparing to unregister the device. Fix this by
    advertising the device availability only after enabling the device
    succeeds.

    Fixes: e7a5b4aafd82 ("RDMA/device: Don't fire uevent before device is fully initialized")
    Link: https://lore.kernel.org/r/20201208073545.9723-3-leon@kernel.org
    Suggested-by: Leon Romanovsky <leonro@mellanox.com>
    Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
    Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
* RDMA/core: Clean up cq pool mechanism (Jack Morgenstein, 2020-12-10, 3 files, -15/+9)

    The CQ pool mechanism had two problems:

    1. The CQ pool lists were uninitialized in the device registration
       error flow. As a result, all the list pointers remained NULL.
       This caused the kernel to crash (in procedure ib_cq_pool_destroy)
       when that error flow was taken (and unregister called). The stack
       trace snippet:

       BUG: kernel NULL pointer dereference, address: 0000000000000000
       #PF: supervisor read access in kernel mode
       #PF: error_code(0x0000) - not-present page
       PGD 0 P4D 0
       Oops: 0000 [#1] SMP PTI
       ...
       RIP: 0010:ib_cq_pool_destroy+0x1b/0x70 [ib_core]
       ...
       Call Trace:
        disable_device+0x9f/0x130 [ib_core]
        __ib_unregister_device+0x35/0x90 [ib_core]
        ib_register_device+0x529/0x610 [ib_core]
        __mlx5_ib_add+0x3a/0x70 [mlx5_ib]
        mlx5_add_device+0x87/0x1c0 [mlx5_core]
        mlx5_register_interface+0x74/0xc0 [mlx5_core]
        do_one_initcall+0x4b/0x1f4
        do_init_module+0x5a/0x223
        load_module+0x1938/0x1d40

    2. At device unregister, when cleaning up the cq pool, the cq's in
       the pool lists were freed, but the cq entries were left in the
       list.

    The fix for the first issue is to initialize the cq pool lists when
    the ib_device structure is allocated for a new device (in procedure
    _ib_alloc_device). The fix for the second problem is to delete cq
    entries from the pool lists when cleaning up the cq pool. In
    addition, procedure ib_cq_pool_destroy() is renamed to the more
    appropriate name ib_cq_pool_cleanup().

    Fixes: 4aa1615268a8 ("RDMA/core: Fix ordering of CQ pool destruction")
    Link: https://lore.kernel.org/r/20201208073545.9723-2-leon@kernel.org
    Suggested-by: Jason Gunthorpe <jgg@nvidia.com>
    Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
    Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
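    A sketch of the two fixes, using the real list helpers but
    illustrative field names:

        struct ib_cq *cq, *n;
        unsigned int i;

        /* 1. Initialize the pool lists when the ib_device is
         *    allocated, so cleanup is safe on any error path. */
        for (i = 0; i < ARRAY_SIZE(dev->cq_pools); i++)
                INIT_LIST_HEAD(&dev->cq_pools[i]);

        /* 2. On cleanup, unlink each CQ before freeing it. */
        list_for_each_entry_safe(cq, n, &dev->cq_pools[i], pool_entry) {
                list_del(&cq->pool_entry);      /* previously missing */
                ib_free_cq(cq);
        }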
* RDMA/core: Update kernel documentation for ib_create_named_qp() (Lukas Bulwahn, 2020-12-10, 1 file, -0/+1)

    Commit 66f57b871efc ("RDMA/restrack: Support all QP types") extends
    ib_create_qp() to a named ib_create_named_qp(), which takes the
    caller's name as an argument, but it did not add the new argument's
    description to the function's kerneldoc.

    make htmldocs warns:

    ./drivers/infiniband/core/verbs.c:1206: warning: Function parameter
    or member 'caller' not described in 'ib_create_named_qp'

    Add a description for this new argument based on the description of
    the same argument in other related functions.

    Fixes: 66f57b871efc ("RDMA/restrack: Support all QP types")
    Link: https://lore.kernel.org/r/20201207173255.13355-1-lukas.bulwahn@gmail.com
    Signed-off-by: Lukas Bulwahn <lukas.bulwahn@gmail.com>
    Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
* RDMA/mlx5: Remove unneeded semicolon (Tom Rix, 2020-12-10, 1 file, -1/+1)

    A semicolon is not needed after a switch statement.

    Link: https://lore.kernel.org/r/20201031134638.2135060-1-trix@redhat.com
    Signed-off-by: Tom Rix <trix@redhat.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
* RDMA/iser: Remove in_interrupt() usage (Sebastian Andrzej Siewior, 2020-12-07, 1 file, -17/+5)

    iser_initialize_task_headers() uses in_interrupt() to find out if it
    is safe to acquire a mutex. in_interrupt() is deprecated as it is
    ill defined and does not provide what it suggests. Aside from that,
    it covers only parts of the contexts in which a mutex may not be
    acquired.

    The following callchains exist:

    iscsi_queuecommand() *locks* iscsi_session::frwd_lock
    -> iscsi_prep_scsi_cmd_pdu()
       -> session->tt->init_task() (iscsi_iser_task_init())
          -> iser_initialize_task_headers()

    -> iscsi_iser_task_xmit() (iscsi_transport::xmit_task)
       -> iscsi_iser_task_xmit_unsol_data()
          -> iser_send_data_out()
             -> iser_initialize_task_headers()

    iscsi_data_xmit() *locks* iscsi_session::frwd_lock
    -> iscsi_prep_mgmt_task()
       -> session->tt->init_task() (iscsi_iser_task_init())
          -> iser_initialize_task_headers()

    -> iscsi_prep_scsi_cmd_pdu()
       -> session->tt->init_task() (iscsi_iser_task_init())
          -> iser_initialize_task_headers()

    __iscsi_conn_send_pdu() caller has iscsi_session::frwd_lock
    -> iscsi_prep_mgmt_task()
       -> session->tt->init_task() (iscsi_iser_task_init())
          -> iser_initialize_task_headers()
    -> session->tt->xmit_task()

    The only callchain that is close to being invoked in preemptible
    context:

    iscsi_xmitworker() worker
    -> iscsi_data_xmit()
       -> iscsi_xmit_task()
          -> conn->session->tt->xmit_task() (iscsi_iser_task_xmit())

    In iscsi_iser_task_xmit() there is this check:

        if (!task->sc)
                return iscsi_iser_mtask_xmit(conn, task);

    so it does end up in iser_initialize_task_headers(), and
    iser_initialize_task_headers() relies on iscsi_task::sc == NULL.

    Remove the conditional locking of iser_conn::state_mutex because
    there is no call chain that requires it. Remove the goto label and
    return early now that there is no cleanup needed.

    Link: https://lore.kernel.org/r/20201204174256.62xfcvudndt7oufl@linutronix.de
    Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
    Cc: Sagi Grimberg <sagi@grimberg.me>
    Cc: Max Gurtovoy <maxg@nvidia.com>
    Cc: Doug Ledford <dledford@redhat.com>
    Cc: Jason Gunthorpe <jgg@ziepe.ca>
    Cc: linux-rdma@vger.kernel.org
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
* RDMA/mlx5: Assign dev to DM MR (Maor Gottlieb, 2020-12-07, 4 files, -30/+35)

    Currently, the DM MR registration flow doesn't set the mlx5_ib_dev
    pointer and can cause a NULL pointer dereference if userspace dumps
    the MR via the rdma tool. Assign the IB device together with the
    other fields and remove the redundant reference of mlx5_ib_dev from
    mlx5_ib_mr.

    Cc: stable@vger.kernel.org
    Fixes: 6c29f57ea475 ("IB/mlx5: Device memory mr registration support")
    Link: https://lore.kernel.org/r/20201203190807.127189-1-leon@kernel.org
    Signed-off-by: Maor Gottlieb <maorg@nvidia.com>
    Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
* RDMA/hns: Move capability flags of QP and CQ to hns-abi.h (Weihang Li, 2020-12-07, 6 files, -15/+1)

    These flags will be returned to userspace through the ABI, so they
    should be defined in hns-abi.h. Furthermore, there is no need to
    include hns-abi.h in every source file; it just needs to be included
    in the common header file.

    Link: https://lore.kernel.org/r/1606872560-17823-1-git-send-email-liweihang@huawei.com
    Reported-by: kernel test robot <lkp@intel.com>
    Signed-off-by: Weihang Li <liweihang@huawei.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
* IB: Fix kernel-doc markups (Mauro Carvalho Chehab, 2020-12-07, 12 files, -23/+25)

    Some functions have different names between their prototypes and the
    kernel-doc markup. Others need to be fixed, as kernel-doc markups
    should use this format:

        identifier - description

    Link: https://lore.kernel.org/r/78b98c41a5a0f4c0106433d305b143028a4168b0.1606823973.git.mchehab+huawei@kernel.org
    Signed-off-by: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
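    For reference, the expected kernel-doc shape (the function here is
    hypothetical):

        /**
         * example_create_thing - allocate and initialize a thing
         * @dev: device the thing is bound to
         *
         * The "identifier - description" line must name the function
         * exactly as it appears in the prototype.
         */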
* RDMA/bnxt_re: Fix max_qp_wrs reported (Selvin Xavier, 2020-12-07, 1 file, -1/+1)

    While creating QPs, the driver adds one extra entry to the SQ size
    passed by the ULPs in order to avoid a queue-full condition. When a
    ULP creates a QP with the reported max_qp_wr, the driver ends up
    creating a QP with one more than the max_wqes supported by the HW,
    and QP creation fails. To avoid this error, subtract one entry from
    max_qp_wqes before reporting it to the stack.

    Link: https://lore.kernel.org/r/1606741986-16477-1-git-send-email-selvin.xavier@broadcom.com
    Signed-off-by: Devesh Sharma <devesh.sharma@broadcom.com>
    Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
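    In effect, the driver reserves its internal extra slot up front; a
    sketch with illustrative names:

        /* Leave room for the driver's extra SQ entry, so a ULP asking
         * for the advertised maximum still fits within hw_max_wqes. */
        attr->max_qp_wqes = hw_max_wqes - 1;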
* RDMA/i40iw: Replace atomic_add_return(1, ..) (Yejune Deng, 2020-12-07, 1 file, -3/+3)

    atomic_inc_return() is a little neater.

    Link: https://lore.kernel.org/r/1606726376-7675-1-git-send-email-yejune.deng@gmail.com
    Signed-off-by: Yejune Deng <yejune.deng@gmail.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
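    The change is purely mechanical, along these lines (the counter
    name is hypothetical):

        -       seq = atomic_add_return(1, &counter);
        +       seq = atomic_inc_return(&counter);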
*   Merge tag 'mlx5-next-2020-12-02' of git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux (Jason Gunthorpe, 2020-12-07, 15 files, -86/+150)
|\
    Saeed Mahameed says:

    ====================
    mlx5-next-2020-12-02

    Low level mlx5 updates required by both netdev and rdma trees:

    net/mlx5: Treat host PF vport as other (non eswitch manager) vport
    net/mlx5: Enable host PF HCA after eswitch is initialized
    net/mlx5: Rename peer_pf to host_pf
    net/mlx5: Make API mlx5_core_is_ecpf accept const pointer
    net/mlx5: Export steering related functions
    net/mlx5: Expose other function ifc bits
    net/mlx5: Expose IP-in-IP TX and RX capability bits
    net/mlx5: Update the hardware interface definition for vhca state
    net/mlx5: Update the list of the PCI supported devices
    net/mlx5: Avoid exposing driver internal command helpers
    net/mlx5: Add ts_cqe_to_dest_cqn related bits
    net/mlx5: Add misc4 to mlx5_ifc_fte_match_param_bits
    net/mlx5: Check dr mask size against mlx5_match_param size
    net/mlx5: Add sampler destination type
    net/mlx5: Add sample offload hardware bits and structures
    ====================

    Link: https://lore.kernel.org/r/20201203011010.213440-1-saeedm@nvidia.com
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
| * net/mlx5: Treat host PF vport as other (non eswitch manager) vport (Parav Pandit, 2020-11-27, 3 files, -41/+32)

      When the eswitch manager is running on the ECPF, the host PF
      should be treated as a non eswitch manager port, similar to other
      VF vports. Failing to do so results in firmware treating the PF's
      vport as the ECPF vport for eswitch ACL tables. A non-zero check
      to figure out whether a given vport is an "other" vport is not
      sufficient because the PF vport number is 0 on the ECPF. Hence,
      create the esw ACL tables with an "other vport" attribute.

      Signed-off-by: Parav Pandit <parav@nvidia.com>
      Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
| * net/mlx5: Enable host PF HCA after eswitch is initialized (Parav Pandit, 2020-11-27, 4 files, -19/+66)

      Currently the ECPF enables the external host PF too early in the
      initialization sequence for Ethernet links when the ECPF is the
      eswitch manager. Due to this, when the external host PF driver is
      loaded, the host PF's HCA CAP has inner_ip_version supported by
      the NIC RX flow table. This capability is later updated by
      firmware after the ECPF driver enables ENCAP/DECAP as eswitch
      manager. This results in a timing race condition, where the
      CREATE_TIR command fails with the below syndrome on the host PF:

      mlx5_cmd_check:775:(pid 510): CREATE_TIR(0x900) op_mod(0x0)
      failed, status bad parameter(0x3), syndrome (0x562b00)

      Hence, enable the external host PF after the necessary eswitch and
      per-vport initialization is completed. Continue to enable the host
      PF when the eswitch manager capability is off for an ECPF.

      Signed-off-by: Parav Pandit <parav@nvidia.com>
      Reviewed-by: Bodong Wang <bodong@nvidia.com>
      Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
| * net/mlx5: Rename peer_pf to host_pf (Parav Pandit, 2020-11-27, 2 files, -24/+39)

      To match the hardware spec, rename peer_pf to host_pf.

      Signed-off-by: Parav Pandit <parav@nvidia.com>
      Reviewed-by: Bodong Wang <bodong@nvidia.com>
      Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
| * net/mlx5: Export steering related functions (Eli Cohen, 2020-11-27, 1 file, -0/+3)

      Export:
      mlx5_create_flow_table()
      mlx5_create_flow_group()
      mlx5_destroy_flow_group()

      These symbols are required by the VDPA implementation to create
      rules that consume VDPA specific traffic. We do not deal with
      putting the prototypes in a header file since they already exist
      in include/linux/mlx5/fs.h.

      Signed-off-by: Eli Cohen <elic@nvidia.com>
      Reviewed-by: Mark Bloch <mbloch@nvidia.com>
      Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
| * net/mlx5: Update the list of the PCI supported devices (Meir Lichtinger, 2020-11-27, 1 file, -0/+1)

      Add the upcoming BlueField-3 device ID.

      Signed-off-by: Meir Lichtinger <meirl@nvidia.com>
      Reviewed-by: Eran Ben Elisha <eranbe@nvidia.com>
      Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
      Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
| * net/mlx5: Avoid exposing driver internal command helpers (Parav Pandit, 2020-11-27, 2 files, -3/+4)

      The mlx5 command init and cleanup routines are internal to the
      mlx5_core driver. Hence, avoid exporting them and move their
      definitions to the mlx5_core driver's internal file mlx5_core.h.

      Signed-off-by: Parav Pandit <parav@nvidia.com>
      Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
| * net/mlx5: Add misc4 to mlx5_ifc_fte_match_param_bits (Muhammad Sammar, 2020-11-27, 1 file, -1/+1)

      Add misc4 match params to enable matching on prog_sample_fields.

      Signed-off-by: Muhammad Sammar <muhammads@nvidia.com>
      Reviewed-by: Alex Vesker <valex@nvidia.com>
      Reviewed-by: Mark Bloch <mbloch@nvidia.com>
      Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
| * net/mlx5: Check dr mask size against mlx5_match_param size (Muhammad Sammar, 2020-11-27, 3 files, -3/+3)

      This allows passing the misc4 match param from userspace when a
      function like ib_flow_matcher_create() is called.

      Signed-off-by: Muhammad Sammar <muhammads@nvidia.com>
      Reviewed-by: Alex Vesker <valex@nvidia.com>
      Reviewed-by: Mark Bloch <mbloch@nvidia.com>
      Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
| * net/mlx5: Add sampler destination type (Chris Mi, 2020-11-27, 2 files, -0/+6)

      The flow sampler object is a new destination type. Add a new
      member for the flow destination.

      Signed-off-by: Chris Mi <cmi@nvidia.com>
      Reviewed-by: Oz Shlomo <ozsh@nvidia.com>
      Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
* | RDMA/mlx5: Fix error unwinds for rereg_mr (Jason Gunthorpe, 2020-12-07, 1 file, -128/+188)

    This is all a giant train wreck of error handling; in many cases the
    MR is left in some corrupted state where continuing on is going to
    lead to chaos, or various unwinds/ordering is missed.

    rereg had three possible completely different actions, depending on
    flags and various details about the MR. Split the three actions into
    three functions, and call the right action from the start.

    For each action, carefully design the error handling to fit the
    action:

    - UMR access/PD update is a simple UMR; if it fails the MR isn't
      changed, so do nothing.

    - PAS update over UMR is multiple UMR operations. To keep everything
      sane, revoke access to the MKey while it is being changed and
      restore it once the MR is correct.

    - Recreating the mkey should completely build a parallel MR with a
      fully loaded PAS, then swap and destroy the old one. If it fails,
      the original should be left untouched. This is handled in the core
      code. Directly call the normal MR creation functions, possibly
      re-using the existing umem.

    Add support for working with ODP MRs. The READ/WRITE access flags
    can be changed by UMR, and we can trivially convert to/from ODP MRs
    using the logic to build a completely new MR.

    This new logic also fixes various problems with MRs continuing to
    work while their PAS lists are no longer valid, e.g. during a page
    size change.

    Link: https://lore.kernel.org/r/20201130075839.278575-6-leon@kernel.org
    Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
* | RDMA/mlx5: Reorganize mlx5_ib_reg_user_mr() (Jason Gunthorpe, 2020-12-07, 3 files, -129/+140)

    This function handles an ODP and a regular MR flow all mushed
    together, even though the two flows are quite different. Split them
    into two dedicated functions.

    Link: https://lore.kernel.org/r/20201130075839.278575-5-leon@kernel.org
    Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
* | RDMA/uverbs: Allow drivers to create a new HW object during rereg_mr (Jason Gunthorpe, 2020-12-07, 8 files, -52/+151)

    mlx5 has an ugly flow where it tries to allocate a new MR and
    replace the existing MR in the same memory during rereg. This is
    very complicated and buggy. Instead of trying to replace in-place
    inside the driver, provide support from uverbs to change the entire
    HW object assigned to a handle during rereg_mr.

    Since destroying an MR is allowed to fail (i.e. if an MW is pointing
    at it) and can't be detected in advance, the algorithm creates a
    completely new uobject to hold the new MR and swaps the IDR entries
    of the two objects. The old MR in the temporary IDR entry is
    destroyed, and if that fails, rereg_mr succeeds anyway and
    destruction is deferred to FD release. This complexity is why this
    cannot live in a driver safely.

    Link: https://lore.kernel.org/r/20201130075839.278575-4-leon@kernel.org
    Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
* | RDMA/uverbs: Check ODP in ib_check_mr_access() as well (Jason Gunthorpe, 2020-12-07, 3 files, -16/+7)

    There is no reason only one caller checks this. This properly blocks
    ODP from the rereg flow if the device does not support ODP.

    Link: https://lore.kernel.org/r/20201130075839.278575-3-leon@kernel.org
    Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
* | RDMA/uverbs: Tidy input validation of ib_uverbs_rereg_mr() (Jason Gunthorpe, 2020-12-07, 1 file, -4/+6)

    Unknown flags should be EOPNOTSUPP; only zero flags is EINVAL. The
    flags are actually the rereg actions to perform. The checking of
    start/hca_va/etc is also redundant: ib_umem_get() does these checks
    and returns proper error codes.

    Fixes: 7e6edb9b2e0b ("IB/core: Add user MR re-registration support")
    Link: https://lore.kernel.org/r/20201130075839.278575-2-leon@kernel.org
    Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
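    A sketch of the distinction drawn here, using the real uverbs rereg
    flag names but illustrative surrounding code:

        /* Bits we do not know about: not supported (EOPNOTSUPP). */
        if (cmd.flags & ~(IB_MR_REREG_TRANS | IB_MR_REREG_PD |
                          IB_MR_REREG_ACCESS))
                return -EOPNOTSUPP;

        /* No action requested at all: invalid (EINVAL). */
        if (!cmd.flags)
                return -EINVAL;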
* | RDMA/efa: Use dma_set_mask_and_coherent() to simplify code (Gal Pressman, 2020-12-02, 1 file, -9/+2)

    Use dma_set_mask_and_coherent() instead of pci_set_dma_mask()
    followed by pci_set_consistent_dma_mask().

    Link: https://lore.kernel.org/r/20201201091811.37984-1-galpress@amazon.com
    Reviewed-by: Firas JahJah <firasj@amazon.com>
    Reviewed-by: Yossi Leybovich <sleybo@amazon.com>
    Signed-off-by: Gal Pressman <galpress@amazon.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
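    The consolidated helper covers both masks in one call; a sketch
    (the actual mask width used by efa may differ):

        /* Replaces pci_set_dma_mask() + pci_set_consistent_dma_mask(). */
        err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
        if (err) {
                dev_err(&pdev->dev, "no usable DMA addressing\n");
                return err;
        }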
* | RDMA/hns: Refactor process of setting extended sge (Weihang Li, 2020-12-02, 1 file, -32/+28)

    The variable 'cnt' is first used to represent the max number of sges
    an SQ WQE can use, and later to mean how many extended sges an SQ
    has. In addition, this function has no need to return a value. So
    refactor it and encapsulate the part that gets the number of
    extended sges a WQE can use, to make it easier to understand.

    Link: https://lore.kernel.org/r/1606558959-48510-4-git-send-email-liweihang@huawei.com
    Signed-off-by: Weihang Li <liweihang@huawei.com>
    Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
* | RDMA/hns: Bugfix for calculation of extended sge (Yangyang Li, 2020-12-02, 1 file, -1/+6)

    Page alignment is required when setting the number of extended sges,
    as the hardware demands. If the space needed for the extended sges
    is greater than one page, roundup_pow_of_two() can ensure that. But
    if the needed extended sge space is not 0 and cannot fill a whole
    page, the driver should align it explicitly.

    Fixes: 54d6638765b0 ("RDMA/hns: Optimize WQE buffer size calculating process")
    Link: https://lore.kernel.org/r/1606558959-48510-3-git-send-email-liweihang@huawei.com
    Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
    Signed-off-by: Weihang Li <liweihang@huawei.com>
    Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
* | RDMA/hns: Fix 0-length sge calculation error (Lang Cheng, 2020-12-02, 1 file, -14/+10)

    One RC SQ WQE can store 2 sges but a UD WQE can't, so for RC, ignore
    the 2 valid sges of wr.sglist that have already been filled into the
    WQE before setting the extended sges. Neither RC nor UD can contain
    0-length sges, so such 0-length sges should be skipped.

    Fixes: 54d6638765b0 ("RDMA/hns: Optimize WQE buffer size calculating process")
    Link: https://lore.kernel.org/r/1606558959-48510-2-git-send-email-liweihang@huawei.com
    Signed-off-by: Lang Cheng <chenglang@huawei.com>
    Signed-off-by: Weihang Li <liweihang@huawei.com>
    Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
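    A sketch of the skip described (helper and index names are
    hypothetical, not the actual hns WQE code):

        /* 0-length sges are not allowed in RC or UD; skip them when
         * spilling the remaining sges into the extended sge space. */
        for (i = start; i < wr->num_sge; i++) {
                if (!wr->sg_list[i].length)
                        continue;
                fill_ext_sge(qp, &wr->sg_list[i]);
        }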
* | RDMA/i40iw: Remove push code from i40iw (Shiraz Saleem, 2020-12-02, 8 files, -224/+18)

    The push feature does not work as expected in x722 and has
    historically been disabled in the driver. Purge all remaining code
    related to the push feature in i40iw.

    Link: https://lore.kernel.org/r/20201125005616.1800-3-shiraz.saleem@intel.com
    Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
* | Merge tag 'v5.10-rc6' into rdma.git for-next (Jason Gunthorpe, 2020-12-02, 172 files, -1929/+4130)
|\ \
    For dependencies in following patches

    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
| * Merge tag 'locking-urgent-2020-11-29' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip (Linus Torvalds, 2020-11-29, 1 file, -17/+20)
| |\
      Pull locking fixes from Thomas Gleixner:
       "Two more places which invoke tracing from RCU disabled regions
        in the idle path. Similar to the entry path, the low level idle
        functions have to be non-instrumentable"

      * tag 'locking-urgent-2020-11-29' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
        intel_idle: Fix intel_idle() vs tracing
        sched/idle: Fix arch_cpu_idle() vs tracing
| | * intel_idle: Fix intel_idle() vs tracing (Peter Zijlstra, 2020-11-24, 1 file, -17/+20)

        cpuidle->enter() callbacks should not call into tracing because
        RCU has already been disabled. Instead of doing the broadcast
        thing itself, simply advertise to the cpuidle core that those
        states stop the timer.

        Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
        Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
        Link: https://lkml.kernel.org/r/20201123143510.GR3021@hirez.programming.kicks-ass.net
| * Merge tag 'irq-urgent-2020-11-29' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip (Linus Torvalds, 2020-11-29, 2 files, -14/+4)
| |\
      Pull irq fixes from Thomas Gleixner:
       "Two fixes for irqchip drivers:

        - Save and restore the GICV3 ITS state unconditionally on
          suspend/resume to handle firmware which fails to do so.

        - Use the correct index into the fwspec parameters to read the
          irq trigger type in the EXIU chip driver"

      * tag 'irq-urgent-2020-11-29' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
        irqchip/gic-v3-its: Unconditionally save/restore the ITS state on suspend
        irqchip/exiu: Fix the index of fwspec for IRQ type
| | * Merge tag 'irqchip-fixes-5.10-2' of git://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms into irq/urgent (Thomas Gleixner, 2020-11-25, 2 files, -14/+4)
| | |\
        Pull irqchip fixes from Marc Zyngier:

        - Fix Exiu driver trigger type when using ACPI

        - Fix GICv3 ITS suspend/resume to use the in-kernel path at all
          times, sidestepping braindead firmware support

        Link: https://lore.kernel.org/r/20201122184752.553990-1-maz@kernel.org