* mptcp: add the incoming RM_ADDR support  [Geliang Tang, 2020-09-25, 5 files, -4/+66]

    This patch adds the RM_ADDR option parsing logic: the incoming options are
    parsed to check whether the rm_addr option has been received, and
    mptcp_pm_rm_addr_received is called to schedule PM work with a new status,
    MPTCP_PM_RM_ADDR_RECEIVED. The PM worker picks this status up and calls
    mptcp_pm_nl_rm_addr_received to handle it. In mptcp_pm_nl_rm_addr_received,
    the subflow matching the rm_id is closed and the PM counter is updated.

    Suggested-by: Matthieu Baerts <matthieu.baerts@tessares.net>
    Suggested-by: Paolo Abeni <pabeni@redhat.com>
    Suggested-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
    Signed-off-by: Geliang Tang <geliangtang@gmail.com>
    Reviewed-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
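    The worker-side dispatch described above follows the usual MPTCP PM pattern
    of setting a status bit and handling it from the worker. A minimal sketch of
    that pattern (simplified; the locking details and field layout of the real
    mptcp code may differ):

        /* Illustrative only: handle a pending RM_ADDR event from the PM worker. */
        static void pm_rm_addr_worker_sketch(struct mptcp_sock *msk)
        {
                struct mptcp_pm_data *pm = &msk->pm;

                spin_lock_bh(&pm->lock);
                if (pm->status & BIT(MPTCP_PM_RM_ADDR_RECEIVED)) {
                        pm->status &= ~BIT(MPTCP_PM_RM_ADDR_RECEIVED);
                        /* closes the subflow matching rm_id and updates PM counters */
                        mptcp_pm_nl_rm_addr_received(msk);
                }
                spin_unlock_bh(&pm->lock);
        }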
* mptcp: add the outgoing RM_ADDR support  [Geliang Tang, 2020-09-25, 3 files, -0/+63]

    This patch adds a new signal named rm_addr_signal in the PM. On the outgoing
    path, mptcp_pm_should_rm_signal is called to check whether rm_addr_signal
    has been set; if it has, the RM_ADDR option is sent out.

    Suggested-by: Matthieu Baerts <matthieu.baerts@tessares.net>
    Suggested-by: Paolo Abeni <pabeni@redhat.com>
    Signed-off-by: Geliang Tang <geliangtang@gmail.com>
    Reviewed-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* mptcp: rename addr_signal and the related functions  [Geliang Tang, 2020-09-25, 3 files, -18/+18]

    This patch renames addr_signal and the related functions with the explicit
    word "add".

    Suggested-by: Matthieu Baerts <matthieu.baerts@tessares.net>
    Suggested-by: Paolo Abeni <pabeni@redhat.com>
    Signed-off-by: Geliang Tang <geliangtang@gmail.com>
    Reviewed-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* Merge tag 'mlx5-updates-2020-09-22' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux  [David S. Miller, 2020-09-25, 21 files, -1658/+2339]

    Saeed Mahameed says:

    ====================
    mlx5-updates-2020-09-22

    This series includes mlx5 updates:

    1) Add support for Connection Tracking offload in NIC mode.

       Supporting CT offload in NIC mode on Mellanox cards is useful for
       scenarios where the dual-port NIC serves as a gateway between two
       networks and forwards traffic between these networks. Since the traffic
       is not terminated on the host in this case, no use of SR-IOV VFs and/or
       switchdev mode is required. Today Mellanox NIC cards already support
       offloading of packet forwarding between physical ports without going to
       the host, so combining it with CT offloading allows users to create a
       gateway with forwarding and CT (including NAT) offloading capabilities in
       non-switchdev mode.

       To support connection tracking in non-switchdev mode (single NIC mode),
       we need to make use of the current connection tracking infrastructure,
       implemented on top of the E-Switch and the mlx5 generic flow table chains
       APIs, and make it work on a non-E-Switch steering domain, e.g. the NIC RX
       domain. The following was performed:

       1.1) Refactor the current flow steering chains infrastructure and update
            the TC NIC mode implementation to use flow table chains.
       1.2) Refactor the current Connection Tracking (CT) infrastructure to not
            assume an E-Switch backend, and make the CT layer agnostic to the
            underlying steering mode (E-Switch/NIC).
       1.3) Plumbing to support CT offload in NIC mode.

    2) Trivial code cleanups.
    ====================

    Signed-off-by: David S. Miller <davem@davemloft.net>
| * net/mlx5: remove unreachable return  [Pavel Machek (CIP), 2020-09-24, 1 file, -2/+0]

    The last return statement is unreachable code. I'm not sure if it will
    provoke any warnings, but it looks ugly.

    Signed-off-by: Pavel Machek (CIP) <pavel@denx.de>
    Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
| * net/mlx5: simplify the return expression of mlx5_ec_init()  [Qinglang Miao, 2020-09-24, 1 file, -7/+1]

    Simplify the return expression.

    Signed-off-by: Qinglang Miao <miaoqinglang@huawei.com>
    Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
| * net/mlx5e: Use kfree() to free ft->g in accel_fs_tcp_create_groups()  [Denis Efremov, 2020-09-24, 1 file, -1/+1]

    Memory ft->g in accel_fs_tcp_create_groups() is allocated with kcalloc().
    It's excessive to free ft->g with kvfree(). Use kfree() instead.

    Signed-off-by: Denis Efremov <efremov@linux.com>
    Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
| * net/mlx5e: IPsec: Use kvfree() for memory allocated with kvzalloc()  [Denis Efremov, 2020-09-24, 1 file, -2/+2]

    Variables flow_group_in, spec in rx_fs_create() are allocated with
    kvzalloc(). It's incorrect to free them with kfree(). Use kvfree() instead.

    Fixes: 5e466345291a ("net/mlx5e: IPsec: Add IPsec steering in local NIC RX")
    Signed-off-by: Denis Efremov <efremov@linux.com>
    Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
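    The two fixes above come down to one rule: the free call must match the
    allocator. A minimal kernel-style sketch of the pairing (function and
    variable names are illustrative, not the mlx5e code):

        /* Sketch only: pair each allocator with its matching free. */
        static int example_create_groups(unsigned int ngroups, size_t inlen)
        {
                void **groups;
                u32 *flow_group_in;

                /* kvzalloc() may fall back to vmalloc(); it must be freed with
                 * kvfree(), never plain kfree().
                 */
                flow_group_in = kvzalloc(inlen, GFP_KERNEL);
                if (!flow_group_in)
                        return -ENOMEM;

                /* kcalloc() memory is always kmalloc-backed; kfree() is the
                 * correct (and cheaper) way to release it.
                 */
                groups = kcalloc(ngroups, sizeof(*groups), GFP_KERNEL);
                if (!groups) {
                        kvfree(flow_group_in);
                        return -ENOMEM;
                }

                /* ... create the flow groups using flow_group_in ... */

                kfree(groups);
                kvfree(flow_group_in);
                return 0;
        }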
| * net/mlx5e: Keep direct reference to mlx5_core_dev in tc ct  [Ariel Levkovich, 2020-09-24, 1 file, -19/+18]

    Keep and use a direct reference to the mlx5 core device in all of the tc_ct
    code, instead of accessing it via a pointer to the mlx5 eswitch, in order to
    support nic mode ct offload for VF devices that don't have a valid eswitch
    pointer set.

    Signed-off-by: Ariel Levkovich <lariel@nvidia.com>
    Reviewed-by: Roi Dayan <roid@nvidia.com>
    Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
| * net/mlx5e: TC: Remove unused parameter from mlx5_tc_ct_add_no_trk_match()  [Saeed Mahameed, 2020-09-24, 3 files, -9/+4]

    priv is never used in this function.

    Fixes: 7e36feeb0467 ("net/mlx5e: CT: Don't offload tuple rewrites for established tuples")
    Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
| * net/mlx5e: CT: Use the same counter for both directions  [Oz Shlomo, 2020-09-24, 1 file, -9/+85]

    A connection is represented by two 5-tuple entries, one for each direction.
    Currently, each direction allocates its own hw counter, which is inefficient
    as ct aging is managed per connection.

    Share the counter that was allocated for the original direction with the
    reverse direction.

    Signed-off-by: Oz Shlomo <ozsh@mellanox.com>
    Reviewed-by: Roi Dayan <roid@nvidia.com>
    Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
| * net/mlx5e: Support CT offload for tc nic flows  [Ariel Levkovich, 2020-09-24, 6 files, -200/+348]

    Add support to perform CT related tc actions and matching on CT states for
    nic flows.

    The ct flows management and handling will be done using a new instance of
    the ct database that is declared in this patch, to keep it separate from the
    eswitch ct flows database. Offloading and unoffloading ct flows will be done
    using the existing ct offload api by providing it the relevant ct database
    reference in each mode.

    In addition, a refactoring of the tc ct api is introduced to make it
    agnostic to the flow type and to perform the resource allocations and rule
    insertion to the proper steering domain in the device.

    In the initialization call, the api requests and stores in the ct database
    instance all the relevant information that distinguishes between nic flows
    and esw flows, such as the chains database, steering namespace and mod hdr
    table. This way the operations of adding and removing ct flows to the device
    can later be performed agnostically to the flow type.

    Signed-off-by: Ariel Levkovich <lariel@mellanox.com>
    Reviewed-by: Roi Dayan <roid@mellanox.com>
    Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
| * net/mlx5e: rework ct offload init messages  [Ariel Levkovich, 2020-09-24, 1 file, -22/+17]

    The changes are:
    - Use mlx5_core print macros instead of netdev_warn since netdev is not
      always initialized at that stage.
    - Print a warning message in case the issue is with lack of support for CT
      offload, without indicating an error.

    Signed-off-by: Ariel Levkovich <lariel@mellanox.com>
    Reviewed-by: Roi Dayan <roid@mellanox.com>
    Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
| * net/mlx5e: Add tc chains offload support for nic flows  [Ariel Levkovich, 2020-09-24, 4 files, -57/+250]

    Allow adding nic tc flow rules with goto chain action.

    Connecting the nic flows to the mlx5 chains infrastructure in previous
    patches allows us to support the creation of chained flow tables and rules
    that direct to another chain for further packet processing. This is a
    required preparation to support CT offloads for nic tc flows.

    We allow the creation of 256 different chains for nic flows since we have
    8 bits available for the chain restore tag in case of a miss.

    Signed-off-by: Ariel Levkovich <lariel@mellanox.com>
    Reviewed-by: Roi Dayan <roid@mellanox.com>
    Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
| * net/mlx5: Refactor tc flow attributes structure  [Ariel Levkovich, 2020-09-24, 7 files, -262/+376]

    In order to support chains and connection tracking offload for nic flows,
    there's a need to introduce a common flow attributes struct so that these
    features can be agnostic and have access to a single attributes struct,
    regardless of the flow type.

    Therefore, a new tc flow attributes format is introduced to allow access to
    attributes that are common to eswitch and nic flows. The common attributes
    will always get allocated for new flows, regardless of their type, while the
    type-specific attributes are separated into different structs and will be
    allocated based on the flow type to avoid memory waste.

    When allocating the flow attributes, the caller provides the flow steering
    namespace and, according to the namespace type, the additional space for the
    extra, type-specific attributes is determined and added to the total
    attribute allocation size.

    In addition, the attributes that are going to be common to both flow types
    are moved to the common attributes struct.

    Signed-off-by: Ariel Levkovich <lariel@mellanox.com>
    Reviewed-by: Roi Dayan <roid@mellanox.com>
    Reviewed-by: Vlad Buslov <vladbu@nvidia.com>
    Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
    Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
| * net/mlx5e: Split nic tc flow allocation and creation  [Ariel Levkovich, 2020-09-24, 2 files, -46/+77]

    For future support of CT offload with nic tc flows, where the flow rule is
    not created immediately but rather following a future event, the patch
    splits the nic rule creation and deletion into two parts:
    1. Creating/deleting and setting the rule attributes.
    2. Creating/deleting the flow table and flow rule itself.

    This way the attributes can be prepared and stored in the flow handle when
    the tc flow is created, but the rule can actually be created at any point in
    the future, using these pre-allocated attributes.

    Signed-off-by: Ariel Levkovich <lariel@mellanox.com>
    Reviewed-by: Roi Dayan <roid@mellanox.com>
    Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
| * net/mlx5e: Tc nic flows to use mlx5_chains flow tables  [Ariel Levkovich, 2020-09-24, 2 files, -31/+60]

    Change the nic tc flows offload path to use the chains and prios
    infrastructure for flow table creation, as a preparation to support tc multi
    chains and priorities for nic flows.

    Add an instance of the table chaining database to the nic tc struct and
    perform the root table creation and destruction via the chains api, while
    keeping the limit of a single chain (0) in nic tc mode. This will be
    extendable to support multiple chains in the following patches.

    The flow table sizes and default miss table parameters that are provided to
    the chains creation api are kept the same.

    Signed-off-by: Ariel Levkovich <lariel@mellanox.com>
    Reviewed-by: Roi Dayan <roid@mellanox.com>
    Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
| * net/mlx5: Allow ft level ignore for nic rx tables  [Ariel Levkovich, 2020-09-24, 1 file, -2/+3]

    Allow setting a flow table with a lower level as a rule destination in nic
    rx tables. This is required in order to support table chaining of tc nic
    flows.

    Signed-off-by: Ariel Levkovich <lariel@mellanox.com>
    Reviewed-by: Roi Dayan <roid@mellanox.com>
    Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
| * net/mlx5: Refactor multi chains and prios support  [Ariel Levkovich, 2020-09-24, 11 files, -1066/+1174]

    Decouple the chains infrastructure from the eswitch and make it generic, so
    it can support other steering namespaces.

    The change defines an agnostic data structure to keep all the relevant
    information for maintaining flow table chaining in any steering namespace.
    Each namespace that requires table chaining will be required to allocate
    such a data structure.

    The chains creation code will receive the steering namespace and flow table
    parameters from the caller, so it will operate agnostically when creating
    the required resources to maintain the table chaining function. Parts of the
    code that are relevant to eswitch-specific functionality are moved to
    eswitch files.

    Signed-off-by: Ariel Levkovich <lariel@mellanox.com>
    Reviewed-by: Roi Dayan <roid@mellanox.com>
    Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
* | Merge branch 'dpaa2-mac-add-PCS-support-through-the-Lynx-module'  [David S. Miller, 2020-09-25, 6 files, -10/+132]

    Ioana Ciornei says:

    ====================
    dpaa2-mac: add PCS support through the Lynx module

    This patch set aims to add PCS support in the dpaa2-eth driver by leveraging
    the Lynx PCS module.

    The first two patches are some missing pieces: the first one adds support
    for 10GBASER in the Lynx PCS, while the second one adds a new function,
    of_mdio_find_device, which is helpful in retrieving the PCS represented as
    an mdio_device.

    The final patch adds the glue logic between phylink and the Lynx PCS module:
    it retrieves the PCS represented as an mdio_device and registers it to Lynx
    and phylink. From that point on, any PCS callbacks are handled by Lynx,
    without dpaa2-eth interaction.

    Changes in v2:
    - move put_device() after destroy - 3/3
    ====================

    Signed-off-by: David S. Miller <davem@davemloft.net>
| * | dpaa2-mac: add PCS support through the Lynx module  [Ioana Ciornei, 2020-09-25, 3 files, -1/+91]

    Include PCS support in the dpaa2-eth driver by integrating it with the new
    Lynx PCS module. There is not much to talk about in terms of changes needed
    in the dpaa2-eth driver, since the only steps necessary are to find the MDIO
    device representing the PCS, register it to the Lynx PCS module and then let
    phylink know of its existence. After this, the PCS callbacks will be handled
    directly by Lynx, without interaction from dpaa2-eth's side.

    Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
    Reviewed-by: Andrew Lunn <andrew@lunn.ch>
    Signed-off-by: David S. Miller <davem@davemloft.net>
| * | of: add of_mdio_find_device() api  [Russell King, 2020-09-25, 2 files, -9/+35]

    Add a helper function which finds the mdio_device structure given a device
    tree node. This is helpful for finding the PCS device based on a DTS node,
    but managing it as an mdio_device instead of a phy_device.

    Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
    Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
    Reviewed-by: Andrew Lunn <andrew@lunn.ch>
    Signed-off-by: David S. Miller <davem@davemloft.net>
| * | net: pcs-lynx: add support for 10GBASER  [Ioana Ciornei, 2020-09-25, 1 file, -0/+6]

    Add support in the Lynx PCS module for the 10GBASE-R mode, which is only
    used to get the link state since it offers a single fixed speed.

    Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
    Reviewed-by: Andrew Lunn <andrew@lunn.ch>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* | net: mscc: ocelot: always pass skb clone to ocelot_port_add_txtstamp_skb  [Vladimir Oltean, 2020-09-25, 5 files, -29/+38]

    Currently, ocelot switchdev passes the skb directly to the function that
    enqueues it to the list of skb's awaiting a TX timestamp, whereas the felix
    DSA driver first clones the skb, then passes the clone to this queue.

    This matters because in the case of felix, the common IRQ handler, which is
    ocelot_get_txtstamp(), currently clones the clone and frees the original
    clone. This is useless and can be simplified by using
    skb_complete_tx_timestamp() instead of skb_tstamp_tx().

    Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
    Acked-by: Richard Cochran <richardcochran@gmail.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
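    A rough sketch of the difference between the two timestamp-completion calls
    mentioned above (simplified, not the actual ocelot code):

        /* Old path: skb_tstamp_tx() internally clones the skb it is given,
         * so completing from an already-cloned skb clones twice and then
         * has to free the held clone by hand.
         */
        static void complete_ts_old(struct sk_buff *clone,
                                    struct skb_shared_hwtstamps *hwts)
        {
                skb_tstamp_tx(clone, hwts);
                kfree_skb(clone);
        }

        /* New path: the clone queued at transmit time is handed straight to
         * the stack; skb_complete_tx_timestamp() consumes it, so no extra
         * clone or free is needed.
         */
        static void complete_ts_new(struct sk_buff *clone,
                                    struct skb_shared_hwtstamps *hwts)
        {
                skb_complete_tx_timestamp(clone, hwts);
        }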
* | Merge branch 'net-dsa-b53-Configure-VLANs-while-not-filtering'  [David S. Miller, 2020-09-24, 4 files, -20/+81]

    Florian Fainelli says:

    ====================
    net: dsa: b53: Configure VLANs while not filtering

    These two patches allow the b53 driver, which always configures its CPU port
    as egress tagged, to behave correctly with VLANs being always configured
    whenever a port is added to a bridge.

    Vladimir provides a patch that aligns the receive path of a bridge with
    vlan_filtering=0 to behave the same as with vlan_filtering=1. Per discussion
    with Nikolay, this behavior is deemed to be too DSA specific to be done in
    the bridge proper.

    This is a preliminary series for Vladimir to make
    configure_vlan_while_filtering the default behavior for all DSA drivers in
    the future.

    Thanks!

    Changes in v3:
    - added Vladimir's Acked-by tag to patch #2
    - removed unnecessary if_vlan.h inclusion in patch #2
    - reworded commit message to be accurate with the code changes

    Changes in v2:
    - moved the call to dsa_untag_bridge_pvid() into net/dsa/tag_brcm.c since we
      have a single user for now
    ====================

    Signed-off-by: David S. Miller <davem@davemloft.net>
| * | net: dsa: b53: Configure VLANs while not filtering  [Florian Fainelli, 2020-09-24, 3 files, -20/+15]

    Update the B53 driver to support VLANs while not filtering. This requires us
    to enable VLAN globally within the switch upon driver initial configuration
    (dev->vlan_enabled).

    We also need to remove the code that dealt with PVID re-configuration in
    b53_vlan_filtering(), since that function worked under the assumption that
    it would only be called to make a bridge VLAN filtering, or not filtering,
    and we would attempt to move the port's PVID accordingly.

    Now that VLANs are programmed all the time, even in the case of a non-VLAN
    filtering bridge, we would be programming a default_pvid for the bridged
    switch ports. We need the DSA receive path to pop the VLAN tag if it is the
    bridge's default_pvid, because the CPU port is always programmed tagged in
    the programmed VLANs. In order to do so we utilize the
    dsa_untag_bridge_pvid() helper introduced in the commit before, within
    net/dsa/tag_brcm.c.

    Acked-by: Vladimir Oltean <vladimir.oltean@nxp.com>
    Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
| * | net: dsa: untag the bridge pvid from rx skbs  [Vladimir Oltean, 2020-09-24, 1 file, -0/+66]

    Currently the bridge untags VLANs present in its VLAN groups in
    __allowed_ingress() only when VLAN filtering is enabled. But when a skb is
    seen on the RX path as tagged with the bridge's pvid, and that bridge has
    vlan_filtering=0, and there isn't any 8021q upper with that VLAN either,
    then we have a problem. The bridge will not untag it (since it is supposed
    to remain VLAN-unaware), and pvid-tagged communication will be broken.

    There are 2 situations where we can end up like that:

    1. When installing a pvid in egress-tagged mode, like this:

       ip link add dev br0 type bridge vlan_filtering 0
       ip link set swp0 master br0
       bridge vlan del dev swp0 vid 1
       bridge vlan add dev swp0 vid 1 pvid

       This happens because DSA configures the VLAN membership of the CPU port
       using the same flags as swp0 (in this case "pvid and not untagged"), in
       an attempt to copy the frame as-is from ingress to the CPU. However, in
       this case, the packet may arrive untagged on ingress, it will be
       pvid-tagged by the ingress port, and will be sent as egress-tagged
       towards the CPU. Otherwise stated, the CPU will see a VLAN tag where
       there was none to speak of on ingress.

       When vlan_filtering is 1, this is not a problem, as stated in the first
       paragraph, because __allowed_ingress() will pop it. But currently, when
       vlan_filtering is 0 and we have such a VLAN configuration, we need an
       8021q upper (br0.1) to be able to ping over that VLAN, which is not
       symmetrical with the vlan_filtering=1 case, and therefore confusing for
       users.

       Basically what DSA attempts to do is simply an approximation: try to copy
       the skb with (or without) the same VLAN all the way up to the CPU. But
       DSA drivers treat CPU port VLAN membership in various ways (which is a
       good segue into situation 2). And some of those drivers simply tell the
       CPU port to copy the frame unmodified, which is the golden standard when
       it comes to VLAN processing (therefore, any driver which can configure
       the hardware to do that should do that, and discard the VLAN flags
       requested by DSA on the CPU port).

    2. Some DSA drivers always configure the CPU port as egress-tagged, in an
       attempt to recover the classified VLAN from the skb. These drivers cannot
       work at all with untagged traffic when bridged in vlan_filtering=0 mode.
       And they can't go for the easy "just keep the pvid as egress-untagged
       towards the CPU" route, because each front port can have its own pvid,
       and that might require conflicting VLAN membership settings on the CPU
       port (swp1 is pvid for VID 1 and egress-tagged for VID 2; swp2 is
       egress-tagged for VID 1 and pvid for VID 2; with this simplistic
       approach, the CPU port, which is really a separate hardware entity and
       has its own VLAN membership settings, would end up being egress-untagged
       in both VID 1 and VID 2, therefore losing the VLAN tags of ingress
       traffic).

    So the only thing we can do is to create a helper function for resolving the
    problematic case (that is, a function which untags the bridge pvid when that
    bridge is in vlan_filtering=0 mode), which taggers in need should call. It
    isn't called from the generic DSA receive path because there are drivers
    that fall in neither the first nor the second category.

    Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
    Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
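    A simplified sketch of what such an untagging helper has to check (helper
    and variable names are assumptions for illustration, not the exact
    dsa_untag_bridge_pvid() implementation):

        /* Pop the VLAN tag when it is the pvid of a vlan_filtering=0 bridge,
         * so the VLAN-unaware bridge sees the frame as untagged again.
         */
        static struct sk_buff *untag_bridge_pvid_sketch(struct sk_buff *skb)
        {
                struct net_device *br = netdev_master_upper_dev_get_rcu(skb->dev);
                u16 vid, pvid;

                if (!br || !netif_is_bridge_master(br))
                        return skb;

                /* A VLAN-aware bridge untags in __allowed_ingress() itself. */
                if (br_vlan_enabled(br))
                        return skb;

                if (__vlan_hwaccel_get_tag(skb, &vid))
                        return skb;             /* no VLAN tag present */
                vid &= VLAN_VID_MASK;

                if (br_vlan_get_pvid_rcu(skb->dev, &pvid) || vid != pvid)
                        return skb;

                __vlan_hwaccel_clear_tag(skb);
                return skb;
        }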
* | Merge branch 'PHY-subsystem-kernel-doc'  [David S. Miller, 2020-09-24, 5 files, -135/+404]

    Andrew Lunn says:

    ====================
    PHY subsystem kernel doc

    The first patch fixes existing warnings in the kerneldoc for the PHY
    subsystem, and the second extends the kernel documentation for the major
    structures and functions in the PHY subsystem.

    v2: Drop the other fixes which have already been merged. s/phy/PHY/g, TBI
    typos.
    ====================

    Signed-off-by: David S. Miller <davem@davemloft.net>
| * | net: phy: Document core PHY structures  [Andrew Lunn, 2020-09-24, 4 files, -133/+400]

    Add kerneldoc for the core PHY data structures, a few inline functions and
    exported functions which are not already documented.

    v2: Typos, s/phy/PHY/g

    Signed-off-by: Andrew Lunn <andrew@lunn.ch>
    Signed-off-by: David S. Miller <davem@davemloft.net>
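    For reference, the kernel-doc format these patches add looks roughly like
    the following (an illustrative struct, not copied from include/linux/phy.h):

        /**
         * struct example_phy_cfg - illustrative PHY configuration
         * @speed: negotiated speed in Mbit/s
         * @duplex: one of DUPLEX_HALF or DUPLEX_FULL
         * @autoneg: true if autonegotiation is enabled
         *
         * Kernel-doc blocks start with two asterisks, name the structure or
         * function on the first line, and document every member or parameter;
         * otherwise scripts/kernel-doc emits warnings.
         */
        struct example_phy_cfg {
                int speed;
                int duplex;
                bool autoneg;
        };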
| * | net: phy: Fixup kernel doc  [Andrew Lunn, 2020-09-24, 2 files, -2/+4]

    Add missing parameter documentation, or fixup wrong parameter names.

    Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
    Signed-off-by: Andrew Lunn <andrew@lunn.ch>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* | Merge branch 'net-dsa-bcm_sf2-Additional-DT-changes'  [David S. Miller, 2020-09-24, 1 file, -1/+12]

    Florian Fainelli says:

    ====================
    net: dsa: bcm_sf2: Additional DT changes

    This patch series includes some additional changes to the bcm_sf2 in order
    to support the Device Tree firmwares provided on such platforms.
    ====================

    Signed-off-by: David S. Miller <davem@davemloft.net>
| * | net: dsa: bcm_sf2: Include address 0 for MDIO diversion  [Florian Fainelli, 2020-09-24, 1 file, -1/+1]

    We need to include MDIO address 0, which is how our Device Tree blobs
    indicate where to find the external BCM53125 switches.

    Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
| * | net: dsa: bcm_sf2: Disallow port 5 to be a DSA CPU port  [Florian Fainelli, 2020-09-24, 1 file, -0/+11]

    While the switch driver is written such that port 5 or 8 could be CPU ports,
    the use case on Broadcom STB chips is to use port 8 exclusively. The
    platform firmware does make port 5 comply with a proper DSA CPU port binding
    by specifying an "ethernet" phandle. This is undesirable for now, until we
    have a user-space configuration mechanism (such as devlink) which could
    support dynamically changing the port flavor at run time.

    Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* | Merge branch 'octeontx2-Add-support-for-VLAN-based-flow-distribution'  [David S. Miller, 2020-09-24, 4 files, -1/+17]

    George Cherian says:

    ====================
    octeontx2: Add support for VLAN based flow distribution

    This series adds support for VLAN based flow distribution to the octeontx2
    netdev driver, and adds support for configuring the same via ethtool.

    The following tests have been done:
    - Multi VLAN flow with same SD
    - Multi VLAN flow with same SDFN
    - Single VLAN flow with multi SD
    - Single VLAN flow with multi SDFN
    All tests done for udp/tcp, both v4 and v6.
    ====================

    Reviewed-by: Jakub Kicinski <kuba@kernel.org>
    Signed-off-by: David S. Miller <davem@davemloft.net>
| * | octeontx2-pf: Support to change VLAN based RSS hash options via ethtool  [George Cherian, 2020-09-24, 2 files, -1/+8]

    Add support to control rx-flow-hash based on VLAN. By default, VLAN plus
    4-tuple based hashing is enabled. Changes can be made at runtime using
    ethtool.

    To enable 2-tuple plus VLAN based flow distribution:
    # ethtool -N <intf> rx-flow-hash <prot> sdv

    To enable 4-tuple plus VLAN based flow distribution:
    # ethtool -N <intf> rx-flow-hash <prot> sdfnv

    Signed-off-by: George Cherian <george.cherian@marvell.com>
    Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
| * | octeontx2-af: Add support for VLAN based RSS hashing  [George Cherian, 2020-09-24, 2 files, -0/+9]

    Add support for PF/VF drivers to choose an RSS flow key algorithm with the
    VLAN tag included in the hashing input data. Only CTAG is considered.

    Signed-off-by: George Cherian <george.cherian@marvell.com>
    Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* | net: fix a new kernel-doc warning at dev.c  [Mauro Carvalho Chehab, 2020-09-24, 1 file, -2/+2]

    kernel-doc expects the function prototype to be just after the kernel-doc
    markup, as otherwise it will get it all wrong:

        ./net/core/dev.c:10036: warning: Excess function parameter 'dev'
        description in 'WAIT_REFS_MIN_MSECS'

    Fixes: 0e4be9e57e8c ("net: use exponential backoff in netdev_wait_allrefs")
    Signed-off-by: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
    Reviewed-by: Francesco Ruggeri <fruggeri@arista.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
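    The rule being enforced here, sketched on the function named in the warning
    (placement is illustrative, not the exact dev.c hunk): the kernel-doc block
    must sit directly above the prototype it documents, with no macro or other
    definition in between.

        #define WAIT_REFS_MIN_MSECS 5
        #define WAIT_REFS_MAX_MSECS 250
        /**
         * netdev_wait_allrefs - wait until all references are gone.
         * @dev: target net_device
         *
         * If the #defines above sat between this comment and the function,
         * kernel-doc would attach the block to the macro instead and warn
         * about an "excess" @dev parameter.
         */
        static void netdev_wait_allrefs(struct net_device *dev)
        {
                /* ... */
        }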
* | Merge branch 'net-mdio-ipq4019-add-Clause-45-support'  [David S. Miller, 2020-09-24, 1 file, -17/+92]

    Robert Marko says:

    ====================
    net: mdio-ipq4019: add Clause 45 support

    This patch series adds support for Clause 45 to the driver. While at it,
    also change some defines to upper case to match the rest of the driver.

    Changes since v4:
    * Rebase onto net-next.git

    Changes since v1:
    * Drop clock patches, these need further investigation and no user for
      non-default configuration has been found
    ====================

    Signed-off-by: David S. Miller <davem@davemloft.net>
| * | net: mdio-ipq4019: add Clause 45 support  [Robert Marko, 2020-09-24, 1 file, -14/+89]

    While upstreaming the IPQ4019 driver it was thought that the controller had
    no Clause 45 support, but it actually does, and it is activated by writing a
    bit to the mode register. So let's add it, as newer SoCs use the same
    controller and Clause 45 compliant PHYs.

    Signed-off-by: Robert Marko <robert.marko@sartura.hr>
    Cc: Luka Perkov <luka.perkov@sartura.hr>
    Reviewed-by: Andrew Lunn <andrew@lunn.ch>
    Signed-off-by: David S. Miller <davem@davemloft.net>
| * | net: mdio-ipq4019: change defines to upper case  [Robert Marko, 2020-09-24, 1 file, -3/+3]

    In the commit adding the IPQ4019 MDIO driver, the defines for timeout and
    sleep partially used lower case. Let's change them to upper case, in line
    with the rest of the driver defines.

    Signed-off-by: Robert Marko <robert.marko@sartura.hr>
    Cc: Luka Perkov <luka.perkov@sartura.hr>
    Reviewed-by: Andrew Lunn <andrew@lunn.ch>
    Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* | Merge branch 'Introduce-mbox-tracepoints-for-Octeontx2'  [David S. Miller, 2020-09-24, 9 files, -2/+146]

    Subbaraya Sundeep says:

    ====================
    Introduce mbox tracepoints for Octeontx2

    This patchset adds tracepoint support for the mailbox. In Octeontx2, PFs and
    VFs need to communicate with the AF for allocating and freeing resources.
    Once all the configuration is done by the AF for a PF/VF, packet I/O can
    happen on the PF/VF queues. When an interface is brought up, many mailbox
    messages are sent to the AF for initializing queues. Say a VF is brought up:
    each message is sent to the PF, the PF forwards it to the AF, and the
    response also traverses from the AF to the PF and then to the VF. To aid
    debugging, tracepoints are added at the places where messages are allocated
    and sent, and at message interrupts. Below is the trace of one of the
    messages from VF to AF and the AF response back to the VF:

    ~ # echo 1 > /sys/kernel/tracing/events/rvu/enable
    ~ # ifconfig eth20 up
    [  279.379559] eth20 NIC Link is UP 10000 Mbps Full duplex
    ~ # cat /sys/kernel/tracing/trace
        ifconfig-171   [000] ....   275.753345: otx2_msg_alloc: [0002:02:00.1] msg:(0x400) size:40
        ifconfig-171   [000] ...1   275.753347: otx2_msg_send: [0002:02:00.1] sent 1 msg(s) of size:48
          <idle>-0     [001] dNh1   275.753356: otx2_msg_interrupt: [0002:02:00.0] mbox interrupt VF(s) to PF (0x1)
    kworker/u9:1-90    [001] ...1   275.753364: otx2_msg_send: [0002:02:00.0] sent 1 msg(s) of size:48
    kworker/u9:1-90    [001] d.h.   275.753367: otx2_msg_interrupt: [0002:01:00.0] mbox interrupt PF(s) to AF (0x2)
    kworker/u9:2-167   [002] ....   275.753535: otx2_msg_process: [0002:01:00.0] msg:(0x400) error:0
    kworker/u9:2-167   [002] ...1   275.753537: otx2_msg_send: [0002:01:00.0] sent 1 msg(s) of size:32
          <idle>-0     [003] d.h1   275.753543: otx2_msg_interrupt: [0002:02:00.0] mbox interrupt AF to PF (0x1)
          <idle>-0     [001] d.h2   275.754376: otx2_msg_interrupt: [0002:02:00.1] mbox interrupt PF to VF (0x1)

    v3 changes:
    Removed EXPORT_TRACEPOINT_SYMBOLS of otx2_msg_send and otx2_msg_check since
    they are called locally only

    v2 changes:
    Removed otx2_msg_err tracepoint since it is similar to devlink_hwerr and it
    will be used instead when devlink support is added.
    ====================

    Signed-off-by: David S. Miller <davem@davemloft.net>
| * | octeontx2-pf: Add tracepoints for PF/VF mailbox  [Subbaraya Sundeep, 2020-09-24, 3 files, -0/+10]

    With tracepoint support present in the mailbox code, this patch adds
    tracepoints in the PF and VF drivers at the places where mailbox messages
    are allocated and sent, and at message interrupts.

    Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com>
    Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
| * | octeontx2-af: Introduce tracepoints for mailbox  [Subbaraya Sundeep, 2020-09-24, 6 files, -2/+136]

    Add tracepoints in the mailbox code so that mailbox operations like message
    allocation, message sending and message interrupts are traced. Mailbox
    errors such as timeouts or wrong responses are traced as well. These will
    help in debugging mailbox issues.

    Here's an example output showing one of the mailbox messages sent by a PF to
    the AF and the AF responding to it:

    ~# mount -t tracefs none /sys/kernel/tracing/
    ~# echo 1 > /sys/kernel/tracing/events/rvu/enable
    ~# ifconfig eth0 up
    ~# cat /sys/kernel/tracing/trace

    # tracer: nop
    #                                _-----=> irqs-off
    #                               / _----=> need-resched
    #                              | / _---=> hardirq/softirq
    #                              || / _--=> preempt-depth
    #                              ||| /     delay
    #          TASK-PID     CPU#  ||||   TIMESTAMP  FUNCTION
    #             | |         |   ||||      |         |
        ifconfig-2382   [002] ....   756.161892: otx2_msg_alloc: [0002:02:00.0] msg:(0x400) size:40
        ifconfig-2382   [002] ...1   756.161895: otx2_msg_send: [0002:02:00.0] sent 1 msg(s) of size:48
          <idle>-0      [000] d.h1   756.161902: otx2_msg_interrupt: [0002:01:00.0] mbox interrupt PF(s) to AF (0x2)
    kworker/u49:0-1165  [000] ....   756.162049: otx2_msg_process: [0002:01:00.0] msg:(0x400) error:0
    kworker/u49:0-1165  [000] ...1   756.162051: otx2_msg_send: [0002:01:00.0] sent 1 msg(s) of size:32
    kworker/u49:0-1165  [000] d.h.   756.162056: otx2_msg_interrupt: [0002:02:00.0] mbox interrupt AF to PF (0x1)

    Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com>
    Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
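    The events themselves are declared with the kernel's standard TRACE_EVENT()
    macro in a trace header. A hedged sketch of what a message-allocation event
    of this kind can look like (event and field names are illustrative, not the
    exact rvu trace header, which also needs the usual TRACE_SYSTEM boilerplate):

        TRACE_EVENT(otx2_msg_alloc_example,
                TP_PROTO(const struct pci_dev *pdev, u16 id, u64 size),
                TP_ARGS(pdev, id, size),
                TP_STRUCT__entry(
                        __string(dev, pci_name(pdev))
                        __field(u16, id)
                        __field(u64, size)
                ),
                TP_fast_assign(
                        __assign_str(dev, pci_name(pdev));
                        __entry->id = id;
                        __entry->size = size;
                ),
                TP_printk("[%s] msg:(0x%x) size:%llu", __get_str(dev),
                          __entry->id, __entry->size)
        );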
* | net: allwinner: remove redundant irqsave and irqrestore in hardIRQ  [Barry Song, 2020-09-24, 1 file, -4/+2]

    The comment "holders of db->lock must always block IRQs" and the related
    code doing irqsave and irqrestore don't make sense since we are in an
    IRQ-disabled hardIRQ context.

    Cc: Maxime Ripard <mripard@kernel.org>
    Cc: Chen-Yu Tsai <wens@csie.org>
    Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
    Acked-by: Maxime Ripard <mripard@kernel.org>
    Signed-off-by: David S. Miller <davem@davemloft.net>
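    The general pattern being removed, in a generic form (a sketch, not the
    sun4i driver code): inside a hardirq handler, interrupts are already
    disabled on the local CPU, so plain spin_lock()/spin_unlock() is sufficient.

        struct example_priv {
                spinlock_t lock;
        };

        static irqreturn_t example_hardirq_handler(int irq, void *dev_id)
        {
                struct example_priv *db = dev_id;

                /* Redundant in hardirq context:
                 *   unsigned long flags;
                 *   spin_lock_irqsave(&db->lock, flags);
                 *   ...
                 *   spin_unlock_irqrestore(&db->lock, flags);
                 */
                spin_lock(&db->lock);
                /* ... handle the interrupt ... */
                spin_unlock(&db->lock);

                return IRQ_HANDLED;
        }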
* | net: hns3: Constify static structs  [Rikard Falkeborn, 2020-09-24, 2 files, -18/+18]

    A number of static variables were not modified. Make them const to allow the
    compiler to put them in read-only memory. In order to do so, constify a
    couple of input pointers as well as some local pointers. This moves about
    35Kb to read-only memory, as seen by the output of the size command.

    Before:
       text    data   bss     dec    hex  filename
     404938  111534   640  517112  7e3f8  drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge.ko

    After:
       text    data   bss     dec    hex  filename
     439499   76974   640  517113  7e3f9  drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge.ko

    Signed-off-by: Rikard Falkeborn <rikard.falkeborn@gmail.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
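    A minimal illustration of the change (a hypothetical ops table, not hns3
    code): adding const lets the compiler place the table in .rodata, which is
    what shifts bytes from "data" to "text" in the size output above.

        static int example_init(void) { return 0; }
        static void example_exit(void) { }

        struct example_ops {
                int (*init)(void);
                void (*exit)(void);
        };

        /* Without const this table lives in .data even though it is never
         * written; with const it moves to read-only memory.
         */
        static const struct example_ops example_ops_table = {
                .init = example_init,
                .exit = example_exit,
        };

        /* Callers that only read through the pointer take a const pointer. */
        static int example_run(const struct example_ops *ops)
        {
                return ops->init();
        }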
* | Merge branch 'net-bridge-mcast-IGMPv3-MLDv2-fast-path-part-2'  [David S. Miller, 2020-09-23, 7 files, -238/+916]

    Nikolay Aleksandrov says:

    ====================
    net: bridge: mcast: IGMPv3/MLDv2 fast-path (part 2)

    This is the second part of the IGMPv3/MLDv2 support, which adds support for
    the fast-path. In order to be able to handle source entries we add mdb
    support for S,G entries (i.e. we add source address support to br_ip). That
    requires extending the current mdb netlink API; fortunately we just add
    another attribute which will contain nested future mdb attributes, and then
    use it to add support for S,G user add, del and dump.

    The lookup sequence is simple: when IGMPv3/MLDv2 are enabled, do the S,G
    lookup first and if it fails fall back to *,G. The more complex part is when
    we begin handling source lists and auto-installing S,G entries and *,G
    filter mode transitions. We have the following cases:

    1) *,G INCLUDE -> EXCLUDE transition: we need to install the port in all of
       *,G's installed S,G entries for proper replication (except the ones
       explicitly blocked); this is also necessary when adding a new *,G EXCLUDE
       port group

    2) *,G EXCLUDE -> INCLUDE transition: we need to remove the port from all of
       *,G's installed S,G entries; this is also necessary when removing a *,G
       port group

    3) New S,G port entry: we need to install all current *,G EXCLUDE ports

    4) Remove S,G port entry: if all other port groups were auto-installed we
       can safely remove them and delete the whole S,G entry

    Currently we compute these operations from the available ports, their source
    lists and their filter mode. In the future we can extend the port group
    structure and reduce the running time of these ops.

    Also, one current limitation is that host-joined S,G entries are not
    supported, i.e. one cannot add "dev bridge port bridge" mdb S,G entries. The
    host join is currently considered an EXCLUDE {} join, so it's reflected in
    all of *,G's installed S,G entries.

    If an S,G,port entry is added as temporary then the kernel can take it over
    if a source shows up from a report; permanent entries are skipped. In order
    to properly handle blocked sources we add a new port group blocked flag to
    avoid forwarding to that port group in the S,G.

    Finally, when forwarding we use the port group filter mode (if it's INCLUDE
    and the port group is from a *,G then don't replicate to it; respectively,
    if it's EXCLUDE then forward) and the blocked flag (obviously if it's set,
    skip that port unless it's a router port) to decide if the port should be
    skipped.

    Another limitation is that we can't do some of the above transitions without
    a small traffic drop while installing/removing entries. That will be taken
    care of when we add atomic swap of port replication lists later.

    Patch breakdown:
    patches 1-3:  prepare the mdb code for better extack support, which is used
                  in future patches to return a more meaningful error
    patches 4-6:  add the source address field to struct br_ip, and do minor
                  cleanups around it
    patches 7-8:  extend the mdb netlink API so we can send new mdb attributes,
                  and use the new API for S,G entry add/del/dump support
    patch 9:      takes care of S,G entries when doing a lookup (first S,G, then
                  *,G lookup)
    patch 10:     adds a new port group field and attribute for origin protocol;
                  we use the already available RTPROT_ definitions, currently
                  user-space entries are added as RTPROT_STATIC and kernel
                  entries are added as RTPROT_KERNEL, we may allow user-space to
                  set custom values later (e.g. for FRR, clag)
    patch 11:     adds an internal S,G,port rhashtable to speed up filter mode
                  transitions
    patch 12:     initial automatic install of S,G entries based on port groups'
                  source lists
    patch 13:     handles port group modes on transitions or when new port group
                  entries are added
    patch 14:     self-explanatory - adds support for blocked port group entries
                  needed to stop forwarding to particular S,G,port entries
    patch 15:     handles host-join/leave state changes, treats host-joins as
                  EXCLUDE {} groups (reflected in all *,G's S,G entries)
    patch 16:     finally adds the fast-path filter mode and block flag support

    Here are the sets that will come next (in order):
    - iproute2 support for IGMPv3/MLDv2
    - selftests for all mode transitions and group flags
    - explicit host tracking for proper fast-leave support
    - atomic port replication lists (these are also needed for broadcast
      forwarding optimizations)
    - mode transition optimization and removal of open-coded sorted lists

    Not implemented yet:
    - Host IGMPv3/MLDv2 filter support (currently we handle only join/leave as
      before)
    - Proper other querier source timer and value updates
    - IGMPv3/v2 MLDv2/v1 compat (I have a few rough patches for this one)

    v2: fix build with CONFIG_BATMAN_ADV_MCAST in patch 6
    ====================

    Signed-off-by: David S. Miller <davem@davemloft.net>
| * | net: bridge: mcast: when forwarding handle filter mode and blocked flag  [Nikolay Aleksandrov, 2020-09-23, 1 file, -1/+14]

    We need to avoid forwarding to ports in MCAST_INCLUDE filter mode when the
    mdst entry is a *,G or when the port has the blocked flag.

    Signed-off-by: Nikolay Aleksandrov <nikolay@nvidia.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
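    Roughly, the per-port-group decision this describes looks like the following
    sketch (simplified stand-in types and flag names, not the actual bridge
    structures):

        #define PG_FLAGS_BLOCKED 0x01           /* assumed name for the blocked flag */

        struct port_group_example {             /* simplified stand-in */
                unsigned char filter_mode;      /* MCAST_INCLUDE / MCAST_EXCLUDE */
                unsigned char flags;
        };

        /* Skip replication to a port group when it is in INCLUDE mode but the
         * matched mdst entry is a *,G (only its explicit S,G entries may
         * receive), or when its blocked flag is set.
         */
        static bool should_skip_example(const struct port_group_example *pg,
                                        bool dst_is_star_g)
        {
                if (pg->filter_mode == MCAST_INCLUDE && dst_is_star_g)
                        return true;
                if (pg->flags & PG_FLAGS_BLOCKED)
                        return true;
                return false;
        }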
| * | net: bridge: mcast: handle host state  [Nikolay Aleksandrov, 2020-09-23, 1 file, -0/+58]

    Since host joins are considered EXCLUDE {} joins, we need to reflect that in
    all of *,G ports' S,G entries. Since the S,Gs can have host_joined == true
    only when it was set automatically, we can safely set it to false when
    removing all automatically added entries upon S,G delete.

    Signed-off-by: Nikolay Aleksandrov <nikolay@nvidia.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
| * | net: bridge: mcast: add support for blocked port groups  [Nikolay Aleksandrov, 2020-09-23, 4 files, -6/+47]

    When excluding S,G entries we need a way to block a particular S,G,port. The
    new port group flag is managed based on the source's timer, as per RFCs 3376
    and 3810. When a source expires and its port group is in EXCLUDE mode, it
    will be blocked.

    Signed-off-by: Nikolay Aleksandrov <nikolay@nvidia.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
| * | net: bridge: mcast: handle port group filter modes  [Nikolay Aleksandrov, 2020-09-23, 4 files, -2/+216]

    We need to handle group filter mode transitions and initial state.

    To change a port group's INCLUDE -> EXCLUDE mode (or when we have added a
    new port group in EXCLUDE mode), we need to add that port to all of *,G
    ports' S,G entries for proper replication. When the EXCLUDE state is changed
    from an IGMPv3 report, br_multicast_fwd_filter_exclude() must be called
    after the source list processing, because the assumption is that all of the
    group's S,G entries will be created before transitioning to EXCLUDE mode,
    i.e. most importantly its blocked entries will already be added so it will
    not get automatically added to them.

    The transition EXCLUDE -> INCLUDE happens only when a port group timer
    expires; it requires us to remove that port from all of *,G ports' S,G
    entries where it was automatically added previously.

    Finally, when we are adding a new S,G entry we must add all of *,G's EXCLUDE
    ports to it. In order to distinguish automatically added *,G EXCLUDE ports
    we have a new port group flag - MDB_PG_FLAGS_STAR_EXCL.

    Signed-off-by: Nikolay Aleksandrov <nikolay@nvidia.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>