* epoll: Add busy poll support to epoll with socket fds.
  Sridhar Samudrala, 2017-03-25 (1 file, +93/-0)

  This patch adds busy poll support to epoll. The implementation is meant
  to be opportunistic in that it will take the NAPI ID from the last
  socket that is added to the ready list that contains a valid NAPI ID
  and it will use that for busy polling until the ready list goes empty.
  Once the ready list goes empty the NAPI ID is reset and busy polling is
  disabled until a new socket is added to the ready list.

  In addition, when we insert a new socket into the epoll we record the
  NAPI ID and assume we are going to receive events on it. If that
  doesn't occur it will be evicted as the active NAPI ID and we will
  resume normal behavior.

  An application can use SO_INCOMING_CPU or SO_REUSEPORT_ATTACH_C/EBPF
  socket options to spread the incoming connections to specific worker
  threads based on the incoming queue. This enables epoll for each worker
  thread to have only sockets that receive packets from a single queue.
  So when an application calls epoll_wait() and there are no events
  available to report, busy polling is done on the associated queue to
  pull the packets.

  Signed-off-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
  Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
  Acked-by: Eric Dumazet <edumazet@google.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>

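  A minimal user-space sketch of the steering scheme described above; the
  helper name is ours, and only SO_INCOMING_CPU and the epoll calls are
  real APIs:

      #include <sys/epoll.h>
      #include <sys/socket.h>

      /* Ask which CPU (and therefore which RX queue) delivered this
       * connection, so the caller can hand the fd to the epoll instance
       * of the matching worker thread.
       */
      static int worker_for_connection(int fd)
      {
              int cpu;
              socklen_t len = sizeof(cpu);

              if (getsockopt(fd, SOL_SOCKET, SO_INCOMING_CPU, &cpu, &len) < 0)
                      return -1;
              return cpu;
      }

  With one epoll instance per worker populated this way, every ready list
  carries a single NAPI ID, which is exactly the case the opportunistic
  busy polling above is designed for.
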
* net: Commonize busy polling code to focus on napi_id instead of socket
  Sridhar Samudrala, 2017-03-25 (3 files, +34/-18)

  Move the core functionality in sk_busy_loop() to napi_busy_loop() and
  make it independent of sk. This enables re-using this function in epoll
  busy loop implementation.

  Signed-off-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
  Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
  Acked-by: Eric Dumazet <edumazet@google.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>

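  The resulting split, sketched from the description above (signatures
  approximate; sk_busy_loop_end is the socket-side callback this series
  introduces):

      void napi_busy_loop(unsigned int napi_id,
                          bool (*loop_end)(void *, unsigned long),
                          void *loop_end_arg);

      /* The socket variant becomes a thin wrapper around the core loop. */
      static inline void sk_busy_loop(struct sock *sk, int nonblock)
      {
              unsigned int napi_id = READ_ONCE(sk->sk_napi_id);

              if (napi_id >= MIN_NAPI_ID)
                      napi_busy_loop(napi_id,
                                     nonblock ? NULL : sk_busy_loop_end, sk);
      }
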
* net: Track start of busy loop instead of when it should end
  Alexander Duyck, 2017-03-25 (3 files, +49/-41)

  This patch flips the logic we were using to determine if the busy
  polling has timed out. The main motivation for this is that we will
  need to support two different possible timeout values in the future,
  and by recording the start time rather than when we would want to end
  we can focus on making the end_time specific to the task, be it epoll
  or socket based polling.

  Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
  Acked-by: Eric Dumazet <edumazet@google.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>

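  A sketch of the flipped check (helper names follow the kernel's
  busy-poll code; details may differ from the actual patch):

      /* callers record when polling began... */
      unsigned long start_time = busy_loop_current_time();

      /* ...and each poll site derives its own deadline from that start */
      static inline bool busy_loop_timeout(unsigned long start_time)
      {
              unsigned long bp_usec = READ_ONCE(sysctl_net_busy_poll);

              if (bp_usec) {
                      unsigned long end_time = start_time + bp_usec;

                      return time_after(busy_loop_current_time(), end_time);
              }
              return true;
      }
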
* net: Change return type of sk_busy_loop from bool to void
  Alexander Duyck, 2017-03-25 (4 files, +25/-22)

  Stop checking the return value of sk_busy_loop. As there are only a few
  consumers of that data, and the data being checked for can be replaced
  with a check for !skb_queue_empty(), we might as well just pull the
  code out of sk_busy_loop and place it in the spots that actually need
  it.

  Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
  Acked-by: Eric Dumazet <edumazet@google.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>

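  The caller-side pattern this implies (a sketch, not the literal diff;
  the try_again label mirrors the usual recvmsg retry path):

      sk_busy_loop(sk, nonblock);

      /* did anything arrive while we were busy polling? */
      if (!skb_queue_empty(&sk->sk_receive_queue))
              goto try_again;
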
* net: Only define skb_mark_napi_id in one spot instead of two
  Alexander Duyck, 2017-03-25 (1 file, +9/-13)

  Instead of defining two versions of skb_mark_napi_id I think it is more
  readable to just match the format of the sk_mark_napi_id functions and
  just wrap the contents of the function instead of defining two versions
  of the function. This way we can save a few lines of code since we only
  need 2 of the ifdef/endif but needed 5 for the extra function
  declaration.

  Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
  Acked-by: Eric Dumazet <edumazet@google.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>

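  The single-definition form this describes (sketch):

      static inline void skb_mark_napi_id(struct sk_buff *skb,
                                          struct napi_struct *napi)
      {
      #ifdef CONFIG_NET_RX_BUSY_POLL
              skb->napi_id = napi->napi_id;
      #endif
      }
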
* tcp: Record Rx hash and NAPI ID in tcp_child_process
  Alexander Duyck, 2017-03-25 (3 files, +4/-4)

  While working on some recent busy poll changes we found that child
  sockets were being instantiated without NAPI ID being set. In our first
  attempt to fix it, it was suggested that we should just pull
  programming the NAPI ID into the function itself since all callers will
  need to have it set.

  In addition to the NAPI ID change I have dropped the code that was
  populating the Rx hash since it was actually being populated in
  tcp_get_cookie_sock.

  Reported-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
  Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
  Acked-by: Eric Dumazet <edumazet@google.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>

* net: Busy polling should ignore sender CPUs
  Alexander Duyck, 2017-03-25 (2 files, +16/-6)

  This patch is a cleanup/fix for NAPI IDs following the changes that
  made it so that sender_cpu and napi_id were doing a better job of
  sharing the same location in the sk_buff.

  One issue I found is that we weren't validating the napi_id as being
  valid before we started trying to setup the busy polling. This change
  corrects that by using the MIN_NAPI_ID value that is now used in both
  allocating the NAPI IDs, as well as validating them.

  Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
  Acked-by: Eric Dumazet <edumazet@google.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>

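  The validation boundary this introduces, sketched from the description
  (the MIN_NAPI_ID definition matches the in-tree constant):

      /* IDs below this are sender_cpu values sharing the same sk_buff
       * field, not real NAPI IDs.
       */
      #define MIN_NAPI_ID ((unsigned int)(NR_CPUS + 1))

      /* before busy polling: */
      if (READ_ONCE(sk->sk_napi_id) < MIN_NAPI_ID)
              return;
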
* Merge branch 'mlx5-xdp-perf-optimizations'
  David S. Miller, 2017-03-25 (7 files, +716/-598)

  Saeed Mahameed says:

  ====================
  Mellanox mlx5e XDP performance optimization

  This series provides some performance optimizations for the mlx5e
  driver, especially for XDP TX flows.

  The 1st patch is a simple change of rmb to dma_rmb in the CQE fetch
  routine, which shows a huge gain for both RX and TX packet rates.

  The 2nd patch removes write combining logic from the driver TX handler
  and simplifies the TX logic while improving TX CPU utilization.

  All other patches combined provide some refactoring to the driver TX
  flows to allow some significant XDP TX improvements.

  More details and performance numbers per patch can be found in each
  patch commit message compared to the preceding patch.

  Overall performance improvements
  System: Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz

  Test case                 Baseline   Now        improvement
  ---------------------------------------------------------------
  TX packets (24 threads)   45Mpps     54Mpps     20%
  TC stack Drop (1 core)    3.45Mpps   3.6Mpps    5%
  XDP Drop (1 core)         14Mpps     16.9Mpps   20%
  XDP TX (1 core)           10.4Mpps   13.7Mpps   31%
  ====================

  Acked-by: Alexei Starovoitov <ast@kernel.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>

| * net/mlx5e: Different SQ types
  Saeed Mahameed, 2017-03-25 (5 files, +392/-256)

  Different SQ types (tx, xdp, ico) are growing apart. Separate them and
  remove unwanted parts from each one of them, to simplify the data path
  and utilize the data cache.

  Remove the DB union from the SQ structures since it is not needed
  anymore, as we now have a different SQ data type for each SQ.

  Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
  Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>

| * net/mlx5e: Generalize SQ create/modify/destroy functions
  Saeed Mahameed, 2017-03-25 (1 file, +69/-42)

  In the next patches we will introduce different SQ types and will want
  to reuse those functions. In this patch we make them agnostic to SQ
  type (txq, xdp, ico).

  Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
  Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>

| * net/mlx5e: Proper names for SQ/RQ/CQ functions
  Saeed Mahameed, 2017-03-25 (1 file, +63/-63)

  Rename mlx5e_{create,destroy}_{sq,rq,cq} to
  mlx5e_{alloc,free}_{sq,rq,cq}.

  Rename mlx5e_{enable,disable}_{sq,rq,cq} to
  mlx5e_{create,destroy}_{sq,rq,cq}.

  mlx5e_{enable,disable}_{sq,rq,cq} used to actually create/destroy the
  SQ in FW, so we rename them to align the function names with FW
  semantics.

  Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
  Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>

| * net/mlx5e: Generalize tx helper functions for different SQ types
  Saeed Mahameed, 2017-03-25 (4 files, +48/-47)

  In the next patches we will introduce different SQ types. For that, we
  here generalize some TX helper functions to work with more basic SQ
  parameters, in order to re-use them for the different SQ types.

  Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
  Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>

| * net/mlx5e: Optimize XDP frame xmit
  Saeed Mahameed, 2017-03-25 (3 files, +47/-42)

  The XDP SQ has a fixed size WQE (MLX5E_XDP_TX_WQEBBS = 1) and only
  posts one kind of WQE (MLX5_OPCODE_SEND). Also, we initialize the SQ
  descriptors' static fields once on open_xdpsq, rather than every time
  on the critical path.

  Optimize the code in light of those facts and add a prefetch of the TX
  descriptor first thing in the xdp xmit function.

  Performance improvement:
  System: Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz

  Test case          Before    Now        improvement
  ---------------------------------------------------------------
  XDP TX (1 core)    13Mpps    13.7Mpps   5%

  Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
  Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>

| * net/mlx5e: Poll XDP TX CQ before RX CQ
  Saeed Mahameed, 2017-03-25 (1 file, +3/-3)

  Handle XDP TX completions before handling RX packets, to make sure more
  free space is available for XDP TX packets a moment before handling RX
  packets.

  Performance improvement:
  System: Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz

  Test case            Before     Now      improvement
  ---------------------------------------------------------------
  XDP Drop (1 core)    16.9Mpps   16.9Mpps No change
  XDP TX (1 core)      12Mpps     13Mpps   8%

  Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
  Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>

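  The reordering inside the channel's NAPI poll, sketched with the
  function and field names this series settles on (shapes approximate):

      /* reclaim XDP TX descriptors first, then drain the RX CQ */
      busy |= mlx5e_poll_xdpsq_cq(&c->rq.xdpsq.cq);

      work_done = mlx5e_poll_rx_cq(&c->rq.cq, budget);
      busy |= work_done == budget;
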
| * net/mlx5e: Move XDP SQ instance into RQ
  Saeed Mahameed, 2017-03-25 (4 files, +21/-15)

  Move the XDP SQ instance into the RQ to save many rq->channel->sq
  dereferences in the fast path, and rename it to xdpsq.

  Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
  Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>

| * net/mlx5e: Move mlx5e_rq struct declaration
  Saeed Mahameed, 2017-03-25 (1 file, +105/-108)

  Move struct mlx5e_rq and friends to appear after the mlx5e_sq
  declaration in en.h. We will need this for the next patch, to move the
  mlx5e_sq instance into the mlx5e_rq struct for XDP SQs.

  Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
  Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>

| * net/mlx5e: Move XDP completion functions to rx file
  Saeed Mahameed, 2017-03-25 (4 files, +86/-84)

  XDP code belongs to the RX path: move mlx5e_poll_xdp_tx_cq and
  mlx5e_free_xdp_tx_descs to en_rx.c. Rename them to mlx5e_poll_xdpsq_cq
  and mlx5e_free_xdpsq_descs.

  Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
  Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>

| * net/mlx5e: Single bfreg (UAR) for all mlx5e SQs and netdevs
  Saeed Mahameed, 2017-03-25 (4 files, +13/-15)

  One bfreg is sufficient since Blue Flame is not supported anymore. This
  will also come in handy for switchdev mode to save resources, since VF
  representors will use the same single UAR as well for their own SQs.

  Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
  Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>

| * net/mlx5e: Xmit, no write combining
  Saeed Mahameed, 2017-03-25 (4 files, +9/-63)

  mlx5e netdev Blue Flame (write combining) support demands a lot of
  overhead for a little latency gain in some special cases, and this
  overhead is hurting the common case.

  Here we remove xmit Blue Flame support by creating all bfregs with no
  write combining for all SQs, and we remove a lot of BF logic and
  conditions from the xmit data path.

  Simplify mlx5e_tx_notify_hw (the doorbell function) by removing BF
  related code and by removing one memory barrier needed for WC mapped
  SQ doorbell buffers, which no longer exist.

  Performance improvement:
  System: Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz

  Test case                 Before    Now      improvement
  ---------------------------------------------------------------
  TX packets (24 threads)   50Mpps    54Mpps   8%

  Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
  Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>

| * net/mlx5e: Use dma_rmb rather than rmb in CQE fetch routine
  Saeed Mahameed, 2017-03-25 (1 file, +1/-1)

  Use dma_rmb in mlx5e_get_cqe rather than rmb, which is more aggressive
  (at least on some architectures). This should help improve performance
  on those CPU archs where dma_rmb is optimized.

  Performance improvement:
  System: Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz

  Test case                 Baseline   Now        improvement
  ---------------------------------------------------------------
  TX packets (24 threads)   45Mpps     50Mpps     11%
  TC stack Drop (1 core)    3.45Mpps   3.6Mpps    5%
  XDP Drop (1 core)         14Mpps     16.9Mpps   20%
  XDP TX (1 core)           10.4Mpps   12Mpps     15%

  Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
  Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>

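  The change itself is a one-line barrier swap in the CQE fetch path
  (sketch). dma_rmb orders reads of DMA-coherent memory only, which is
  all that is needed between the ownership-bit check and reading the
  rest of the CQE:

      /* ensure cqe content is read after cqe ownership bit */
      dma_rmb();      /* was: rmb() */
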
* net: dsa: bcm_sf2: Add missing OF_MDIO dependency
  Florian Fainelli, 2017-03-24 (1 file, +1/-1)

  bcm_sf2 requires the MDIO_BCM_UNIMAC driver, which is now dependent on
  OF_MDIO, and also internally uses of_mdio.c provided routines which are
  guarded with OF_MDIO.

  Reported-by: kbuild test robot <fengguang.wu@intel.com>
  Fixes: 90eff9096c01 ("net: phy: Allow splitting MDIO bus/device support from PHYs")
  Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>

* Merge branch 'ipv6-sr-perf-improvements'
  David S. Miller, 2017-03-24 (1 file, +28/-3)

  David Lebrun says:

  ====================
  Performance improvements for IPv6 Segment Routing

  This patch series improves the performance of IPv6 SR by optimizing skb
  head reallocation and extending the use of dst_cache. Overall
  performance improves by 35%.

  Before the patch series (SRH encap):
  Result: OK: 7348320(c7347271+d1048) usec, 5000000 (1000byte,0frags)
    680427pps 5443Mb/sec (5443416000bps) errors: 0

  After the patch series (SRH encap):
  Result: OK: 4774543(c4774084+d459) usec, 5000000 (1000byte,0frags)
    1047220pps 8377Mb/sec (8377760000bps) errors: 0

  Baseline for plain IPv6 forwarding:
  Result: OK: 4244144(c4243722+d422) usec, 5000000 (1000byte,0frags)
    1178093pps 9424Mb/sec (9424744000bps) errors: 0
  ====================

  Signed-off-by: David S. Miller <davem@davemloft.net>

| * ipv6: sr: use dst_cache in seg6_input
  David Lebrun, 2017-03-24 (1 file, +26/-1)

  We already use dst_cache in seg6_output, when handling locally
  generated packets. We extend it in seg6_input, to also handle forwarded
  packets and avoid unnecessary fib lookups.

  Performance for SRH encapsulation before the patch:
  Result: OK: 5656067(c5655678+d388) usec, 5000000 (1000byte,0frags)
    884006pps 7072Mb/sec (7072048000bps) errors: 0

  Performance after the patch:
  Result: OK: 4774543(c4774084+d459) usec, 5000000 (1000byte,0frags)
    1047220pps 8377Mb/sec (8377760000bps) errors: 0

  Signed-off-by: David Lebrun <david.lebrun@uclouvain.be>
  Signed-off-by: David S. Miller <davem@davemloft.net>

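  A sketch of the cached fast path this adds to seg6_input, assuming the
  per-lwtstate cache field is named as in seg6_output (dst_cache_get,
  dst_cache_set_ip6 and ip6_route_input are real APIs):

      preempt_disable();
      dst = dst_cache_get(&slwt->cache);
      preempt_enable();

      if (!dst) {
              ip6_route_input(skb);           /* slow path: FIB lookup */
              dst = skb_dst(skb);
              if (!dst->error) {
                      preempt_disable();
                      dst_cache_set_ip6(&slwt->cache, dst,
                                        &ipv6_hdr(skb)->daddr);
                      preempt_enable();
              }
      } else {
              skb_dst_drop(skb);
              skb_dst_set(skb, dst);          /* fast path: reuse route */
      }
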
| * ipv6: sr: expand skb head only if necessary
  David Lebrun, 2017-03-24 (1 file, +2/-2)

  To insert or encapsulate a packet with an SRH, we need a large enough
  skb headroom. Currently, we are using pskb_expand_head to
  unconditionally increase the size of the headroom by the amount needed
  by the SRH (and IPv6 header). If this reallocation is performed by
  another CPU than the one that initially allocated the skb, then when
  the initial CPU frees the skb, it will enter the __slab_free slowpath,
  impacting performance.

  This patch replaces pskb_expand_head with skb_cow_head, which will
  reallocate the skb head only if the headroom is not large enough.

  Performance for SRH encapsulation before the patch:
  Result: OK: 7348320(c7347271+d1048) usec, 5000000 (1000byte,0frags)
    680427pps 5443Mb/sec (5443416000bps) errors: 0

  Performance after the patch:
  Result: OK: 5656067(c5655678+d388) usec, 5000000 (1000byte,0frags)
    884006pps 7072Mb/sec (7072048000bps) errors: 0

  Signed-off-by: David Lebrun <david.lebrun@uclouvain.be>
  Signed-off-by: David S. Miller <davem@davemloft.net>

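  The substitution in a nutshell (sketch; tot_len stands for the SRH
  plus IPv6 header size computed earlier in the encap path):

      /* before: always reallocates the head */
      err = pskb_expand_head(skb, tot_len, 0, GFP_ATOMIC);

      /* after: reallocates only when headroom < tot_len */
      err = skb_cow_head(skb, tot_len);
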
* net_sched: use setup_deferrable_timer
  Geliang Tang, 2017-03-24 (2 files, +4/-6)

  Use setup_deferrable_timer() instead of init_timer_deferrable() to
  simplify the code.

  Signed-off-by: Geliang Tang <geliangtang@gmail.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>

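  The shape of this conversion (a sketch on an assumed qdisc timer; the
  two helpers are the real old/new APIs):

      /* before */
      init_timer_deferrable(&q->perturb_timer);
      q->perturb_timer.function = sfq_perturbation;
      q->perturb_timer.data = (unsigned long)sch;

      /* after */
      setup_deferrable_timer(&q->perturb_timer, sfq_perturbation,
                             (unsigned long)sch);
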
* Merge branch 'mlxsw-query-resources'
  David S. Miller, 2017-03-24 (10 files, +314/-154)

  Jiri Pirko says:

  ====================
  mlxsw: Query resources from firmware

  Ido says:

  Some parts of the driver already use the resource query mechanism, but
  in other parts we still rely on hard coded values that may change over
  time. This patchset removes most of these remaining values and queries
  them from the firmware instead.
  ====================

  Signed-off-by: David S. Miller <davem@davemloft.net>

| * mlxsw: spectrum: Query cell size from firmware
  Ido Schimmel, 2017-03-24 (4 files, +119/-83)

  As explained in the previous patch, the cell size may change in future
  devices, so query it from the firmware instead of hard coding it.

  Signed-off-by: Ido Schimmel <idosch@mellanox.com>
  Signed-off-by: Jiri Pirko <jiri@mellanox.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>

| * mlxsw: spectrum: Refactor port buffer configuration
  Ido Schimmel, 2017-03-24 (2 files, +37/-23)

  The sizes and thresholds of the priority group (PG) buffers are
  configured in cells, which represent a specific amount of bytes. The
  cell size can vary in different devices, so it's better to query it
  from the firmware than to hard code it.

  Refactor the code dealing with this value into different functions, so
  that it will be easier to make the conversion in the next patch.

  Signed-off-by: Ido Schimmel <idosch@mellanox.com>
  Signed-off-by: Jiri Pirko <jiri@mellanox.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>

| * mlxsw: spectrum_buffers: Query shared buffer size from firmware
  Ido Schimmel, 2017-03-24 (1 file, +6/-4)

  Instead of hard coding the size of the shared buffer in the driver,
  query it from the firmware, as it may change in future devices.

  Signed-off-by: Ido Schimmel <idosch@mellanox.com>
  Signed-off-by: Jiri Pirko <jiri@mellanox.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>

| * mlxsw: Query maximum number of ports from firmware
  Ido Schimmel, 2017-03-24 (9 files, +125/-45)

  We currently hard code the maximum number of ports in the driver, but
  this may change in future devices, so query it from the firmware
  instead.

  Fall back to a maximum of 64 ports in case this number can't be
  queried. This should only happen in SwitchX-2, for which this number is
  correct.

  Signed-off-by: Ido Schimmel <idosch@mellanox.com>
  Signed-off-by: Jiri Pirko <jiri@mellanox.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>

| * mlxsw: spectrum_router: Query number of LPM trees from firmware
  Ido Schimmel, 2017-03-24 (3 files, +41/-13)

  Instead of hard coding the number of LPM trees in the driver, query it
  from the firmware, as it may change in future devices.

  Signed-off-by: Ido Schimmel <idosch@mellanox.com>
  Signed-off-by: Jiri Pirko <jiri@mellanox.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>

* Merge branch '40GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next-queue
  David S. Miller, 2017-03-24 (5 files, +1294/-44)

  Jeff Kirsher says:

  ====================
  40GbE Intel Wired LAN Driver Updates 2017-03-23

  This series contains updates to i40e and the i40e.txt documentation.

  Jake provides all the changes in the series, which are centered around
  ntuple filter fixes and additional support.

  Fixed the current implementation of .set_rxnfc, where we were not
  reading the mask field for filter entries, which was resulting in
  filters not behaving as expected and not working correctly.

  When cleaning up after disabling flow director support, ensure that the
  default input set is correctly reprogrammed. Since the hardware only
  supports a single input set for all flows of that type, the driver
  shall only allow the input set to change if there are no other
  configured filters for that flow type, so add support to detect when we
  can update the input set for each flow type.

  Align the driver to other drivers to partition the ring_cookie value
  into 8 bits of VF index, along with 32 bits of queue number, instead of
  using the user-def field.

  Added support to parse the user-def field into a data structure format,
  to allow future extensions of the user-def field by keeping all the
  code that reads/writes the field in a single location.

  Added support for flexible payloads passed via the ethtool user-def
  field. We support a single flexible word (2-byte) value per protocol
  type, and we handle the FLX_PIT register using a list of flexible
  entries so that each flow type may be configured separately.

  Enabled flow director filters for SCTPv4 packets using the ethtool
  ntuple interface to enable filters.

  Updated the documentation on the i40e driver to include the newly added
  support for ntuple filters.

  Reduced the complexity of an if-continue-else-break section of code by
  taking advantage of hlist_for_each_entry_continue() instead.
  ====================

  Signed-off-by: David S. Miller <davem@davemloft.net>

| * i40e: make use of hlist_for_each_entry_continue
  Jacob Keller, 2017-03-24 (1 file, +4/-11)

  Replace a complex if->continue->else->break construction in
  i40e_next_filter. We can simply use hlist_for_each_entry_continue
  instead. This drops a lot of confusing code. The resulting code makes
  the intention much easier to understand, and follows the more normal
  pattern for using hlist loops.

  We could have also used a break with a "return next" at the end of the
  function, instead of return NULL, but the current implementation is
  explicitly clear that when you reach the end of the loop you get a NULL
  value. The alternative construction is less clear since the reader
  would have to know that next is NULL at the end of the loop.

  Change-Id: Ife74ca451dd79d7f0d93c672bd42092d324d4a03
  Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

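  The simplified helper, sketched from the description (the filter state
  name is an assumption):

      static struct i40e_mac_filter *
      i40e_next_filter(struct i40e_mac_filter *next)
      {
              hlist_for_each_entry_continue(next, hlist) {
                      if (next->state == I40E_FILTER_NEW)
                              return next;
              }

              /* explicitly NULL when the end of the loop is reached */
              return NULL;
      }
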
| * i40e: document drivers use of ntuple filtersJacob Keller, 2017-03-24 (1 file, +72/-0)

  Add documentation describing the driver's use of ethtool ntuple
  filters, including the limitations that it has due to hardware, as well
  as how it reads and parses the user-def data block.

  Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

| * i40e: add support for SCTPv4 FDir filters
  Jacob Keller, 2017-03-24 (4 files, +93/-0)

  Enable FDir filters for SCTPv4 packets using the ethtool ntuple
  interface to enable filters. The ethtool API does not allow masking on
  the verification tag.

  Change-Id: I093e88a8143994c7e6f4b7b17a0bd5cf861d18e4
  Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

| * i40e: implement support for flexible word payload
  Jacob Keller, 2017-03-24 (4 files, +613/-12)

  Add support for flexible payloads passed via the ethtool user-def
  field. This support is somewhat limited due to hardware design. The
  input set can only be programmed once per filter type, and the flexible
  offset is part of this filter input set. This means that the user
  cannot program both a regular and a flexible filter at the same time
  for a given flow type. Additionally, the user may not program two
  flexible filters of the same flow type with different offsets, although
  they are allowed to configure different values at that offset location.

  We support a single flexible word (2-byte) value per protocol type, and
  we handle the FLX_PIT register using a list of flexible entries so that
  each flow type may be configured separately.

  Due to the hardware implementation, the flexible data is offset from
  the start of the packet payload, and thus may not be part of the header
  data. For this reason, the offset provided by the user defined data is
  interpreted as a byte offset from the start of the matching payload.
  Previous implementations have tried to represent the offset as from the
  start of the frame, but this is not feasible because header sizes may
  change due to options.

  Change-Id: 36ed27995e97de63f9aea5ade5778ff038d6f811
  Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

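  An assumed encoding of the user-def value this implies (a sketch; the
  mask names are ours, only the value/offset split is described above):

      /* lower 32 bits of ethtool's user-def field:
       * bits 15:0  = 2-byte value to match
       * bits 31:16 = byte offset from the start of the payload
       * e.g. 0x4FFFF would match the value 0xFFFF at payload offset 4
       */
      #define FLEX_WORD_MASK    GENMASK_ULL(15, 0)
      #define FLEX_OFFSET_MASK  GENMASK_ULL(31, 16)
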
| * i40e: add parsing of flexible filter fields from userdef
  Jacob Keller, 2017-03-24 (2 files, +118/-1)

  Add code to parse the user-def field into a data structure format. This
  code is intended to allow future extensions of the user-def field by
  keeping all code that actually reads and writes the field in a single
  location. This ensures that we do not litter the driver with references
  to the user-def field and minimizes the amount of bitwise operations we
  need to do on the data.

  Add code which parses the lower 32 bits into a flexible word and its
  offset. This will be used in a future patch to enable flexible filters
  which can match on some arbitrary data in the packet payload. For now,
  we just return -EOPNOTSUPP when this is used.

  Add code to fill in the user-def field when reporting the filter back,
  even though we don't actually implement any user-def fields yet.

  Additionally, ensure that we mask the extended FLOW_EXT bit from the
  flow_type now that we will be accepting filters which have the FLOW_EXT
  bit set (and thus make use of the user-def field).

  Change-Id: I238845035c179380a347baa8db8223304f5f6dd7
  Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

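  A sketch of the parsed representation (field names assumed):

      struct i40e_rx_flow_userdef {
              bool flex_filter;    /* was a flexible word specified? */
              u16 flex_word;       /* the 2-byte value to match */
              u16 flex_offset;     /* byte offset into the payload */
      };
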
| * i40e: partition the ring_cookie to get VF index
  Jacob Keller, 2017-03-24 (1 file, +40/-34)

  Do not use the user-def field for determining the VF target. Instead,
  similar to ixgbe, partition the ring_cookie value into 8 bits of VF
  index, along with 32 bits of queue number. This is better than using
  the user-def field, because it leaves the field open for extension in a
  future patch which will enable flexible data. Also, this matches the
  convention used by ixgbe and other drivers.

  Change-Id: Ie36745186d817216b12f0313b99ec95cb8a9130c
  Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

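  The split can use the generic ethtool helpers rather than open-coded
  shifts (sketch; both helpers exist in linux/ethtool.h):

      u64 ring = ethtool_get_flow_spec_ring(fsp->ring_cookie);
      u64 vf   = ethtool_get_flow_spec_ring_vf(fsp->ring_cookie);
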
| * i40e: allow changing input set for ntuple filters
  Jacob Keller, 2017-03-24 (1 file, +145/-3)

  Add support to detect when we can update the input set for each flow
  type. Because the hardware only supports a single input set for all
  flows of that matching type, the driver shall only allow the input set
  to change if there are no other configured filters for that flow type.

  Thus, the first filter added for each flow type is allowed to change
  the input set, and all future filters must match the same input set.
  Display a diagnostic message whenever the filter input set changes, and
  a warning whenever a filter cannot be accepted because it does not
  match the configured input set.

  Change-Id: Ic22e1c267ae37518bb036aca4a5694681449f283
  Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

| * i40e: restore default input set for each flow type
  Jacob Keller, 2017-03-24 (2 files, +37/-0)

  Ensure that the default input set is correctly reprogrammed when
  cleaning up after disabling flow director support. This ensures that
  the programmed value will be in a clean state.

  Although we do not yet have support for SCTPv4 filters, a future patch
  will add support for this protocol, so we will correctly restore the
  SCTPv4 input set here as well. Note that strictly speaking the default
  hardware value for SCTP includes matching the verification tag.
  However, the ethtool API does not have support for specifying this
  value, so there is no reason to keep the verification field enabled.

  This patch is the next step on the way to enabling partial tuple
  filters, which will be implemented in a following patch.

  Change-Id: Ic22e1c267ae37518bb036aca4a5694681449f283
  Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

| * i40e: check current configured input set when adding ntuple filters
  Jacob Keller, 2017-03-24 (2 files, +121/-15)

  Do not assume that hardware has been programmed with the default mask,
  but instead read the input set registers to determine what is currently
  programmed. This ensures that all programmed filters match exactly how
  the hardware will interpret them, avoiding confusion regarding filter
  behavior.

  This sets the initial groundwork for allowing custom input sets where
  some fields are disabled. A future patch will fully implement this
  feature.

  Instead of using bitwise negation, we'll just explicitly check for the
  correct value. htonl and htons are used to silence sparse warnings; the
  compiler should be able to handle the constant value and avoid actually
  performing a byteswap.

  Change-Id: I3d8db46cb28ea0afdaac8c5b31a2bfb90e3a4102
  Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

| * i40e: correctly honor the mask fields for ETHTOOL_SRXCLSRLINS
  Jacob Keller, 2017-03-24 (1 file, +83/-0)

  The current implementation of .set_rxnfc does not properly read the
  mask field for filter entries. This results in incorrect driver
  behavior, as we do not reject filters which have masks set to ignore
  some fields. The current implementation simply assumes that every part
  of the tuple or "input set" is specified. This results in filters not
  behaving as expected, and not working correctly.

  As a first step in supporting some partial filters, add code which
  checks the mask fields and rejects any filters which do not have an
  acceptable mask. For now, we just assume that all fields must be set.
  This will get the driver one step towards allowing some partial
  filters. At a minimum, the ethtool commands which previously installed
  filters that would not function will now return a non-zero exit code
  indicating failure instead.

  We should now be meeting the minimum requirements of the .set_rxnfc
  API, by ensuring that all filters we program have a valid mask value
  for each field. Finally, add code to report the mask correctly so that
  the ethtool command properly reports the mask to the user.

  Note that the typecast to (__be16) when checking source and destination
  port masks is required, because the bitwise negation operator ~
  promotes its operand to int and so does not correctly handle operands
  narrower than integer size.

  Change-Id: Ia020149e07c87aa3fcec7b2283621b887ef0546f
  Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

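  Why the cast matters, in a sketch of the mask check (field names from
  the ethtool flow spec; the error path is an assumption):

      /* ~ promotes the __be16 mask to int, so without the cast the
       * upper bits of the complement would make the test always fail.
       */
      if (!(__be16)~fsp->m_u.tcp_ip4_spec.psrc) {
              /* mask is 0xFFFF: match the source port exactly */
      } else if (!fsp->m_u.tcp_ip4_spec.psrc) {
              /* mask is 0: ignore the source port */
      } else {
              return -EOPNOTSUPP;     /* partial port masks rejected */
      }
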
* | net: mpls: Fix setting ttl_propagate for rt2
  David Ahern, 2017-03-24 (1 file, +1/-1)

  Fix copy and paste error setting rt_ttl_propagate.

  Fixes: 5b441ac8784c1 ("mpls: allow TTL propagation to IP packets to be configured")
  Signed-off-by: David Ahern <dsa@cumulusnetworks.com>
  Acked-by: Robert Shearman <rshearma@brocade.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>

* | tcp: sysctl: Fix a race to avoid unexpected 0 window from space
  Gao Feng, 2017-03-24 (1 file, +5/-3)

  Because sysctl_tcp_adv_win_scale can be changed at any time, there is a
  race in tcp_win_from_space. For example:

  1. sysctl_tcp_adv_win_scale <= 0 (sysctl_tcp_adv_win_scale is negative now)
  2. space >> (-sysctl_tcp_adv_win_scale) (sysctl_tcp_adv_win_scale is positive now)

  As a result, tcp_win_from_space returns 0, which is unexpected.

  Certainly, if the compiler loaded sysctl_tcp_adv_win_scale into a
  register first and then used the register directly, it would be ok. But
  we cannot depend on that compiler behavior.

  Signed-off-by: Gao Feng <fgao@ikuai8.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>

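  A sketch of the fix this describes: read the sysctl once so both the
  sign test and the shift see the same value, even if it changes
  concurrently (READ_ONCE makes the single read explicit; the actual
  patch may simply use a plain local copy):

      static int tcp_win_from_space(int space)
      {
              int tcp_adv_win_scale = READ_ONCE(sysctl_tcp_adv_win_scale);

              return tcp_adv_win_scale <= 0 ?
                     (space >> (-tcp_adv_win_scale)) :
                     space - (space >> tcp_adv_win_scale);
      }
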
* | net: make in_aton() 32-bit internally
  Alexey Dobriyan, 2017-03-24 (1 file, +1/-1)

  Converting an IPv4 address doesn't need 64-bit arithmetic.

  Space savings: 10 bytes!

  add/remove: 0/0 grow/shrink: 0/1 up/down: 0/-10 (-10)
  function    old   new   delta
  in_aton     96    86    -10

  Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>

* | liquidio: do not reset Octeon if NIC firmware was preloaded
  Felix Manlunas, 2017-03-24 (2 files, +33/-13)

  The PF driver is incorrectly resetting Octeon when the module parameter
  "fw_type=none" is given. "fw_type=none" means the PF should not load
  any firmware to the NIC, because Octeon is already running preloaded
  firmware. Fix it by putting an if (fw_type != none) around the reset
  code.

  Because the Octeon reset is now conditionally gone, when unloading the
  driver, conditionally send the RESET_PF command to the firmware, which
  will then free up PF-related data structures.

  Signed-off-by: Felix Manlunas <felix.manlunas@cavium.com>
  Signed-off-by: Satanand Burla <satananda.burla@cavium.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>

* | net: Add sysctl to toggle early demux for tcp and udp
  subashab@codeaurora.org, 2017-03-24 (12 files, +103/-14)

  Certain systems process a significant unconnected UDP workload. It
  would be preferable to disable UDP early demux for those systems and
  enable it for TCP only.

  By disabling UDP demux, we see these slight gains on an ARM64 system:
  782 -> 788 Mbps unconnected single stream UDPv4
  633 -> 654 Mbps unconnected UDPv4, different sources

  The performance impact can change based on CPU architecture and cache
  sizes. There will not be much difference seen if the entire UDP hash
  table is in cache.

  Both sysctls are enabled by default to preserve existing behavior.

  v1->v2: Change function pointer instead of adding a conditional, as
          suggested by Stephen.
  v2->v3: Read once in callers to avoid issues due to compiler
          optimizations. Also update commit message with the tests.
  v3->v4: Store and use the read-once result instead of incorrectly
          querying the pointer again.
  v4->v5: Refactor to avoid errors due to compilation with IPV6={m,n}

  Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
  Suggested-by: Eric Dumazet <edumazet@google.com>
  Cc: Stephen Hemminger <stephen@networkplumber.org>
  Cc: Tom Herbert <tom@herbertland.com>
  Cc: David Miller <davem@davemloft.net>
  Signed-off-by: David S. Miller <davem@davemloft.net>

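  The "read once" dispatch pattern from the v3/v4 notes, sketched for the
  IPv4 receive path (types approximate for this kernel version):

      void (*edemux)(struct sk_buff *skb);
      const struct net_protocol *ipprot =
              rcu_dereference(inet_protos[ip_hdr(skb)->protocol]);

      /* Read the pointer once: the sysctl handler may swap it to NULL
       * (or back) concurrently, so test and call the same value.
       */
      if (ipprot && (edemux = READ_ONCE(ipprot->early_demux)))
              edemux(skb);
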
* | Merge branch 'systemport-tx-napi-improvements'
  David S. Miller, 2017-03-24 (2 files, +74/-12)

  Florian Fainelli says:

  ====================
  net: systemport: TX/NAPI improvements

  This patch series builds up on Doug's latest changes done in BCMGENET
  to reduce the number of spurious interrupts in NAPI, simplify pointer
  arithmetic and finally tracking of per TX ring statistics to be SMP
  friendly.
  ====================

  Signed-off-by: David S. Miller <davem@davemloft.net>

| * | net: systemport: Simplify circular pointer arithmetic
  Florian Fainelli, 2017-03-24 (1 file, +1/-5)

  Similar to c298ede2fe21 ("net: bcmgenet: simplify circular pointer
  arithmetic"), we don't need complex arithmetic since we always have a
  ring size that is a power of 2.

  Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>

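  With a power-of-2 ring size, the number of descriptors to reclaim is a
  masked subtraction with no wrap-around branch (sketch; field names
  assumed):

      to_process = (c_index - ring->c_index) & (ring->size - 1);
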
| * | net: systemport: Clear status to reduce spurious interrupts
  Florian Fainelli, 2017-03-24 (1 file, +10/-0)

  Do something similar to commit d5810ca3252a ("net: bcmgenet: clear
  status to reduce spurious interrupts") and clear interrupts right
  before servicing them. This reduces the number of interrupts by 10K
  interrupts/sec for a TX TCP session at 1 Gbits/sec.

  Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>