author		Amit Cohen <amcohen@nvidia.com>	2022-09-14 13:21:48 +0200
committer	Jakub Kicinski <kuba@kernel.org>	2022-09-20 03:07:59 +0200
commit		9e7aaa7c65f170039501c4d4b24d99640e2d519a (patch)
tree		f09a3a7a6a164e27dddecd899d12d868ad812993 /tools/testing/selftests/drivers/net/mlxsw/sch_ets.sh
parent		Merge branch 'remove-label-cpu-from-dsa-dt-bindings' (diff)
selftests: mlxsw: Use shapers in QOS tests instead of forcing speed
QOS tests create congestion and verify the switch behavior. To create congestion, they need more traffic than the port can handle, so some of them force a speed of 1Gbps. The tests assume that 1Gbps is supported and fail otherwise. The Spectrum-4 ASIC will not support this speed on all ports, so some adjustments are required to be able to run the QOS tests there.

Use shapers to limit the traffic instead of forcing speed. Note that on several ports the speed is configured only to work around autoneg issues, so no replacement shaper is needed there. In tests that already use shapers, make the existing shaper a child of a new TBF shaper, which is added as the root qdisc and acts as a port shaper.

Signed-off-by: Amit Cohen <amcohen@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
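For context, the pattern the patch applies can be sketched as follows. This is an illustration only: $swp2 stands for the egress port under test, as in the selftest, and the ETS parameters are made-up placeholders, not the selftest's real configuration.

	# Root TBF qdisc acting as a 1Gbps port shaper; this creates the
	# bottleneck that was previously created by forcing 1Gbps link speed.
	tc qdisc replace dev $swp2 root handle 3: tbf rate 1gbit \
		burst 128K limit 1G

	# The qdisc under test now hangs off the TBF (cf. PARENT="parent 3:3"
	# in the diff below) rather than sitting at the root itself.
	# (Illustrative ETS configuration, not the selftest's.)
	tc qdisc replace dev $swp2 parent 3:3 handle 10: \
		ets bands 3 strict 3 priomap 2 1 0

	# Teardown: deleting the root TBF also removes its children.
	tc qdisc del dev $swp2 root handle 3: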
Diffstat (limited to 'tools/testing/selftests/drivers/net/mlxsw/sch_ets.sh')
-rwxr-xr-x	tools/testing/selftests/drivers/net/mlxsw/sch_ets.sh	15
1 file changed, 8 insertions(+), 7 deletions(-)
diff --git a/tools/testing/selftests/drivers/net/mlxsw/sch_ets.sh b/tools/testing/selftests/drivers/net/mlxsw/sch_ets.sh
index af64bc9ea8ab..ceaa76b17a43 100755
--- a/tools/testing/selftests/drivers/net/mlxsw/sch_ets.sh
+++ b/tools/testing/selftests/drivers/net/mlxsw/sch_ets.sh
@@ -15,13 +15,15 @@ ALL_TESTS="
 	ets_test_dwrr
 "
 
+PARENT="parent 3:3"
+
 switch_create()
 {
-	ets_switch_create
-
 	# Create a bottleneck so that the DWRR process can kick in.
-	ethtool -s $h2 speed 1000 autoneg off
-	ethtool -s $swp2 speed 1000 autoneg off
+	tc qdisc replace dev $swp2 root handle 3: tbf rate 1gbit \
+		burst 128K limit 1G
+
+	ets_switch_create
 
 	# Set the ingress quota high and use the three egress TCs to limit the
 	# amount of traffic that is admitted to the shared buffers. This makes
@@ -55,10 +57,9 @@ switch_destroy()
 	devlink_tc_bind_pool_th_restore $swp1 0 ingress
 	devlink_port_pool_th_restore $swp1 0
 
-	ethtool -s $swp2 autoneg on
-	ethtool -s $h2 autoneg on
-
 	ets_switch_destroy
+
+	tc qdisc del dev $swp2 root handle 3:
 }
 
 # Callback from sch_ets_tests.sh