| author | Eric Dumazet <edumazet@google.com> | 2019-10-19 00:20:05 +0200 |
|---|---|---|
| committer | David S. Miller <davem@davemloft.net> | 2019-10-19 21:21:53 +0200 |
| commit | 2a06b8982f8f2f40d03a3daf634676386bd84dbc (patch) | |
| tree | 76d330882a9159b334f734201e405121383b1be4 /Documentation | |
| parent | net: dsa: fix switch tree list (diff) | |
| download | linux-2a06b8982f8f2f40d03a3daf634676386bd84dbc.tar.xz linux-2a06b8982f8f2f40d03a3daf634676386bd84dbc.zip | |
net: reorder 'struct net' fields to avoid false sharing
The Intel kernel test robot reported a ~7% regression on TCP_CRR tests,
which it bisected to the cited commit.
Indeed, every time a TCP socket is created or destroyed,
the atomic counter net->count is touched (via the get_net(net)
and put_net(net) calls), so CPUs may have to reload a contended
cache line in every net_hash_mix(net) call.
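For context, a simplified sketch of the three accessors involved, paraphrased from include/net/net_namespace.h as it looked at the time of this commit (the !CONFIG_NET_NS stubs are elided):

```c
/*
 * Socket creation/destruction writes net->count; hash table lookups
 * read net->hash_mix.  If the two fields share a cache line, each
 * write invalidates that line on every other cpu reading it.
 */
static inline struct net *get_net(struct net *net)
{
	refcount_inc(&net->count);		/* dirties the cache line */
	return net;
}

static inline void put_net(struct net *net)
{
	if (refcount_dec_and_test(&net->count))	/* dirties the cache line */
		__put_net(net);
}

static inline u32 net_hash_mix(const struct net *net)
{
	return net->hash_mix;			/* read-only hot path */
}
```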
We need to reorder 'struct net' fields to move @hash_mix
into a read-mostly cache line, and to move the fields that
are often dirtied into the first cache line.
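A minimal sketch of the intended layout follows. It shows only a tiny subset of fields (passive, count, rules_mod_lock and hash_mix are real struct net members, but the authoritative ordering is the one in include/net/net_namespace.h, not this sketch's):

```c
struct net {
	/* First cache line can be often dirtied.
	 * Do not place here read-mostly fields.
	 */
	refcount_t	passive;	/* written on namespace lifetime events */
	refcount_t	count;		/* written by get_net()/put_net() */
	spinlock_t	rules_mod_lock;

	/* ... other frequently written fields ... */

	/* Read-mostly from here on: @hash_mix is written once at
	 * namespace setup and then only read, so it no longer shares
	 * a cache line with the hot counters above.
	 */
	u32		hash_mix;

	/* ... remaining fields ... */
};
```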
We will probably have to address, in a follow-up patch,
the __randomize_layout annotation added in linux-4.13,
since it can defeat these placement choices.
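For reference, the declaration carries that annotation, so with structure layout randomization enabled (CONFIG_GCC_PLUGIN_RANDSTRUCT at the time) the compiler plugin may shuffle the fields at build time and silently undo the manual grouping above:

```c
struct net {
	/* ... fields ... */
} __randomize_layout;
```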
Fixes: 355b98553789 ("netns: provide pure entropy for net_hash_mix()")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: kernel test robot <oliver.sang@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>