commit 65e8354ec13a45414045084166cb340c0d7ffe8a
Author:    Eric Dumazet <eric.dumazet@gmail.com>  2011-03-04 23:33:59 +0100
Committer: David S. Miller <davem@davemloft.net>  2011-03-04 23:33:59 +0100
Parent:    Merge branch 'for-davem' of ssh://master.kernel.org/pub/scm/linux/kernel/git/...
inetpeer: seqlock optimization
David noticed:
------------------
Eric, I was profiling the non-routing-cache case and something that
stuck out is the case of calling inet_getpeer() with create==0.
If an entry is not found, we have to redo the lookup under a spinlock
to make certain that a concurrent writer rebalancing the tree does
not "hide" an existing entry from us.
This makes the case of a create==0 lookup for a not-present entry
really expensive. It is on the order of 600 cpu cycles on my
Niagara2.
I added a hack to not do the relookup under the lock when create==0
and it now costs less than 300 cycles.
This is now a pretty common operation with the way we handle COW'd
metrics, so I think it's definitely worth optimizing.
-----------------
One solution is to use a seqlock instead of a spinlock to protect struct
inet_peer_base.
After a failed AVL tree lookup, we can easily detect whether a writer made
any changes during our lookup. Taking the lock and redoing the lookup is
only necessary in that case.
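A minimal sketch of the resulting read path, assuming simplified field and
helper names (lookup_rcu() and lookup_locked() are hypothetical stand-ins for
the RCU and locked tree walks; this illustrates the seqlock pattern, not the
exact patch):

	#include <linux/types.h>
	#include <linux/seqlock.h>
	#include <linux/rcupdate.h>

	struct inet_peer;

	struct inet_peer_base {
		struct inet_peer __rcu	*root;
		seqlock_t		lock;	/* was a plain spinlock_t */
	};

	static struct inet_peer *peer_lookup(struct inet_peer_base *base,
					     __be32 addr)
	{
		struct inet_peer *p;
		unsigned int seq;
		int invalidated;

		rcu_read_lock_bh();
		seq = read_seqbegin(&base->lock);
		p = lookup_rcu(addr, base);	/* lockless AVL walk */
		invalidated = read_seqretry(&base->lock, seq);
		rcu_read_unlock_bh();

		if (p)
			return p;

		/* Not found and no writer touched the tree meanwhile:
		 * the miss is trustworthy, no need to take the lock
		 * (the create==0 fast path David asked for).
		 */
		if (!invalidated)
			return NULL;

		/* The tree changed under us: redo the lookup under the
		 * write side of the seqlock to get a definitive answer.
		 */
		write_seqlock_bh(&base->lock);
		p = lookup_locked(addr, base);
		write_sequnlock_bh(&base->lock);
		return p;
	}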
Note: a private rcu_deref_locked() macro is added so that the access to the
spinlock embedded in the seqlock lives in a single spot.
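As a sketch, such a macro could be built on rcu_dereference_protected(),
assuming the seqlock keeps its spinlock in a field named lock (as seqlock_t
does); this is illustrative, not necessarily the exact definition in the patch:

	/* Dereference an RCU-protected pointer while holding the write
	 * side of the seqlock; lockdep checks the embedded spinlock.
	 */
	#define rcu_deref_locked(X, BASE)				\
		rcu_dereference_protected(X,				\
			lockdep_is_held(&(BASE)->lock.lock))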
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>