author		Jakub Kicinski <kuba@kernel.org>	2024-03-07 23:11:22 +0100
committer	David S. Miller <davem@davemloft.net>	2024-03-11 11:22:06 +0100
commit		900b2801bf250affe410193a0d27a2ba9f2db4e5 (patch)
tree		df32412674788b5595094763d5d633a30e67cc23 /mm/memblock.c
parent		udp: no longer touch sk->sk_refcnt in early demux (diff)
download	linux-900b2801bf250affe410193a0d27a2ba9f2db4e5.tar.xz
		linux-900b2801bf250affe410193a0d27a2ba9f2db4e5.zip
ynl: samples: fix recycling rate calculation
Running the page-pool sample on production machines under moderate
networking load shows a recycling rate higher than 100%:

    $ page-pool
    eth0[2]	page pools: 14 (zombies: 0)
    		refs: 89088 bytes: 364904448 (refs: 0 bytes: 0)
    		recycling: 100.3% (alloc: 1392:2290247724 recycle: 469289484:1828235386)

Note that outstanding refs (89088) == slow alloc * cache size (1392 * 64),
which means this machine is recycling page pool pages perfectly: not a
single page has been released. The extra 0.3% appears because the sample
ignores allocations served from the ptr_ring. Treat those the same as
alloc_fast; the ring vs cache alloc split is already captured accurately
enough by the recycling stats.

With the fix:

    $ page-pool
    eth0[2]	page pools: 14 (zombies: 0)
    		refs: 89088 bytes: 364904448 (refs: 0 bytes: 0)
    		recycling: 100.0% (alloc: 1392:2331141604 recycle: 473625579:1857460661)

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
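The arithmetic behind the fix can be sketched as follows. The struct fields and function names below are illustrative stand-ins for the page pool stat counters the sample reads, not the actual code of the ynl sample; they only demonstrate why leaving ptr_ring allocations out of the denominator pushes the rate past 100%, and why folding them in like alloc_fast brings it back down:

```c
#include <assert.h>
#include <stdio.h>

/* Illustrative counters mirroring the page pool allocation/recycle
 * stats (names are hypothetical, not the sample's real identifiers). */
struct pp_stats {
	unsigned long long alloc_fast;     /* served from the pool cache */
	unsigned long long alloc_refill;   /* served from the ptr_ring */
	unsigned long long alloc_slow;     /* fell back to the page allocator */
	unsigned long long recycle_cached; /* recycled into the cache */
	unsigned long long recycle_ring;   /* recycled into the ptr_ring */
};

/* Before the fix: ptr_ring refill allocations are ignored, so the
 * denominator is too small and the rate can exceed 100%. */
static double recycling_rate_buggy(const struct pp_stats *s)
{
	unsigned long long alloc = s->alloc_fast + s->alloc_slow;
	unsigned long long recycle = s->recycle_cached + s->recycle_ring;

	return 100.0 * (double)recycle / (double)alloc;
}

/* After the fix: refill allocations count the same as alloc_fast. */
static double recycling_rate_fixed(const struct pp_stats *s)
{
	unsigned long long alloc = s->alloc_fast + s->alloc_refill +
				   s->alloc_slow;
	unsigned long long recycle = s->recycle_cached + s->recycle_ring;

	return 100.0 * (double)recycle / (double)alloc;
}
```

Plugging in the "before" numbers from the commit message (alloc 1392:2290247724, recycle 469289484:1828235386) gives a buggy rate of roughly 100.3%, since the recycle total (2297524870) exceeds the fast+slow alloc total (2290249116); once refill allocations join the denominator the rate can no longer exceed what was actually handed out.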
Diffstat (limited to 'mm/memblock.c'): 0 files changed, 0 insertions, 0 deletions