| author | Eric Wong <normalperson@yhbt.net> | 2013-05-01 00:27:38 +0200 |
|---|---|---|
| committer | Linus Torvalds <torvalds@linux-foundation.org> | 2013-05-01 02:04:04 +0200 |
| commit | 39732ca5af4b09f4db561149041ddad7211019a5 | |
| tree | 654d9dc83e3884358e02dd7d765a22423334840d /fs/eventpoll.c | |
| parent | kernel/timer.c: move some non timer related syscalls to kernel/sys.c | |
epoll: trim epitem by one cache line
It is common for epoll users to have thousands of epitems, so saving a
cache line on every allocation leads to large memory savings.
Since epitem allocations are cache-aligned, reducing sizeof(struct
epitem) from 136 bytes to 128 bytes will allow it to squeeze under a
cache line boundary on x86_64.
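For illustration, here is a minimal user-space sketch of why packing the embedded file/fd pair shrinks the enclosing struct. The names are toy stand-ins, not the kernel's definitions: dropping the pair's tail padding lets a following int member (epitem keeps an int nwait right after its ffd member) move up into the freed slot.

```c
#include <stdio.h>

/* Toy stand-ins for epoll_filefd (a pointer plus an int); the real
 * definitions live in fs/eventpoll.c. */
struct filefd {
	void *file;
	int fd;
};				/* x86_64: 16 bytes, 4 of them tail padding */

struct filefd_packed {
	void *file;
	int fd;
} __attribute__((packed));	/* 12 bytes, no tail padding */

/* An enclosing struct with an int right after the pair can reuse
 * the slot the padding used to occupy. */
struct outer        { struct filefd ffd;        int nwait; };
struct outer_packed { struct filefd_packed ffd; int nwait; };

int main(void)
{
	printf("inner: %zu -> %zu bytes\n",
	       sizeof(struct filefd), sizeof(struct filefd_packed));
	printf("outer: %zu -> %zu bytes\n",
	       sizeof(struct outer), sizeof(struct outer_packed));
	return 0;
}
```

On x86_64 with gcc this prints 16 -> 12 for the inner pair and 24 -> 16 for the enclosing struct: the same 8-byte saving that takes epitem from 136 to 128 bytes.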
Via /sys/kernel/slab/eventpoll_epi, I see the following changes on my
x86_64 Core2 Duo (which has 64-byte cache alignment):

```
object_size  : 192 => 128
objs_per_slab:  21 => 32
```

(With SLAB_HWCACHE_ALIGN, the 136-byte epitem was rounded up to 192 bytes, the next multiple of 64; at 128 bytes it fills exactly two cache lines, so a 4 KiB slab now fits 32 objects instead of 21.)
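To reproduce those numbers, the two sysfs attributes can simply be read (cat works just as well); a minimal sketch, assuming a SLUB kernel with the eventpoll_epi cache present:

```c
#include <stdio.h>

/* Print the slab statistics quoted above.  Values depend on the
 * running kernel and the CPU's cache alignment. */
static void show(const char *attr)
{
	char path[128], buf[32];
	FILE *f;

	snprintf(path, sizeof(path),
		 "/sys/kernel/slab/eventpoll_epi/%s", attr);
	f = fopen(path, "r");
	if (!f) {
		perror(path);
		return;
	}
	if (fgets(buf, sizeof(buf), f))
		printf("%-13s: %s", attr, buf);
	fclose(f);
}

int main(void)
{
	show("object_size");
	show("objs_per_slab");
	return 0;
}
```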
Also, add a BUILD_BUG_ON() to check for future accidental breakage.
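The check added below fails the build, rather than the boot, if epitem ever grows past two cache lines. A self-contained sketch of the idea, using a simplified stand-in for BUILD_BUG_ON (the in-tree macro is more elaborate) and a hypothetical 128-byte struct:

```c
#include <stdio.h>

/* Simplified stand-in for the kernel's BUILD_BUG_ON: a true condition
 * yields a negative array size, which the compiler rejects. */
#define MY_BUILD_BUG_ON(cond) ((void)sizeof(char[1 - 2 * !!(cond)]))

/* Hypothetical stand-in; the real epitem is in fs/eventpoll.c. */
struct epitem_stub {
	char bytes[128];
};

int main(void)
{
	/* Mirrors the patch: sizeof(void *) <= 8 holds on any arch with
	 * 64-bit or smaller pointers, so on all of those the build
	 * breaks if the struct outgrows two 64-byte cache lines. */
	MY_BUILD_BUG_ON(sizeof(void *) <= 8 && sizeof(struct epitem_stub) > 128);
	puts("epitem_stub fits in two cache lines");
	return 0;
}
```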
[akpm@linux-foundation.org: use __packed, for all architectures]
Signed-off-by: Eric Wong <normalperson@yhbt.net>
Cc: Davide Libenzi <davidel@xmailserver.org>
Cc: Al Viro <viro@ZenIV.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'fs/eventpoll.c')
| -rw-r--r-- | fs/eventpoll.c | 10 |
1 file changed, 9 insertions(+), 1 deletion(-)
```diff
diff --git a/fs/eventpoll.c b/fs/eventpoll.c
index 9fec1836057a..0e5eda068520 100644
--- a/fs/eventpoll.c
+++ b/fs/eventpoll.c
@@ -104,7 +104,7 @@ struct epoll_filefd {
 	struct file *file;
 	int fd;
-};
+} __packed;
 
 /*
  * Structure used to track possible nested calls, for too deep recursions
@@ -128,6 +128,8 @@ struct nested_calls {
 /*
  * Each file descriptor added to the eventpoll interface will
  * have an entry of this type linked to the "rbr" RB tree.
+ * Avoid increasing the size of this struct, there can be many thousands
+ * of these on a server and we do not want this to take another cache line.
  */
 struct epitem {
 	/* RB tree node used to link this structure to the eventpoll RB tree */
@@ -1964,6 +1966,12 @@ static int __init eventpoll_init(void)
 	/* Initialize the structure used to perform file's f_op->poll() calls */
 	ep_nested_calls_init(&poll_readywalk_ncalls);
 
+	/*
+	 * We can have many thousands of epitems, so prevent this from
+	 * using an extra cache line on 64-bit (and smaller) CPUs
+	 */
+	BUILD_BUG_ON(sizeof(void *) <= 8 && sizeof(struct epitem) > 128);
+
 	/* Allocates slab cache used to allocate "struct epitem" items */
 	epi_cache = kmem_cache_create("eventpoll_epi", sizeof(struct epitem),
 			0, SLAB_HWCACHE_ALIGN | SLAB_PANIC, NULL);
```
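For readers wondering about the __packed spelling in the first hunk: it is the kernel's portable shorthand for the packed attribute, kept in the compiler headers so the attribute syntax stays out of struct declarations. To the best of my recollection (the exact header has moved over time) it reduces to roughly:

```c
/* Approximate shape of the kernel definition for gcc-compatible
 * compilers; other compilers can map __packed to their own syntax. */
#define __packed __attribute__((packed))
```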