authorDave Chinner <dchinner@redhat.com>2011-07-08 06:14:37 +0200
committerAl Viro <viro@zeniv.linux.org.uk>2011-07-20 07:44:32 +0200
commite9299f5058595a655c3b207cda9635e28b9197e6 (patch)
treeb31a4dc5cab98ee1701313f45e92e583c2d76f63 /include
parentvmscan: reduce wind up shrinker->nr when shrinker can't do work (diff)
downloadlinux-e9299f5058595a655c3b207cda9635e28b9197e6.tar.xz
linux-e9299f5058595a655c3b207cda9635e28b9197e6.zip
vmscan: add customisable shrinker batch size
For shrinkers that have their own cond_resched* calls, having shrink_slab break the work down into small batches is not particularly efficient. Add a custom batch size field to the struct shrinker so that shrinkers can use a larger batch size if they desire. A value of zero (uninitialised) means "use the default", so behaviour is unchanged by this patch.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Diffstat (limited to 'include')
-rw-r--r--	include/linux/mm.h | 1 +
1 file changed, 1 insertion, 0 deletions
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 9670f71d7be9..9b9777ac726d 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1150,6 +1150,7 @@ struct shrink_control {
 struct shrinker {
 	int (*shrink)(struct shrinker *, struct shrink_control *sc);
 	int seeks;	/* seeks to recreate an obj */
+	long batch;	/* reclaim batch size, 0 = default */
 	/* These are for internal use */
 	struct list_head list;