path: root/mm/slab_common.c
author		Tejun Heo <tj@kernel.org>	2017-02-23 00:41:36 +0100
committer	Linus Torvalds <torvalds@linux-foundation.org>	2017-02-23 01:41:27 +0100
commit		17cc4dfeda97636d67e83de8cd41940b65a93bc7 (patch)
tree		f260f03ae6ec4feee0bd0e84d39d38f93158eb2b /mm/slab_common.c
parent		slab: remove slub sysfs interface files early for empty memcg caches (diff)
slab: use memcg_kmem_cache_wq for slab destruction operations
If there's contention on slab_mutex, queueing the per-cache destruction work item on the system_wq can unnecessarily create and tie up a lot of kworkers.

Rename memcg_kmem_cache_create_wq to memcg_kmem_cache_wq, make it global, and use that workqueue for the destruction work items too. While at it, convert the workqueue from an unbound workqueue to a per-cpu one with concurrency limited to 1. It's generally preferable to use per-cpu workqueues, and a concurrency limit of 1 is safe enough.

This was suggested by Joonsoo Kim.

Link: http://lkml.kernel.org/r/20170117235411.9408-11-tj@kernel.org
Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: Jay Vana <jsvana@fb.com>
Acked-by: Vladimir Davydov <vdavydov@tarantool.org>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
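The allocation side of this conversion lands in mm/memcontrol.c and is outside this diffstat-limited view. A minimal sketch of what that hunk plausibly looks like, assuming the standard alloc_workqueue(fmt, flags, max_active) signature; the names and layout below are reconstructed from the changelog, not quoted from this page:

-	memcg_kmem_cache_create_wq =
-		alloc_workqueue("memcg_kmem_cache_create", WQ_UNBOUND, 1);
+	/* flags == 0 selects a regular per-cpu (bound) workqueue;
+	 * max_active == 1 caps it at one work item executing per CPU. */
+	memcg_kmem_cache_wq = alloc_workqueue("memcg_kmem_cache", 0, 1);

Dropping WQ_UNBOUND is what makes the workqueue per-cpu, and max_active of 1 is the concurrency limit the changelog refers to.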
Diffstat (limited to 'mm/slab_common.c')
-rw-r--r--	mm/slab_common.c	2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/slab_common.c b/mm/slab_common.c
index c549296c7981..23ff74e61838 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -659,7 +659,7 @@ static void kmemcg_deactivate_rcufn(struct rcu_head *head)
 	 * initialized eariler.
 	 */
 	INIT_WORK(&s->memcg_params.deact_work, kmemcg_deactivate_workfn);
-	schedule_work(&s->memcg_params.deact_work);
+	queue_work(memcg_kmem_cache_wq, &s->memcg_params.deact_work);
 }
 
 /**
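For reference, schedule_work() (the call replaced above) is a thin wrapper that queues onto the shared system_wq; this is its definition in include/linux/workqueue.h around this era, reproduced here as context rather than as part of the patch:

static inline bool schedule_work(struct work_struct *work)
{
	return queue_work(system_wq, work);
}

Switching to queue_work(memcg_kmem_cache_wq, ...) therefore moves the destruction work off the global pool and onto the dedicated, concurrency-limited workqueue described in the changelog.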