| author | Mandeep Singh Baines <msb@chromium.org> | 2012-01-04 06:18:31 +0100 |
|---|---|---|
| committer | Tejun Heo <tj@kernel.org> | 2012-01-21 00:58:13 +0100 |
| commit | fb5d2b4cfc24963d0e8a7df57de1ecffa10a04cf | |
| tree | 4603496bbe19740067195bf0669f7be484dbc950 | |
| parent | cgroup: simplify double-check locking in cgroup_attach_proc | |
cgroup: replace tasklist_lock with rcu_read_lock
We can replace the tasklist_lock in cgroup_attach_proc with an
rcu_read_lock().
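
The property this relies on: an object read under rcu_read_lock() cannot be freed until the matching rcu_read_unlock(), so even a thread that turns PF_EXITING mid-walk stays valid for the duration of the snapshot. Below is a minimal user-space analogy of that guarantee using the liburcu (userspace-rcu) library; it is illustrative only, not part of the commit, and the struct and variable names are invented:

```c
/* Build (assumes liburcu is installed): gcc rcu_demo.c -o rcu_demo -lurcu -lpthread */
#include <pthread.h>
#include <stdlib.h>
#include <urcu.h>               /* userspace RCU: rcu_read_lock() etc. */

struct task_snapshot { int pid; };      /* hypothetical shared object */

static struct task_snapshot *shared;    /* RCU-protected pointer */

static void *reader(void *arg)
{
	(void)arg;
	rcu_register_thread();          /* every reader thread must register */
	for (int i = 0; i < 100000; i++) {
		rcu_read_lock();
		struct task_snapshot *t = rcu_dereference(shared);
		if (t)
			(void)t->pid;   /* safe: t cannot be freed inside the read section */
		rcu_read_unlock();
	}
	rcu_unregister_thread();
	return NULL;
}

int main(void)
{
	rcu_register_thread();

	shared = malloc(sizeof(*shared));
	shared->pid = 1234;

	pthread_t t;
	pthread_create(&t, NULL, reader, NULL);

	struct task_snapshot *old = shared;
	rcu_assign_pointer(shared, NULL);   /* unpublish the object */
	synchronize_rcu();                  /* wait until no reader can still hold it */
	free(old);                          /* only now is freeing safe */

	pthread_join(t, NULL);
	rcu_unregister_thread();
	return 0;
}
```

Note that the writer may free the object only after synchronize_rcu() returns. This is also why the patch keeps the read-side critical section as small as possible (per the V4 feedback below): kernel code must not block inside an RCU read-side critical section.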
Changes in V4:
* https://lkml.org/lkml/2011/12/23/284 (Frederic Weisbecker)
  * Minimize size of rcu_read_lock critical section
  * Add comment
* https://lkml.org/lkml/2011/12/26/136 (Li Zefan)
  * Split into two patches

Changes in V3:
* https://lkml.org/lkml/2011/12/22/419 (Frederic Weisbecker)
  * Add an rcu_read_lock to protect against exit

Changes in V2:
* https://lkml.org/lkml/2011/12/22/86 (Tejun Heo)
  * Use a goto instead of returning -EAGAIN
Suggested-by: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Mandeep Singh Baines <msb@chromium.org>
Acked-by: Li Zefan <lizf@cn.fujitsu.com>
Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: containers@lists.linux-foundation.org
Cc: cgroups@vger.kernel.org
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Paul Menage <paul@paulmenage.org>
Diffstat (limited to 'kernel')
-rw-r--r-- kernel/cgroup.c | 10 +++++++---
1 file changed, 7 insertions(+), 3 deletions(-)
```diff
diff --git a/kernel/cgroup.c b/kernel/cgroup.c
index 12c07e8fd69c..1626152dcc1e 100644
--- a/kernel/cgroup.c
+++ b/kernel/cgroup.c
@@ -2102,10 +2102,14 @@ static int cgroup_attach_proc(struct cgroup *cgrp, struct task_struct *leader)
 	if (retval)
 		goto out_free_group_list;
 
-	/* prevent changes to the threadgroup list while we take a snapshot. */
-	read_lock(&tasklist_lock);
 	tsk = leader;
 	i = 0;
+	/*
+	 * Prevent freeing of tasks while we take a snapshot. Tasks that are
+	 * already PF_EXITING could be freed from underneath us unless we
+	 * take an rcu_read_lock.
+	 */
+	rcu_read_lock();
 	do {
 		struct task_and_cgroup ent;
 
@@ -2128,11 +2132,11 @@ static int cgroup_attach_proc(struct cgroup *cgrp, struct task_struct *leader)
 		BUG_ON(retval != 0);
 		i++;
 	} while_each_thread(leader, tsk);
+	rcu_read_unlock();
 	/* remember the number of threads in the array for later. */
 	group_size = i;
 	tset.tc_array = group;
 	tset.tc_array_len = group_size;
-	read_unlock(&tasklist_lock);
 
 	/* methods shouldn't be called if no task is actually migrating */
 	retval = 0;
```
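
Why rcu_read_lock() is sufficient here: the kernel frees a task_struct through call_rcu() (delayed_put_task_struct() in kernel/exit.c), so the actual free is deferred until every CPU has left its read-side critical section. The sketch below mimics that deferred-free side in user space with liburcu's call_rcu(); again a hedged illustration with invented names (item, free_item), not kernel code, and it assumes the classic <urcu.h> API exposes call_rcu() and rcu_barrier():

```c
/* Build (assumes liburcu is installed): gcc callrcu_demo.c -o callrcu_demo -lurcu */
#include <stdio.h>
#include <stdlib.h>
#include <urcu.h>               /* rcu_read_lock(), call_rcu(), ... */
#include <urcu/compiler.h>      /* caa_container_of() */

struct item {
	int value;
	struct rcu_head rcu;    /* reclamation hook handed to call_rcu() */
};

static struct item *published;  /* RCU-protected pointer */

static void free_item(struct rcu_head *head)
{
	free(caa_container_of(head, struct item, rcu));
}

int main(void)
{
	rcu_register_thread();

	struct item *it = malloc(sizeof(*it));
	it->value = 42;
	rcu_assign_pointer(published, it);

	rcu_read_lock();                        /* enter read-side critical section */
	struct item *seen = rcu_dereference(published);

	rcu_assign_pointer(published, NULL);    /* "release" the item ... */
	call_rcu(&it->rcu, free_item);          /* ... but defer the free past the grace period */

	printf("still valid: %d\n", seen->value);  /* no use-after-free while inside the section */
	rcu_read_unlock();                      /* grace period may now end; free_item() runs */

	rcu_barrier();                          /* drain pending callbacks before exit */
	rcu_unregister_thread();
	return 0;
}
```

This is the same shape as the patched loop above: the snapshot runs entirely inside rcu_read_lock()/rcu_read_unlock(), so a thread that exits concurrently can be unlinked but not yet freed while the walk is in progress.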