author     Dave Chinner <dchinner@redhat.com>       2013-08-28 02:18:01 +0200
committer  Al Viro <viro@zeniv.linux.org.uk>        2013-09-11 00:56:30 +0200
commit     5cedf721a7cdb54e9222133516c916210d836470 (patch)
tree       ad88b1e86956e75c173fe70206fa9c40d3d2a86f /mm
parent     list_lru: per-node list infrastructure (diff)
download   linux-5cedf721a7cdb54e9222133516c916210d836470.tar.xz
           linux-5cedf721a7cdb54e9222133516c916210d836470.zip
list_lru: fix broken LRU_RETRY behaviour
The LRU_RETRY code assumes that the list traversal status is preserved
after we have dropped and regained the list lock. Unfortunately, this is
not a valid assumption, and it can lead to racing traversals isolating
objects that the other traversal expects to be the next item on the list,
as the sketch below illustrates.
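To make the race concrete, here is a minimal userspace sketch of one bad
interleaving. The list helpers, struct obj, and the sequential "two
traversals" framing are simplified stand-ins for the kernel code, not the
real API: traversal 1 caches its next pointer, the lock is dropped, and
traversal 2 moves that cached object to a private dispose list before
traversal 1 resumes.

#include <stdio.h>
#include <stddef.h>

/* Minimal stand-ins for the kernel's struct list_head and helpers. */
struct list_head { struct list_head *next, *prev; };

static void INIT_LIST_HEAD(struct list_head *h) { h->next = h->prev = h; }

static void list_del_init(struct list_head *e)
{
        e->prev->next = e->next;
        e->next->prev = e->prev;
        INIT_LIST_HEAD(e);
}

static void list_add_tail(struct list_head *e, struct list_head *h)
{
        e->prev = h->prev;
        e->next = h;
        h->prev->next = e;
        h->prev = e;
}

#define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

struct obj { struct list_head lru; int id; };

int main(void)
{
        struct list_head lru, dispose;
        struct obj a = { .id = 1 }, b = { .id = 2 };

        INIT_LIST_HEAD(&lru);
        INIT_LIST_HEAD(&dispose);
        list_add_tail(&a.lru, &lru);
        list_add_tail(&b.lru, &lru);

        /*
         * Traversal 1 starts a list_for_each_safe()-style walk: it looks
         * at 'a' and caches the next cursor, which points at 'b'.
         */
        struct list_head *item = lru.next;      /* a */
        struct list_head *n = item->next;       /* cached cursor: b */

        /*
         * The list lock is now dropped (simulated here by simply letting
         * "another CPU" run). Traversal 2 takes the lock and moves 'b'
         * onto its private dispose list.
         */
        list_del_init(&b.lru);
        list_add_tail(&b.lru, &dispose);

        /*
         * Traversal 1 regains the lock and, in the old code, carries on
         * with its stale cursor: it "isolates" 'b' even though 'b' is no
         * longer on the LRU at all.
         */
        item = n;
        printf("traversal 1 resumes at object %d via a stale cursor\n",
               container_of(item, struct obj, lru)->id);
        return 0;
}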
This is causing problems with the inode cache shrinker isolation, with
races resulting in an inode on a dispose list being "isolated" because a
racing traversal still thinks it is on the LRU. The inode is then never
reclaimed and that causes hangs if a subsequent lookup on that inode
occurs.
Fix it by always restarting the list walk on an LRU_RETRY return from the
isolate callback. Avoid the livelock the current code was trying to
prevent by always decrementing the nr_to_walk counter on retries, so that
even if we keep hitting the same item on the list we'll eventually stop
trying to walk and exit out of the situation causing the problem.
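The fixed control flow can be modelled in userspace with a deliberately
hostile callback that always asks for a retry -- the worst case the walker
has to survive. The enum values mirror the kernel's lru_status, but walk()
and always_retry() below are illustrative stand-ins, not the kernel API:
charging the budget before each isolate call means even endless retries
terminate.

#include <stdio.h>

enum lru_status { LRU_REMOVED, LRU_ROTATE, LRU_SKIP, LRU_RETRY };

/* Hypothetical callback that always fails and asks for a retry. */
static enum lru_status always_retry(int item)
{
        (void)item;
        return LRU_RETRY;
}

/*
 * Skeleton of the fixed walk: the budget is charged before the callback
 * runs, and LRU_RETRY restarts the traversal from scratch instead of
 * trusting a stale cursor.
 */
static unsigned long walk(int nitems, unsigned long *nr_to_walk,
                          enum lru_status (*isolate)(int))
{
        unsigned long isolated = 0;
restart:
        for (int i = 0; i < nitems; i++) {
                /* charge the budget first so endless retries can't livelock */
                if (--(*nr_to_walk) == 0)
                        return isolated;

                switch (isolate(i)) {
                case LRU_REMOVED:
                        isolated++;
                        break;
                case LRU_ROTATE:
                case LRU_SKIP:
                        break;
                case LRU_RETRY:
                        /* lock was dropped: the cursor is invalid, start over */
                        goto restart;
                }
        }
        return isolated;
}

int main(void)
{
        unsigned long budget = 32;

        walk(100, &budget, always_retry);
        printf("walk terminated with budget spent down to %lu\n", budget);
        return 0;
}

The trade-off is that skipped and retried items now consume walk budget
too, so a walk may inspect fewer objects than the caller asked for; in
exchange the walker gets a hard upper bound on the work done while the
lock can be dropped and retaken under it.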
Reported-by: Michal Hocko <mhocko@suse.cz>
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Cc: Glauber Costa <glommer@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Diffstat (limited to 'mm')
-rw-r--r-- | mm/list_lru.c | 29
1 file changed, 12 insertions, 17 deletions
diff --git a/mm/list_lru.c b/mm/list_lru.c
index 1efe4ecc02b1..e77c29f4c243 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -73,19 +73,19 @@ list_lru_walk_node(struct list_lru *lru, int nid, list_lru_walk_cb isolate,
 	struct list_lru_node *nlru = &lru->node[nid];
 	struct list_head *item, *n;
 	unsigned long isolated = 0;
-	/*
-	 * If we don't keep state of at which pass we are, we can loop at
-	 * LRU_RETRY, since we have no guarantees that the caller will be able
-	 * to do something other than retry on the next pass. We handle this by
-	 * allowing at most one retry per object. This should not be altered
-	 * by any condition other than LRU_RETRY.
-	 */
-	bool first_pass = true;
 
 	spin_lock(&nlru->lock);
 restart:
 	list_for_each_safe(item, n, &nlru->list) {
 		enum lru_status ret;
+
+		/*
+		 * decrement nr_to_walk first so that we don't livelock if we
+		 * get stuck on large numbers of LRU_RETRY items
+		 */
+		if (--(*nr_to_walk) == 0)
+			break;
+
 		ret = isolate(item, &nlru->lock, cb_arg);
 		switch (ret) {
 		case LRU_REMOVED:
@@ -100,19 +100,14 @@ restart:
 		case LRU_SKIP:
 			break;
 		case LRU_RETRY:
-			if (!first_pass) {
-				first_pass = true;
-				break;
-			}
-			first_pass = false;
+			/*
+			 * The lru lock has been dropped, our list traversal is
+			 * now invalid and so we have to restart from scratch.
+			 */
 			goto restart;
 		default:
 			BUG();
 		}
-
-		if ((*nr_to_walk)-- == 0)
-			break;
-
 	}
 
 	spin_unlock(&nlru->lock);