path: root/mm/shmem.c
author		Johannes Weiner <hannes@cmpxchg.org>	2020-06-04 01:01:34 +0200
committer	Linus Torvalds <torvalds@linux-foundation.org>	2020-06-04 05:09:47 +0200
commit		14235ab36019d169f5eb5bf0c064c5b12ca1bf46 (patch)
tree		123b8f8b312d3e4c4adab0514331eaa8355ea0fa	/mm/shmem.c
parent		mm: memcontrol: drop @compound parameter from memcg charging API (diff)
mm: shmem: remove rare optimization when swapin races with hole punching
Commit 215c02bc33bb ("tmpfs: fix shmem_getpage_gfp() VM_BUG_ON") recognized that hole punching can race with swapin and removed the BUG_ON() for a truncated entry from the swapin path.

The patch also added a swapcache deletion to optimize this rare case: Since swapin has the page locked, and free_swap_and_cache() merely trylocks, this situation can leave the page stranded in swapcache. Usually, page reclaim picks up stale swapcache pages, and the race can happen at any other time when the page is locked. (The same happens for non-shmem swapin racing with page table zapping.) The thinking here was: we already observed the race and we have the page locked, we may as well do the cleanup instead of waiting for reclaim.

However, this optimization complicates the next patch, which moves the cgroup charging code around. As this is just a minor speedup for a race condition so rare that it required a fuzzer to trigger the original BUG_ON(), it's no longer worth the complication.

Suggested-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Hugh Dickins <hughd@google.com>
Cc: Alex Shi <alex.shi@linux.alibaba.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Roman Gushchin <guro@fb.com>
Cc: Balbir Singh <bsingharora@gmail.com>
Link: http://lkml.kernel.org/r/20200511181056.GA339505@cmpxchg.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
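To make the race concrete: the swapin path holds the page lock for the whole window, while free_swap_and_cache(), called from the hole punch, only trylocks; when that trylock fails, the punch side bails out and the stale page stays in swapcache until reclaim finds it. Below is a minimal userspace sketch of that pattern, using a pthread mutex as a stand-in for the page lock. It is an analogy, not kernel code, and every name in it (page_lock, punch_side, in_swapcache) is made up for illustration.

	/*
	 * Userspace analogy of the trylock race: the punch side skips its
	 * cleanup because the "page lock" is already held by the swapin side.
	 * Build with: cc -pthread sketch.c
	 */
	#include <pthread.h>
	#include <stdbool.h>
	#include <stdio.h>

	static pthread_mutex_t page_lock = PTHREAD_MUTEX_INITIALIZER;
	static bool in_swapcache = true;	/* the entry the punch would clean up */

	/* Rough analogue of free_swap_and_cache(): only trylocks, and gives
	 * up on the cleanup when the lock is held by the swapin path. */
	static void *punch_side(void *arg)
	{
		if (pthread_mutex_trylock(&page_lock) != 0) {
			printf("punch: lock busy, stale page left in swapcache\n");
			return NULL;
		}
		in_swapcache = false;		/* analogue of delete_from_swap_cache() */
		pthread_mutex_unlock(&page_lock);
		return NULL;
	}

	int main(void)
	{
		pthread_t punch;

		/* The swapin side holds the "page lock" across the race window. */
		pthread_mutex_lock(&page_lock);
		pthread_create(&punch, NULL, punch_side, NULL);
		pthread_join(punch, NULL);	/* hole punch runs while we hold it */
		pthread_mutex_unlock(&page_lock);

		printf("entry still in swapcache: %s\n", in_swapcache ? "yes" : "no");
		return 0;
	}

Before this patch, the swapin side compensated for the skipped cleanup by calling delete_from_swap_cache() itself when it noticed the truncated entry; after it, the stale page is simply left for page reclaim, as the diff below shows.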
Diffstat (limited to 'mm/shmem.c')
-rw-r--r--	mm/shmem.c	25
1 file changed, 7 insertions(+), 18 deletions(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index d505b6cce4ab..729bbb3513cd 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1665,27 +1665,16 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index,
 	}
 
 	error = mem_cgroup_try_charge_delay(page, charge_mm, gfp, &memcg);
-	if (!error) {
-		error = shmem_add_to_page_cache(page, mapping, index,
-						swp_to_radix_entry(swap), gfp);
-		/*
-		 * We already confirmed swap under page lock, and make
-		 * no memory allocation here, so usually no possibility
-		 * of error; but free_swap_and_cache() only trylocks a
-		 * page, so it is just possible that the entry has been
-		 * truncated or holepunched since swap was confirmed.
-		 * shmem_undo_range() will have done some of the
-		 * unaccounting, now delete_from_swap_cache() will do
-		 * the rest.
-		 */
-		if (error) {
-			mem_cgroup_cancel_charge(page, memcg);
-			delete_from_swap_cache(page);
-		}
-	}
 	if (error)
 		goto failed;
+	error = shmem_add_to_page_cache(page, mapping, index,
+					swp_to_radix_entry(swap), gfp);
+	if (error) {
+		mem_cgroup_cancel_charge(page, memcg);
+		goto failed;
+	}
+
 	mem_cgroup_commit_charge(page, memcg, true);
 
 	spin_lock_irq(&info->lock);