author | KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> | 2009-12-15 02:59:45 +0100 |
---|---|---|
committer | Linus Torvalds <torvalds@linux-foundation.org> | 2009-12-15 17:53:20 +0100 |
commit | caed0f486e582eeeb6e3546417fd758230fe4ad9 (patch) | |
tree | 203d4724dcfaae3ad4b349e1971bb3efc0da7db2 /mm/rmap.c | |
parent | mm: fix section mismatch in memory_hotplug.c (diff) | |
mm: simplify try_to_unmap_one()
SWAP_MLOCK means "we marked the page as PG_mlocked, please move it to the
unevictable LRU". So the following code is easy to misread:
```c
	if (vma->vm_flags & VM_LOCKED) {
		ret = SWAP_MLOCK;
		goto out_unmap;
	}
```
Plus, if the VMA doesn't have VM_LOCKED, we don't need to check whether
mlock_vma_page() needs to be called at all (a toy sketch of the resulting
control flow follows the sign-offs below).
Also, add some commentary to try_to_unmap_one().
Acked-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
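To make the restructuring concrete, here is a minimal userspace sketch of the new control flow. It is a toy model, not kernel code: the toy_* names, struct toy_vma and the two-value enum are invented stand-ins for try_to_unmap_one(), down_read_trylock(&vma->vm_mm->mmap_sem), mlock_vma_page() and the real SWAP_* return codes, and all of the actual unmapping work is elided. The point it illustrates is the one made above: the VM_LOCKED case jumps to a dedicated out_mlock exit instead of smuggling the decision through ret == SWAP_MLOCK.

```c
/* Toy model of the patched control flow; every identifier is a stand-in,
 * not the kernel's. */
#include <stdbool.h>
#include <stdio.h>

enum unmap_ret { SWAP_AGAIN, SWAP_MLOCK };	/* only the two codes the sketch needs */

struct toy_vma { bool vm_locked; };		/* stands in for vma->vm_flags & VM_LOCKED */

static bool toy_trylock_mmap(struct toy_vma *vma) { (void)vma; return true; }
static void toy_unlock_mmap(struct toy_vma *vma) { (void)vma; }
static void toy_mlock_vma_page(void) { puts("page moved to the unevictable LRU"); }

static enum unmap_ret toy_try_to_unmap_one(struct toy_vma *vma)
{
	enum unmap_ret ret = SWAP_AGAIN;

	if (vma->vm_locked)
		goto out_mlock;		/* dedicated exit, no SWAP_MLOCK juggling */

	/* ... the actual unmapping work is elided ... */
	return ret;

out_mlock:
	/* Only on this path do we need to decide whether to mlock the page. */
	if (toy_trylock_mmap(vma)) {
		if (vma->vm_locked) {
			toy_mlock_vma_page();
			ret = SWAP_MLOCK;
		}
		toy_unlock_mmap(vma);
	}
	return ret;
}

int main(void)
{
	struct toy_vma vma = { .vm_locked = true };
	return toy_try_to_unmap_one(&vma) == SWAP_MLOCK ? 0 : 1;
}
```

Built with a plain `cc toy_unmap.c`, it prints the unevictable-LRU message and exits 0, i.e. the SWAP_MLOCK case.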
Diffstat (limited to 'mm/rmap.c')
-rw-r--r-- | mm/rmap.c | 35 |
1 file changed, 22 insertions, 13 deletions
```diff
diff --git a/mm/rmap.c b/mm/rmap.c
index c81bedd7d527..98135dbd25ba 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -789,10 +789,9 @@ int try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 	 * skipped over this mm) then we should reactivate it.
 	 */
 	if (!(flags & TTU_IGNORE_MLOCK)) {
-		if (vma->vm_flags & VM_LOCKED) {
-			ret = SWAP_MLOCK;
-			goto out_unmap;
-		}
+		if (vma->vm_flags & VM_LOCKED)
+			goto out_mlock;
+
 		if (TTU_ACTION(flags) == TTU_MUNLOCK)
 			goto out_unmap;
 	}
@@ -865,18 +864,28 @@ int try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 
 out_unmap:
 	pte_unmap_unlock(pte, ptl);
+out:
+	return ret;
 
-	if (ret == SWAP_MLOCK) {
-		ret = SWAP_AGAIN;
-		if (down_read_trylock(&vma->vm_mm->mmap_sem)) {
-			if (vma->vm_flags & VM_LOCKED) {
-				mlock_vma_page(page);
-				ret = SWAP_MLOCK;
-			}
-			up_read(&vma->vm_mm->mmap_sem);
+out_mlock:
+	pte_unmap_unlock(pte, ptl);
+
+
+	/*
+	 * We need mmap_sem locking, Otherwise VM_LOCKED check makes
+	 * unstable result and race. Plus, We can't wait here because
+	 * we now hold anon_vma->lock or mapping->i_mmap_lock.
+	 * if trylock failed, the page remain in evictable lru and later
+	 * vmscan could retry to move the page to unevictable lru if the
+	 * page is actually mlocked.
+	 */
+	if (down_read_trylock(&vma->vm_mm->mmap_sem)) {
+		if (vma->vm_flags & VM_LOCKED) {
+			mlock_vma_page(page);
+			ret = SWAP_MLOCK;
 		}
+		up_read(&vma->vm_mm->mmap_sem);
 	}
-out:
 	return ret;
 }
```
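The new out_mlock: path takes mmap_sem with down_read_trylock() rather than down_read() because, as the added comment explains, the caller already holds anon_vma->lock or mapping->i_mmap_lock and so cannot wait here; if the trylock fails, the page simply stays on the evictable LRU and vmscan can retry the move later. Below is a rough userspace analogue of that pattern, assuming POSIX threads: pthread_rwlock_t stands in for mmap_sem, the vma_is_locked flag for vma->vm_flags & VM_LOCKED, and nothing in it is kernel API.

```c
/* Userspace analogue of the out_mlock: path; build with: cc -pthread demo.c */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_rwlock_t mmap_sem = PTHREAD_RWLOCK_INITIALIZER;	/* stand-in for mm->mmap_sem */
static bool vma_is_locked = true;				/* stand-in for VM_LOCKED */

/*
 * Modelled constraint: this runs while another, non-sleeping lock is held,
 * so we may only *try* to take the rwlock; blocking is not allowed here.
 */
static void handle_mlocked_page(void)
{
	if (pthread_rwlock_tryrdlock(&mmap_sem) == 0) {
		if (vma_is_locked)
			puts("mlock_vma_page(): move page to the unevictable LRU");
		pthread_rwlock_unlock(&mmap_sem);
	} else {
		puts("trylock failed: leave page on the evictable LRU; vmscan retries later");
	}
}

int main(void)
{
	handle_mlocked_page();
	return 0;
}
```

The failure branch only prints a message here; in the kernel the equivalent "do nothing now and let vmscan find the page again" is exactly what makes a non-blocking trylock sufficient on this path.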