author    | Nick Piggin <npiggin@suse.de>                  | 2009-01-06 23:38:55 +0100
committer | Linus Torvalds <torvalds@linux-foundation.org> | 2009-01-07 00:58:58 +0100
commit    | bf3f3bc5e734706730c12a323f9b2068052aa1f0 (patch)
tree      | d93fb6beb0916cc10aeb5674578bfa3ac40371c9 /mm/memory.c
parent    | mm: report the MMU pagesize in /proc/pid/smaps (diff)
download  | linux-bf3f3bc5e734706730c12a323f9b2068052aa1f0.tar.xz linux-bf3f3bc5e734706730c12a323f9b2068052aa1f0.zip
mm: don't mark_page_accessed in fault path
Doing a mark_page_accessed at fault time, then doing SetPageReferenced at unmap time if the pte is young, has a number of problems.

mark_page_accessed is supposed to be roughly the equivalent of a young pte for unmapped references. Unfortunately it doesn't come with any context: after it has been called, reclaim doesn't know who touched the page or why. So calling mark_page_accessed not only adds extra LRU or PG_referenced manipulations for pages that are going to have pte_young ptes anyway, it also adds references that are hard to reconcile with vma-specific reference handling (e.g. a young pte in an MADV_SEQUENTIAL mapping may not be meant to count toward the page being referenced).
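The distinction matters because reclaim discovers pte_young references by walking a page's mappings, so the vma (and any hint such as MADV_SEQUENTIAL) is in hand at that point, whereas PG_referenced is a single anonymous bit on the page. The following is a hypothetical, heavily simplified userspace model of that difference, not kernel code: the structures, the sequential_hint field, and count_references() are all made up for illustration.

```c
#include <stdbool.h>
#include <stdio.h>

/* hypothetical stand-ins for a mapping and a page */
struct vma  { bool sequential_hint; };   /* e.g. set by MADV_SEQUENTIAL */
struct page {
	bool referenced;                 /* PG_referenced: no owner recorded */
	bool pte_young[2];               /* young bit, one per mapping here */
};

/* model of a reclaim-side reference check over all mappings of a page */
static int count_references(struct page *page, struct vma *vmas, int nr)
{
	int referenced = 0;

	for (int i = 0; i < nr; i++) {
		if (!page->pte_young[i])
			continue;
		/* the vma is known here, so its hint can veto the reference */
		if (vmas[i].sequential_hint)
			continue;
		referenced++;
	}

	/* PG_referenced arrives with no vma attached: no hint can apply */
	if (page->referenced)
		referenced++;

	return referenced;
}

int main(void)
{
	struct vma vmas[2] = { { .sequential_hint = true }, { .sequential_hint = false } };
	struct page page = { .referenced = true, .pte_young = { true, false } };

	/* the young pte in the sequential vma is ignored, but the
	 * context-free PG_referenced bit still counts */
	printf("references seen by reclaim: %d\n", count_references(&page, vmas, 2));
	return 0;
}
```

In the real kernel the per-vma decision is made while reclaim walks the mappings; the point of the sketch is only that a bare PG_referenced bit carries no vma to consult.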
Then, simply doing SetPageReferenced when zapping a pte and finding it is young is not a really good solution either: SetPageReferenced does not correctly promote the page to the active list, for example. So after removing mark_page_accessed from the fault path, several mmap()+touch+munmap() cycles would produce a very different result from several read(2) calls, which is not desirable.
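To make the active-list point concrete, here is a hypothetical, self-contained userspace model, not kernel code: it borrows the names SetPageReferenced and mark_page_accessed but reduces the page and the LRU lists to two booleans. It models the usual two-stage aging, in which a second access via mark_page_accessed promotes an inactive, already-referenced page to the active list, while SetPageReferenced alone never moves the page.

```c
#include <stdbool.h>
#include <stdio.h>

struct page {
	bool referenced;        /* models PG_referenced */
	bool active;            /* models which LRU list the page sits on */
};

/* only records a reference; never moves the page between lists */
static void SetPageReferenced(struct page *page)
{
	page->referenced = true;
}

/* two-stage aging: a second access promotes the page to the active list */
static void mark_page_accessed(struct page *page)
{
	if (!page->active && page->referenced) {
		page->active = true;
		page->referenced = false;
	} else {
		page->referenced = true;
	}
}

int main(void)
{
	struct page a = { 0 }, b = { 0 };

	/* repeated touch+zap cycles that only ever SetPageReferenced() */
	SetPageReferenced(&a);
	SetPageReferenced(&a);
	SetPageReferenced(&a);

	/* repeated read(2)-style accesses that mark_page_accessed() */
	mark_page_accessed(&b);
	mark_page_accessed(&b);

	printf("SetPageReferenced only: active=%d\n", a.active);   /* stays 0 */
	printf("mark_page_accessed x2:  active=%d\n", b.active);   /* becomes 1 */
	return 0;
}
```

Run as written, the first page stays on the inactive list no matter how often SetPageReferenced is repeated, while the second reaches the active list after two mark_page_accessed calls.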
Signed-off-by: Nick Piggin <npiggin@suse.de>
Acked-by: Johannes Weiner <hannes@saeurebad.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/memory.c')
-rw-r--r-- | mm/memory.c | 2 |
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/memory.c b/mm/memory.c
index 7b9db658aca2..5e0e91cc6b67 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -768,7 +768,7 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 			if (pte_dirty(ptent))
 				set_page_dirty(page);
 			if (pte_young(ptent))
-				SetPageReferenced(page);
+				mark_page_accessed(page);
 			file_rss--;
 		}
 		page_remove_rmap(page, vma);