author    | Kirill A. Shutemov <kirill.shutemov@linux.intel.com> | 2016-07-27 00:25:23 +0200
committer | Linus Torvalds <torvalds@linux-foundation.org>       | 2016-07-27 01:19:19 +0200
commit    | 7267ec008b5cd8b3579e188b1ff238815643e372 (patch)
tree      | 06e45eb3b7b951799e452403dfaf77fefb726b54 /mm/filemap.c
parent    | mm: introduce fault_env (diff)
mm: postpone page table allocation until we have page to map
The idea (and most of the code) is again borrowed from Hugh's patchset on
huge tmpfs [1].
Instead of allocating the pte page table upfront, we postpone this until we
have a page to map in hand. This approach opens up the possibility of mapping
the page as huge if the filesystem supports it.
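One way to picture the lazy allocation (purely as illustration; this is a made-up
user-space sketch, and struct fault, struct page and set_pte_lazily() here are not
the kernel's types or API) is a setter that only allocates the backing pte table
once a concrete small page has to be installed, so a huge page never triggers the
allocation at all:

/*
 * Hypothetical user-space model of lazy page-table allocation; these
 * structs and helpers are invented for illustration and are not kernel APIs.
 */
#include <stdio.h>
#include <stdlib.h>

struct page  { int huge; long pfn; };
struct fault { long *pte_table; };      /* NULL until a small page forces it */

/* Map 'page' for 'f'; the pte table is allocated only here, and only if the
 * page cannot be mapped with a single huge entry. */
static int set_pte_lazily(struct fault *f, struct page *page)
{
        if (page->huge) {
                printf("pfn %ld mapped as huge, no pte table allocated\n", page->pfn);
                return 0;
        }
        if (!f->pte_table) {                            /* postponed allocation */
                f->pte_table = calloc(512, sizeof(long));
                if (!f->pte_table)
                        return -1;
        }
        f->pte_table[page->pfn % 512] = page->pfn;
        printf("pfn %ld mapped through the lazily allocated pte table\n", page->pfn);
        return 0;
}

int main(void)
{
        struct fault f = { 0 };
        struct page huge_page  = { .huge = 1, .pfn = 42 };
        struct page small_page = { .huge = 0, .pfn = 7 };

        set_pte_lazily(&f, &huge_page);   /* table is never touched */
        set_pte_lazily(&f, &small_page);  /* table appears only now */
        free(f.pte_table);
        return 0;
}

The point of the sketch is only the ordering: the page comes first, the page
table second.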
Compared to Hugh's patch, I've pushed page table allocation a bit further:
into do_set_pte(). This way we can postpone allocation even in the
fault-around case without moving do_fault_around() after __do_fault().
do_set_pte() got renamed to alloc_set_pte(), as it can now allocate a page
table if required.
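For the fault-around angle, a similarly hypothetical sketch (fault_ctx,
set_pte_lazy(), fault_around() and the fixed 512-entry table are all invented for
the example) shows why the loop does not need to move: because allocation lives
inside the setter, fault-around can walk the surrounding page cache slots before
any page table exists, allocates nothing when no page is found, and can stop early
once a huge mapping covers the range, much like the pmd_trans_huge() check added
in the diff below:

/*
 * Hypothetical user-space model of fault-around on top of a lazy pte setter;
 * none of these names exist in the kernel.
 */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct cached_page { bool present; bool huge; };
struct fault_ctx {
        long *pte_table;    /* stays NULL until a page is actually mapped  */
        bool  pmd_is_huge;  /* set once a huge mapping covers the range    */
};

static int set_pte_lazy(struct fault_ctx *ctx, long index, struct cached_page *p)
{
        if (p->huge) {
                ctx->pmd_is_huge = true;                /* no pte table needed */
                return 0;
        }
        if (!ctx->pte_table) {                          /* postponed allocation */
                ctx->pte_table = calloc(512, sizeof(long));
                if (!ctx->pte_table)
                        return -1;
        }
        ctx->pte_table[index % 512] = index + 1;        /* stand-in "pte" value */
        return 0;
}

/*
 * Fault-around: opportunistically map the pages surrounding the faulting
 * index. Because allocation lives inside set_pte_lazy(), this loop can run
 * before any page table exists and allocates nothing if no page is found.
 */
static void fault_around(struct fault_ctx *ctx, struct cached_page *cache,
                         long start, long end)
{
        for (long i = start; i <= end; i++) {
                if (!cache[i].present)
                        continue;
                if (set_pte_lazy(ctx, i, &cache[i]))
                        break;
                if (ctx->pmd_is_huge)   /* analogous to the pmd_trans_huge() check */
                        break;
        }
}

int main(void)
{
        struct cached_page cache[8] = { [2] = { .present = true },
                                        [5] = { .present = true } };
        struct fault_ctx ctx = { 0 };

        fault_around(&ctx, cache, 0, 7);
        printf("pte table %s allocated\n", ctx.pte_table ? "was" : "was not");
        free(ctx.pte_table);
        return 0;
}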
[1] http://lkml.kernel.org/r/alpine.LSU.2.11.1502202015090.14414@eggly.anvils
Link: http://lkml.kernel.org/r/1466021202-61880-10-git-send-email-kirill.shutemov@linux.intel.com
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/filemap.c')
-rw-r--r-- | mm/filemap.c | 16
1 file changed, 10 insertions(+), 6 deletions(-)
diff --git a/mm/filemap.c b/mm/filemap.c
index 54d5318f8d3f..1efd2994dccf 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2144,11 +2144,6 @@ void filemap_map_pages(struct fault_env *fe,
                        start_pgoff) {
                if (iter.index > end_pgoff)
                        break;
-               fe->pte += iter.index - last_pgoff;
-               fe->address += (iter.index - last_pgoff) << PAGE_SHIFT;
-               last_pgoff = iter.index;
-               if (!pte_none(*fe->pte))
-                       goto next;
 repeat:
                page = radix_tree_deref_slot(slot);
                if (unlikely(!page))
@@ -2186,7 +2181,13 @@ repeat:
 
                if (file->f_ra.mmap_miss > 0)
                        file->f_ra.mmap_miss--;
-               do_set_pte(fe, page);
+
+               fe->address += (iter.index - last_pgoff) << PAGE_SHIFT;
+               if (fe->pte)
+                       fe->pte += iter.index - last_pgoff;
+               last_pgoff = iter.index;
+               if (alloc_set_pte(fe, NULL, page))
+                       goto unlock;
                unlock_page(page);
                goto next;
 unlock:
@@ -2194,6 +2195,9 @@ unlock:
 skip:
                put_page(page);
 next:
+               /* Huge page is mapped? No need to proceed. */
+               if (pmd_trans_huge(*fe->pmd))
+                       break;
                if (iter.index == end_pgoff)
                        break;
        }