commit     800d8c63b2e989c2e349632d1648119bf5862f01
author     Kirill A. Shutemov <kirill.shutemov@linux.intel.com>  2016-07-27 00:26:18 +0200
committer  Linus Torvalds <torvalds@linux-foundation.org>        2016-07-27 01:19:19 +0200
parent     shmem: get_unmapped_area align huge page
shmem: add huge pages support
Here's a basic implementation of huge pages support for shmem/tmpfs.
It's all pretty straightforward:
- shmem_getpage() allocates a huge page if it can, and tries to insert
it into the radix tree with shmem_add_to_page_cache();
- shmem_add_to_page_cache() puts the page onto the radix tree if there's
space for it (a conceptual sketch follows this list);
- shmem_undo_range() removes huge pages if they fall fully within the
range. A partial truncate of a huge page zeroes out that part of the
THP instead.
This has a visible effect on fallocate(FALLOC_FL_PUNCH_HOLE)
behaviour: as we don't really create a hole in this case,
lseek(SEEK_HOLE) may give inconsistent results depending on which
pages happened to be allocated (see the userspace demo after this
list);
- no need to change shmem_fault(): core-mm will map a compound page as
huge if the VMA is suitable;
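For the shmem_add_to_page_cache() step, here is a conceptual sketch
(not this patch's verbatim code) of how a compound page can be hooked
into the pre-multiorder radix tree of this era: each of the
HPAGE_PMD_NR subpages gets its own consecutive slot, so a lookup at any
offset inside the huge page resolves to a subpage whose compound head
is the THP. The helper name add_huge_page_sketch and its error handling
are illustrative only.

/*
 * Conceptual sketch only -- not the code from this patch. Assumes the
 * caller holds mapping->tree_lock, as shmem_add_to_page_cache() does.
 */
static int add_huge_page_sketch(struct address_space *mapping,
				struct page *page, pgoff_t index)
{
	int i, error;

	VM_BUG_ON_PAGE(!PageTransHuge(page), page);

	for (i = 0; i < HPAGE_PMD_NR; i++) {
		/* Slot index+i points at subpage i of the compound page. */
		error = radix_tree_insert(&mapping->page_tree,
					  index + i, page + i);
		if (error)
			return error; /* real code must unwind slots 0..i-1 */
	}
	return 0;
}

The per-subpage slots keep lookups simple: find_get_page() at any
offset inside the huge page still hits a valid entry, at the cost of
HPAGE_PMD_NR slots per THP.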
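The SEEK_HOLE caveat above can be observed from userspace. Below is a
minimal sketch; the mount point /mnt/huge (assumed to be a tmpfs
mounted with the huge= option introduced earlier in this series) and
the file name are made up for the example.

/* Demo of the PUNCH_HOLE/SEEK_HOLE caveat described above.
 * Assumption: /mnt/huge is a tmpfs mounted with huge=always. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/mnt/huge/f", O_RDWR | O_CREAT | O_TRUNC, 0600);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* Size the file to 2MB and touch it, so shmem_getpage() can back
	 * it with a single huge page. */
	if (ftruncate(fd, 2 << 20) < 0 || write(fd, "x", 1) != 1) {
		perror("populate");
		return 1;
	}

	/* Punch a 4kB hole at offset 4096. If the range sits inside a
	 * huge page, the page is not split: the subrange is zeroed, so
	 * no real hole is created. */
	if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
		      4096, 4096) < 0)
		perror("fallocate");

	/* With a huge page backing the file, SEEK_HOLE may skip the
	 * punched range and report the hole elsewhere (e.g. at EOF);
	 * with small pages it would report 4096. */
	printf("first hole at %lld\n", (long long)lseek(fd, 0, SEEK_HOLE));

	close(fd);
	return 0;
}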
Link: http://lkml.kernel.org/r/1466021202-61880-30-git-send-email-kirill.shutemov@linux.intel.com
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/swap.c')
 mm/swap.c | 2 ++
 1 file changed, 2 insertions(+), 0 deletions(-)
diff --git a/mm/swap.c b/mm/swap.c
index 90530ff8ed16..616df4ddd870 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -292,6 +292,7 @@ static bool need_activate_page_drain(int cpu)
 
 void activate_page(struct page *page)
 {
+	page = compound_head(page);
 	if (PageLRU(page) && !PageActive(page) && !PageUnevictable(page)) {
 		struct pagevec *pvec = &get_cpu_var(activate_page_pvecs);
 
@@ -316,6 +317,7 @@ void activate_page(struct page *page)
 {
 	struct zone *zone = page_zone(page);
 
+	page = compound_head(page);
 	spin_lock_irq(&zone->lru_lock);
 	__activate_page(page, mem_cgroup_page_lruvec(page, zone), NULL);
 	spin_unlock_irq(&zone->lru_lock);
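Both hunks canonicalize the page pointer first because LRU state
(PG_lru, PG_active, PG_unevictable) is tracked on the head page of a
compound page, so a THP tail page must be redirected to its head before
those flags are tested. For reference, a slightly simplified sketch of
compound_head() as it exists in this era of the kernel (the real helper
lives in include/linux/page-flags.h):

/* Simplified sketch of this era's compound_head(): tail pages carry a
 * pointer to their head page, tagged with bit 0, in
 * page->compound_head. */
static inline struct page *compound_head(struct page *page)
{
	unsigned long head = READ_ONCE(page->compound_head);

	if (unlikely(head & 1))
		return (struct page *)(head - 1);
	return page;
}

For a non-compound page or a head page the call is a no-op, which is
why it is safe to apply unconditionally in activate_page().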