path: root/mm/huge_memory.c
Commit message | Author | Date | Files | Lines (-/+)
* mm: drop the 'anon_' prefix for swap-out mTHP counters | Baolin Wang | 2024-06-06 | 1 | -4/+4
* thp: remove HPAGE_PMD_ORDER minimum assertion | Matthew Wilcox (Oracle) | 2024-05-07 | 1 | -5/+0
* mm: fix race between __split_huge_pmd_locked() and GUP-fast | Ryan Roberts | 2024-05-07 | 1 | -23/+26
* mm: delay the check for a NULL anon_vma | Matthew Wilcox (Oracle) | 2024-05-06 | 1 | -2/+4
* mm: simplify thp_vma_allowable_order | Matthew Wilcox | 2024-05-06 | 1 | -2/+5
* mm/huge_memory: improve split_huge_page_to_list_to_order() return value docum... | David Hildenbrand | 2024-05-06 | 1 | -3/+11
* userfaultfd: remove WRITE_ONCE when setting folio->index during UFFDIO_MOVE | Suren Baghdasaryan | 2024-05-06 | 1 | -1/+1
* mm: add per-order mTHP anon_swpout and anon_swpout_fallback counters | Barry Song | 2024-05-06 | 1 | -0/+4
* mm: add per-order mTHP anon_fault_alloc and anon_fault_fallback counters | Barry Song | 2024-05-06 | 1 | -0/+52
* mm/huge_memory: use folio_mapcount() in zap_huge_pmd() sanity check | David Hildenbrand | 2024-05-06 | 1 | -1/+1
* mm: swap: remove CLUSTER_FLAG_HUGE from swap_cluster_info:flags | Ryan Roberts | 2024-04-26 | 1 | -3/+0
* mm: huge_memory: add the missing folio_test_pmd_mappable() for THP split stat... | Baolin Wang | 2024-04-26 | 1 | -2/+5
* thp: add thp_get_unmapped_area_vmflags() | Rick Edgecombe | 2024-04-26 | 1 | -7/+16
* mm: switch mm->get_unmapped_area() to a flag | Rick Edgecombe | 2024-04-26 | 1 | -5/+4
* mm/gup: handle huge pmd for follow_pmd_mask() | Peter Xu | 2024-04-26 | 1 | -84/+2
* mm/gup: handle huge pud for follow_pud_mask() | Peter Xu | 2024-04-26 | 1 | -45/+2
* mm: rename mm_put_huge_zero_page to mm_put_huge_zero_folio | Matthew Wilcox (Oracle) | 2024-04-26 | 1 | -1/+1
* mm: convert do_huge_pmd_anonymous_page to huge_zero_folio | Matthew Wilcox (Oracle) | 2024-04-26 | 1 | -11/+12
* mm: convert huge_zero_page to huge_zero_folio | Matthew Wilcox (Oracle) | 2024-04-26 | 1 | -14/+14
* mm: add pmd_folio() | Matthew Wilcox (Oracle) | 2024-04-26 | 1 | -3/+3
* mm: add is_huge_zero_folio() | Matthew Wilcox (Oracle) | 2024-04-26 | 1 | -3/+3
* huge_memory.c: document huge page splitting rules more thoroughly | John Hubbard | 2024-04-26 | 1 | -15/+27
* mm: convert folio_estimated_sharers() to folio_likely_mapped_shared() | David Hildenbrand | 2024-04-26 | 1 | -1/+1
* mm: remove folio_prep_large_rmappable() | Matthew Wilcox (Oracle) | 2024-04-26 | 1 | -8/+1
* mm: always initialise folio->_deferred_list | Matthew Wilcox (Oracle) | 2024-04-26 | 1 | -2/+0
* mm: create new codetag references during page splitting | Suren Baghdasaryan | 2024-04-26 | 1 | -0/+2
* mm/mempolicy: use numa_node_id() instead of cpu_to_node() | Donet Tom | 2024-04-26 | 1 | -1/+1
* userfaultfd: change src_folio after ensuring it's unpinned in UFFDIO_MOVE | Lokesh Gidra | 2024-04-17 | 1 | -3/+3
* mm/huge_memory: skip invalid debugfs new_order input for folio split | Zi Yan | 2024-03-12 | 1 | -0/+6
* mm/huge_memory: check new folio order when split a folio | Zi Yan | 2024-03-12 | 1 | -0/+3
* mm: use folio more widely in __split_huge_page | Matthew Wilcox (Oracle) | 2024-03-05 | 1 | -10/+11
* mm: huge_memory: enable debugfs to split huge pages to any order | Zi Yan | 2024-03-05 | 1 | -12/+22
* mm: thp: split huge page to any lower order pages | Zi Yan | 2024-03-05 | 1 | -24/+83
* mm: page_owner: add support for splitting to any order in split page_owner | Zi Yan | 2024-03-05 | 1 | -1/+1
* mm: memcg: make memcg huge page split support any order split | Zi Yan | 2024-03-05 | 1 | -1/+1
* mm/page_owner: use order instead of nr in split_page_owner() | Zi Yan | 2024-03-05 | 1 | -1/+1
* mm/memcg: use order instead of nr in split_page_memcg() | Zi Yan | 2024-03-05 | 1 | -2/+3
* mm: support order-1 folios in the page cache | Matthew Wilcox (Oracle) | 2024-03-05 | 1 | -4/+15
* mm/huge_memory: only split PMD mapping when necessary in unmap_folio() | Zi Yan | 2024-03-05 | 1 | -2/+5
* userfaultfd: use per-vma locks in userfaultfd operations | Lokesh Gidra | 2024-02-23 | 1 | -2/+3
* mm: thp: batch-collapse PMD with set_ptes() | Ryan Roberts | 2024-02-23 | 1 | -25/+33
* userfaultfd: handle zeropage moves by UFFDIO_MOVE | Suren Baghdasaryan | 2024-02-22 | 1 | -44/+61
* mm: convert mm_counter_file() to take a folio | Kefeng Wang | 2024-02-22 | 1 | -2/+2
* mm: use pfn_swap_entry_to_folio() in zap_huge_pmd() | Kefeng Wang | 2024-02-22 | 1 | -7/+10
* mm: use pfn_swap_entry_folio() in __split_huge_pmd_locked() | Kefeng Wang | 2024-02-22 | 1 | -2/+2
* mm: add pfn_swap_entry_folio() | Matthew Wilcox (Oracle) | 2024-02-22 | 1 | -1/+1
* mm: thp_get_unmapped_area must honour topdown preference | Ryan Roberts | 2024-01-26 | 1 | -2/+8
* mm: huge_memory: don't force huge page alignment on 32 bit | Yang Shi | 2024-01-26 | 1 | -0/+4
* mm/huge_memory: fix folio_set_dirty() vs. folio_mark_dirty() | David Hildenbrand | 2024-01-26 | 1 | -2/+2
* Merge tag 'mm-stable-2024-01-08-15-31' of git://git.kernel.org/pub/scm/linux/... | Linus Torvalds | 2024-01-09 | 1 | -71/+385