path: root/mm/page_alloc.c
Commit message | Author | Date | Files | Lines
mm: memcontrol: only mark charged pages with PageKmemcg | Vladimir Davydov | 2016-08-09 | 1 | -9/+5
mm: initialise per_cpu_nodestats for all online pgdats at boot | Mel Gorman | 2016-08-05 | 1 | -5/+5
treewide: replace obsolete _refok by __ref | Fabian Frederick | 2016-08-02 | 1 | -2/+2
mm, compaction: simplify contended compaction handling | Vlastimil Babka | 2016-07-29 | 1 | -27/+1
mm, compaction: introduce direct compaction priority | Vlastimil Babka | 2016-07-29 | 1 | -14/+14
mm, thp: remove __GFP_NORETRY from khugepaged and madvised allocations | Vlastimil Babka | 2016-07-29 | 1 | -4/+2
mm, page_alloc: make THP-specific decisions more generic | Vlastimil Babka | 2016-07-29 | 1 | -13/+9
mm, page_alloc: restructure direct compaction handling in slowpath | Vlastimil Babka | 2016-07-29 | 1 | -52/+57
mm, page_alloc: don't retry initial attempt in slowpath | Vlastimil Babka | 2016-07-29 | 1 | -11/+18
mm, page_alloc: set alloc_flags only once in slowpath | Vlastimil Babka | 2016-07-29 | 1 | -26/+26
mm: track NR_KERNEL_STACK in KiB instead of number of stacks | Andy Lutomirski | 2016-07-29 | 1 | -2/+1
mm: remove reclaim and compaction retry approximations | Mel Gorman | 2016-07-29 | 1 | -39/+10
mm: add per-zone lru list stat | Minchan Kim | 2016-07-29 | 1 | -0/+10
mm: show node_pages_scanned per node, not zone | Minchan Kim | 2016-07-29 | 1 | -3/+3
mm, vmstat: remove zone and node double accounting by approximating retries | Mel Gorman | 2016-07-29 | 1 | -12/+43
mm: vmstat: replace __count_zone_vm_events with a zone id equivalent | Mel Gorman | 2016-07-29 | 1 | -1/+1
mm: page_alloc: cache the last node whose dirty limit is reached | Mel Gorman | 2016-07-29 | 1 | -2/+11
mm, page_alloc: remove fair zone allocation policy | Mel Gorman | 2016-07-29 | 1 | -74/+1
mm: convert zone_reclaim to node_reclaim | Mel Gorman | 2016-07-29 | 1 | -8/+16
mm, page_alloc: wake kswapd based on the highest eligible zone | Mel Gorman | 2016-07-29 | 1 | -1/+1
mm, vmscan: only wakeup kswapd once per node for the requested classzone | Mel Gorman | 2016-07-29 | 1 | -2/+6
mm: move most file-based accounting to the node | Mel Gorman | 2016-07-29 | 1 | -42/+32
mm: move page mapped accounting to the node | Mel Gorman | 2016-07-29 | 1 | -3/+3
mm, page_alloc: consider dirtyable memory in terms of nodes | Mel Gorman | 2016-07-29 | 1 | -15/+11
mm, vmscan: make shrink_node decisions more node-centric | Mel Gorman | 2016-07-29 | 1 | -1/+1
mm, vmscan: simplify the logic deciding whether kswapd sleeps | Mel Gorman | 2016-07-29 | 1 | -1/+1
mm, vmscan: move LRU lists to node | Mel Gorman | 2016-07-29 | 1 | -31/+37
mm, vmscan: move lru_lock to the node | Mel Gorman | 2016-07-29 | 1 | -2/+2
mm, vmstat: add infrastructure for per-node vmstats | Mel Gorman | 2016-07-29 | 1 | -2/+8
mm, meminit: remove early_page_nid_uninitialised | Mel Gorman | 2016-07-29 | 1 | -13/+0
mm: update the comment in __isolate_free_page | zhong jiang | 2016-07-29 | 1 | -1/+4
mm, rmap: account shmem thp pages | Kirill A. Shutemov | 2016-07-27 | 1 | -0/+19
thp, mlock: do not mlock PTE-mapped file huge pages | Kirill A. Shutemov | 2016-07-27 | 1 | -0/+2
mm: charge/uncharge kmemcg from generic page allocator paths | Vladimir Davydov | 2016-07-27 | 1 | -53/+13
mm: memcontrol: cleanup kmem charge functions | Vladimir Davydov | 2016-07-27 | 1 | -3/+6
mm/page_alloc: introduce post allocation processing on page allocator | Joonsoo Kim | 2016-07-27 | 1 | -9/+14
mm/page_owner: introduce split_page_owner and replace manual handling | Joonsoo Kim | 2016-07-27 | 1 | -6/+2
mm/page_owner: initialize page owner without holding the zone lock | Joonsoo Kim | 2016-07-27 | 1 | -2/+0
mm/compaction: split freepages without holding the zone lock | Joonsoo Kim | 2016-07-27 | 1 | -27/+0
mm: migrate: support non-lru movable page migration | Minchan Kim | 2016-07-27 | 1 | -1/+1
mm: oom: add memcg to oom_control | Vladimir Davydov | 2016-07-27 | 1 | -0/+1
mm/init: fix zone boundary creation | Oliver O'Halloran | 2016-07-27 | 1 | -7/+10
mm, meminit: ensure node is online before checking whether pages are uninitia... | Mel Gorman | 2016-07-15 | 1 | -1/+3
mm, meminit: always return a valid node from early_pfn_to_nid | Mel Gorman | 2016-07-15 | 1 | -1/+1
mm, page_alloc: recalculate the preferred zoneref if the context can ignore m... | Mel Gorman | 2016-06-04 | 1 | -7/+16
mm, page_alloc: reset zonelist iterator after resetting fair zone allocation ... | Mel Gorman | 2016-06-04 | 1 | -0/+1
mm, page_alloc: prevent infinite loop in buffered_rmqueue() | Vlastimil Babka | 2016-06-04 | 1 | -4/+5
mm: check the return value of lookup_page_ext for all call sites | Yang Shi | 2016-06-04 | 1 | -0/+6
mm: check_new_page_bad() directly returns in __PG_HWPOISON case | Naoya Horiguchi | 2016-05-21 | 1 | -6/+3
mm, kasan: fix to call kasan_free_pages() after poisoning page | seokhoon.yoon | 2016-05-21 | 1 | -1/+1