path: root/mm
Commit message                                                                          Author            Date        Files  Lines
...
| * | mm: consider whether to deactivate based on eligible zones inactive ratio         Mel Gorman        2016-07-29  1      -5/+29
| * | mm: remove reclaim and compaction retry approximations                            Mel Gorman        2016-07-29  6      -58/+37
| * | mm, vmscan: remove highmem_file_pages                                             Mel Gorman        2016-07-29  1      -8/+4
| * | mm: add per-zone lru list stat                                                    Minchan Kim       2016-07-29  3      -9/+15
| * | mm, vmscan: release/reacquire lru_lock on pgdat change                            Mel Gorman        2016-07-29  1      -11/+10
| * | mm, vmscan: remove redundant check in shrink_zones()                              Mel Gorman        2016-07-29  1      -3/+0
| * | mm, vmscan: Update all zone LRU sizes before updating memcg                       Mel Gorman        2016-07-29  2      -11/+34
| * | mm: show node_pages_scanned per node, not zone                                    Minchan Kim       2016-07-29  1      -3/+3
| * | mm, pagevec: release/reacquire lru_lock on pgdat change                           Mel Gorman        2016-07-29  1      -10/+10
| * | mm, page_alloc: fix dirtyable highmem calculation                                 Minchan Kim       2016-07-29  1      -6/+10
| * | mm, vmstat: remove zone and node double accounting by approximating retries       Mel Gorman        2016-07-29  6      -42/+67
| * | mm, vmstat: print node-based stats in zoneinfo file                               Mel Gorman        2016-07-29  1      -0/+24
| * | mm: vmstat: account per-zone stalls and pages skipped during reclaim              Mel Gorman        2016-07-29  2      -3/+15
| * | mm: vmstat: replace __count_zone_vm_events with a zone id equivalent              Mel Gorman        2016-07-29  1      -1/+1
| * | mm: page_alloc: cache the last node whose dirty limit is reached                  Mel Gorman        2016-07-29  1      -2/+11
| * | mm, page_alloc: remove fair zone allocation policy                                Mel Gorman        2016-07-29  3      -78/+2
| * | mm, vmscan: add classzone information to tracepoints                              Mel Gorman        2016-07-29  1      -5/+9
| * | mm, vmscan: Have kswapd reclaim from all zones if reclaiming and buffer_heads...  Mel Gorman        2016-07-29  1      -8/+14
| * | mm, vmscan: avoid passing in `remaining' unnecessarily to prepare_kswapd_sleep()  Mel Gorman        2016-07-29  1      -8/+4
| * | mm, vmscan: avoid passing in classzone_idx unnecessarily to compaction_ready      Mel Gorman        2016-07-29  1      -20/+7
| * | mm, vmscan: avoid passing in classzone_idx unnecessarily to shrink_node           Mel Gorman        2016-07-29  1      -11/+9
| * | mm: convert zone_reclaim to node_reclaim                                          Mel Gorman        2016-07-29  4      -53/+60
| * | mm, page_alloc: wake kswapd based on the highest eligible zone                    Mel Gorman        2016-07-29  1      -1/+1
| * | mm, vmscan: only wakeup kswapd once per node for the requested classzone          Mel Gorman        2016-07-29  2      -4/+17
| * | mm: move vmscan writes and file write accounting to the node                      Mel Gorman        2016-07-29  3      -9/+9
| * | mm: move most file-based accounting to the node                                   Mel Gorman        2016-07-29  12     -117/+107
| * | mm: rename NR_ANON_PAGES to NR_ANON_MAPPED                                        Mel Gorman        2016-07-29  2      -5/+5
| * | mm: move page mapped accounting to the node                                       Mel Gorman        2016-07-29  4      -13/+13
| * | mm, page_alloc: consider dirtyable memory in terms of nodes                       Mel Gorman        2016-07-29  2      -45/+72
| * | mm, workingset: make working set detection node-aware                             Mel Gorman        2016-07-29  2      -40/+23
| * | mm, memcg: move memcg limit enforcement from zones to nodes                       Mel Gorman        2016-07-29  3      -120/+95
| * | mm, vmscan: make shrink_node decisions more node-centric                          Mel Gorman        2016-07-29  4      -32/+41
| * | mm: vmscan: do not reclaim from kswapd if there is any eligible zone              Mel Gorman        2016-07-29  1      -32/+27
| * | mm, vmscan: remove duplicate logic clearing node congestion and dirty state       Mel Gorman        2016-07-29  1      -12/+12
| * | mm, vmscan: by default have direct reclaim only shrink once per node              Mel Gorman        2016-07-29  1      -8/+14
| * | mm, vmscan: simplify the logic deciding whether kswapd sleeps                     Mel Gorman        2016-07-29  3      -54/+54
| * | mm, vmscan: remove balance gap                                                    Mel Gorman        2016-07-29  1      -11/+8
| * | mm, vmscan: make kswapd reclaim in terms of nodes                                 Mel Gorman        2016-07-29  1      -191/+101
| * | mm, vmscan: have kswapd only scan based on the highest requested zone             Mel Gorman        2016-07-29  1      -5/+2
| * | mm, vmscan: begin reclaiming pages on a per-node basis                            Mel Gorman        2016-07-29  1      -24/+55
| * | mm, vmscan: move LRU lists to node                                                Mel Gorman        2016-07-29  17     -222/+270
| * | mm, vmscan: move lru_lock to the node                                             Mel Gorman        2016-07-29  10     -62/+62
| * | mm, vmstat: add infrastructure for per-node vmstats                               Mel Gorman        2016-07-29  3      -29/+285
| * | mm, meminit: remove early_page_nid_uninitialised                                  Mel Gorman        2016-07-29  1      -13/+0
| * | mm/compaction: remove unnecessary order check in try_to_compact_pages()           Ganesh Mahendran  2016-07-29  1      -1/+1
| * | mm: fix vm-scalability regression in cgroup-aware workingset code                 Johannes Weiner   2016-07-29  2      -46/+6
| * | mm: update the comment in __isolate_free_page                                     zhong jiang       2016-07-29  1      -1/+4
| * | mm, oom: tighten task_will_free_mem() locking                                     Michal Hocko      2016-07-29  1      -26/+15
| * | mm, oom: hide mm which is shared with kthread or global init                      Michal Hocko      2016-07-29  1      -4/+21
| * | mm, oom_reaper: do not attempt to reap a task more than twice                     Michal Hocko      2016-07-29  1      -0/+19
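
Most of the entries above belong to one series by Mel Gorman that moved page reclaim from per-zone to per-node bookkeeping: the LRU lists, the lru_lock that protects them, and much of the related accounting now live with the NUMA node rather than the zone (see "mm, vmscan: move LRU lists to node" and "mm, vmscan: move lru_lock to the node"). The sketch below only illustrates that structural shift; it uses simplified stand-in types and hypothetical struct names (zone_lru_before, node_lru_after), not the kernel's actual definitions.

    /* Stand-in types so the sketch compiles outside the kernel. */
    typedef struct { int locked; } spinlock_t;
    struct list_head { struct list_head *next, *prev; };

    enum lru_list {
            LRU_INACTIVE_ANON, LRU_ACTIVE_ANON,
            LRU_INACTIVE_FILE, LRU_ACTIVE_FILE,
            LRU_UNEVICTABLE,
            NR_LRU_LISTS
    };

    struct lruvec { struct list_head lists[NR_LRU_LISTS]; };

    /* Before the series: every zone carried its own LRU state, so
     * reclaim scanned and locked the lists zone by zone. */
    struct zone_lru_before {
            spinlock_t lru_lock;     /* serialized this zone's LRUs */
            struct lruvec lruvec;    /* anon/file active+inactive lists */
    };

    /* After the series: LRU state is kept once per NUMA node, so kswapd
     * and direct reclaim walk a single set of lists per node regardless
     * of how many zones the node contains. */
    struct node_lru_after {
            spinlock_t lru_lock;     /* now serializes the node's LRUs */
            struct lruvec lruvec;    /* single set of LRU lists per node */
    };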