|   | Commit message | Author | Age | Files | Lines |
|---|----------------|--------|-----|-------|-------|
| * | mm, page_alloc: fix dirtyable highmem calculation | Minchan Kim | 2016-07-29 | 1 | -6/+10 |
| * | mm, vmstat: remove zone and node double accounting by approximating retries | Mel Gorman | 2016-07-29 | 9 | -50/+84 |
| * | mm, vmstat: print node-based stats in zoneinfo file | Mel Gorman | 2016-07-29 | 1 | -0/+24 |
| * | mm: vmstat: account per-zone stalls and pages skipped during reclaim | Mel Gorman | 2016-07-29 | 3 | -4/+18 |
| * | mm: vmstat: replace __count_zone_vm_events with a zone id equivalent | Mel Gorman | 2016-07-29 | 2 | -4/+3 |
| * | mm: page_alloc: cache the last node whose dirty limit is reached | Mel Gorman | 2016-07-29 | 1 | -2/+11 |
| * | mm, page_alloc: remove fair zone allocation policy | Mel Gorman | 2016-07-29 | 4 | -83/+2 |
| * | mm, vmscan: add classzone information to tracepoints | Mel Gorman | 2016-07-29 | 2 | -25/+40 |
| * | mm, vmscan: Have kswapd reclaim from all zones if reclaiming and buffer_heads... | Mel Gorman | 2016-07-29 | 1 | -8/+14 |
| * | mm, vmscan: avoid passing in `remaining' unnecessarily to prepare_kswapd_sleep() | Mel Gorman | 2016-07-29 | 1 | -8/+4 |
| * | mm, vmscan: avoid passing in classzone_idx unnecessarily to compaction_ready | Mel Gorman | 2016-07-29 | 1 | -20/+7 |
| * | mm, vmscan: avoid passing in classzone_idx unnecessarily to shrink_node | Mel Gorman | 2016-07-29 | 1 | -11/+9 |
| * | mm: convert zone_reclaim to node_reclaim | Mel Gorman | 2016-07-29 | 8 | -69/+77 |
| * | mm, page_alloc: wake kswapd based on the highest eligible zone | Mel Gorman | 2016-07-29 | 1 | -1/+1 |
| * | mm, vmscan: only wakeup kswapd once per node for the requested classzone | Mel Gorman | 2016-07-29 | 2 | -4/+17 |
| * | mm: move vmscan writes and file write accounting to the node | Mel Gorman | 2016-07-29 | 5 | -15/+15 |
| * | mm: move most file-based accounting to the node | Mel Gorman | 2016-07-29 | 24 | -162/+155 |
| * | mm: rename NR_ANON_PAGES to NR_ANON_MAPPED | Mel Gorman | 2016-07-29 | 5 | -8/+8 |
| * | mm: move page mapped accounting to the node | Mel Gorman | 2016-07-29 | 8 | -21/+21 |
| * | mm, page_alloc: consider dirtyable memory in terms of nodes | Mel Gorman | 2016-07-29 | 4 | -52/+79 |
| * | mm, workingset: make working set detection node-aware | Mel Gorman | 2016-07-29 | 4 | -44/+26 |
| * | mm, memcg: move memcg limit enforcement from zones to nodes | Mel Gorman | 2016-07-29 | 5 | -144/+111 |
| * | mm, vmscan: make shrink_node decisions more node-centric | Mel Gorman | 2016-07-29 | 7 | -44/+54 |
| * | mm: vmscan: do not reclaim from kswapd if there is any eligible zone | Mel Gorman | 2016-07-29 | 1 | -32/+27 |
| * | mm, vmscan: remove duplicate logic clearing node congestion and dirty state | Mel Gorman | 2016-07-29 | 1 | -12/+12 |
| * | mm, vmscan: by default have direct reclaim only shrink once per node | Mel Gorman | 2016-07-29 | 1 | -8/+14 |
| * | mm, vmscan: simplify the logic deciding whether kswapd sleeps | Mel Gorman | 2016-07-29 | 4 | -56/+57 |
| * | mm, vmscan: remove balance gap | Mel Gorman | 2016-07-29 | 2 | -20/+8 |
| * | mm, vmscan: make kswapd reclaim in terms of nodes | Mel Gorman | 2016-07-29 | 1 | -191/+101 |
| * | mm, vmscan: have kswapd only scan based on the highest requested zone | Mel Gorman | 2016-07-29 | 1 | -5/+2 |
| * | mm, vmscan: begin reclaiming pages on a per-node basis | Mel Gorman | 2016-07-29 | 1 | -24/+55 |
| * | mm, mmzone: clarify the usage of zone padding | Mel Gorman | 2016-07-29 | 1 | -3/+4 |
| * | mm, vmscan: move LRU lists to node | Mel Gorman | 2016-07-29 | 29 | -300/+386 |
| * | mm, vmscan: move lru_lock to the node | Mel Gorman | 2016-07-29 | 14 | -69/+75 |
| * | mm, vmstat: add infrastructure for per-node vmstats | Mel Gorman | 2016-07-29 | 7 | -76/+424 |
| * | mm, meminit: remove early_page_nid_uninitialised | Mel Gorman | 2016-07-29 | 1 | -13/+0 |
| * | cpuset, mm: fix TIF_MEMDIE check in cpuset_change_task_nodemask | Michal Hocko | 2016-07-29 | 1 | -9/+0 |
| * | freezer, oom: check TIF_MEMDIE on the correct task | Michal Hocko | 2016-07-29 | 1 | -1/+1 |
| * | mm/compaction: remove unnecessary order check in try_to_compact_pages() | Ganesh Mahendran | 2016-07-29 | 1 | -1/+1 |
| * | mm: fix vm-scalability regression in cgroup-aware workingset code | Johannes Weiner | 2016-07-29 | 4 | -47/+58 |
| * | mm: update the comment in __isolate_free_page | zhong jiang | 2016-07-29 | 1 | -1/+4 |
| * | mm, oom: tighten task_will_free_mem() locking | Michal Hocko | 2016-07-29 | 1 | -26/+15 |
| * | mm, oom: hide mm which is shared with kthread or global init | Michal Hocko | 2016-07-29 | 1 | -4/+21 |
| * | mm, oom_reaper: do not attempt to reap a task more than twice | Michal Hocko | 2016-07-29 | 2 | -0/+20 |
| * | mm, oom: task_will_free_mem should skip oom_reaped tasks | Michal Hocko | 2016-07-29 | 1 | -0/+10 |
| * | mm, oom: fortify task_will_free_mem() | Michal Hocko | 2016-07-29 | 3 | -78/+85 |
| * | mm, oom: kill all tasks sharing the mm | Michal Hocko | 2016-07-29 | 1 | -2/+1 |
| * | mm, oom: skip vforked tasks from being selected | Michal Hocko | 2016-07-29 | 2 | -2/+30 |
| * | mm, oom_adj: make sure processes sharing mm have same view of oom_score_adj | Michal Hocko | 2016-07-29 | 3 | -1/+49 |
| * | proc, oom_adj: extract oom_score_adj setting into a helper | Michal Hocko | 2016-07-29 | 1 | -51/+43 |