path: root/kernel
Commit message | Author | Age | Files | Lines
* sched: Remove get_online_cpus() usage | Peter Zijlstra | 2013-10-16 | 3 | -15/+48
* sched: Fix race in migrate_swap_stop() | Peter Zijlstra | 2013-10-16 | 3 | -9/+22
* sched/rt: Add missing rmb() | Peter Zijlstra | 2013-10-16 | 1 | -1/+9
* sched/fair: Fix trivial typos in comments | Kamalesh Babulal | 2013-10-14 | 1 | -2/+2
* sched: Remove bogus parameter in structured comment | Ramkumar Ramachandra | 2013-10-12 | 1 | -1/+0
* Merge branch 'core/urgent' into sched/core | Ingo Molnar | 2013-10-11 | 1 | -3/+3
|\
| * Merge branch 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/ke... | Linus Torvalds | 2013-10-08 | 1 | -3/+3
| |\
| | * perf: Fix perf_pmu_migrate_context | Peter Zijlstra | 2013-10-04 | 1 | -3/+3
* | | sched/numa: Reflow task_numa_group() to avoid a compiler warning | Peter Zijlstra | 2013-10-09 | 1 | -7/+11
* | | sched/numa: Retry task_numa_migrate() periodically | Rik van Riel | 2013-10-09 | 1 | -9/+13
* | | sched/numa: Use unsigned longs for numa group fault stats | Mel Gorman | 2013-10-09 | 1 | -29/+20
* | | sched/numa: Skip some page migrations after a shared fault | Rik van Riel | 2013-10-09 | 2 | -0/+15
* | | mm: numa: Revert temporarily disabling of NUMA migration | Rik van Riel | 2013-10-09 | 2 | -26/+1
* | | sched/numa: Remove the numa_balancing_scan_period_reset sysctl | Mel Gorman | 2013-10-09 | 3 | -25/+1
* | | sched/numa: Adjust scan rate in task_numa_placement | Rik van Riel | 2013-10-09 | 1 | -25/+87
* | | sched/numa: Take false sharing into account when adapting scan rate | Mel Gorman | 2013-10-09 | 1 | -2/+6
* | | sched/numa: Be more careful about joining numa groups | Rik van Riel | 2013-10-09 | 1 | -5/+11
* | | sched/numa: Avoid migrating tasks that are placed on their preferred node | Peter Zijlstra | 2013-10-09 | 3 | -12/+142
* | | sched/numa: Fix task or group comparison | Rik van Riel | 2013-10-09 | 1 | -7/+25
* | | sched/numa: Decide whether to favour task or group weights based on swap cand... | Rik van Riel | 2013-10-09 | 1 | -23/+36
* | | sched/numa: Add debugging | Ingo Molnar | 2013-10-09 | 2 | -3/+62
* | | sched/numa: Prevent parallel updates to group stats during placement | Mel Gorman | 2013-10-09 | 1 | -12/+23
* | | sched/numa: Call task_numa_free() from do_execve() | Rik van Riel | 2013-10-09 | 2 | -6/+8
* | | sched/numa: Use group fault statistics in numa placement | Mel Gorman | 2013-10-09 | 1 | -17/+107
* | | sched/numa: Stay on the same node if CLONE_VM | Rik van Riel | 2013-10-09 | 2 | -6/+10
* | | mm: numa: Do not group on RO pages | Peter Zijlstra | 2013-10-09 | 1 | -2/+3
* | | sched/numa: Report a NUMA task group ID | Mel Gorman | 2013-10-09 | 1 | -0/+7
* | | sched/numa: Use {cpu, pid} to create task groups for shared faults | Peter Zijlstra | 2013-10-09 | 3 | -13/+160
* | | mm: numa: Change page last {nid,pid} into {cpu,pid} | Peter Zijlstra | 2013-10-09 | 2 | -3/+7
* | | sched/numa: Fix placement of workloads spread across multiple nodes | Rik van Riel | 2013-10-09 | 1 | -6/+5
* | | sched/numa: Favor placing a task on the preferred node | Mel Gorman | 2013-10-09 | 1 | -19/+35
* | | sched/numa: Use a system-wide search to find swap/migration candidates | Mel Gorman | 2013-10-09 | 3 | -71/+199
* | | sched/numa: Introduce migrate_swap() | Peter Zijlstra | 2013-10-09 | 6 | -14/+108
* | | stop_machine: Introduce stop_two_cpus() | Peter Zijlstra | 2013-10-09 | 1 | -98/+174
* | | sched/numa: Do not trap hinting faults for shared libraries | Mel Gorman | 2013-10-09 | 1 | -0/+10
* | | sched/numa: Increment numa_migrate_seq when task runs in correct location | Rik van Riel | 2013-10-09 | 1 | -1/+9
* | | sched/numa: Retry migration of tasks to CPU on a preferred node | Mel Gorman | 2013-10-09 | 1 | -7/+23
* | | sched/numa: Avoid overloading CPUs on a preferred NUMA node | Mel Gorman | 2013-10-09 | 1 | -29/+102
* | | mm: numa: Limit NUMA scanning to migrate-on-fault VMAs | Mel Gorman | 2013-10-09 | 1 | -1/+1
* | | sched/numa: Do not migrate memory immediately after switching node | Rik van Riel | 2013-10-09 | 2 | -3/+17
* | | sched/numa: Set preferred NUMA node based on number of private faults | Mel Gorman | 2013-10-09 | 1 | -3/+9
* | | sched/numa: Remove check that skips small VMAs | Mel Gorman | 2013-10-09 | 1 | -4/+0
* | | sched/numa: Check current->mm before allocating NUMA faults | Mel Gorman | 2013-10-09 | 1 | -2/+4
* | | sched/numa: Add infrastructure for split shared/private accounting of NUMA hi... | Mel Gorman | 2013-10-09 | 1 | -11/+35
* | | sched/numa: Reschedule task on preferred NUMA node once selected | Mel Gorman | 2013-10-09 | 3 | -1/+65
* | | sched/numa: Resist moving tasks towards nodes with fewer hinting faults | Mel Gorman | 2013-10-09 | 2 | -0/+41
* | | sched/numa: Favour moving tasks towards the preferred node | Mel Gorman | 2013-10-09 | 4 | -5/+75
* | | sched/numa: Update NUMA hinting faults once per scan | Mel Gorman | 2013-10-09 | 2 | -3/+14
* | | sched/numa: Select a preferred node with the most numa hinting faults | Mel Gorman | 2013-10-09 | 2 | -2/+16
* | | sched/numa: Track NUMA hinting faults on per-node basis | Mel Gorman | 2013-10-09 | 3 | -1/+25