path: root/kernel/sched/fair.c
Commit message  (Author, Date, Files, Lines)
* sched/fair: Fix new task's load avg removed from source CPU in wake_up_new_ta...  (Yuyang Du, 2016-01-06, 1 file, -10/+28)
* Merge branch 'sched/urgent' into sched/core, to pick up fixes before merging ...  (Ingo Molnar, 2016-01-06, 1 file, -2/+2)
|\
| * sched/fair: Fix multiplication overflow on 32-bit systems  (Andrey Ryabinin, 2016-01-06, 1 file, -1/+1)
| * treewide: Remove old email address  (Peter Zijlstra, 2015-11-23, 1 file, -1/+1)
* | sched/fair: Disable the task group load_avg update for the root_task_group  (Waiman Long, 2015-12-04, 1 file, -0/+6)
* | sched/fair: Avoid redundant idle_cpu() call in update_sg_lb_stats()  (Waiman Long, 2015-12-04, 1 file, -3/+7)
* | sched/fair: Make it possible to account fair load avg consistently  (Byungchul Park, 2015-12-04, 1 file, -0/+46)
* | sched/fair: Modify the comment about lock assumptions in migrate_task_rq_fair()  (Byungchul Park, 2015-11-23, 1 file, -2/+1)
* | sched/core: Fix incorrect wait time and wait count statistics  (Joonwoo Park, 2015-11-23, 1 file, -20/+47)
* | sched/numa: Cap PTE scanning overhead to 3% of run time  (Rik van Riel, 2015-11-23, 1 file, -0/+12)
* | sched/fair: Consider missed ticks in NOHZ_FULL in update_cpu_load_nohz()  (Byungchul Park, 2015-11-23, 1 file, -4/+6)
* | sched/fair: Prepare __update_cpu_load() to handle active tickless  (Byungchul Park, 2015-11-23, 1 file, -8/+41)
* | sched/fair: Clean up the explanation around decaying load update misses  (Peter Zijlstra, 2015-11-23, 1 file, -29/+24)
* | sched/fair: Remove empty idle enter and exit functions  (Dietmar Eggemann, 2015-11-23, 1 file, -23/+1)
|/
* sched/numa: Fix math underflow in task_tick_numa()  (Rik van Riel, 2015-11-09, 1 file, -1/+1)
* Merge branch 'sched/urgent' into sched/core, to pick up fixes and resolve con...  (Ingo Molnar, 2015-10-20, 1 file, -4/+5)
|\
| * sched/fair: Update task group's load_avg after task migration  (Yuyang Du, 2015-10-20, 1 file, -2/+3)
| * sched/fair: Fix overly small weight for interactive group entities  (Yuyang Du, 2015-10-20, 1 file, -2/+2)
* | sched/core: Remove a parameter in the migrate_task_rq() function  (xiaofeng.yan, 2015-10-06, 1 file, -1/+1)
* | sched/numa: Fix task_tick_fair() from disabling numa_balancing  (Srikar Dronamraju, 2015-10-06, 1 file, -1/+1)
* | sched/fair: Remove unnecessary parameter for group_classify()  (Leo Yan, 2015-09-18, 1 file, -5/+5)
* | sched/fair: Polish comments for LOAD_AVG_MAX  (Leo Yan, 2015-09-18, 1 file, -2/+3)
* | sched/numa: Limit the amount of virtual memory scanned in task_numa_work()  (Rik van Riel, 2015-09-18, 1 file, -6/+12)
* | sched/fair: Optimize per entity utilization tracking  (Peter Zijlstra, 2015-09-13, 1 file, -7/+10)
* | sched/fair: Defer calling scaling functions  (Dietmar Eggemann, 2015-09-13, 1 file, -2/+4)
* | sched/fair: Optimize __update_load_avg()  (Peter Zijlstra, 2015-09-13, 1 file, -1/+1)
* | sched/fair: Rename scale() to cap_scale()  (Peter Zijlstra, 2015-09-13, 1 file, -7/+7)
* | sched/fair: Get rid of scaling utilization by capacity_orig  (Dietmar Eggemann, 2015-09-13, 1 file, -16/+22)
* | sched/fair: Name utilization related data and functions consistently  (Dietmar Eggemann, 2015-09-13, 1 file, -18/+19)
* | sched/fair: Make utilization tracking CPU scale-invariant  (Dietmar Eggemann, 2015-09-13, 1 file, -3/+4)
* | sched/fair: Convert arch_scale_cpu_capacity() from weak function to #define  (Morten Rasmussen, 2015-09-13, 1 file, -21/+1)
* | sched/fair: Make load tracking frequency scale-invariant  (Dietmar Eggemann, 2015-09-13, 1 file, -10/+17)
* | sched/numa: Convert sched_numa_balancing to a static_branch  (Srikar Dronamraju, 2015-09-13, 1 file, -3/+3)
* | sched/numa: Disable sched_numa_balancing on UMA systems  (Srikar Dronamraju, 2015-09-13, 1 file, -2/+2)
* | sched/numa: Rename numabalancing_enabled to sched_numa_balancing  (Srikar Dronamraju, 2015-09-13, 1 file, -2/+2)
* | sched/fair: Fix nohz.next_balance update  (Vincent Guittot, 2015-09-13, 1 file, -4/+30)
* | sched/core: Remove unused argument from sched_class::task_move_group  (Peter Zijlstra, 2015-09-13, 1 file, -1/+1)
* | sched/fair: Unify switched_{from,to}_fair() and task_move_group_fair()  (Byungchul Park, 2015-09-13, 1 file, -77/+52)
* | sched/fair: Make the entity load aging on attaching tunable  (Peter Zijlstra, 2015-09-13, 1 file, -0/+4)
* | sched/fair: Fix switched_to_fair()'s per entity load tracking  (Byungchul Park, 2015-09-13, 1 file, -0/+23)
* | sched/fair: Have task_move_group_fair() also detach entity load from the old ...  (Byungchul Park, 2015-09-13, 1 file, -1/+5)
* | sched/fair: Have task_move_group_fair() unconditionally add the entity load t...  (Byungchul Park, 2015-09-13, 1 file, -5/+4)
* | sched/fair: Factor out the {at,de}taching of the per entity load {to,from} th...  (Byungchul Park, 2015-09-13, 1 file, -39/+38)
|/
* sched: Make sched_class::set_cpus_allowed() unconditional  (Peter Zijlstra, 2015-08-12, 1 file, -0/+1)
* sched: Ensure a task has a non-normalized vruntime when returning back to CFS  (Byungchul Park, 2015-08-12, 1 file, -2/+17)
* sched/fair: Clean up load average references  (Yuyang Du, 2015-08-03, 1 file, -15/+29)
* sched/fair: Provide runnable_load_avg back to cfs_rq  (Yuyang Du, 2015-08-03, 1 file, -10/+45)
* sched/fair: Remove task and group entity load when they are dead  (Yuyang Du, 2015-08-03, 1 file, -1/+10)
* sched/fair: Init cfs_rq's sched_entity load average  (Yuyang Du, 2015-08-03, 1 file, -5/+6)
* sched/fair: Implement update_blocked_averages() for CONFIG_FAIR_GROUP_SCHED=n  (Vincent Guittot, 2015-08-03, 1 file, -0/+8)
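The graph-style, per-file listing above is what cgit renders from the repository; a roughly equivalent view can be produced with plain git from a clone of the kernel tree. This is a sketch, not cgit's exact output format, and the pretty-format string is illustrative:

```shell
# Sketch: reproduce a cgit-style single-file log with plain git.
# Assumes the current directory is a clone of the Linux kernel repository.
# --graph draws the * / | merge topology, --shortstat appends the
# files-changed / insertions / deletions summary per commit.
git log --graph --date=short \
    --pretty='format:%s  (%an, %ad)' \
    --shortstat \
    -- kernel/sched/fair.c
```

Note that `git log -- <path>` simplifies history by default, so merge commits touching the file may be elided unless `--full-history` is added.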