path: root/kernel/sched_fair.c
     Age         Author              Files  Lines      Commit message
*    2010-01-21  Mike Galbraith      1      -1/+1      sched: Fix vmark regression on big machines
*    2009-12-16  Peter Zijlstra      1      -6/+44     sched: Remove the cfs_rq dependency from set_task_cpu()
*    2009-12-16  Peter Zijlstra      1      -0/+3      sched: Select_task_rq_fair() must honour SD_LOAD_BALANCE
*    2009-12-14  Thomas Gleixner     1      -2/+2      sched: Convert rq->lock to raw_spinlock
*    2009-12-09  Christian Ehrhardt  1      -1/+10     sched: Update normalized values on user updates via proc
*    2009-12-09  Christian Ehrhardt  1      -0/+13     sched: Make tunable scaling style configurable
*    2009-12-09  Christian Ehrhardt  1      -0/+16     sched: Fix missing sched tunable recalculation on cpu add/remove
*    2009-12-09  Peter Zijlstra      1      -7/+2      sched: Remove unnecessary RCU exclusion
*    2009-12-09  Peter Zijlstra      1      -3/+0      sched: Discard some old bits
*    2009-12-09  Peter Zijlstra      1      -40/+33    sched: Clean up check_preempt_wakeup()
*    2009-12-09  Jupyung Lee         1      -2/+2      sched: Move update_curr() in check_preempt_wakeup() to avoid redundant call
*    2009-12-09  Peter Zijlstra      1      -13/+15    sched: Sanitize fork() handling
*    2009-12-09  Thomas Gleixner     1      -5/+1      sched: Protect sched_rr_get_param() access to task->sched_class
*    2009-11-26  Ingo Molnar         1      -27/+47    Merge branch 'sched/urgent' into sched/core
|\
| *  2009-10-23  Mike Galbraith      1      -26/+47    sched: Strengthen buddies and mitigate buddy induced latencies
| *  2009-10-14  Peter Zijlstra      1      -14/+13    sched: Do less agressive buddy clearing
* |  2009-11-24  Tim Blechmann       1      -1/+1      sched: Optimize branch hint in pick_next_task_fair()
* |  2009-11-13  Peter Zijlstra      1      -17/+16    sched: More generic WAKE_AFFINE vs select_idle_sibling()
* |  2009-11-13  Peter Zijlstra      1      -22/+51    sched: Cleanup select_task_rq_fair()
* |  2009-11-05  Mike Galbraith      1      -0/+2      sched: Fix affinity logic in select_task_rq_fair()
* |  2009-11-04  Mike Galbraith      1      -4/+29     sched: Check for an idle shared cache in select_task_rq_fair()
|/
*    2009-09-24  Alexey Dobriyan     1      -2/+2      sysctl: remove "struct file *" argument of ->proc_handler
*    2009-09-21  Linus Torvalds      1      -23/+42    Merge branch 'sched-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/ke...
|\
| *  2009-09-21  Peter Williams      1      -0/+21     sched: Simplify sys_sched_rr_get_interval() system call
| *  2009-09-19  Mike Galbraith      1      -1/+2      sched: Re-add lost cpu_allowed check to sched_fair.c::select_task_rq_fair()
| *  2009-09-18  Mike Galbraith      1      -22/+19    sched: Remove unneeded indentation in sched_fair.c::place_entity()
* |  2009-09-19  Ingo Molnar         1      -153/+261  Merge branch 'linus' into perfcounters/core
|\|
| *  2009-09-17  Peter Zijlstra      1      -15/+27    sched: Fix SD_POWERSAVING_BALANCE|SD_PREFER_LOCAL vs SD_WAKE_AFFINE
| *  2009-09-17  Peter Zijlstra      1      -3/+8      sched: Stop buddies from hogging the system
| *  2009-09-17  Peter Zijlstra      1      -3/+11     sched: Add new wakeup preemption mode: WAKEUP_RUNNING
| *  2009-09-16  Peter Zijlstra      1      -3/+3      sched: Rename flags to wake_flags
| *  2009-09-16  Peter Zijlstra      1      -19/+8     sched: Clean up the load_idx selection in select_task_rq_fair
| *  2009-09-16  Peter Zijlstra      1      -14/+9     sched: Optimize cgroup vs wakeup a bit
| *  2009-09-16  Ingo Molnar         1      -1/+8      sched: Implement a gentler fair-sleepers feature
| *  2009-09-16  Peter Zijlstra      1      -2/+5      sched: Add SD_PREFER_LOCAL
| *  2009-09-15  Peter Zijlstra      1      -3/+11     sched: Add a few SYNC hint knobs to play with
| *  2009-09-15  Peter Zijlstra      1      -1/+1      sched: Add WF_FORK
| *  2009-09-15  Peter Zijlstra      1      -2/+4      sched: Rename sync arguments
| *  2009-09-15  Peter Zijlstra      1      -7/+7      sched: Rename select_task_rq() argument
| *  2009-09-15  Peter Zijlstra      1      -3/+18     sched: Tweak wake_idx
| *  2009-09-15  Peter Zijlstra      1      -3/+2      sched: Fix task affinity for select_task_rq_fair
| *  2009-09-15  Peter Zijlstra      1      -2/+7      sched: for_each_domain() vs RCU
| *  2009-09-15  Peter Zijlstra      1      -3/+18     sched: Weaken SD_POWERSAVINGS_BALANCE
| *  2009-09-15  Peter Zijlstra      1      -170/+63   sched: Merge select_task_rq_fair() and sched_balance_self()
| *  2009-09-15  Peter Zijlstra      1      -1/+6      sched: Hook sched_balance_self() into sched_class::select_task_rq()
| *  2009-09-15  Peter Zijlstra      1      -0/+145    sched: Move sched_balance_self() into sched_fair.c
| *  2009-09-15  Mike Galbraith      1      -1/+2      sched: Complete buddy switches
| *  2009-09-15  Peter Zijlstra      1      -3/+4      sched: Split WAKEUP_OVERLAP
* |  2009-09-13  Ingo Molnar         1      -0/+1      perf_counter, sched: Add sched_stat_runtime tracepoint
|/
*    2009-09-10  Ingo Molnar         1      -2/+1      sched: Fix sched::sched_stat_wait tracepoint field