path: root/arch/x86/include/asm/preempt.h
Commit message | Author | Date | Files | Lines (-/+)
* sched/core: Initialize the idle task with preemption disabled | Valentin Schneider | 2021-05-12 | 1 | -1/+1
* sched,x86: Allow !PREEMPT_DYNAMIC | Peter Zijlstra | 2021-02-17 | 1 | -6/+18
* sched: Harden PREEMPT_DYNAMIC | Peter Zijlstra | 2021-02-17 | 1 | -2/+2
* preempt/dynamic: Provide preempt_schedule[_notrace]() static calls | Peter Zijlstra (Intel) | 2021-02-17 | 1 | -8/+26
* x86/entry: Rename ___preempt_schedule | Peter Zijlstra | 2020-03-21 | 1 | -4/+4
* x86: Use CONFIG_PREEMPTION | Thomas Gleixner | 2019-07-31 | 1 | -1/+1
* preempt: Move PREEMPT_NEED_RESCHED definition into arch code | Will Deacon | 2018-12-07 | 1 | -0/+3
* x86/asm: 'Simplify' GEN_*_RMWcc() macros | Peter Zijlstra | 2018-10-16 | 1 | -1/+1
* License cleanup: add SPDX GPL-2.0 license identifier to files with no license | Greg Kroah-Hartman | 2017-11-02 | 1 | -0/+1
* x86/asm: Fix inline asm call constraints for Clang | Josh Poimboeuf | 2017-09-23 | 1 | -10/+5
* sched/x86: Do not clear PREEMPT_NEED_RESCHED on preempt count reset | Martin Schwidefsky | 2016-11-16 | 1 | -1/+7
* x86, asm: change the GEN_*_RMWcc() macros to not quote the condition | H. Peter Anvin | 2016-06-08 | 1 | -1/+1
* sched/x86: Add stack frame dependency to __preempt_schedule[_notrace]() | Josh Poimboeuf | 2016-02-24 | 1 | -2/+11
* sched/core, sched/x86: Kill thread_info::saved_preempt_count | Peter Zijlstra | 2015-10-06 | 1 | -4/+1
* sched/core: Create preempt_count invariant | Peter Zijlstra | 2015-10-06 | 1 | -1/+1
* sched/preempt: Fix cond_resched_lock() and cond_resched_softirq() | Konstantin Khlebnikov | 2015-08-03 | 1 | -2/+2
* preempt: Use preempt_schedule_context() as the official tracing preemption point | Frederic Weisbecker | 2015-06-07 | 1 | -5/+3
* sched: Kill task_preempt_count() | Oleg Nesterov | 2014-10-28 | 1 | -3/+0
* sched: stop the unbound recursion in preempt_schedule_context() | Oleg Nesterov | 2014-10-28 | 1 | -0/+1
* percpu: add raw_cpu_ops | Christoph Lameter | 2014-04-08 | 1 | -8/+8
* sched: Remove PREEMPT_NEED_RESCHED from generic code | Peter Zijlstra | 2013-12-11 | 1 | -0/+11
* sched: Revert need_resched() to look at TIF_NEED_RESCHED | Peter Zijlstra | 2013-09-28 | 1 | -8/+0
* sched, x86: Optimize the preempt_schedule() call | Peter Zijlstra | 2013-09-25 | 1 | -0/+10
* sched, x86: Provide a per-cpu preempt_count implementation | Peter Zijlstra | 2013-09-25 | 1 | -0/+98