path: root/include
Commit message (Author, Date, Files changed, Lines -/+)
* Merge branch 'timers-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip (Linus Torvalds, 2019-09-17, 13 files, -86/+223)

  Pull core timer updates from Thomas Gleixner:
  "Timers and timekeeping updates:

   - A large overhaul of the posix CPU timer code which is a preparation
     for moving the CPU timer expiry out into task work so it can be
     properly accounted on the task/process. An update to the bogus
     permission checks will come later during the merge window as
     feedback was not complete before heading off for travel.

   - Switch the timerqueue code to use cached rbtrees and get rid of the
     homebrew caching of the leftmost node.

   - Consolidate hrtimer_init() + hrtimer_init_sleeper() calls into a
     single function.

   - Implement the separation of hrtimers to be forced to expire in hard
     interrupt context even when PREEMPT_RT is enabled and mark the
     affected timers accordingly.

   - Implement a mechanism for hrtimers and the timer wheel to protect RT
     against priority inversion and live lock issues when a (hr)timer
     which should be canceled is currently executing the callback.
     Instead of infinitely spinning, the task which tries to cancel the
     timer blocks on a per cpu base expiry lock which is held and
     released by the (hr)timer expiry code.

   - Enable the Hyper-V TSC page based sched_clock for Hyper-V guests,
     resulting in faster access to timekeeping functions.

   - Updates to various clocksource/clockevent drivers and their device
     tree bindings.

   - The usual small improvements all over the place"

  * 'timers-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (101 commits)
    posix-cpu-timers: Fix permission check regression
    posix-cpu-timers: Always clear head pointer on dequeue
    hrtimer: Add a missing bracket and hide `migration_base' on !SMP
    posix-cpu-timers: Make expiry_active check actually work correctly
    posix-timers: Unbreak CONFIG_POSIX_TIMERS=n build
    tick: Mark sched_timer to expire in hard interrupt context
    hrtimer: Add kernel doc annotation for HRTIMER_MODE_HARD
    x86/hyperv: Hide pv_ops access for CONFIG_PARAVIRT=n
    posix-cpu-timers: Utilize timerqueue for storage
    posix-cpu-timers: Move state tracking to struct posix_cputimers
    posix-cpu-timers: Deduplicate rlimit handling
    posix-cpu-timers: Remove pointless comparisons
    posix-cpu-timers: Get rid of 64bit divisions
    posix-cpu-timers: Consolidate timer expiry further
    posix-cpu-timers: Get rid of zero checks
    rlimit: Rewrite non-sensical RLIMIT_CPU comment
    posix-cpu-timers: Respect INFINITY for hard RTTIME limit
    posix-cpu-timers: Switch thread group sampling to array
    posix-cpu-timers: Restructure expiry array
    posix-cpu-timers: Remove cputime_expires
    ...
* posix-cpu-timers: Always clear head pointer on dequeue (Thomas Gleixner, 2019-09-05, 1 file, -6/+3)

  The head pointer in struct cpu_timer is checked to be NULL in
  posix_cpu_timer_del() when the delete raced with the exit cleanup. This
  works correctly as long as the timer is actually dequeued via
  posix_cpu_timers_exit*(). But if the timer was dequeued due to expiry,
  the head pointer is still set and triggers the warning.

  In fact, keeping the head pointer around after any dequeue is pointless
  as it has no meaning at all after that.

  Clear the head pointer always on dequeue and remove the unused requeue
  function while at it.

  Fixes: 60bda037f1dd ("posix-cpu-timers: Utilize timerqueue for storage")
  Reported-by: syzbot+55acd54b57bb4b3840a4@syzkaller.appspotmail.com
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
  Link: https://lkml.kernel.org/r/20190905120539.707986830@linutronix.de
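A minimal sketch of the dequeue pattern described above; the struct layout and helper name are assumptions for illustration, not the exact kernel code:

```c
#include <linux/timerqueue.h>

/* Illustrative layout: a CPU timer embedding a timerqueue node and
 * caching the queue it is enqueued on. */
struct cpu_timer_sketch {
	struct timerqueue_node	node;	/* rbtree node carrying the expiry */
	struct timerqueue_head	*head;	/* queue the timer is currently on */
};

static void cpu_timer_dequeue_sketch(struct cpu_timer_sketch *ctmr)
{
	/* Clear the head pointer on every dequeue path, so a later
	 * posix_cpu_timer_del() sees the timer as not queued and does
	 * not trip the warning described above. */
	if (ctmr->head) {
		timerqueue_del(ctmr->head, &ctmr->node);
		ctmr->head = NULL;
	}
}
```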
* posix-timers: Unbreak CONFIG_POSIX_TIMERS=n build (Thomas Gleixner, 2019-08-29, 1 file, -0/+1)

  The rework of the posix-cpu-timers patch series dropped the empty
  declaration of struct cpu_timer for the CONFIG_POSIX_TIMERS=n case,
  which causes the build to fail:

    ./include/linux/posix-timers.h:218:20: error: field 'cpu' has incomplete type

  Add it back.

  Fixes: 60bda037f1dd ("posix-cpu-timers: Utilize timerqueue for storage")
  Reported-by: Ingo Molnar <mingo@kernel.org>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
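The fix amounts to keeping a stub definition around for the =n configuration so that structures embedding a struct cpu_timer still compile; a hedged sketch of the pattern (the =y layout shown is illustrative):

```c
#ifdef CONFIG_POSIX_TIMERS
struct cpu_timer {
	struct timerqueue_node	node;
	struct timerqueue_head	*head;
};
#else
/* Empty stub: keeps 'struct cpu_timer cpu;' members complete when
 * CONFIG_POSIX_TIMERS=n, avoiding the incomplete-type build error. */
struct cpu_timer { };
#endif
```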
* hrtimer: Add kernel doc annotation for HRTIMER_MODE_HARD (Sebastian Andrzej Siewior, 2019-08-28, 1 file, -0/+2)

  Add kernel doc annotation for HRTIMER_MODE_HARD.

  Fixes: ae6683d815895 ("hrtimer: Introduce HARD expiry mode")
  Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Link: https://lkml.kernel.org/r/20190823113845.12125-2-bigeasy@linutronix.de
* posix-cpu-timers: Utilize timerqueue for storage (Thomas Gleixner, 2019-08-28, 2 files, -16/+59)

  Using a linear O(N) search for timer insertion affects execution time
  and D-cache footprint badly with a larger number of timers.

  Switch the storage to a timerqueue, which is already used for hrtimers
  and alarmtimers. It does not affect the size of struct k_itimer as
  it.alarm is still larger.

  The extra list head for the expiry list will go away later once the
  expiry is moved into task work context.

  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
  Link: https://lkml.kernel.org/r/alpine.DEB.2.21.1908272129220.1939@nanos.tec.linutronix.de
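For reference, a minimal sketch of the timerqueue API this commit converts to, a cached rbtree keyed on expiry time; the wrapper names are illustrative:

```c
#include <linux/timerqueue.h>

/* Enqueue a node ordered by expiry; returns true when it became the new
 * earliest-expiring (leftmost) entry, i.e. the expiry cache must be updated. */
static bool enqueue_sketch(struct timerqueue_head *head,
			   struct timerqueue_node *node, ktime_t expires)
{
	node->expires = expires;
	return timerqueue_add(head, node);
}

/* Peek at the next timer to expire; O(1) instead of a linear list walk. */
static struct timerqueue_node *peek_sketch(struct timerqueue_head *head)
{
	return timerqueue_getnext(head);
}
```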
* posix-cpu-timers: Move state tracking to struct posix_cputimers (Thomas Gleixner, 2019-08-28, 3 files, -9/+14)

  Put it where it belongs and clean up the ifdeffery in fork completely.

  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Link: https://lkml.kernel.org/r/20190821192922.743229404@linutronix.de
* posix-cpu-timers: Get rid of zero checks (Thomas Gleixner, 2019-08-28, 1 file, -2/+5)

  Deactivation of the expiry cache is done by setting all clock caches to
  0. That requires a check for zero in every place which updates the
  expiry cache:

    if (cache == 0 || new < cache)
    	cache = new;

  Use U64_MAX as the deactivated value instead, which removes the zero
  checks when updating the cache and reduces the update to the obvious
  check:

    if (new < cache)
    	cache = new;

  This also removes the weird workaround in do_prlimit() which was
  required to convert a RLIMIT_CPU value of 0 (immediate expiry) to 1,
  because handing in 0 to the posix CPU timer code would have effectively
  disarmed it.

  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
  Link: https://lkml.kernel.org/r/20190821192922.275086128@linutronix.de
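A hedged sketch of the sentinel pattern described above; the helper names are illustrative:

```c
#include <linux/types.h>
#include <linux/limits.h>

/* U64_MAX doubles as the "no timer armed" value, so the updater needs no
 * special case for 0: any real expiry is smaller and simply wins. */
static inline void update_expiry_cache_sketch(u64 *cache, u64 new)
{
	if (new < *cache)
		*cache = new;
}

static inline void disarm_expiry_cache_sketch(u64 *cache)
{
	*cache = U64_MAX;
}
```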
* posix-cpu-timers: Switch thread group sampling to array (Thomas Gleixner, 2019-08-28, 1 file, -1/+1)

  That allows more simplifications in various places.

  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
  Link: https://lkml.kernel.org/r/20190821192921.988426956@linutronix.de
* posix-cpu-timers: Restructure expiry array (Thomas Gleixner, 2019-08-28, 1 file, -14/+27)

  Now that the abused struct task_cputime is gone, it's more natural to
  bundle the expiry cache and the list head of each clock into a struct
  and have an array of those structs.

  Follow the hrtimer naming convention of 'bases', rename the expiry
  cache to 'nextevt' and adapt all usage sites.

  Also generates better code; .text size shrinks by 80 bytes.

  Suggested-by: Ingo Molnar <mingo@kernel.org>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
  Link: https://lkml.kernel.org/r/alpine.DEB.2.21.1908262021140.1939@nanos.tec.linutronix.de
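A rough sketch of the layout this describes, assuming the three posix CPU clocks (PROF, VIRT, SCHED); the field names follow the commit text, the exact definition is an assumption:

```c
#include <linux/list.h>
#include <linux/types.h>

/* One base per clock: the expiry cache plus the list of armed timers. */
struct posix_cputimer_base_sketch {
	u64			nextevt;	/* earliest expiry, U64_MAX when disarmed */
	struct list_head	cpu_timers;	/* armed timers for this clock */
};

struct posix_cputimers_sketch {
	struct posix_cputimer_base_sketch bases[3];	/* PROF, VIRT, SCHED */
};
```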
* posix-cpu-timers: Remove cputime_expires (Thomas Gleixner, 2019-08-28, 1 file, -7/+2)

  The last users of the magic struct cputime based expiry cache are gone.
  Remove the leftovers.

  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
  Link: https://lkml.kernel.org/r/20190821192921.790209622@linutronix.de
* posix-cpu-timers: Remove the odd field rename defines (Thomas Gleixner, 2019-08-28, 1 file, -15/+0)

  The last users of the odd define based renaming of struct task_cputime
  members are gone. Good riddance.

  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
  Link: https://lkml.kernel.org/r/20190821192921.499058279@linutronix.de
* posix-cpu-timers: Provide array based access to expiry cache (Thomas Gleixner, 2019-08-28, 2 files, -8/+20)

  Using struct task_cputime for the expiry cache is a pretty odd choice
  and comes with magic defines to rename the fields for usage in the
  expiry cache.

  struct task_cputime is basically a u64 array with 3 members, but it has
  distinct members. The expiry cache content differs from the content of
  task_cputime because

    expiry[PROF]  = task_cputime.stime + task_cputime.utime
    expiry[VIRT]  = task_cputime.utime
    expiry[SCHED] = task_cputime.sum_exec_runtime

  So there is no direct mapping between task_cputime and the expiry
  cache, and the #define based remapping is just a horrible hack.

  Having the expiry cache array based allows further simplification of
  the expiry code. To avoid an all-in-one cleanup which is hard to
  review, add a temporary anonymous union into struct task_cputime which
  allows array based access to it. That requires reordering the members.
  Add a build time sanity check to validate that the members are at the
  same place.

  The union and the build time checks will be removed after conversion.

  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
  Link: https://lkml.kernel.org/r/20190821192921.105793824@linutronix.de
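A hedged sketch of the technique described here: an anonymous union that overlays an array view on named members, plus build-time checks that the overlay is sound. The struct and member names are illustrative, not the kernel's actual layout:

```c
#include <linux/build_bug.h>
#include <linux/stddef.h>
#include <linux/types.h>

struct expiry_cache_sketch {
	union {
		struct {
			u64	prof;	/* stime + utime */
			u64	virt;	/* utime */
			u64	sched;	/* sum_exec_runtime */
		};
		u64	expiries[3];	/* array view over the same storage */
	};
};

static inline void expiry_cache_sanity_check(void)
{
	BUILD_BUG_ON(offsetof(struct expiry_cache_sketch, prof) !=
		     offsetof(struct expiry_cache_sketch, expiries[0]));
	BUILD_BUG_ON(offsetof(struct expiry_cache_sketch, virt) !=
		     offsetof(struct expiry_cache_sketch, expiries[1]));
	BUILD_BUG_ON(offsetof(struct expiry_cache_sketch, sched) !=
		     offsetof(struct expiry_cache_sketch, expiries[2]));
}
```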
* posix-cpu-timers: Move expiry cache into struct posix_cputimers (Thomas Gleixner, 2019-08-28, 3 files, -11/+22)

  The expiry cache belongs into the posix_cputimers container where the
  other cpu timers information is.

  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
  Link: https://lkml.kernel.org/r/20190821192921.014444012@linutronix.de
* sched: Move struct task_cputime to types.h (Thomas Gleixner, 2019-08-28, 2 files, -16/+24)

  For upcoming posix-timer changes to avoid include recursion hell.

  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Link: https://lkml.kernel.org/r/20190821192920.909530418@linutronix.de
* posix-cpu-timers: Create a container struct (Thomas Gleixner, 2019-08-28, 4 files, -14/+40)

  Per task/process data of posix CPU timers is all over the place, which
  makes the code hard to follow and requires ifdeffery.

  Create a container to hold all this information in one place, so data
  is consolidated and the ifdeffery can be confined to the posix timer
  header file and removed from places like fork.

  As a first step, move the cpu_timers list head array into the new
  struct and clean up the initializers and simplify fork. The remaining
  #ifdef in fork will be removed later.

  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
  Link: https://lkml.kernel.org/r/20190821192920.819418976@linutronix.de
* posix-cpu-timers: Rename thread_group_cputimer() and make it static (Thomas Gleixner, 2019-08-28, 1 file, -1/+0)

  thread_group_cputimer() is a complete misnomer. The function does two
  things:

   - For arming process wide timers it makes sure that the atomic time
     storage is up to date. If no cpu timer is armed yet, then the atomic
     time storage is not updated by the scheduler for performance
     reasons. In that case a full summing up of all threads needs to be
     done and the update needs to be enabled.

   - Samples the current time into the caller supplied storage.

  Rename it to thread_group_start_cputime(), make it static and fixup the
  callsite.

  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
  Link: https://lkml.kernel.org/r/20190821192919.869350319@linutronix.de
* posix-cpu-timers: Provide quick sample function for itimer (Thomas Gleixner, 2019-08-28, 1 file, -1/+1)

  get_itimer() needs a sample of the current thread group cputime. It
  invokes thread_group_cputimer() - which is a misnomer. That function
  also eventually starts the group cputime accounting, which is bogus
  because the accounting is already active when a timer is armed.

  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
  Link: https://lkml.kernel.org/r/20190821192919.599658199@linutronix.de
* clocksource/drivers/hyperv: Enable TSC page clocksource on 32bit (Vitaly Kuznetsov, 2019-08-23, 1 file, -5/+3)

  There is no particular reason not to enable the TSC page clocksource on
  32-bit. mul_u64_u64_shr() is available and, despite the increased
  computational complexity (compared to 64bit), the TSC page is still a
  huge win compared to the MSR-based clocksource.

  In-kernel reads:
    MSR based clocksource: 3361 cycles
    TSC page clocksource:    49 cycles

  Reads from userspace (utilizing vDSO in case of TSC page):
    MSR based clocksource: 5664 cycles
    TSC page clocksource:   131 cycles

  Enabling the TSC page on 32bit allows getting rid of
  CONFIG_HYPERV_TSCPAGE as it is now not any different from
  CONFIG_HYPERV_TIMER.

  Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Reviewed-by: Michael Kelley <mikelley@microsoft.com>
  Link: https://lkml.kernel.org/r/20190822083630.17059-1-vkuznets@redhat.com
* clocksource/drivers/hyperv: Add Hyper-V specific sched clock function (Tianyu Lan, 2019-08-23, 1 file, -0/+1)

  Hyper-V guests use the default native_sched_clock() in
  pv_ops.time.sched_clock on x86. But native_sched_clock() directly uses
  the raw TSC value, which can be discontinuous in a Hyper-V VM.

  Add the generic hv_setup_sched_clock() to set the sched clock function
  appropriately. On x86, this sets pv_ops.time.sched_clock to read the
  Hyper-V reference TSC value that is scaled and adjusted to be
  continuous.

  Also move the Hyper-V reference TSC initialization much earlier in the
  boot process so no discontinuity is observed when
  pv_ops.time.sched_clock calculates its offset.

  [ tglx: Folded build fix ]

  Signed-off-by: Tianyu Lan <Tianyu.Lan@microsoft.com>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Reviewed-by: Michael Kelley <mikelley@microsoft.com>
  Link: https://lkml.kernel.org/r/20190814123216.32245-3-Tianyu.Lan@microsoft.com
* posix-cpu-timers: Remove tsk argument from run_posix_cpu_timers() (Thomas Gleixner, 2019-08-21, 1 file, -1/+1)

  It's always current. Don't give people wrong ideas.

  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
  Link: https://lkml.kernel.org/r/20190819143801.945469967@linutronix.de
* alarmtimers: Avoid rtc.h include (Thomas Gleixner, 2019-08-20, 1 file, -1/+2)

  rtc.h is not needed in alarmtimers when a forward declaration of struct
  rtc_device is provided. That allows including posix-timers.h without
  adding more includes to alarmtimer.h or creating circular include
  dependencies.

  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
  Link: https://lkml.kernel.org/r/20190819143801.565389536@linutronix.de
* posix-timers: Cleanup forward declarations and includes (Thomas Gleixner, 2019-08-20, 1 file, -3/+2)

  - Rename struct siginfo to kernel_siginfo as that is used and required
  - Add a forward declaration for task_struct and remove sched.h include
  - Remove timex.h include as it is not needed

  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
  Link: https://lkml.kernel.org/r/20190819143801.472005793@linutronix.de
* posix-timers: Move rcu_head out of it union (Sebastian Andrzej Siewior, 2019-08-01, 1 file, -2/+3)

  Timer deletion on PREEMPT_RT is prone to priority inversion and live
  locks. The hrtimer code has a synchronization mechanism for this. Posix
  CPU timers will grow one.

  But that mechanism cannot be invoked while holding the k_itimer lock
  because that can deadlock against the running timer callback. So the
  lock must be dropped, which allows the timer to be freed.

  The timer free can be prevented by taking the RCU read lock before
  dropping the lock, but because the rcu_head is part of the 'it' union a
  concurrent free will overwrite the hrtimer on which the task is trying
  to synchronize.

  Move the rcu_head out of the union to prevent this.

  [ tglx: Fixed up kernel-doc. Rewrote changelog ]

  Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Link: https://lkml.kernel.org/r/20190730223828.965541887@linutronix.de
* timers: Prepare support for PREEMPT_RT (Anna-Maria Gleixner, 2019-08-01, 1 file, -1/+1)

  When PREEMPT_RT is enabled, the soft interrupt thread can be preempted.
  If the soft interrupt thread is preempted in the middle of a timer
  callback, then calling del_timer_sync() can lead to two issues:

   - If the caller is on a remote CPU then it has to spin wait for the
     timer handler to complete. This can result in unbound priority
     inversion.

   - If the caller originates from the task which preempted the timer
     handler on the same CPU, then spin waiting for the timer handler to
     complete is never going to end.

  To avoid these issues, add a new lock to the timer base which is held
  around the execution of the timer callbacks. If del_timer_sync()
  detects that the timer callback is currently running, it blocks on the
  expiry lock. When the callback is finished, the expiry lock is dropped
  by the softirq thread which wakes up the waiter and the system makes
  progress.

  This addresses both the priority inversion and the live lock issues.

  This mechanism is not used for timers which are marked IRQSAFE, as for
  those preemption is disabled across the callback and therefore this
  situation cannot happen. The callbacks for such timers need to be
  individually audited for RT compliance.

  The same issue can happen in virtual machines when the vCPU which runs
  a timer callback is scheduled out. If a second vCPU of the same guest
  calls del_timer_sync() it will spin wait for the other vCPU to be
  scheduled back in. The expiry lock mechanism would avoid that. It'd be
  trivial to enable this when paravirt spinlocks are enabled in a guest,
  but it's not clear whether this is an actual problem in the wild, so
  for now it's an RT only mechanism.

  As the softirq thread can be preempted with PREEMPT_RT=y, the SMP
  variant of del_timer_sync() needs to be used on UP as well.

  [ tglx: Refactored it for mainline ]

  Signed-off-by: Anna-Maria Gleixner <anna-maria@linutronix.de>
  Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Link: https://lkml.kernel.org/r/20190726185753.832418500@linutronix.de
* hrtimer: Prepare support for PREEMPT_RT (Anna-Maria Gleixner, 2019-08-01, 1 file, -0/+16)

  When PREEMPT_RT is enabled, the soft interrupt thread can be preempted.
  If the soft interrupt thread is preempted in the middle of a timer
  callback, then calling hrtimer_cancel() can lead to two issues:

   - If the caller is on a remote CPU then it has to spin wait for the
     timer handler to complete. This can result in unbound priority
     inversion.

   - If the caller originates from the task which preempted the timer
     handler on the same CPU, then spin waiting for the timer handler to
     complete is never going to end.

  To avoid these issues, add a new lock to the timer base which is held
  around the execution of the timer callbacks. If hrtimer_cancel()
  detects that the timer callback is currently running, it blocks on the
  expiry lock. When the callback is finished, the expiry lock is dropped
  by the softirq thread which wakes up the waiter and the system makes
  progress.

  This addresses both the priority inversion and the live lock issues.

  The same issue can happen in virtual machines when the vCPU which runs
  a timer callback is scheduled out. If a second vCPU of the same guest
  calls hrtimer_cancel() it will spin wait for the other vCPU to be
  scheduled back in. The expiry lock mechanism would avoid that. It'd be
  trivial to enable this when paravirt spinlocks are enabled in a guest,
  but it's not clear whether this is an actual problem in the wild, so
  for now it's an RT only mechanism.

  [ tglx: Refactored it for mainline ]

  Signed-off-by: Anna-Maria Gleixner <anna-maria@linutronix.de>
  Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Link: https://lkml.kernel.org/r/20190726185753.737767218@linutronix.de
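A hedged sketch of the expiry-lock pattern described in this and the preceding timer-wheel commit; the structure and function names are illustrative:

```c
#include <linux/spinlock.h>

struct timer_base_sketch {
	spinlock_t	expiry_lock;	/* held while callbacks execute */
};

/* Expiry side (softirq thread): run the callback under the expiry lock. */
static void expire_timer_sketch(struct timer_base_sketch *base,
				void (*fn)(void *), void *arg)
{
	spin_lock(&base->expiry_lock);
	fn(arg);
	spin_unlock(&base->expiry_lock);
}

/* Cancel side: instead of spinning until the callback finishes, block on
 * the expiry lock; the softirq thread releases it once the callback is done. */
static void wait_for_running_timer_sketch(struct timer_base_sketch *base)
{
	spin_lock(&base->expiry_lock);
	spin_unlock(&base->expiry_lock);
}
```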
* hrtimer: Make enqueue mode check work on RT (Thomas Gleixner, 2019-08-01, 1 file, -0/+3)

  hrtimer_start_range_ns() has a WARN_ONCE() which verifies that a timer
  which is marked for softirq expiry is not queued in the hard interrupt
  base and vice versa.

  When PREEMPT_RT is enabled, timers which are not explicitly marked to
  expire in hard interrupt context are deferred to the soft interrupt, so
  the regular check would trigger.

  Change the check so that, when PREEMPT_RT is enabled, it verifies that
  timers marked for hard interrupt expiry are not queued for soft
  interrupt expiry, and that unmarked or softirq-marked timers are not
  expired in hard interrupt context.

  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* hrtimer: Introduce HARD expiry mode (Sebastian Andrzej Siewior, 2019-08-01, 1 file, -0/+6)

  On PREEMPT_RT not all hrtimers can be expired in hard interrupt context
  even if that is perfectly fine on a PREEMPT_RT=n kernel, e.g. because
  they take regular spinlocks. Also for latency reasons PREEMPT_RT tries
  to defer most hrtimers' expiry into soft interrupt context.

  But there are hrtimers which must be expired in hard interrupt context
  even when PREEMPT_RT is enabled:

   - hrtimers which must expire in hard interrupt context, e.g.
     scheduler, perf, watchdog related hrtimers

   - latency critical hrtimers, e.g. nanosleep, ..., kvm lapic timer

  Add a new mode flag HRTIMER_MODE_HARD which allows marking these timers
  so PREEMPT_RT will not move them into softirq expiry mode.

  [ tglx: Split out of a larger combo patch. Added changelog ]

  Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Link: https://lkml.kernel.org/r/20190726185752.981398465@linutronix.de
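A short usage sketch of the new flag, assuming the relative variant HRTIMER_MODE_REL_HARD; the timer and callback names are illustrative:

```c
#include <linux/hrtimer.h>
#include <linux/ktime.h>

static enum hrtimer_restart hard_timer_fn_sketch(struct hrtimer *t)
{
	/* Runs in hard interrupt context even on PREEMPT_RT, so it must
	 * not take sleeping locks. */
	return HRTIMER_NORESTART;
}

static void arm_hard_timer_sketch(struct hrtimer *t)
{
	hrtimer_init(t, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
	t->function = hard_timer_fn_sketch;
	hrtimer_start(t, ms_to_ktime(10), HRTIMER_MODE_REL_HARD);
}
```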
* hrtimer: Provide hrtimer_sleeper_start_expires() (Thomas Gleixner, 2019-08-01, 1 file, -0/+3)

  hrtimer_sleepers will gain a scheduling class dependent treatment on
  PREEMPT_RT. Create a wrapper around hrtimer_start_expires() to make
  that possible.

  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* hrtimer: Consolidate hrtimer_init() + hrtimer_init_sleeper() calls (Sebastian Andrzej Siewior, 2019-08-01, 2 files, -5/+16)

  hrtimer_init_sleeper() calls require prior initialisation of the
  hrtimer object which is embedded into the hrtimer_sleeper.

  Combine the initialization and spare a function call. Fixup all call
  sites.

  This is also a preparatory change for PREEMPT_RT to do hrtimer sleeper
  specific initializations of the embedded hrtimer without modifying any
  of the call sites.

  No functional change.

  [ anna-maria: Minor cleanups ]
  [ tglx: Adopted to the removal of the task argument of
    hrtimer_init_sleeper() and trivial polishing. Folded a fix from
    Stephen Rothwell for the vsoc code ]

  Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  Signed-off-by: Anna-Maria Gleixner <anna-maria@linutronix.de>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Link: https://lkml.kernel.org/r/20190726185752.887468908@linutronix.de
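A hedged sketch of the consolidated form, assuming the combined helper ends up taking the sleeper, the clock id and the mode (see also the removal of the task argument in the following entry); the exact signature is an assumption:

```c
#include <linux/hrtimer.h>

/* Previously call sites did two calls:
 *     hrtimer_init(&t->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
 *     hrtimer_init_sleeper(&t, current);
 * After the consolidation a single call initialises both the sleeper and
 * its embedded hrtimer. */
static void sleeper_setup_sketch(struct hrtimer_sleeper *t)
{
	hrtimer_init_sleeper(t, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
}
```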
* hrtimer: Remove task argument from hrtimer_init_sleeper() (Thomas Gleixner, 2019-07-30, 2 files, -3/+2)

  All callers hand in 'current' and that's the only task pointer which
  actually makes sense. Remove the task argument and set current in the
  function.

  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
  Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Link: https://lkml.kernel.org/r/20190726185752.791885290@linutronix.de
* lib/timerqueue: Rely on rbtree semantics for next timer (Davidlohr Bueso, 2019-07-24, 1 file, -7/+6)

  Simplify the timerqueue code by using cached rbtrees and relying on the
  tree leftmost node semantics to get the timer with the earliest
  expiration time. This is a drop-in conversion, and therefore semantics
  remain untouched.

  The runtime overhead of cached rbtrees is pretty much the same as the
  current head->next method, noting that when removing the leftmost node,
  a common operation for the timerqueue, the rb_next(leftmost) is O(1) as
  well, so the next timer will either be the right node or its parent.
  Therefore no extra pointer chasing. Finally, the size of struct
  timerqueue_head remains the same.

  Passes several hours of rcutorture.

  Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Link: https://lkml.kernel.org/r/20190724152323.bojciei3muvfxalm@linux-r8p5
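A minimal sketch of what the conversion relies on: with struct rb_root_cached the rbtree itself tracks the leftmost (earliest-expiring) node, so timerqueue_head no longer needs its own next pointer. The helper name is illustrative:

```c
#include <linux/rbtree.h>
#include <linux/timerqueue.h>

static struct timerqueue_node *earliest_sketch(struct rb_root_cached *root)
{
	struct rb_node *leftmost = rb_first_cached(root);	/* O(1) lookup */

	return leftmost ? rb_entry(leftmost, struct timerqueue_node, node)
			: NULL;
}
```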
* Merge branch 'irq-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip (Linus Torvalds, 2019-09-17, 4 files, -4/+41)

  Pull core irq updates from Thomas Gleixner:
  "Updates from the irq department:

   - Update the interrupt spreading code so it handles numa nodes with
     different CPU counts properly.

   - A large overhaul of the ARM GICv3 driver to support new PPI and SPI
     ranges.

   - Conversion of all alloc_fwnode() users to use physical addresses
     instead of virtual addresses so the virtual addresses are not
     leaked. The physical address is sufficient to identify the
     associated interrupt chip.

   - Add support for the Marvell MMP3 and Amlogic Meson SM1 interrupt
     chips.

   - Enforce interrupt threading at compile time if RT is enabled.

   - Small updates and improvements all over the place"

  * 'irq-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (37 commits)
    irqchip/gic-v3-its: Fix LPI release for Multi-MSI devices
    irqchip/uniphier-aidet: Use devm_platform_ioremap_resource()
    irqdomain: Add the missing assignment of domain->fwnode for named fwnode
    irqchip/mmp: Coexist with GIC root IRQ controller
    irqchip/mmp: Mask off interrupts from other cores
    irqchip/mmp: Add missing chained_irq_{enter,exit}()
    irqchip/mmp: Do not use of_address_to_resource() to get mux regs
    irqchip/meson-gpio: Add support for meson sm1 SoCs
    dt-bindings: interrupt-controller: New binding for the meson sm1 SoCs
    genirq/affinity: Remove const qualifier from node_to_cpumask argument
    genirq/affinity: Spread vectors on node according to nr_cpu ratio
    genirq/affinity: Improve __irq_build_affinity_masks()
    irqchip: Remove dev_err() usage after platform_get_irq()
    irqchip: Add include guard to irq-partition-percpu.h
    irqchip/mmp: Do not call irq_set_default_host() on DT platforms
    irqchip/gic-v3-its: Remove the redundant set_bit for lpi_map
    irqchip/gic-v3: Add quirks for HIP06/07 invalid GICD_TYPER erratum 161010803
    irqchip/gic: Skip DT quirks when evaluating IIDR-based quirks
    irqchip/gic-v3: Warn about inconsistent implementations of extended ranges
    irqchip/gic-v3: Add EPPI range support
    ...
* Merge tag 'irqchip-5.4' of git://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms into irq/core (Thomas Gleixner, 2019-09-06, 3 files, -4/+37)

  Pull irqchip updates for Linux 5.4 from Marc Zyngier:

   - Large GICv3 updates to support new PPI and SPI ranges
   - Convert all alloc_fwnode() users to use PAs instead of VAs
   - Add support for Marvell's MMP3 irqchip
   - Add support for Amlogic Meson SM1
   - Various cleanups and fixes
* irqchip: Add include guard to irq-partition-percpu.h (Masahiro Yamada, 2019-08-20, 1 file, -0/+5)

  Add a header include guard just in case.

  Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
  Signed-off-by: Marc Zyngier <maz@kernel.org>
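For completeness, the conventional include-guard pattern the commit adds; the macro name here is illustrative:

```c
#ifndef __IRQ_PARTITION_PERCPU_H
#define __IRQ_PARTITION_PERCPU_H

/* ... header contents ... */

#endif	/* __IRQ_PARTITION_PERCPU_H */
```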
* irqchip/gic-v3: Warn about inconsistent implementations of extended ranges (Marc Zyngier, 2019-08-20, 1 file, -0/+1)

  As is usual for the GIC, it isn't disallowed to put together a system
  that is majorly inconsistent, with a distributor supporting the
  extended ranges while some of the CPUs don't.

  Kindly tell the user that sailing isn't going to be smooth.

  Signed-off-by: Marc Zyngier <maz@kernel.org>
* irqchip/gic-v3: Add EPPI range support (Marc Zyngier, 2019-08-20, 1 file, -0/+12)

  Expand the pre-existing PPI support to be able to deal with the
  Extended PPI range (EPPI). This includes obtaining the number of PPIs
  from each individual redistributor, and computing the minimum set (just
  in case someone builds something really clever...).

  Signed-off-by: Marc Zyngier <maz@kernel.org>
* irqchip/gic-v3: Add ESPI range support (Marc Zyngier, 2019-08-20, 1 file, -1/+16)

  Add the required support for the ESPI range, which behaves exactly like
  the SPIs of old, only with new funky INTIDs.

  Signed-off-by: Marc Zyngier <maz@kernel.org>
* irqdomain/debugfs: Use PAs to generate fwnode names (Marc Zyngier, 2019-08-07, 1 file, -3/+3)

  Booting a large arm64 server (HiSi D05) leads to the following shouting
  at boot time:

    [ 20.722132] debugfs: File 'irqchip@(____ptrval____)-3' in directory 'domains' already present!
    [ 20.730851] debugfs: File 'irqchip@(____ptrval____)-3' in directory 'domains' already present!
    [ 20.739560] debugfs: File 'irqchip@(____ptrval____)-3' in directory 'domains' already present!
    [ 20.748267] debugfs: File 'irqchip@(____ptrval____)-3' in directory 'domains' already present!
    [ 20.756975] debugfs: File 'irqchip@(____ptrval____)-3' in directory 'domains' already present!
    [ 20.765683] debugfs: File 'irqchip@(____ptrval____)-3' in directory 'domains' already present!
    [ 20.774391] debugfs: File 'irqchip@(____ptrval____)-3' in directory 'domains' already present!

  and many more...

  Evidently, we expect something a bit more informative than
  ____ptrval____, and certainly we want all of our domains, not just the
  first one.

  For that, turn the %p used to generate the fwnode name into something
  that won't be repainted (%pa). Given that we've now fixed all users to
  pass a pointer to a PA, it will actually do the right thing.

  Acked-by: Thomas Gleixner <tglx@linutronix.de>
  Signed-off-by: Marc Zyngier <maz@kernel.org>
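A hedged sketch of the printk-format change described above: %pa prints a phys_addr_t passed by reference, instead of the hashed value %p produces. The function and buffer names are illustrative:

```c
#include <linux/kernel.h>
#include <linux/types.h>

static void fwnode_name_sketch(char *buf, size_t len, phys_addr_t pa, int id)
{
	/* Yields e.g. "irqchip@0x00000000fe600000-3" rather than
	 * "irqchip@(____ptrval____)-3". */
	snprintf(buf, len, "irqchip@%pa-%d", &pa, id);
}
```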
* genirq: Force interrupt threading on RT (Thomas Gleixner, 2019-08-19, 1 file, -0/+4)

  Switch force_irqthreads from a boot time modifiable variable to a
  compile time constant when CONFIG_PREEMPT_RT is enabled.

  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Link: https://lkml.kernel.org/r/20190816160923.12855-1-bigeasy@linutronix.de
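A hedged sketch of the described change; the surrounding config structure is an assumption:

```c
#ifdef CONFIG_IRQ_FORCED_THREADING
# ifdef CONFIG_PREEMPT_RT
/* Compile-time constant on RT: forced threading cannot be switched off. */
#  define force_irqthreads	(true)
# else
/* Boot-time modifiable (threadirqs command line option) otherwise. */
extern bool force_irqthreads;
# endif
#else
# define force_irqthreads	(false)
#endif
```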
* Merge branch 'smp-hotplug-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip (Linus Torvalds, 2019-09-17, 2 files, -9/+55)

  Pull CPU hotplug updates from Thomas Gleixner:
  "A small update for the SMP hotplug code:

   - Track "booted once" CPUs in a cpumask so the x86 APIC code has an
     easy way to decide whether broadcast IPIs are safe to use or not.

   - Implement a cpumask_or_equal() helper for the IPI broadcast
     evaluation.

     The above two changes have also been pulled into the x86/apic branch
     for implementing the conditional IPI broadcast feature.

   - Cache the number of online CPUs instead of reevaluating it over and
     over. num_online_cpus() is an unreliable snapshot anyway except when
     it is used outside a cpu hotplug locked region. The cached access is
     not changing this, but it's definitely faster than calculating the
     bitmap weight, especially in hot paths"

  * 'smp-hotplug-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
    cpu/hotplug: Cache number of online CPUs
    cpumask: Implement cpumask_or_equal()
    smp/hotplug: Track booted once CPUs in a cpumask
* cpu/hotplug: Cache number of online CPUs (Thomas Gleixner, 2019-07-25, 1 file, -9/+16)

  Re-evaluating the bitmap weight of the online cpus bitmap in every
  invocation of num_online_cpus() over and over is a pretty useless
  exercise, especially when num_online_cpus() is used in code paths like
  the IPI delivery of x86 or the membarrier code.

  Cache the number of online CPUs in the core and just return the cached
  variable. The accessor function provides only a snapshot when used
  without protection against concurrent CPU hotplug.

  The storage needs to use an atomic_t because the kexec and reboot code
  (ab)use set_cpu_online() in their 'shutdown' handlers without any form
  of serialization, as pointed out by Mathieu. Regular CPU hotplug usage
  is properly serialized.

  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
  Link: https://lkml.kernel.org/r/alpine.DEB.2.21.1907091622590.1634@nanos.tec.linutronix.de
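A hedged sketch of the caching scheme described above; the variable name is an illustration, not necessarily the kernel's:

```c
#include <linux/atomic.h>

extern atomic_t __num_online_cpus_sketch;	/* updated by set_cpu_online() */

static inline unsigned int num_online_cpus_sketch(void)
{
	/* A snapshot only: without cpu hotplug locking the value can change
	 * right after the read, exactly as with the bitmap-weight variant,
	 * but the read itself is a single atomic load. */
	return atomic_read(&__num_online_cpus_sketch);
}
```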
* cpumask: Implement cpumask_or_equal() (Thomas Gleixner, 2019-07-25, 2 files, -0/+37)

  The IPI code of x86 needs to evaluate whether the target cpumask is
  equal to the cpu_online_mask or equal except for the calling CPU.

  To replace the current implementation, which requires the use of a
  temporary cpumask that might involve allocations, add a new function
  which compares a cpumask to the result of two other cpumasks which are
  or'ed together before comparison.

  This allows making the required decision in one go, and the calling
  code can then check for the calling CPU being set in the target mask
  with cpumask_test_cpu().

  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Link: https://lkml.kernel.org/r/20190722105220.585449120@linutronix.de
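A hedged usage sketch of the new helper for the IPI case described above: decide whether the target mask plus the calling CPU covers all online CPUs, without building a temporary cpumask. The wrapper name is illustrative:

```c
#include <linux/cpumask.h>
#include <linux/smp.h>

static bool covers_all_online_sketch(const struct cpumask *mask)
{
	/* True when (mask | {this_cpu}) == cpu_online_mask. */
	return cpumask_or_equal(mask, cpumask_of(smp_processor_id()),
				cpu_online_mask);
}
```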
* smp/hotplug: Track booted once CPUs in a cpumask (Thomas Gleixner, 2019-07-25, 1 file, -0/+2)

  The booted once information, which is required to deal with the MCE
  broadcast issue on X86 correctly, is stored in the per cpu hotplug
  state, which is perfectly fine for the intended purpose.

  X86 needs that information for supporting NMI broadcasting via
  shortcuts, but retrieving it from per cpu data is cumbersome.

  Move it to a cpumask so the information can be checked against the
  cpu_present_mask quickly.

  No functional change intended.

  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Link: https://lkml.kernel.org/r/20190722105219.818822855@linutronix.de
* Merge tag 'platform-drivers-x86-v5.4-1' of git://git.infradead.org/linux-platform-drivers-x86 (Linus Torvalds, 2019-09-17, 1 file, -2/+6)

  Pull x86 platform-drivers updates from Andy Shevchenko:

   - The ASUS WMI driver got a couple of updates: fan support is fixed
     for recent products and charge threshold support has been added

   - Two unknown key events for Dell laptops are now ignored to avoid
     spamming users with harmless messages

   - HP ZBook 17 G5 and ASUS Zenbook UX430UNR got accelerometer support

   - Intel CherryTrail platforms had a regression with wake up. Now it's
     fixed

   - The Intel PMC driver got fixed in order to work nicely in Xen
     environments

   - The Intel Speed Select driver provides the bucket vs core count
     relationship. Besides that, the tool has been updated for better
     output

   - The PrivacyGuard is enabled on Lenovo ThinkPad laptops

   - Three tablets - Trekstor Primebook C11B 2-in-1, Irbis TW90 and Chuwi
     Surbook Mini - got touchscreen support

  * tag 'platform-drivers-x86-v5.4-1' of git://git.infradead.org/linux-platform-drivers-x86: (53 commits)
    MAINTAINERS: Switch PDx86 subsystem status to Odd Fixes
    platform/x86: asus-wmi: Refactor charge threshold to use the battery hooking API
    platform/x86: asus-wmi: Rename CHARGE_THRESHOLD to RSOC
    platform/x86: asus-wmi: Reorder ASUS_WMI_CHARGE_THRESHOLD
    tools/power/x86/intel-speed-select: Display core count for bucket
    platform/x86: ISST: Allow additional TRL MSRs
    tools/power/x86/intel-speed-select: Fix memory leak
    tools/power/x86/intel-speed-select: Output success/failed for command output
    tools/power/x86/intel-speed-select: Output human readable CPU list
    tools/power/x86/intel-speed-select: Change turbo ratio output to maximum turbo frequency
    tools/power/x86/intel-speed-select: Switch output to MHz
    tools/power/x86/intel-speed-select: Simplify output for turbo-freq and base-freq
    tools/power/x86/intel-speed-select: Fix cpu-count output
    tools/power/x86/intel-speed-select: Fix help option typo
    tools/power/x86/intel-speed-select: Fix package typo
    tools/power/x86/intel-speed-select: Fix a read overflow in isst_set_tdp_level_msr()
    platform/x86: intel_int0002_vgpio: Use device_init_wakeup
    platform/x86: intel_int0002_vgpio: Fix wakeups not working on Cherry Trail
    platform/x86: compal-laptop: Initialize "value" in ec_read_u8()
    platform/x86: touchscreen_dmi: Add info for the Trekstor Primebook C11B 2-in-1
    ...
* platform/x86: asus-wmi: Rename CHARGE_THRESHOLD to RSOC (Kristian Klausen, 2019-09-09, 1 file, -1/+1)

  The device is officially called "Relative state of charge" (RSOC). At
  the same time add the missing DEVID to the name.

  Signed-off-by: Kristian Klausen <kristian@klausen.dk>
  Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
* platform/x86: asus-wmi: Reorder ASUS_WMI_CHARGE_THRESHOLD (Kristian Klausen, 2019-09-09, 1 file, -1/+3)

  At the same time add a comment explaining what it is used for.

  Signed-off-by: Kristian Klausen <kristian@klausen.dk>
  Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
* platform/x86: asus-wmi: Add support for charge threshold (Kristian Klausen, 2019-08-16, 1 file, -0/+1)

  Most newer ASUS laptops support limiting the battery charge level,
  which helps prolong battery life.

  Tested on a Zenbook UX430UNR.

  Signed-off-by: Kristian Klausen <kristian@klausen.dk>
  Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
* platform/x86: asus-wmi: fix CPU fan control on recent products (Daniel Drake, 2019-07-30, 1 file, -0/+1)

  Previously, asus-wmi was using the AGFN interface and FAN_CTRL device
  for CPU fan control. However, this code has been found to be not fully
  working on some recent products, and having checked the spec, these
  interfaces are marked as being removed from future products currently
  in development.

  The replacement appears to be the CPU_FAN device, added in spec version
  8.3 (March 2014) and present on many modern Asus laptops. Add support
  for this device, and use it whenever it is detected.

  The older approach based on AGFN and FAN_CTRL is used as a fallback on
  products that do not have such a device.

  Other than switching between automatic and full speed, there is no fan
  speed control through this new interface.

  Signed-off-by: Daniel Drake <drake@endlessm.com>
  Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
* platform/x86: asus-wmi: cleanup AGFN fan handling (Daniel Drake, 2019-07-30, 1 file, -2/+2)

  The asus-wmi driver currently uses the "AGFN" interface and the
  FAN_CTRL device for fan control. According to the spec, this interface
  is very dated and marked as pending removal from products currently in
  development.

  Clean up the way that the AGFN fan is detected and handled, also
  preparing the driver for the introduction of an alternate fan control
  method needed to support recent Asus products.

  Not anticipating further development of this interface, simplify the
  code by dropping any notion of being able to control multiple AGFN fans
  (this was already limited to just a single fan through only exposing a
  single fan in sysfs). Check for the presence of AGFN fans at probe
  time, simplifying the code flow in asus_hwmon_sysfs_is_visible().

  Signed-off-by: Daniel Drake <drake@endlessm.com>
  Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
* Merge branch 'x86-cpu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip (Linus Torvalds, 2019-09-17, 1 file, -1/+1)

  Pull x86 cpu-feature updates from Ingo Molnar:

   - Rework the Intel model names symbols/macros, which had accumulated
     decades of ad-hoc extensions and random noise. It's now a coherent,
     easy to follow nomenclature.

   - Add new Intel CPU model IDs:
      - "Tiger Lake" desktop and mobile models
      - "Elkhart Lake" model ID
      - and the "Lightning Mountain" variant of Airmont, plus support code

   - Add the new AVX512_VP2INTERSECT instruction to cpufeatures

   - Remove Intel MPX user-visible APIs and the self-tests, because the
     toolchain (gcc) is not supporting it going forward. This is the
     first, lowest-risk phase of MPX removal.

   - Remove X86_FEATURE_MFENCE_RDTSC

   - Various smaller cleanups and fixes

  * 'x86-cpu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (25 commits)
    x86/cpu: Update init data for new Airmont CPU model
    x86/cpu: Add new Airmont variant to Intel family
    x86/cpu: Add Elkhart Lake to Intel family
    x86/cpu: Add Tiger Lake to Intel family
    x86: Correct misc typos
    x86/intel: Add common OPTDIFFs
    x86/intel: Aggregate microserver naming
    x86/intel: Aggregate big core graphics naming
    x86/intel: Aggregate big core mobile naming
    x86/intel: Aggregate big core client naming
    x86/cpufeature: Explain the macro duplication
    x86/ftrace: Remove mcount() declaration
    x86/PCI: Remove superfluous returns from void functions
    x86/msr-index: Move AMD MSRs where they belong
    x86/cpu: Use constant definitions for CPU models
    lib: Remove redundant ftrace flag removal
    x86/crash: Remove unnecessary comparison
    x86/bitops: Use __builtin_constant_p() directly instead of IS_IMMEDIATE()
    x86: Remove X86_FEATURE_MFENCE_RDTSC
    x86/mpx: Remove MPX APIs
    ...