| | | |
|---|---|---|
| author | Frederic Weisbecker <frederic@kernel.org> | 2020-12-02 12:57:31 +0100 |
| committer | Thomas Gleixner <tglx@linutronix.de> | 2020-12-02 20:20:05 +0100 |
| commit | d3759e7184f8f6187e62f8c4e7dcb1f6c47c075a | |
| tree | 0c83a96f3b76385bcda96eb4eadf82377dfc28e7 /kernel/softirq.c | |
| parent | sched/vtime: Consolidate IRQ time accounting | |
irqtime: Move irqtime entry accounting after irq offset incrementation
IRQ time entry is currently accounted before HARDIRQ_OFFSET or
SOFTIRQ_OFFSET is added to the preempt count. This is convenient for
deciding to which index the cputime being accounted is dispatched.
Unfortunately this prevents tick_irq_enter() from being called under
HARDIRQ_OFFSET, because tick_irq_enter() has to run before the IRQ entry
accounting due to the necessary clock catch-up. As a result we don't get
proper lockdep coverage on tick_irq_enter().
To prepare for fixing this, move the IRQ entry cputime accounting after
the preempt offset is incremented. This requires the cputime dispatch
code to handle the extra offset.
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20201202115732.27827-5-frederic@kernel.org
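The requirement that the dispatch code "handle the extra offset" boils down to one subtraction: since the accounting now runs after HARDIRQ_OFFSET or SOFTIRQ_OFFSET has already been added to the preempt count, the dispatcher must subtract the offset it is handed before deciding which bucket the elapsed time belongs to. Below is a minimal, stand-alone C model of that idea; the constants mirror the kernel's preempt_count layout, but dispatch_irqtime() and fake_preempt_count are illustrative names, not the kernel's own identifiers.

```c
#include <stdio.h>

/* Mirrors the kernel's preempt_count layout (softirq bits 8-15, hardirq bits 16-19). */
#define SOFTIRQ_OFFSET	(1U << 8)
#define HARDIRQ_OFFSET	(1U << 16)
#define HARDIRQ_MASK	(0xfU << 16)

/* Stand-in for preempt_count(); the entry code has already added its offset. */
static unsigned int fake_preempt_count;

/*
 * Classify the cputime accumulated so far. Because the accounting now runs
 * after the offset was added, subtract it first so the tests below see the
 * context that was running *before* this hardirq/softirq entry.
 */
static void dispatch_irqtime(unsigned int offset)
{
	unsigned int pc = fake_preempt_count - offset;

	if (pc & HARDIRQ_MASK)
		puts("charge delta to hardirq time");
	else if (pc & SOFTIRQ_OFFSET)	/* serving a softirq, not just bh-disabled */
		puts("charge delta to softirq time");
	else
		puts("charge delta to the interrupted task");
}

int main(void)
{
	/* Hardirq entry that interrupted plain task context. */
	fake_preempt_count = HARDIRQ_OFFSET;
	dispatch_irqtime(HARDIRQ_OFFSET);

	/* Hardirq entry that interrupted a softirq: the delta is softirq time. */
	fake_preempt_count = HARDIRQ_OFFSET + SOFTIRQ_OFFSET;
	dispatch_irqtime(HARDIRQ_OFFSET);

	/* Softirq entry from __do_softirq(), after SOFTIRQ_OFFSET was added. */
	fake_preempt_count = SOFTIRQ_OFFSET;
	dispatch_irqtime(SOFTIRQ_OFFSET);

	return 0;
}
```

Note that the entry-side call only classifies the time of whoever ran before the entry; the handler's own time is picked up by the matching exit-side call, which runs while the offset is still in place.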
Diffstat (limited to 'kernel/softirq.c')
-rw-r--r-- kernel/softirq.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
```diff
diff --git a/kernel/softirq.c b/kernel/softirq.c
index 617009ccd82c..b8f42b3ba8ca 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -315,10 +315,10 @@ asmlinkage __visible void __softirq_entry __do_softirq(void)
 	current->flags &= ~PF_MEMALLOC;
 
 	pending = local_softirq_pending();
-	account_irq_enter_time(current);
 
 	__local_bh_disable_ip(_RET_IP_, SOFTIRQ_OFFSET);
 	in_hardirq = lockdep_softirq_start();
+	account_softirq_enter(current);
 
 restart:
 	/* Reset the pending bitmask before enabling irqs */
@@ -365,8 +365,8 @@ restart:
 		wakeup_softirqd();
 	}
 
+	account_softirq_exit(current);
 	lockdep_softirq_end(in_hardirq);
-	account_irq_exit_time(current);
 	__local_bh_enable(SOFTIRQ_OFFSET);
 	WARN_ON_ONCE(in_interrupt());
 	current_restore_flags(old_flags, PF_MEMALLOC);
@@ -418,7 +418,7 @@ static inline void __irq_exit_rcu(void)
 #else
 	lockdep_assert_irqs_disabled();
 #endif
-	account_irq_exit_time(current);
+	account_hardirq_exit(current);
 	preempt_count_sub(HARDIRQ_OFFSET);
 	if (!in_interrupt() && local_softirq_pending())
 		invoke_softirq();
```
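The helpers called by the hunks above, account_softirq_enter(), account_softirq_exit() and account_hardirq_exit(), are defined in headers touched by the same commit, which this kernel/softirq.c-only view filters out. The sketch below shows the calling convention that the changelog implies; the wrapper bodies and the name of the offset-aware core are reconstructed for illustration and may not match the actual definitions. Entry helpers pass the offset that has just been added to the preempt count, exit helpers pass 0 because the offset is still in place and the elapsed time belongs to the hardirq/softirq that is about to end.

```c
/* Illustrative sketch only -- not the commit's actual header changes. */
#define SOFTIRQ_OFFSET	(1U << 8)
#define HARDIRQ_OFFSET	(1U << 16)

struct task_struct;	/* opaque here */

/* Offset-aware accounting core; name and signature assumed for this sketch. */
void irqtime_account_irq(struct task_struct *curr, unsigned int offset);

static inline void account_softirq_enter(struct task_struct *tsk)
{
	/* __do_softirq(): runs right after __local_bh_disable_ip() added SOFTIRQ_OFFSET. */
	irqtime_account_irq(tsk, SOFTIRQ_OFFSET);
}

static inline void account_softirq_exit(struct task_struct *tsk)
{
	/* __do_softirq(): runs before __local_bh_enable() drops SOFTIRQ_OFFSET. */
	irqtime_account_irq(tsk, 0);
}

static inline void account_hardirq_exit(struct task_struct *tsk)
{
	/* __irq_exit_rcu(): runs before preempt_count_sub(HARDIRQ_OFFSET). */
	irqtime_account_irq(tsk, 0);
}
```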