author    | Tvrtko Ursulin <tvrtko.ursulin@igalia.com> | 2024-08-13 12:25:05 +0200
committer | Maíra Canal <mcanal@igalia.com> | 2024-09-16 14:22:27 +0200
commit    | 06c3c406850e5495bb56ccf624d0c9477e1ba901 (patch)
tree      | c85eae71e1d8a515edc52d891ee80dfd1bb66d87
parent    | drm/panel: raydium-rm69380: transition to mipi_dsi wrapped functions (diff)
drm/v3d: Appease lockdep while updating GPU stats
Lockdep thinks our seqcount_t usage is unsafe because the update path can
run from both irq and worker context:
[ ] ================================
[ ] WARNING: inconsistent lock state
[ ] 6.10.3-v8-16k-numa #159 Tainted: G WC
[ ] --------------------------------
[ ] inconsistent {HARDIRQ-ON-W} -> {IN-HARDIRQ-W} usage.
[ ] swapper/0/0 [HC1[1]:SC0[0]:HE0:SE1] takes:
[ ] ffff80003d7c08d0 (&v3d_priv->stats[i].lock){?.+.}-{0:0}, at: v3d_irq+0xc8/0x660 [v3d]
[ ] {HARDIRQ-ON-W} state was registered at:
[ ] lock_acquire+0x1f8/0x328
[ ] v3d_job_start_stats.isra.0+0xd8/0x218 [v3d]
[ ] v3d_bin_job_run+0x23c/0x388 [v3d]
[ ] drm_sched_run_job_work+0x520/0x6d0 [gpu_sched]
[ ] process_one_work+0x62c/0xb48
[ ] worker_thread+0x468/0x5b0
[ ] kthread+0x1c4/0x1e0
[ ] ret_from_fork+0x10/0x20
[ ] irq event stamp: 337094
[ ] hardirqs last enabled at (337093): [<ffffc0008144ce7c>] default_idle_call+0x11c/0x140
[ ] hardirqs last disabled at (337094): [<ffffc0008144a354>] el1_interrupt+0x24/0x58
[ ] softirqs last enabled at (337082): [<ffffc00080061d90>] handle_softirqs+0x4e0/0x538
[ ] softirqs last disabled at (337073): [<ffffc00080010364>] __do_softirq+0x1c/0x28
[ ]
[ ] other info that might help us debug this:
[ ] Possible unsafe locking scenario:
[ ] CPU0
[ ] ----
[ ] lock(&v3d_priv->stats[i].lock);
[ ] <Interrupt>
[ ] lock(&v3d_priv->stats[i].lock);
[ ]
[ ]  *** DEADLOCK ***
[ ] no locks held by swapper/0/0.
[ ]
[ ] stack backtrace:
[ ] CPU: 0 PID: 0 Comm: swapper/0 Tainted: G WC 6.10.3-v8-16k-numa #159
[ ] Hardware name: Raspberry Pi 5 Model B Rev 1.0 (DT)
[ ] Call trace:
[ ] dump_backtrace+0x170/0x1b8
[ ] show_stack+0x20/0x38
[ ] dump_stack_lvl+0xb4/0xd0
[ ] dump_stack+0x18/0x28
[ ] print_usage_bug+0x3cc/0x3f0
[ ] mark_lock+0x4d0/0x968
[ ] __lock_acquire+0x784/0x18c8
[ ] lock_acquire+0x1f8/0x328
[ ] v3d_job_update_stats+0xec/0x2e0 [v3d]
[ ] v3d_irq+0xc8/0x660 [v3d]
[ ] __handle_irq_event_percpu+0x1f8/0x488
[ ] handle_irq_event+0x88/0x128
[ ] handle_fasteoi_irq+0x298/0x408
[ ] generic_handle_domain_irq+0x50/0x78
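Stripped of the driver specifics, the usage lockdep is reporting on reduces
to a single seqcount_t written from two contexts: once with only preemption
disabled (job start, from the drm_sched worker) and once from the interrupt
handler (job end). A hypothetical reduction with illustrative names, not the
actual driver code:

  #include <linux/seqlock.h>
  #include <linux/preempt.h>

  /* One seqcount written from two different contexts is what lockdep
   * keys off: it records the first write section as taken with irqs
   * enabled, then sees the same "lock" taken from hard-irq context.
   */
  static seqcount_t stats_lock = SEQCNT_ZERO(stats_lock);

  static void job_start_stats(void)	/* scheduler worker, process context */
  {
  	preempt_disable();
  	write_seqcount_begin(&stats_lock);	/* recorded as HARDIRQ-ON-W */
  	/* ... record start timestamp ... */
  	write_seqcount_end(&stats_lock);
  	preempt_enable();
  }

  static void job_irq_stats(void)	/* interrupt handler, hard-irq context */
  {
  	write_seqcount_begin(&stats_lock);	/* IN-HARDIRQ-W -> warning */
  	/* ... accumulate runtime ... */
  	write_seqcount_end(&stats_lock);
  }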
But it is a false positive, because every queue-stats pair has its own lock
and jobs on a given queue also run strictly one at a time.
Nevertheless, we can appease lockdep by disabling local interrupts, so that
the lock usage it sees is consistent.
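The seqcount exists in the first place for the lockless read side added by
the commit in the Fixes: tag below: sysfs/fdinfo readers sample the counters
without ever blocking the interrupt handler. A minimal sketch of that reader
pattern, assuming a stats layout like the one visible in the diff further
down (struct stats, stats_runtime and the enabled_ns field are illustrative
assumptions, not the driver's exact definitions):

  #include <linux/types.h>
  #include <linux/seqlock.h>
  #include <linux/sched/clock.h>

  struct stats {
  	u64 start_ns;		/* non-zero while a job is running */
  	u64 enabled_ns;		/* accumulated runtime of finished jobs */
  	seqcount_t lock;
  };

  static u64 stats_runtime(const struct stats *stats)
  {
  	u64 enabled_ns, start_ns;
  	unsigned int seq;

  	/* Retry until we observe a consistent snapshot; readers never
  	 * block the writers, they only loop on a torn read.
  	 */
  	do {
  		seq = read_seqcount_begin(&stats->lock);
  		enabled_ns = stats->enabled_ns;
  		start_ns = stats->start_ns;
  	} while (read_seqcount_retry(&stats->lock, seq));

  	return enabled_ns + (start_ns ? local_clock() - start_ns : 0);
  }

Since readers only retry and never enter the write section, the write
sections need only exclude one another, and per-queue job serialisation
already guarantees that.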
Cc: Maíra Canal <mcanal@igalia.com>
Fixes: 6abe93b621ab ("drm/v3d: Fix race-condition between sysfs/fdinfo and interrupt handler")
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@igalia.com>
Signed-off-by: Maíra Canal <mcanal@igalia.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20240813102505.80512-2-tursulin@igalia.com
drivers/gpu/drm/v3d/v3d_sched.c | 46
1 file changed, 41 insertions(+), 5 deletions(-)
diff --git a/drivers/gpu/drm/v3d/v3d_sched.c b/drivers/gpu/drm/v3d/v3d_sched.c
index f58c38d7de29..99ac4995b5a1 100644
--- a/drivers/gpu/drm/v3d/v3d_sched.c
+++ b/drivers/gpu/drm/v3d/v3d_sched.c
@@ -135,8 +135,31 @@ v3d_job_start_stats(struct v3d_job *job, enum v3d_queue queue)
 	struct v3d_stats *global_stats = &v3d->queue[queue].stats;
 	struct v3d_stats *local_stats = &file->stats[queue];
 	u64 now = local_clock();
-
-	preempt_disable();
+	unsigned long flags;
+
+	/*
+	 * We only need to disable local interrupts to appease lockdep who
+	 * otherwise would think v3d_job_start_stats vs v3d_stats_update has an
+	 * unsafe in-irq vs no-irq-off usage problem. This is a false positive
+	 * because all the locks are per queue and stats type, and all jobs are
+	 * completely one at a time serialised. More specifically:
+	 *
+	 * 1. Locks for GPU queues are updated from interrupt handlers under a
+	 *    spin lock and started here with preemption disabled.
+	 *
+	 * 2. Locks for CPU queues are updated from the worker with preemption
+	 *    disabled and equally started here with preemption disabled.
+	 *
+	 * Therefore both are consistent.
+	 *
+	 * 3. Because next job can only be queued after the previous one has
+	 *    been signaled, and locks are per queue, there is also no scope for
+	 *    the start part to race with the update part.
+	 */
+	if (IS_ENABLED(CONFIG_LOCKDEP))
+		local_irq_save(flags);
+	else
+		preempt_disable();
 
 	write_seqcount_begin(&local_stats->lock);
 	local_stats->start_ns = now;
@@ -146,7 +169,10 @@ v3d_job_start_stats(struct v3d_job *job, enum v3d_queue queue)
 	global_stats->start_ns = now;
 	write_seqcount_end(&global_stats->lock);
 
-	preempt_enable();
+	if (IS_ENABLED(CONFIG_LOCKDEP))
+		local_irq_restore(flags);
+	else
+		preempt_enable();
 }
 
 static void
@@ -167,11 +193,21 @@ v3d_job_update_stats(struct v3d_job *job, enum v3d_queue queue)
 	struct v3d_stats *global_stats = &v3d->queue[queue].stats;
 	struct v3d_stats *local_stats = &file->stats[queue];
 	u64 now = local_clock();
+	unsigned long flags;
+
+	/* See comment in v3d_job_start_stats() */
+	if (IS_ENABLED(CONFIG_LOCKDEP))
+		local_irq_save(flags);
+	else
+		preempt_disable();
 
-	preempt_disable();
 	v3d_stats_update(local_stats, now);
 	v3d_stats_update(global_stats, now);
-	preempt_enable();
+
+	if (IS_ENABLED(CONFIG_LOCKDEP))
+		local_irq_restore(flags);
+	else
+		preempt_enable();
 }
 
 static struct dma_fence *v3d_bin_job_run(struct drm_sched_job *sched_job)
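Worth noting as a design point: IS_ENABLED(CONFIG_LOCKDEP) expands to a
compile-time constant, so the compiler eliminates the untaken branch
entirely. Production (non-lockdep) kernels therefore keep the original,
cheaper preempt_disable()/preempt_enable() pairs with no runtime check,
while lockdep-enabled debug builds get the irq-off sections that make the
lock state consistent in both contexts.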