author		Nirmoy Das <nirmoy.aiemd@gmail.com>		2020-06-25 14:07:23 +0200
committer	Christian König <christian.koenig@amd.com>	2020-06-26 14:16:29 +0200
commit		d41a39dda1407ff50c7c4bfbd307c188f1ae364b (patch)
tree		f668464bee4abac9f7f2f5ff445b80ad499acda0 /drivers/gpu/drm/scheduler/sched_entity.c
parent		drm/nouveau: don't use ttm bo->offset v3 (diff)
drm/scheduler: improve job distribution with multiple queues
This patch uses a score, instead of num_jobs, to select the drm
scheduler for a new job, which gives better load balancing between
multiple drm schedulers. Below are test results after running
amdgpu_test ~10 times (a sketch of the selection policy follows the
diff at the bottom).
Before this patch:
sched_name	number of times it got scheduled
==========	================================
sdma0 1463
sdma1 198
comp_1.0.1 280
After this patch:
sched_name	number of times it got scheduled
==========	================================
sdma0 925
sdma1 928
comp_1.0.1 177
comp_1.1.1 44
comp_1.2.1 43
comp_1.3.1 44
Signed-off-by: Nirmoy Das <nirmoy.das@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Link: https://patchwork.freedesktop.org/patch/373000/
Signed-off-by: Christian König <christian.koenig@amd.com>
Diffstat (limited to 'drivers/gpu/drm/scheduler/sched_entity.c')
-rw-r--r--	drivers/gpu/drm/scheduler/sched_entity.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
index c803e14eed91..146380118962 100644
--- a/drivers/gpu/drm/scheduler/sched_entity.c
+++ b/drivers/gpu/drm/scheduler/sched_entity.c
@@ -486,7 +486,7 @@ void drm_sched_entity_push_job(struct drm_sched_job *sched_job,
 	bool first;
 
 	trace_drm_sched_job(sched_job, entity);
-	atomic_inc(&entity->rq->sched->num_jobs);
+	atomic_inc(&entity->rq->sched->score);
 	WRITE_ONCE(entity->last_user, current->group_leader);
 	first = spsc_queue_push(&entity->job_queue, &sched_job->queue_node);
 
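Below is a minimal userspace sketch, for illustration only, of the
least-loaded selection policy this counter feeds; it is not the kernel
implementation, and the names used (struct sched, pick_sched) are
hypothetical. In the kernel, the atomic score incremented above is read
back by the scheduler-selection path, which prefers the candidate
scheduler with the lowest score.

/*
 * Hypothetical userspace model of "pick the scheduler with the lowest
 * score".  The kernel's counter is atomic_t score, bumped in
 * drm_sched_entity_push_job() as shown in the diff above; a plain int
 * stands in for it here.
 */
#include <stdio.h>

struct sched {
	const char *name;
	int score;			/* stand-in for atomic_t score */
};

/* Return the least-loaded scheduler from the candidate list. */
static struct sched *pick_sched(struct sched **list, int n)
{
	struct sched *best = NULL;
	int i;

	for (i = 0; i < n; i++)
		if (!best || list[i]->score < best->score)
			best = list[i];
	return best;
}

int main(void)
{
	struct sched sdma0 = { "sdma0", 0 }, sdma1 = { "sdma1", 0 };
	struct sched *list[] = { &sdma0, &sdma1 };
	int i;

	/* Push 10 jobs; each push bumps the winner's score, which
	 * biases the next selection toward the other scheduler. */
	for (i = 0; i < 10; i++)
		pick_sched(list, 2)->score++;

	printf("%s=%d %s=%d\n", sdma0.name, sdma0.score,
	       sdma1.name, sdma1.score);	/* prints sdma0=5 sdma1=5 */
	return 0;
}

With two idle schedulers the jobs alternate and the counts end up
even, which mirrors how the sdma0/sdma1 totals balance out in the
results above.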