| author | Elliot Berman <quic_eberman@quicinc.com> | 2023-09-09 00:49:15 +0200 |
|---|---|---|
| committer | Ingo Molnar <mingo@kernel.org> | 2023-09-18 08:13:57 +0200 |
| commit | fbaa6a181a4b1886cbf4214abdf9a2df68471510 (patch) | |
| tree | ceced1ad476ddaa0aa756e3ef8bc36f9a62942d4 /include | |
| parent | sched/core: Use do-while instead of for loop in set_nr_if_polling() (diff) | |
sched/core: Remove ifdeffery for saved_state
In preparation for freezer to also use saved_state, remove the
CONFIG_PREEMPT_RT compilation guard around saved_state.
On the arm64 platform I tested, which did not have CONFIG_PREEMPT_RT enabled,
applying this patch produced no statistically significant performance change.
Test methodology:
perf bench sched messaging -g 40 -l 40
Signed-off-by: Elliot Berman <quic_eberman@quicinc.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'include')
| -rw-r--r-- | include/linux/sched.h | 2 |
1 file changed, 0 insertions, 2 deletions
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 77f01ac385f7..dc37ae787e33 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -750,10 +750,8 @@ struct task_struct {
 #endif
 	unsigned int __state;

-#ifdef CONFIG_PREEMPT_RT
 	/* saved state for "spinlock sleepers" */
 	unsigned int saved_state;
-#endif

 	/*
 	 * This begins the randomizable portion of task_struct. Only
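For context, the field being un-ifdef'd backs the "spinlock sleeper" pattern referred to in the comment: on PREEMPT_RT, a task that is already in a sleeping state may still have to block on an rtmutex-based spinlock, so its __state is temporarily overwritten while the original state is parked in saved_state and restored on wakeup. The sketch below is a minimal, self-contained userspace model of that save/restore idea, not the kernel's actual implementation; the struct layout, helper names (task_save_and_set_state(), task_restore_saved_state()) and state constants are simplified stand-ins for illustration only.

```c
/*
 * Minimal userspace model of the saved_state save/restore pattern.
 * All names and constants are illustrative stand-ins, not the
 * kernel's real API.
 */
#include <stdio.h>

#define TASK_RUNNING        0x0000
#define TASK_INTERRUPTIBLE  0x0001
#define TASK_RTLOCK_WAIT    0x1000   /* stand-in for the rtlock wait state */

struct task {
	unsigned int __state;       /* current scheduling state */
	unsigned int saved_state;   /* state parked while blocked on an rtlock */
};

/* Park the caller's current state and mark it as waiting on an rtlock. */
static void task_save_and_set_state(struct task *t, unsigned int new_state)
{
	t->saved_state = t->__state;
	t->__state = new_state;
}

/* Wakeup path: put the parked state back. */
static void task_restore_saved_state(struct task *t)
{
	t->__state = t->saved_state;
	t->saved_state = TASK_RUNNING;
}

int main(void)
{
	struct task t = { .__state = TASK_INTERRUPTIBLE, .saved_state = TASK_RUNNING };

	/* Task blocks on a sleeping "spinlock": remember it was INTERRUPTIBLE. */
	task_save_and_set_state(&t, TASK_RTLOCK_WAIT);
	printf("blocked:  __state=%#x saved_state=%#x\n", t.__state, t.saved_state);

	/* Lock is released; wakeup restores the original sleeping state. */
	task_restore_saved_state(&t);
	printf("restored: __state=%#x saved_state=%#x\n", t.__state, t.saved_state);

	return 0;
}
```

With the #ifdef removed, saved_state exists in every configuration, which is what allows a later user such as the freezer to reuse the same save/restore bookkeeping instead of duplicating it behind its own config guard.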