author	Peter Zijlstra <peterz@infradead.org>	2021-03-03 16:45:41 +0100
committer	Peter Zijlstra <peterz@infradead.org>	2021-05-12 11:43:27 +0200
commit	9ef7e7e33bcdb57be1afb28884053c28b5f05240 (patch)
tree	40e43fa4c6d82adf7cd39fbc1f5dfb868701165b /crypto/cast5_generic.c
parent	sched: Core-wide rq->lock (diff)
sched: Optimize rq_lockp() usage
rq_lockp() includes a static_branch(), which is asm-goto, which is
asm volatile, which defeats regular CSE. This means that:

	if (!static_branch(&foo))
		return simple;

	if (static_branch(&foo) && cond)
		return complex;

doesn't fold and we get horrible code. Introduce __rq_lockp(), a
variant without the static_branch(), for use where the branch has
already been tested.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Don Hiatt <dhiatt@digitalocean.com>
Tested-by: Hongyu Ning <hongyu.ning@linux.intel.com>
Tested-by: Vincent Guittot <vincent.guittot@linaro.org>
Link: https://lkml.kernel.org/r/20210422123308.316696988@infradead.org
Diffstat (limited to 'crypto/cast5_generic.c'): 0 files changed, 0 insertions, 0 deletions