author     Yang Jihong <yangjihong1@huawei.com>               2023-02-21 00:49:16 +0100
committer  Masami Hiramatsu (Google) <mhiramat@kernel.org>    2023-02-21 00:49:16 +0100
commit     868a6fc0ca2407622d2833adefe1c4d284766c4c (patch)
tree       0252e31d0dd357bf8a560f0c7e72698957256c8a /kernel/kprobes.c
parent     kprobes: Fix to handle forcibly unoptimized kprobes on freeing_list (diff)
x86/kprobes: Fix __recover_optprobed_insn check optimizing logic
Since the following commit:
commit f66c0447cca1 ("kprobes: Set unoptimized flag after unoptimizing code")
modified the update timing of the KPROBE_FLAG_OPTIMIZED flag, an optimized_kprobe
may be in either the optimizing or the unoptimizing state when op.kp->flags
has KPROBE_FLAG_OPTIMIZED set and op->list is not empty.
The check logic in __recover_optprobed_insn is therefore incorrect: a kprobe in the
unoptimizing state (whose jump is still written in the text) may be mistaken for one
that is still being optimized, so the original bytes are not recovered from the
copied_insn buffer and incorrect instructions are copied.
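For context, a minimal sketch of what the corrected check on the x86 side might look
like. The arch/x86 hunk is not included in this diffstat, so the exact code below is
an assumption based on the description above, not the patch itself:

	/* Sketch only: assumed shape of the fixed check in __recover_optprobed_insn() */
	kp = get_kprobe((void *)addr - i);
	if (kp && kprobe_optimized(kp)) {
		op = container_of(kp, struct optimized_kprobe, kp);
		/*
		 * Recover the original bytes when the kprobe is fully optimized
		 * (op->list empty) or when it is queued for unoptimization: in
		 * both cases the jump may still be written in the kernel text.
		 */
		if (list_empty(&op->list) || optprobe_queued_unopt(op))
			goto found;
	}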
The optprobe_queued_unopt() function needs to be exported so that it can be
called from arch-specific code.
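A hedged sketch of the accompanying declaration; the header hunk is likewise outside
this diffstat, so the exact location and wording are assumptions:

	/* Assumed: prototype made visible so arch code can call the helper. */
	extern bool optprobe_queued_unopt(struct optimized_kprobe *op);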
Link: https://lore.kernel.org/all/20230216034247.32348-2-yangjihong1@huawei.com/
Fixes: f66c0447cca1 ("kprobes: Set unoptimized flag after unoptimizing code")
Cc: stable@vger.kernel.org
Signed-off-by: Yang Jihong <yangjihong1@huawei.com>
Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Diffstat (limited to 'kernel/kprobes.c')
-rw-r--r--   kernel/kprobes.c   2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index 6b6aff00b3b6..55e1807ca054 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -660,7 +660,7 @@ void wait_for_kprobe_optimizer(void)
 	mutex_unlock(&kprobe_mutex);
 }
 
-static bool optprobe_queued_unopt(struct optimized_kprobe *op)
+bool optprobe_queued_unopt(struct optimized_kprobe *op)
 {
 	struct optimized_kprobe *_op;
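For reference, the helper whose static qualifier is dropped above walks the
unoptimization queue. A sketch of its existing body, reproduced from the surrounding
kernel/kprobes.c context rather than from this hunk, so details may differ slightly:

	bool optprobe_queued_unopt(struct optimized_kprobe *op)
	{
		struct optimized_kprobe *_op;

		/* Return true if @op is queued on the unoptimizing list. */
		list_for_each_entry(_op, &unoptimizing_list, list) {
			if (op == _op)
				return true;
		}

		return false;
	}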