author     Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>  2019-03-15 15:51:19 +0100
committer  Daniel Borkmann <daniel@iogearbox.net>           2019-03-16 01:28:22 +0100
commit     86be36f6502c52ddb4b85938145324fd07332da1
parent     xsk: fix umem memory leak on cleanup

powerpc: bpf: Fix generation of load/store DW instructions
Yauheni Kaliuta pointed out that the PTR_TO_STACK store/load verifier test
was failing on powerpc64 BE, and rightly indicated that the PPC_LD()
macro does not mask away the last two bits of the offset as the ISA
requires, resulting in the generation of an 'lwa' instruction instead of
the intended 'ld' instruction.
Segher also pointed out that we can't simply mask away the last two bits,
as that would result in loading from/storing to a memory location other
than the one intended.
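To illustrate the encoding issue, here is a minimal, hypothetical sketch
(not the kernel's actual macros) of how a DS-form load is laid out per the
Power ISA: the displacement occupies bits 16-29 of the instruction word and
bits 30-31 are the extended opcode, which is why the low two bits of the
byte offset must already be zero.

  #include <stdint.h>

  /*
   * Illustration only: encode a DS-form load (primary opcode 58).
   * XO 0 selects 'ld', XO 2 selects 'lwa'; the displacement field
   * shares the instruction word with XO, hence the & 0xfffc mask.
   */
  static uint32_t ds_form(uint32_t rt, uint32_t ra, int32_t off, uint32_t xo)
  {
          return (58u << 26) | (rt << 21) | (ra << 16) |
                 ((uint32_t)off & 0xfffc) | (xo & 0x3);
  }

  /*
   * ds_form(3, 1, 8, 0) encodes 'ld r3, 8(r1)'.  With an unaligned
   * offset such as 10, OR-ing the offset in unmasked would flip XO
   * from 0 ('ld') to 2 ('lwa'), while masking it down to 8 keeps 'ld'
   * but reads from the wrong address; neither outcome is acceptable.
   */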
This patch addresses the issue by using ldx/stdx when the offset is not
word-aligned. We load the offset into a temporary register (TMP_REG_2)
and use that as the index register in a subsequent ldx/stdx. We fix the
PPC_LD() macro to mask off the last two bits, and enhance PPC_BPF_LL()
and PPC_BPF_STL() to factor in the offset value and generate the proper
instruction sequence. We also convert all existing users of PPC_LD() and
PPC_STD() to use these macros. All existing uses of these macros have
been audited to ensure that TMP_REG_2 can be clobbered.
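As a rough sketch of that approach, assuming the macro names already used
by the powerpc BPF JIT (PPC_LI, PPC_LD/PPC_LDX, PPC_STD/PPC_STDX,
b2p[TMP_REG_2]; the exact in-tree definitions may differ), the helpers
fall back to the indexed X-form when the offset is not a multiple of 4:

  /*
   * Sketch only: use the DS-form ld/std when the offset is
   * word-aligned, otherwise materialize the offset in TMP_REG_2 and
   * use the X-form ldx/stdx with it as the index register.
   */
  #define PPC_BPF_LL(r, base, i) do {                                 \
                  if ((i) % 4) {                                      \
                          PPC_LI(b2p[TMP_REG_2], (i));                \
                          PPC_LDX(r, base, b2p[TMP_REG_2]);           \
                  } else                                              \
                          PPC_LD(r, base, i);                         \
                  } while (0)

  #define PPC_BPF_STL(r, base, i) do {                                \
                  if ((i) % 4) {                                      \
                          PPC_LI(b2p[TMP_REG_2], (i));                \
                          PPC_STDX(r, base, b2p[TMP_REG_2]);          \
                  } else                                              \
                          PPC_STD(r, base, i);                        \
                  } while (0)

Since TMP_REG_2 only ever carries the immediate offset here, callers just
have to guarantee that it holds no live value at the point of the
load/store, which is what the audit mentioned above confirms.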
Fixes: 156d0e290e96 ("powerpc/ebpf/jit: Implement JIT compiler for extended BPF")
Cc: stable@vger.kernel.org # v4.9+
Reported-by: Yauheni Kaliuta <yauheni.kaliuta@redhat.com>
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>