author     Linus Torvalds <torvalds@linux-foundation.org>    2022-03-24 01:35:57 +0100
committer  Linus Torvalds <torvalds@linux-foundation.org>    2022-03-24 01:35:57 +0100
commit     9c0e6a89b592f4c4e4d769dbc22d399ab0685159 (patch)
tree       54865d08ede844e868b3403670a9a91ad24bba82 /arch/arm/include/asm/cacheflush.h
parent     Merge tag 'm68knommu-for-v5.18' of git://git.kernel.org/pub/scm/linux/kernel/... (diff)
parent     ARM: fix building NOMMU ARMv4/v5 kernels (diff)
Merge tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm
Pull ARM updates from Russell King:
"Updates for IRQ stacks and virtually mapped stack support, and ftrace:
- Support for IRQ and vmap'ed stacks
This covers all the work related to implementing IRQ stacks and
vmap'ed stacks for all 32-bit ARM systems that are currently
supported by the Linux kernel, including RiscPC and Footbridge. It
has been submitted for review in four different waves:
- IRQ stacks support for v7 SMP systems [0]
- vmap'ed stacks support for v7 SMP systems[1]
- extending support for both IRQ stacks and vmap'ed stacks for all
remaining configurations, including v6/v7 SMP multiplatform
kernels and uniprocessor configurations including v7-M [2]
- fixes and updates in [3]
- ftrace fixes and cleanups
Make all flavors of ftrace available on all builds, regardless of
ISA choice, unwinder choice or compiler [4]:
- use ADD not POP where possible
- fix a couple of Thumb2 related issues
- enable HAVE_FUNCTION_GRAPH_FP_TEST for robustness
- enable the graph tracer with the EABI unwinder
- avoid clobbering frame pointer registers to make Clang happy
- Fixes for the above"
[0] https://lore.kernel.org/linux-arm-kernel/20211115084732.3704393-1-ardb@kernel.org/
[1] https://lore.kernel.org/linux-arm-kernel/20211122092816.2865873-1-ardb@kernel.org/
[2] https://lore.kernel.org/linux-arm-kernel/20211206164659.1495084-1-ardb@kernel.org/
[3] https://lore.kernel.org/linux-arm-kernel/20220124174744.1054712-1-ardb@kernel.org/
[4] https://lore.kernel.org/linux-arm-kernel/20220203082204.1176734-1-ardb@kernel.org/
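The vmap'ed stack work summarized above moves kernel thread stacks into the vmalloc area, where surrounding guard pages turn a stack overflow into an immediate, reportable fault instead of silent corruption of whatever happens to sit next to the stack. The following is a stand-alone user-space analogy of that guard-page idea, using plain mmap()/mprotect(); it is illustrative only and is not kernel code from this series:

#define _DEFAULT_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);
	size_t stack_size = 16 * 1024;

	/* One mapping: an inaccessible guard page followed by the usable
	 * stack area, mirroring how a vmalloc'ed stack is bracketed by
	 * guard pages. */
	unsigned char *area = mmap(NULL, page + stack_size,
				   PROT_READ | PROT_WRITE,
				   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (area == MAP_FAILED)
		return 1;

	/* Revoke access to the lowest page: running off the bottom of the
	 * stack now raises SIGSEGV instead of scribbling over neighbours. */
	if (mprotect(area, page, PROT_NONE) != 0)
		return 1;

	printf("stack: [%p, %p), guard page at %p\n",
	       (void *)(area + page), (void *)(area + page + stack_size),
	       (void *)area);

	munmap(area, page + stack_size);
	return 0;
}

The kernel implementation is of course more involved (stack allocation and alignment, plus overflow detection in the exception entry paths), but the guard page is the core of the protection.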
* tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm: (62 commits)
ARM: fix building NOMMU ARMv4/v5 kernels
ARM: unwind: only permit stack switch when unwinding call_with_stack()
ARM: Revert "unwind: dump exception stack from calling frame"
ARM: entry: fix unwinder problems caused by IRQ stacks
ARM: unwind: set frame.pc correctly for current-thread unwinding
ARM: 9184/1: return_address: disable again for CONFIG_ARM_UNWIND=y
ARM: 9183/1: unwind: avoid spurious warnings on bogus code addresses
Revert "ARM: 9144/1: forbid ftrace with clang and thumb2_kernel"
ARM: mach-bcm: disable ftrace in SMC invocation routines
ARM: cacheflush: avoid clobbering the frame pointer
ARM: kprobes: treat R7 as the frame pointer register in Thumb2 builds
ARM: ftrace: enable the graph tracer with the EABI unwinder
ARM: unwind: track location of LR value in stack frame
ARM: ftrace: enable HAVE_FUNCTION_GRAPH_FP_TEST
ARM: ftrace: avoid unnecessary literal loads
ARM: ftrace: avoid redundant loads or clobbering IP
ARM: ftrace: use trampolines to keep .init.text in branching range
ARM: ftrace: use ADD not POP to counter PUSH at entry
ARM: ftrace: ensure that ADR takes the Thumb bit into account
ARM: make get_current() and __my_cpu_offset() __always_inline
...
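One of the ftrace changes above enables HAVE_FUNCTION_GRAPH_FP_TEST, which makes the function graph tracer record the frame pointer when it hooks a function's return address and verify it again when the function returns, so a corrupted shadow return stack is detected instead of silently followed. A minimal user-space sketch of that check (illustrative only; the names and structure are not the kernel's):

#include <stdio.h>
#include <stdlib.h>

/* One entry per hooked call: the saved return address plus the frame
 * pointer observed when the hook was installed. */
struct ret_entry {
	void *ret_addr;
	unsigned long fp;
};

static struct ret_entry ret_stack[64];
static int depth;

/* At function entry: remember where to return and the current fp. */
static void push_return(void *ret_addr, unsigned long fp)
{
	ret_stack[depth].ret_addr = ret_addr;
	ret_stack[depth].fp = fp;
	depth++;
}

/* At function exit: the fp seen now must match the one recorded at entry,
 * otherwise the shadow stack is out of sync with the real call stack. */
static void *pop_return(unsigned long fp)
{
	depth--;
	if (ret_stack[depth].fp != fp) {
		fprintf(stderr, "graph tracer: bad frame pointer, aborting\n");
		abort();
	}
	return ret_stack[depth].ret_addr;
}

int main(void)
{
	push_return((void *)0x8000, 0xdeadbeef);
	printf("returning to %p\n", pop_return(0xdeadbeef));
	return 0;
}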
Diffstat (limited to 'arch/arm/include/asm/cacheflush.h')
-rw-r--r--  arch/arm/include/asm/cacheflush.h  12
1 file changed, 3 insertions, 9 deletions
diff --git a/arch/arm/include/asm/cacheflush.h b/arch/arm/include/asm/cacheflush.h
index 5e56288e343b..a094f964c869 100644
--- a/arch/arm/include/asm/cacheflush.h
+++ b/arch/arm/include/asm/cacheflush.h
@@ -445,15 +445,10 @@ static inline void __sync_cache_range_r(volatile void *p, size_t size)
  * however some exceptions may exist. Caveat emptor.
  *
  * - The clobber list is dictated by the call to v7_flush_dcache_*.
- *   fp is preserved to the stack explicitly prior disabling the cache
- *   since adding it to the clobber list is incompatible with having
- *   CONFIG_FRAME_POINTER=y. ip is saved as well if ever r12-clobbering
- *   trampoline are inserted by the linker and to keep sp 64-bit aligned.
  */
 #define v7_exit_coherency_flush(level) \
 	asm volatile( \
 	".arch	armv7-a \n\t" \
-	"stmfd	sp!, {fp, ip} \n\t" \
 	"mrc	p15, 0, r0, c1, c0, 0	@ get SCTLR \n\t" \
 	"bic	r0, r0, #"__stringify(CR_C)" \n\t" \
 	"mcr	p15, 0, r0, c1, c0, 0	@ set SCTLR \n\t" \
@@ -463,10 +458,9 @@ static inline void __sync_cache_range_r(volatile void *p, size_t size)
 	"bic	r0, r0, #(1 << 6)	@ disable local coherency \n\t" \
 	"mcr	p15, 0, r0, c1, c0, 1	@ set ACTLR \n\t" \
 	"isb	\n\t" \
-	"dsb	\n\t" \
-	"ldmfd	sp!, {fp, ip}" \
-	: : : "r0","r1","r2","r3","r4","r5","r6","r7", \
-	      "r9","r10","lr","memory" )
+	"dsb" \
+	: : : "r0","r1","r2","r3","r4","r5","r6", \
+	      "r9","r10","ip","lr","memory" )
 
 void flush_uprobe_xol_access(struct page *page, unsigned long uaddr,
 			     void *kaddr, unsigned long len);
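The hunk above comes from "ARM: cacheflush: avoid clobbering the frame pointer" in the commit list: instead of saving and restoring fp and ip around the asm block with stmfd/ldmfd, ip is simply named in the clobber list so the compiler preserves it as needed, and the frame pointer (r11, or r7 in Thumb2 builds) is kept out of the clobber list entirely, since clobbering it breaks frame-pointer unwinding and upsets Clang. A stand-alone sketch of the same pattern, assuming an ARMv7 target and GCC/Clang extended asm (the function below is illustrative, not kernel code):

#include <stdio.h>

/* Doubles x via a scratch register.  ip (r12) is listed as a clobber, so
 * the compiler knows nothing can stay live there across the asm block; no
 * manual push/pop is needed, and the frame pointer is left untouched. */
static int double_it(int x)
{
	int out;

	asm volatile(
		"mov	ip, %1		\n\t"
		"add	ip, ip, ip	\n\t"
		"mov	%0, ip		\n\t"
		: "=r" (out)
		: "r" (x)
		: "ip");

	return out;
}

int main(void)
{
	printf("%d\n", double_it(21));	/* prints 42 */
	return 0;
}

Build with an ARM cross toolchain, e.g. arm-linux-gnueabihf-gcc -O2, to see that the compiler arranges any needed spills itself.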