path: root/arch/powerpc/kernel/entry_32.S
* powerpc/32s: get rid of CPU_FTR_601 feature (Christophe Leroy, 2019-08-28, 1 file, -6/+16)

    Now that 601 is exclusive from other 6xx, CPU_FTR_601 and associated
    fixups are useless. Drop this feature and use #ifdefs instead.

    Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/ecdb7194a17dbfa01865df6a82979533adc2c70b.1566834712.git.christophe.leroy@c-s.fr
* powerpc/32: replace LOAD_MSR_KERNEL() by LOAD_REG_IMMEDIATE() (Christophe Leroy, 2019-08-27, 1 file, -9/+9)

    LOAD_MSR_KERNEL() and LOAD_REG_IMMEDIATE() are doing the same thing
    in the same way. Drop LOAD_MSR_KERNEL().

    Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/8f04a6df0bc8949517fd8236d50c15008ccf9231.1566311636.git.christophe.leroy@c-s.fr
* powerpc: Wire up clone3 syscall (Michael Ellerman, 2019-07-29, 1 file, -0/+8)

    Wire up the new clone3 syscall added in commit 7f192e3cd316 ("fork:
    add clone3").

    This requires a ppc_clone3 wrapper, in order to save the non-volatile
    GPRs before calling into the generic syscall code. Otherwise we hit
    the BUG_ON in CHECK_FULL_REGS in copy_thread().

    Lightly tested using Christian's test code on a Power8 LE VM.

    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Acked-by: Christian Brauner <christian@brauner.io>
    Link: https://lore.kernel.org/r/20190724140259.23554-1-mpe@ellerman.id.au
* treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 152 (Thomas Gleixner, 2019-05-30, 1 file, -6/+1)

    Based on 1 normalized pattern(s):

        this program is free software you can redistribute it and or
        modify it under the terms of the gnu general public license as
        published by the free software foundation either version 2 of
        the license or at your option any later version

    extracted by the scancode license scanner the SPDX license identifier

        GPL-2.0-or-later

    has been chosen to replace the boilerplate/reference in 3029 file(s).

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Reviewed-by: Allison Randal <allison@lohutok.net>
    Cc: linux-spdx@vger.kernel.org
    Link: https://lkml.kernel.org/r/20190527070032.746973796@linutronix.de
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* powerpc/entry: Remove unneeded need_resched() loop (Valentin Schneider, 2019-05-02, 1 file, -4/+1)

    Since the enabling and disabling of IRQs within
    preempt_schedule_irq() is contained in a need_resched() loop, we
    don't need the outer arch code loop.

    Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
    [mpe: Rebase since CURRENT_THREAD_INFO() removal]
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
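A minimal C analogue of why the outer loop is redundant: the retry loop already lives inside preempt_schedule_irq(), so the arch caller only needs a single conditional call. All names below are illustrative stand-ins, not kernel symbols:

    #include <stdbool.h>

    static bool need_resched_flag = true;

    static bool need_resched(void)  { return need_resched_flag; }
    static void schedule_once(void) { need_resched_flag = false; }

    /* Analogue of preempt_schedule_irq(): loops until nothing is pending. */
    static void preempt_schedule_irq_like(void)
    {
        do {
            /* kernel: enable IRQs, __schedule(), disable IRQs */
            schedule_once();
        } while (need_resched());
    }

    /* Analogue of the entry_32.S resume path: no loop needed around it. */
    void resume_kernel_like(void)
    {
        if (need_resched())
            preempt_schedule_irq_like();
    }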
* powerpc/32: Don't add dummy frames when calling trace_hardirqs_on/off (Christophe Leroy, 2019-05-02, 1 file, -14/+2)

    No need to add dummy frames when calling trace_hardirqs_on or
    trace_hardirqs_off. GCC properly handles empty stacks.

    In addition, powerpc doesn't set CONFIG_FRAME_POINTER, therefore
    __builtin_return_address(1..) returns NULL at all times. So the
    dummy frames are definitely unneeded here.

    In the meantime, avoid reading memory for loading r1 with a value
    we already know.

    Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/32: don't do syscall stuff in transfer_to_handler (Christophe Leroy, 2019-05-02, 1 file, -19/+0)

    As syscalls are now handled via a fast entry path, syscall related
    actions can be removed from the generic transfer_to_handler path.

    Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/32: implement fast entry for syscalls on BOOKE (Christophe Leroy, 2019-05-02, 1 file, -7/+0)

    This patch implements a fast entry for syscalls. Syscalls don't have
    to preserve non-volatile registers except LR, so in this entry the
    volatile registers get clobbered.

    As this entry is dedicated to syscalls, it always sets MSR_EE and
    warns in case MSR_EE was previously off. It also assumes that the
    call is always from user; system calls are unexpected from kernel.

    Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/32: implement fast entry for syscalls on non BOOKE (Christophe Leroy, 2019-05-02, 1 file, -0/+32)

    This patch implements a fast entry for syscalls. Syscalls don't have
    to preserve non-volatile registers except LR, so in this entry the
    volatile registers get clobbered.

    As this entry is dedicated to syscalls, it always sets MSR_EE and
    warns in case MSR_EE was previously off. It also assumes that the
    call is always from user; system calls are unexpected from kernel.

    The overall series improves the null_syscall selftest by 12.5% on an
    83xx and by 17% on an 8xx.

    Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc: Fix 32-bit handling of MSR_EE on exceptions (Christophe Leroy, 2019-05-02, 1 file, -49/+67)

    [text mostly copied from benh's RFC/WIP]

    ppc32 is still doing something rather gothic and wrong on 32-bit
    which we stopped doing on 64-bit a while ago.

    We have that thing where some handlers "copy" the EE value from the
    original stack frame into the new MSR before transferring to the
    handler. Thus for a number of exceptions, we enter the handlers with
    interrupts enabled.

    This is rather fishy: some of the stuff that handlers might do early
    on, such as irq_enter/exit or user_exit, context tracking, etc...,
    should be run with interrupts off afaik. Generally our handlers know
    when to re-enable interrupts if needed.

    The problem we were having is that we assumed these interrupts would
    return with interrupts enabled. However that isn't the case.

    Instead, this patch changes things so that we always enter exception
    handlers with interrupts *off*, with the notable exception of
    syscalls, which are special (and get a fast path).

    Suggested-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/32: move LOAD_MSR_KERNEL() into head_32.h and use it (Christophe Leroy, 2019-05-02, 1 file, -8/+1)

    As preparation for using head_32.h for head_40x.S, move
    LOAD_MSR_KERNEL() there and use it to load r10 with the MSR_KERNEL
    value.

    In the meantime, this patch modifies it so that it takes into account
    the size of the passed value to determine whether 'li' can be used or
    whether 'lis/ori' is needed, instead of using the size of MSR_KERNEL.
    This is done with a gas macro.

    Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
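A sketch in C of the decision the gas macro encodes: whether a constant fits the signed 16-bit immediate of a single li, or needs an lis/ori pair. This only models the selection logic, not the macro itself:

    #include <stdint.h>

    /* 'li' sign-extends a 16-bit immediate, so one instruction suffices
     * for values in [-32768, 32767]. */
    static int fits_li(int32_t v)
    {
        return v >= -32768 && v <= 32767;
    }

    /* Returns how many instructions the load needs: 1 for a bare li,
     * otherwise lis (high half) plus ori when the low half is non-zero. */
    static int load_imm_cost(int32_t v)
    {
        if (fits_li(v))
            return 1;                /* li  rN, v                   */
        return (v & 0xffff) ? 2 : 1; /* lis rN, v@h [; ori rN, rN, v@l] */
    }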
* powerpc/32s: Implement Kernel Userspace Execution Prevention. (Christophe Leroy, 2019-04-21, 1 file, -0/+9)

    To implement Kernel Userspace Execution Prevention, this patch sets
    the NX bit on all user segments on kernel entry and clears the NX bit
    on all user segments on kernel exit.

    Note that powerpc 601 doesn't have the NX bit, so KUEP will not work
    on it. A warning is displayed at startup.

    Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/32: Prepare for Kernel Userspace Access Protection (Christophe Leroy, 2019-04-21, 1 file, -4/+12)

    This patch adds ASM macros for saving, restoring and checking the
    KUAP state, and modifies setup_32 to call them on exceptions from
    kernel.

    The macros are defined as empty by default, for when CONFIG_PPC_KUAP
    is not selected and/or for platforms which don't (yet) handle KUAP.

    Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
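The "empty by default" arrangement can be pictured as a C header sketch; the macro names here are placeholders, not the real powerpc ones:

    /* kup-sketch.h: placeholders illustrating the pattern described above. */
    #ifdef CONFIG_PPC_KUAP
    #define KUAP_SAVE_AND_LOCK(sp)  platform_kuap_save_and_lock(sp)
    #define KUAP_RESTORE(sp)        platform_kuap_restore(sp)
    #define KUAP_CHECK()            platform_kuap_check()
    #else
    /* CONFIG_PPC_KUAP not selected, or platform support not yet added:
     * the hooks compile away to nothing and cost nothing. */
    #define KUAP_SAVE_AND_LOCK(sp)
    #define KUAP_RESTORE(sp)
    #define KUAP_CHECK()
    #endif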
* powerpc/32: Remove MSR_PR test when returning from syscall (Christophe Leroy, 2019-04-21, 1 file, -5/+0)

    Syscalls are from user only, so we can account time without checking
    whether we are returning to kernel or user, as it will only be user.

    Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/32: Clear on-stack exception marker upon exception return (Christophe Leroy, 2019-03-03, 1 file, -0/+9)

    Clear the on-stack STACK_FRAME_REGS_MARKER on exception exit in order
    to avoid confusing stacktraces like the one below:

        Call Trace:
        [c0e9dca0] [c01c42a0] print_address_description+0x64/0x2bc (unreliable)
        [c0e9dcd0] [c01c4684] kasan_report+0xfc/0x180
        [c0e9dd10] [c0895130] memchr+0x24/0x74
        [c0e9dd30] [c00a9e38] msg_print_text+0x124/0x574
        [c0e9dde0] [c00ab710] console_unlock+0x114/0x4f8
        [c0e9de40] [c00adc60] vprintk_emit+0x188/0x1c4
        --- interrupt: c0e9df00 at 0x400f330
            LR = init_stack+0x1f00/0x2000
        [c0e9de80] [c00ae3c4] printk+0xa8/0xcc (unreliable)
        [c0e9df20] [c0c27e44] early_irq_init+0x38/0x108
        [c0e9df50] [c0c15434] start_kernel+0x310/0x488
        [c0e9dff0] [00003484] 0x3484

    With this patch the trace becomes:

        Call Trace:
        [c0e9dca0] [c01c42c0] print_address_description+0x64/0x2bc (unreliable)
        [c0e9dcd0] [c01c46a4] kasan_report+0xfc/0x180
        [c0e9dd10] [c0895150] memchr+0x24/0x74
        [c0e9dd30] [c00a9e58] msg_print_text+0x124/0x574
        [c0e9dde0] [c00ab730] console_unlock+0x114/0x4f8
        [c0e9de40] [c00adc80] vprintk_emit+0x188/0x1c4
        [c0e9de80] [c00ae3e4] printk+0xa8/0xcc
        [c0e9df20] [c0c27e44] early_irq_init+0x38/0x108
        [c0e9df50] [c0c15434] start_kernel+0x310/0x488
        [c0e9dff0] [00003484] 0x3484

    Cc: stable@vger.kernel.org
    Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
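What the marker buys the unwinder, as a hedged C sketch; the marker value and word offset reflect 32-bit powerpc as best this log's context allows, so treat them as illustrative:

    #define STACK_FRAME_REGS_MARKER 0x72656773UL  /* ASCII "regs" */
    #define STACK_FRAME_MARKER      2             /* word index in the frame */

    /* A frame still carrying the marker is treated as an exception frame
     * and printed as an "--- interrupt:" boundary. A stale marker left
     * behind after exception exit produces bogus boundaries like the
     * first trace above. */
    static int is_exception_frame(const unsigned long *sp)
    {
        return sp[STACK_FRAME_MARKER] == STACK_FRAME_REGS_MARKER;
    }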
* powerpc/32: Remove CURRENT_THREAD_INFO and rename TI_CPU (Christophe Leroy, 2019-02-23, 1 file, -37/+18)

    Now that thread_info is similar to task_struct, its address is in r2,
    so the CURRENT_THREAD_INFO() macro is useless. This patch removes it.

    This patch also moves the 'tovirt(r2, r2)' down just before the
    reactivation of MMU translation, so that we keep the physical address
    of 'current' in r2 until then. It avoids a few calls to tophys().

    At the same time, as the 'cpu' field is no longer in thread_info,
    TI_CPU is renamed TASK_CPU by this patch.

    It also allows us to get rid of a couple of
    '#ifdef CONFIG_VIRT_CPU_ACCOUNTING_NATIVE' as ACCOUNT_CPU_USER_ENTRY()
    and ACCOUNT_CPU_USER_EXIT() are empty when
    CONFIG_VIRT_CPU_ACCOUNTING_NATIVE is not defined.

    Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
    [mpe: Fix a missed conversion of TI_CPU in idle_6xx.S]
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc: regain entire stack space (Christophe Leroy, 2019-02-23, 1 file, -10/+4)

    thread_info is no longer in the stack, so the entire stack can now be
    used.

    There is also no longer any risk of corrupting task_cpu(p) with a
    stack overflow, so the patch removes the test.

    When doing this, an explicit test for a NULL stack pointer is needed
    in validate_sp() as it is no longer implicitly covered by the
    sizeof(thread_info) gap.

    In the meantime, with the previous patch all pointers to the stacks
    are no longer pointers to thread_info, so this patch changes them to
    void*.

    Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc: Activate CONFIG_THREAD_INFO_IN_TASK (Christophe Leroy, 2019-02-23, 1 file, -5/+4)

    This patch activates CONFIG_THREAD_INFO_IN_TASK, which moves the
    thread_info into task_struct.

    Moving thread_info into task_struct has the following advantages:
    - It protects thread_info from corruption in the case of stack
      overflows.
    - Its address is harder to determine if stack addresses are leaked,
      making a number of attacks more difficult.

    This has the following consequences:
    - thread_info is now located at the beginning of task_struct.
    - The 'cpu' field is now in task_struct, and only exists when
      CONFIG_SMP is active.
    - thread_info no longer has the 'task' field.

    This patch:
    - Removes all recopy of the thread_info struct when the stack
      changes.
    - Changes the CURRENT_THREAD_INFO() macro to point to current.
    - Selects CONFIG_THREAD_INFO_IN_TASK.
    - Modifies raw_smp_processor_id() to get ->cpu from current without
      including linux/sched.h to avoid circular inclusion, and without
      including asm/asm-offsets.h to avoid symbol name duplication
      between ASM constants and C constants.
    - Modifies klp_init_thread_info() to take a task_struct pointer
      argument.

    Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
    Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
    [mpe: Add task_stack.h to livepatch.h to fix build fails]
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc: Rename THREAD_INFO to TASK_STACK (Christophe Leroy, 2019-02-23, 1 file, -1/+1)

    This patch renames THREAD_INFO to TASK_STACK, because it is in fact
    the offset of the pointer to the stack in task_struct, so this
    pointer will not be impacted by the move of THREAD_INFO.

    Also make it available on 64-bit, as we'll need it there when we
    activate THREAD_INFO_IN_TASK.

    Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
    Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
    [mpe: Make available on 64-bit]
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/32: Fix CONFIG_VIRT_CPU_ACCOUNTING_NATIVE for 40x/booke (Christophe Leroy, 2019-02-21, 1 file, -5/+7)

    40x/booke have another path to reach 3f from transfer_to_handler;
    make sure it also calls ACCOUNT_CPU_USER_ENTRY() when
    CONFIG_VIRT_CPU_ACCOUNTING_NATIVE is selected.

    Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/6xx: Don't use SPRN_SPRG2 for storing stack pointer while in RTAS (Christophe Leroy, 2019-02-21, 1 file, -2/+3)

    When calling RTAS, the stack pointer is stored in SPRN_SPRG2 in order
    to be able to restore it in case of machine check in RTAS. As machine
    check is not a performance critical path, this patch frees SPRN_SPRG2
    by using a field in the thread struct instead.

    Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/32: Remove unnecessary MSR[RI] clearing for 8xx (Christophe Leroy, 2019-02-21, 1 file, -3/+0)

    MSR[RI] has already been cleared a few lines above.

    Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc: change CONFIG_6xx to CONFIG_PPC_BOOK3S_32 (Christophe Leroy, 2018-11-26, 1 file, -5/+5)

    Today we have:

        config PPC_BOOK3S_32
            bool "512x/52xx/6xx/7xx/74xx/82xx/83xx/86xx"
            [depends on PPC32 within a choice]

        config PPC_BOOK3S
            def_bool y
            depends on PPC_BOOK3S_32 || PPC_BOOK3S_64

        config 6xx
            def_bool y
            depends on PPC32 && PPC_BOOK3S

    6xx is therefore redundant with PPC_BOOK3S_32. In order to make the
    code clearer, let's preferably use PPC_BOOK3S_32. This will allow
    CONFIG_6xx to be removed in a later patch.

    Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/traps: merge unrecoverable_exception() and nonrecoverable_exception() (Christophe Leroy, 2018-10-03, 1 file, -2/+2)

    PPC32 uses nonrecoverable_exception() while PPC64 uses
    unrecoverable_exception(). Both functions are doing almost the same
    thing. This patch removes nonrecoverable_exception().

    Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/fsl: Sanitize the syscall table for NXP PowerPC 32 bit platforms (Diana Craciun, 2018-08-07, 1 file, -0/+10)

    Used barrier_nospec to sanitize the syscall table.

    Signed-off-by: Diana Craciun <diana.craciun@nxp.com>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
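The hardening pattern, sketched in C: the bounds check on the syscall number must architecturally complete before the number indexes the table, so a mis-speculated out-of-bounds load cannot leak data. barrier_nospec() is the real powerpc primitive; everything else here is a stand-in:

    #define ENOSYS 38
    /* stand-in: the real barrier_nospec() emits a speculation barrier */
    #define barrier_nospec() __asm__ __volatile__("" ::: "memory")

    typedef long (*syscall_fn)(long, long, long, long, long, long);

    long dispatch(unsigned long nr, const syscall_fn *table,
                  unsigned long nr_syscalls, long a[6])
    {
        if (nr >= nr_syscalls)
            return -ENOSYS;
        /* no speculative load of table[nr] past the bounds check */
        barrier_nospec();
        return table[nr](a[0], a[1], a[2], a[3], a[4], a[5]);
    }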
* powerpc: clean inclusions of asm/feature-fixups.h (Christophe Leroy, 2018-07-30, 1 file, -0/+1)

    Files not using feature fixups don't need asm/feature-fixups.h;
    files using feature fixups need asm/feature-fixups.h.

    Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/405: move PPC405_ERR77 in asm-405.h (Christophe Leroy, 2018-07-30, 1 file, -0/+1)

    Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc: Add syscall detection for restartable sequences (Boqun Feng, 2018-06-06, 1 file, -0/+7)

    Syscalls are not allowed inside restartable sequences, so add a call
    to rseq_syscall() at the very beginning of the system call exiting
    path for CONFIG_DEBUG_RSEQ=y kernels. This could help us to detect
    whether there is a syscall issued inside restartable sequences.

    Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
    Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Cc: Joel Fernandes <joelaf@google.com>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Dave Watson <davejwatson@fb.com>
    Cc: Will Deacon <will.deacon@arm.com>
    Cc: Andi Kleen <andi@firstfloor.org>
    Cc: Paul Mackerras <paulus@samba.org>
    Cc: "H . Peter Anvin" <hpa@zytor.com>
    Cc: Chris Lameter <cl@linux.com>
    Cc: Russell King <linux@arm.linux.org.uk>
    Cc: Michael Ellerman <mpe@ellerman.id.au>
    Cc: Andrew Hunter <ahh@google.com>
    Cc: Michael Kerrisk <mtk.manpages@gmail.com>
    Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Cc: "Paul E . McKenney" <paulmck@linux.vnet.ibm.com>
    Cc: Paul Turner <pjt@google.com>
    Cc: Josh Triplett <josh@joshtriplett.org>
    Cc: Steven Rostedt <rostedt@goodmis.org>
    Cc: Ben Maurer <bmaurer@fb.com>
    Cc: linux-api@vger.kernel.org
    Cc: linuxppc-dev@lists.ozlabs.org
    Cc: Andy Lutomirski <luto@amacapital.net>
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Link: https://lkml.kernel.org/r/20180602124408.8430-10-mathieu.desnoyers@efficios.com
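The shape of the hook, sketched in C; rseq_syscall() is the real generic helper from kernel/rseq.c, while the wrapper and its name are illustrative:

    struct pt_regs;                                  /* arch register frame */
    extern void rseq_syscall(struct pt_regs *regs);  /* kernel/rseq.c */

    /* Called at the start of the syscall exit path when DEBUG_RSEQ=y;
     * the helper sends SIGSEGV if the syscall was issued from inside an
     * rseq critical section. With DEBUG_RSEQ=n this compiles to nothing. */
    static inline void debug_rseq_check(struct pt_regs *regs)
    {
    #ifdef CONFIG_DEBUG_RSEQ
        rseq_syscall(regs);
    #endif
    }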
* powerpc/8xx: Only perform perf counting when perf is in use. (Christophe Leroy, 2018-01-16, 1 file, -5/+5)

    In TLB miss handlers, updating the perf counter is only useful when
    performing a perf analysis. As it has a noticeable overhead, let's
    only do it when needed.

    In order to do so, the exit of the miss handlers will be patched when
    starting/stopping 'perf': the first register restore instruction of
    each exit point will be replaced by a jump to the counting code.

    Once this is done, CONFIG_PPC_8xx_PERF_EVENT becomes useless as this
    feature doesn't add any overhead.

    Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/32: Avoid risk of unrecoverable TLBmiss inside entry_32.S (Christophe Leroy, 2017-08-15, 1 file, -0/+7)

    By default, the 8xx pins an ITLB on the first 8M of memory in order
    to avoid any ITLB miss on kernel code. However, with some debug
    functions like DEBUG_PAGEALLOC and DEBUG_RODATA, pinning TLBs is
    contradictory.

    In order to avoid any ITLB miss in a critical section without pinning
    TLBs, we have to ensure that there is no page boundary crossed
    between the setup of a new value in SRR0/SRR1 and the associated RFI.

    The functions modifying srr0/srr1 are all located in setup_32.S. They
    are spread over almost 4kbytes. The patch forces a 12-bit (4kbytes)
    alignment for those functions. This guarantees that the functions
    remain in a single 4k page.

    Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/32: Fix boot failure on non 6xx platforms (Christophe Leroy, 2017-08-08, 1 file, -2/+2)

    Commit d300627c6a536 ("powerpc/6xx: Handle DABR match before calling
    do_page_fault") breaks non 6xx platforms.

        Failed to execute /init (error -14)
        Starting init: /bin/sh exists but couldn't execute it (error -14)
        Kernel panic - not syncing: No working init found. Try passing init= ...
        CPU: 0 PID: 1 Comm: init Not tainted 4.13.0-rc3-s3k-dev-00143-g7aa62e972a56 #56
        Call Trace:
          panic+0x108/0x250 (unreliable)
          rootfs_mount+0x0/0x58
          ret_from_kernel_thread+0x5c/0x64
        Rebooting in 180 seconds..

    This is because in handle_page_fault(), the call to do_page_fault()
    has been mistakenly enclosed inside an #ifdef CONFIG_6xx.

    Fixes: d300627c6a536 ("powerpc/6xx: Handle DABR match before calling do_page_fault")
    Brown-paper-bag-to-be-worn-by: Michael Ellerman <mpe@ellerman.id.au>
    Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/6xx: Handle DABR match before calling do_page_fault (Benjamin Herrenschmidt, 2017-08-03, 1 file, -0/+15)

    On legacy 6xx 32-bit processors, we checked for the DABR match bit in
    DSISR from do_page_fault(), in the middle of a pile of ifdef's,
    because all other CPU types do it in assembly prior to calling
    do_page_fault. Fix that.

    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    [mpe: Add #ifdef CONFIG_6xx]
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc: Split ftrace bits into a separate file (Naveen N. Rao, 2017-04-27, 1 file, -107/+0)

    entry_*.S now includes a lot more than just kernel entry/exit code.
    As a first step at cleaning this up, let's split out the ftrace bits
    into separate files. Also move all related tracing code into a new
    trace/ subdirectory.

    No functional changes.

    Suggested-by: Michael Ellerman <mpe@ellerman.id.au>
    Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* Merge tag 'powerpc-4.11-2' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux (Linus Torvalds, 2017-03-01, 1 file, -4/+15)

    Pull more powerpc updates from Michael Ellerman:
    "Highlights include:

     - an update of the disassembly code used by xmon to the latest
       versions in binutils. We've received permission from all the
       authors of the relevant binutils changes to relicense their
       changes to the relevant files from GPLv3 to GPLv2, for inclusion
       in Linux. Thanks to Peter Bergner for doing the leg work to get
       permission from everyone.

     - addition of the "architected" Power9 CPU table entry, allowing us
       to boot in Power9 architected mode under a hypervisor.

     - updates to the Power9 PMU code.

     - implementation of clear_bit_unlock_is_negative_byte() to optimise
       unlock_page().

     - Freescale updates from Scott: "Highlights include 8xx breakpoints
       and perf, t1042rdb display support, and board updates."

    Thanks to: Al Viro, Andrew Donnellan, Aneesh Kumar K.V, Balbir Singh,
    Douglas Miller, Frédéric Weisbecker, Gavin Shan, Madhavan Srinivasan,
    Michael Roth, Nathan Fontenot, Naveen N. Rao, Nicholas Piggin, Peter
    Bergner, Paul E. McKenney, Rashmica Gupta, Russell Currey, Sahil
    Mehta, Stewart Smith"

    * tag 'powerpc-4.11-2' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux: (48 commits)
      powerpc: Remove leftover cputime_to_nsecs call causing build error
      powerpc/mm/hash: Always clear UPRT and Host Radix bits when setting up CPU
      powerpc/optprobes: Fix TOC handling in optprobes trampoline
      powerpc/pseries: Advertise Hot Plug Event support to firmware
      cxl: fix nested locking hang during EEH hotplug
      powerpc/xmon: Dump memory in CPU endian format
      powerpc/pseries: Revert 'Auto-online hotplugged memory'
      powerpc/powernv: Make PCI non-optional
      powerpc/64: Implement clear_bit_unlock_is_negative_byte()
      powerpc/powernv: Remove unused variable in pnv_pci_sriov_disable()
      powerpc/kernel: Remove error message in pcibios_setup_phb_resources()
      powerpc/mm: Fix typo in set_pte_at()
      pci/hotplug/pnv-php: Disable MSI and PCI device properly
      pci/hotplug/pnv-php: Disable surprise hotplug capability on conflicts
      pci/hotplug/pnv-php: Remove WARN_ON() in pnv_php_put_slot()
      powerpc: Add POWER9 architected mode to cputable
      powerpc/perf: use is_kernel_addr macro in perf_get_misc_flags()
      powerpc/perf: Avoid FAB_*_MATCH checks for power9
      powerpc/perf: Add restrictions to PMC5 in power9 DD1
      powerpc/perf: Use Instruction Counter value ...
      ...
* powerpc/8xx: Perf events on PPC 8xx (Christophe Leroy, 2017-01-27, 1 file, -0/+15)

    This patch has been reworked since the RFC version. In the RFC, this
    patch was preceded by a patch clearing MSR RI for all PPC32 at all
    time at exception prologs. Now MSR RI clearing is done only when this
    8xx perf events functionality is compiled in; it is therefore limited
    to 8xx and merged inside this patch. Other main changes have been to
    take into account the detailed review from Peter Zijlstra. The
    instructions counter has been reworked to behave as a free running
    counter like the three other counters.

    The 8xx has no PMU, however some events can be emulated by other
    means. This patch implements the following events (as reported by
    'perf list'):

        cpu-cycles OR cycles        [Hardware event]
        instructions                [Hardware event]
        dTLB-load-misses            [Hardware cache event]
        iTLB-load-misses            [Hardware cache event]

    The 'cycles' event is implemented using the timebase clock. The
    timebase clock corresponds to the CPU clock divided by 16, so the
    number of cycles is approximately 16 times the number of TB ticks.

    On the 8xx, TLB misses are handled by software. It is therefore easy
    to count all TLB misses each time the TLB miss exception is called.

    'instructions' is calculated by using the instruction watchpoint
    counter. This patch sets counter A to count instructions at addresses
    greater than 0, hence we count all instructions executed while the
    MSR RI bit is set. The counter is set to the maximum, which is
    0xffff. Every 65535 instructions, the debug instruction breakpoint
    exception fires. The exception handler increments a counter in memory
    which then represents the upper part of the instruction counter. We
    therefore end up with a 48-bit counter.

    In order to avoid unnecessary overhead while no perf event is active,
    this counter is started when the first event referring to this
    counter is added, and the counter is stopped when the last event
    referring to it is deleted.

    In order to properly support breakpoint exceptions, the MSR RI bit
    has to be unset in exception epilogs, because breakpoint exceptions
    occurring in the critical sections where SRR0 and SRR1 are being
    changed would be problematic.

    All counters are handled as free running counters.

    Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
    Signed-off-by: Scott Wood <oss@buserror.net>
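The two derived counters, as C arithmetic under the commit's stated assumptions (timebase = core clock / 16; 16-bit hardware instruction counter widened in memory). The exact combination in the kernel depends on details this log doesn't show, so this illustrates the widening idea only:

    #include <stdint.h>

    /* 'cycles': the timebase ticks once per 16 CPU clocks, so the cycle
     * count is approximately 16x the tick count. */
    static uint64_t cycles_from_tb(uint64_t tb_ticks)
    {
        return tb_ticks << 4;
    }

    /* 'instructions': the breakpoint exception bumps 'upper' every 65535
     * instructions; combining it with the live 16-bit hardware value
     * yields an effectively 48-bit free-running counter. */
    static uint64_t insns_48bit(uint32_t upper, uint16_t hw)
    {
        return ((uint64_t)upper << 16) | hw;
    }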
* powerpc/32: Remove FIX_SRR1 (Christophe Leroy, 2017-01-27, 1 file, -4/+0)

    FIX_SRR1() is defined as blank. The last useful instance of
    FIX_SRR1() was removed by commit 40ef8cbc6d360 ("powerpc: Get 64-bit
    configs to compile with ARCH=powerpc") in 2005.

    Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
    Signed-off-by: Scott Wood <oss@buserror.net>
* powerpc: Revert the initial stack protector support (Michael Ellerman, 2017-01-24, 1 file, -5/+1)

    Unfortunately the stack protector support we merged recently only
    works on some toolchains. If the toolchain is built without glibc
    support everything works fine, but if glibc is built then it leads to
    a panic at boot.

    The solution is not rc5 material, so revert the support for now.

    This reverts commits:

        6533b7c16ee5 ("powerpc: Initial stack protector (-fstack-protector) support")
        902e06eb86cd ("powerpc/32: Change the stack protector canary value per task")

    Fixes: 6533b7c16ee5 ("powerpc: Initial stack protector (-fstack-protector) support")
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/32: Change the stack protector canary value per task (Christophe Leroy, 2016-11-23, 1 file, -1/+5)

    Partially copied from commit df0698be14c66 ("ARM: stack protector:
    change the canary value per task")

    A new random value for the canary is stored in the task struct
    whenever a new task is forked. This is meant to allow for different
    canary values per task. On powerpc, GCC expects the canary value to
    be found in a global variable called __stack_chk_guard. So this
    variable has to be updated with the value stored in the task struct
    whenever a task switch occurs.

    Because the variable GCC expects is global, this cannot work on SMP
    unfortunately. So, on SMP, the same initial canary value is kept
    throughout, making this feature a bit less effective although it is
    still useful.

    Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
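The mechanism, sketched in C. __stack_chk_guard is the symbol GCC really expects on powerpc; the task structure and helper names are illustrative stand-ins:

    #include <stdlib.h>

    unsigned long __stack_chk_guard;   /* read by -fstack-protector code */

    struct task { unsigned long stack_canary; /* ... */ };

    /* fork: each new task draws its own canary (illustrative RNG). */
    void set_canary_on_fork(struct task *p)
    {
        p->stack_canary = (unsigned long)random();
    }

    /* context switch: publish the incoming task's canary. Because this
     * is a single global rather than a per-CPU slot, the scheme cannot
     * be per-task on SMP, which is the caveat noted above. */
    void switch_canary(struct task *next)
    {
        __stack_chk_guard = next->stack_canary;
    }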
* Merge branch 'kbuild' of git://git.kernel.org/pub/scm/linux/kernel/git/mmarek/kbuild (Linus Torvalds, 2016-10-14, 1 file, -0/+2)

    Pull kbuild updates from Michal Marek:

    - EXPORT_SYMBOL for asm source by Al Viro.

      This does bring a regression, because genksyms no longer generates
      checksums for these symbols (CONFIG_MODVERSIONS). Nick Piggin is
      working on a patch to fix this.

      Plus, we are talking about functions like strcpy(), which rarely
      change prototypes.

    - Fixes for PPC fallout of the above by Stephen Rothwell and Nick
      Piggin

    - fixdep speedup by Alexey Dobriyan.

    - preparatory work by Nick Piggin to allow architectures to build
      with -ffunction-sections, -fdata-sections and --gc-sections

    - CONFIG_THIN_ARCHIVES support by Stephen Rothwell

    - fix for filenames with colons in the initramfs source by me.

    * 'kbuild' of git://git.kernel.org/pub/scm/linux/kernel/git/mmarek/kbuild: (22 commits)
      initramfs: Escape colons in depfile
      ppc: there is no clear_pages to export
      powerpc/64: whitelist unresolved modversions CRCs
      kbuild: -ffunction-sections fix for archs with conflicting sections
      kbuild: add arch specific post-link Makefile
      kbuild: allow archs to select link dead code/data elimination
      kbuild: allow architectures to use thin archives instead of ld -r
      kbuild: Regenerate genksyms lexer
      kbuild: genksyms fix for typeof handling
      fixdep: faster CONFIG_ search
      ia64: move exports to definitions
      sparc32: debride memcpy.S a bit
      [sparc] unify 32bit and 64bit string.h
      sparc: move exports to definitions
      ppc: move exports to definitions
      arm: move exports to definitions
      s390: move exports to definitions
      m68k: move exports to definitions
      alpha: move exports to actual definitions
      x86: move exports to actual definitions
      ...
* ppc: move exports to definitions (Al Viro, 2016-08-08, 1 file, -0/+2)

    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* powerpc/32: Remove CLR_TOP32 (Christophe Leroy, 2016-09-22, 1 file, -1/+0)

    CLR_TOP32() is defined as blank. The last useful instance of
    CLR_TOP32() was removed by commit 40ef8cbc6d360 ("powerpc: Get 64-bit
    configs to compile with ARCH=powerpc") in 2005.

    Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc32: provide VIRT_CPU_ACCOUNTING (Christophe Leroy, 2016-07-09, 1 file, -0/+17)

    This patch provides VIRT_CPU_ACCOUNTING to the PPC32 architecture.
    PPC32 doesn't have the PACA structure, so we use the thread_info
    structure to store the accounting data.

    In order to reuse the PPC64 functions on PPC32, all u64 data has been
    replaced by 'unsigned long', so that it is u32 on PPC32 and u64 on
    PPC64.

    Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
    Signed-off-by: Scott Wood <oss@buserror.net>
* powerpc/kernel: Change the do_syscall_trace_enter() API (Michael Ellerman, 2015-07-29, 1 file, -0/+4)

    The API for calling do_syscall_trace_enter() is currently sensible
    enough: it just returns the (modified) syscall number.

    However once we enable seccomp filter it will get more complicated.
    When seccomp filter runs, the seccomp kernel code (via
    SECCOMP_RET_ERRNO), or a ptracer (via SECCOMP_RET_TRACE), may reject
    the syscall and *may* or may *not* set a return value in r3.

    That means the assembler that calls do_syscall_trace_enter() can not
    blindly return ENOSYS; it needs to only return ENOSYS if a return
    value has not already been set.

    There is no way to implement that logic with the current API. So
    change the do_syscall_trace_enter() API to make it deal with the
    return code juggling, and the assembler can then just return whatever
    return code it is given.

    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Reviewed-by: Kees Cook <keescook@chromium.org>
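The reworked contract, as a hedged C sketch; the struct layout and helper names are stand-ins for the real seccomp/ptrace plumbing, and on powerpc the syscall number does arrive in r0 (gpr[0]):

    struct pt_regs { unsigned long gpr[32]; };

    /* stand-ins for the real seccomp/ptrace checks */
    static int seccomp_or_tracer_rejected(struct pt_regs *regs)
    {
        (void)regs;
        return 0;
    }

    /* New API: return the syscall number to run, or -1 meaning "skip the
     * syscall and leave r3 alone; a return value may already be there".
     * The assembler just acts on this result instead of juggling ENOSYS. */
    long do_syscall_trace_enter_like(struct pt_regs *regs)
    {
        if (seccomp_or_tracer_rejected(regs))
            return -1;
        return (long)regs->gpr[0];   /* possibly modified by the tracer */
    }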
* powerpc/kernel: Switch to using MAX_ERRNO (Michael Ellerman, 2015-07-29, 1 file, -1/+2)

    Currently on powerpc we have our own #define for the highest
    (negative) errno value, called _LAST_ERRNO. This is defined to be
    516, for reasons which are not clear.

    The generic code, and x86, use MAX_ERRNO, which is defined to be
    4095. In particular seccomp uses MAX_ERRNO to restrict the value that
    a seccomp filter can return.

    Currently with the mismatch between _LAST_ERRNO and MAX_ERRNO, a
    seccomp tracer wanting to return 600, expecting it to be seen as an
    error, would instead find on powerpc that userspace sees a successful
    syscall with a return value of 600.

    To avoid this inconsistency, switch powerpc to use MAX_ERRNO.

    We are somewhat confident that generic syscalls that can return a
    non-error value above negative MAX_ERRNO have already been updated to
    use force_successful_syscall_return().

    I have also checked all the powerpc specific syscalls, and believe
    that none of them expect to return a non-error value between
    -MAX_ERRNO and -516. So this change should be safe ...

    Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Reviewed-by: Kees Cook <keescook@chromium.org>
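The convention in one check, with values as stated in the commit (generic MAX_ERRNO = 4095); the helper name is illustrative:

    #define MAX_ERRNO 4095UL

    /* A raw syscall return in [-MAX_ERRNO, -1] is an error code; anything
     * else is a successful result. With the old window of [-516, -1], a
     * seccomp filter returning errno 600 (raw value -600) fell outside
     * the window and was reported to userspace as a success of 600. */
    static int is_error_return(unsigned long ret)
    {
        return ret >= -MAX_ERRNO;
    }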
* powerpc: Remove old compile time disabled syscall tracing code (Michael Ellerman, 2015-02-02, 1 file, -77/+0)

    We have code to do syscall tracing which is disabled at compile time
    by default. It's not been touched since the dawn of time
    (ie. v2.6.12).

    There are now better ways to do syscall tracing, ie. using the
    raw_syscall, or syscall tracepoints.

    For the specific case of tracing syscalls at boot on a system that
    doesn't get to userspace, you can boot with:

        trace_event=syscalls tp_printk=on

    Which will trace syscalls from boot, and echo all output to the
    console.

    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc: Rename _TIF_SYSCALL_T_OR_A to _TIF_SYSCALL_DOTRACE (Michael Ellerman, 2015-01-23, 1 file, -3/+3)

    Once upon a time, at least 9 years ago (< 2.6.12),
    _TIF_SYSCALL_T_OR_A meant "TRACE or AUDIT". But these days it means
    TRACE or AUDIT or SECCOMP or TRACEPOINT or NOHZ.

    All of those are implemented via syscall_dotrace(), so rename the
    flag to that to try and clarify things.

    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/ftrace: Fix obsolete comment (Jiri Slaby, 2014-11-09, 1 file, -1/+1)

    CONFIG_MCOUNT is not defined anymore; the corresponding #ifdef there
    is CONFIG_FUNCTION_TRACER.

    Signed-off-by: Jiri Slaby <jslaby@suse.cz>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/ftrace: simplify prepare_ftrace_return (Anton Blanchard, 2014-11-09, 1 file, -2/+8)

    Instead of passing in the stack address of the link register to be
    modified, just pass in the old value and return the new value, and
    rely on ftrace_graph_caller to do the modification.

    This removes the exception handling around the stack update - it
    isn't needed and we weren't consistent about it. Later on we would do
    an unprotected modification:

        if (!ftrace_graph_entry(&trace)) {
            *parent = old;

    Signed-off-by: Anton Blanchard <anton@samba.org>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
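The shape of the simplification, sketched in C; the _old/_new suffixes and the stand-in helpers are illustrative, not the kernel's names:

    /* before: modify the saved LR through a pointer, which required
     * exception-protected stores into the caller's stack frame */
    void prepare_ftrace_return_old(unsigned long *parent, unsigned long self_addr);

    /* stand-ins */
    static void return_to_handler_stub(void) { /* asm trampoline in reality */ }
    static int graph_entry_accepted(unsigned long ip) { (void)ip; return 1; }

    /* after: take the old return address and hand back the one to
     * install; the asm caller (ftrace_graph_caller) writes it into the
     * LR slot itself, so no protected store is needed here. */
    unsigned long prepare_ftrace_return_new(unsigned long parent, unsigned long ip)
    {
        if (!graph_entry_accepted(ip))
            return parent;              /* tracer rejected: keep old LR */
        return (unsigned long)return_to_handler_stub;
    }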
* powerpc/32bit: Store temporary result in r0 instead of r8 (Priyanka Jain, 2013-06-01, 1 file, -1/+1)

    Commit a9c4e541ea9b22944da356f2a9258b4eddcc953b "powerpc/kprobe:
    Complete kprobe and migrate exception frame" introduced a regression:

    While returning from exception handling with PREEMPT enabled, the
    _TIF_NEED_RESCHED bit is checked in the TI_FLAGS (thread_info flags)
    of the current task. Only if this bit is set should it continue with
    the process of calling preempt_schedule_irq() to schedule the highest
    priority task if available.

    The current code assumes that r8 contains TI_FLAGS and checks it for
    _TIF_NEED_RESCHED, but as r8 is modified by the code which executes
    before this check, r8 no longer contains the expected TI_FLAGS
    information. As a result, the comparison with _TIF_NEED_RESCHED was
    failing even when the NEED_RESCHED bit was set in the current
    thread_info flags. Due to this, preempt_schedule_irq(), and in turn
    the scheduler, was not getting called even when the highest priority
    task was ready for execution.

    So, store temporary results in r0 instead of r8 to prevent r8 from
    getting modified, as subsequent code is dependent on its value.

    Signed-off-by: Priyanka Jain <Priyanka.Jain@freescale.com>
    CC: <stable@vger.kernel.org> [v3.7+]
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc: Fix MAX_STACK_TRACE_ENTRIES too low warning again (Li Zhong, 2013-05-14, 1 file, -2/+0)

    Saw this warning again, and this time from the ret_from_fork path.

    It seems we could clear the back chain earlier in copy_thread(),
    which would cover both paths, and also fix potential lockdep usage in
    schedule_tail(), or an exception occurring before we clear the back
    chain.

    Signed-off-by: Li Zhong <zhong@linux.vnet.ibm.com>
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>