path: root/arch/arm64
* Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux (Linus Torvalds, 2016-10-03; 86 files, -892/+1421)

    Pull arm64 updates from Will Deacon:
    "It's a bit all over the place this time with no "killer feature" to speak of. Support for mismatched cache line sizes should help people seeing whacky JIT failures on some SoCs, and the big.LITTLE perf updates have been a long time coming, but a lot of the changes here are cleanups.

    We stray outside arch/arm64 in a few areas: the arch/arm/ arch_timer workaround is acked by Russell, the DT/OF bits are acked by Rob, the arch_timer clocksource changes acked by Marc, CPU hotplug by tglx and jump_label by Peter (all CC'd).

    Summary:

    - Support for execute-only page permissions
    - Support for hibernate and DEBUG_PAGEALLOC
    - Support for heterogeneous systems with mismatched cache line sizes
    - Errata workarounds (A53 843419 update and QorIQ A-008585 timer bug)
    - arm64 PMU perf updates, including cpumasks for heterogeneous systems
    - Set UTS_MACHINE for building rpm packages
    - Yet another head.S tidy-up
    - Some cleanups and refactoring, particularly in the NUMA code
    - Lots of random, non-critical fixes across the board"

    * tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (100 commits)
      arm64: tlbflush.h: add __tlbi() macro
      arm64: Kconfig: remove SMP dependence for NUMA
      arm64: Kconfig: select OF/ACPI_NUMA under NUMA config
      arm64: fix dump_backtrace/unwind_frame with NULL tsk
      arm/arm64: arch_timer: Use archdata to indicate vdso suitability
      arm64: arch_timer: Work around QorIQ Erratum A-008585
      arm64: arch_timer: Add device tree binding for A-008585 erratum
      arm64: Correctly bounds check virt_addr_valid
      arm64: migrate exception table users off module.h and onto extable.h
      arm64: pmu: Hoist pmu platform device name
      arm64: pmu: Probe default hw/cache counters
      arm64: pmu: add fallback probe table
      MAINTAINERS: Update ARM PMU PROFILING AND DEBUGGING entry
      arm64: Improve kprobes test for atomic sequence
      arm64/kvm: use alternative auto-nop
      arm64: use alternative auto-nop
      arm64: alternative: add auto-nop infrastructure
      arm64: lse: convert lse alternatives NOP padding to use __nops
      arm64: barriers: introduce nops and __nops macros for NOP sequences
      arm64: sysreg: replace open-coded mrs_s/msr_s with {read,write}_sysreg_s
      ...
| * arm64: tlbflush.h: add __tlbi() macro (Mark Rutland, 2016-09-28; 1 file, -8/+26)

    As with dsb() and isb(), add a __tlbi() helper so that we can avoid distracting asm boilerplate every time we want a TLBI. As some TLBI operations take an argument while others do not, some preprocessor logic is used to handle these two cases with different assembly blocks. The existing tlbflush.h code is moved over to use the helper.

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Marc Zyngier <marc.zyngier@arm.com>
    [ rename helper to __tlbi, update comment and commit log ]
    Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
    Reviewed-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
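    The with/without-argument dispatch can be done with a variadic-macro trick; a minimal sketch of such a helper (details may differ from the in-tree version):

        #define __TLBI_0(op, arg)	asm ("tlbi " #op)
        #define __TLBI_1(op, arg)	asm ("tlbi " #op ", %0" : : "r" (arg))
        /* pick __TLBI_1 when an address operand was supplied, __TLBI_0 otherwise */
        #define __TLBI_N(op, arg, n, ...)	__TLBI_##n(op, arg)
        #define __tlbi(op, ...)		__TLBI_N(op, ##__VA_ARGS__, 1, 0)

        /* usage: __tlbi(vmalle1is);  or  __tlbi(vae1is, addr >> 12); */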
| * arm64: Kconfig: remove SMP dependence for NUMA (Kefeng Wang, 2016-09-28; 1 file, -1/+0)

    Since commit 4b3dc9679cf7, arm64 forces CONFIG_SMP=y, so there is no need for NUMA to depend on SMP.

    Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * arm64: Kconfig: select OF/ACPI_NUMA under NUMA config (Kefeng Wang, 2016-09-26; 1 file, -1/+2)

    Move the OF_NUMA select under the NUMA config option, and select ACPI_NUMA when ACPI is enabled.

    Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
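    The result looks roughly like the following Kconfig fragment (a sketch; surrounding entries and help text omitted):

        config NUMA
        	bool "NUMA Memory Allocation and Scheduler Support"
        	select ACPI_NUMA if ACPI
        	select OF_NUMA
        	help
        	  Enable NUMA (Non-Uniform Memory Access) support.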
| * arm64: fix dump_backtrace/unwind_frame with NULL tsk (Mark Rutland, 2016-09-26; 2 files, -6/+9)

    In some places, dump_backtrace() is called with a NULL tsk parameter, e.g. in bug_handler() in arch/arm64, or indirectly via show_stack() in core code. The expectation is that this is treated as if current were passed instead of NULL. Similar is true of unwind_frame().

    Commit a80a0eb70c358f8c ("arm64: make irq_stack_ptr more robust") didn't take this into account. In dump_backtrace() it compares tsk against current *before* we check if tsk is NULL, and in unwind_frame() we never set tsk if it is NULL. Due to this, we won't initialise irq_stack_ptr in either function.

    In dump_backtrace() this results in calling dump_mem() for memory immediately above the IRQ stack range, rather than for the relevant range on the task stack. In unwind_frame() we'll reject unwinding frames on the IRQ stack.

    In either case this results in incomplete or misleading backtrace information, but is not otherwise problematic. The initial percpu areas (including the IRQ stacks) are allocated in the linear map, and dump_mem() uses __get_user(), so we shouldn't access anything with side-effects, and will handle holes safely.

    This patch fixes the issue by having both functions handle the NULL tsk case before doing anything else with tsk.

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Fixes: a80a0eb70c358f8c ("arm64: make irq_stack_ptr more robust")
    Acked-by: James Morse <james.morse@arm.com>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Will Deacon <will.deacon@arm.com>
    Cc: Yang Shi <yang.shi@linaro.org>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
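    The shape of the fix is to canonicalise tsk up front; a sketch (frame-setup details follow the usual arm64 unwinder pattern):

        void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk)
        {
        	struct stackframe frame;

        	/* treat NULL as current, before tsk is used for anything else */
        	if (!tsk)
        		tsk = current;

        	if (tsk == current) {
        		frame.fp = (unsigned long)__builtin_frame_address(0);
        		frame.sp = current_stack_pointer;
        		frame.pc = (unsigned long)dump_backtrace;
        	} else {
        		frame.fp = thread_saved_fp(tsk);
        		frame.sp = thread_saved_sp(tsk);
        		frame.pc = thread_saved_pc(tsk);
        	}
        	/* walk and print frames from here, as before */
        }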
| * arm/arm64: arch_timer: Use archdata to indicate vdso suitability (Scott Wood, 2016-09-23; 3 files, -1/+10)

    Instead of comparing the name to a magic string, use archdata to explicitly communicate whether the arch timer is suitable for direct vdso access.

    Acked-by: Will Deacon <will.deacon@arm.com>
    Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
    Acked-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Scott Wood <oss@buserror.net>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * arm64: arch_timer: Work around QorIQ Erratum A-008585 (Scott Wood, 2016-09-23; 1 file, -3/+44)

    Erratum A-008585 says that the ARM generic timer counter "has the potential to contain an erroneous value for a small number of core clock cycles every time the timer value changes". Accesses to TVAL (both read and write) are also affected due to the implicit counter read. Accesses to CVAL are not affected.

    The workaround is to reread the TVAL and count registers until successive reads return the same value, and when writing TVAL to retry until counter reads before and after the write return the same value.

    The workaround is enabled if the fsl,erratum-a008585 property is found in the timer node in the device tree. This can be overridden with the clocksource.arm_arch_timer.fsl-a008585 boot parameter, which allows KVM users to enable the workaround until a mechanism is implemented to automatically communicate this information.

    This erratum can be found on LS1043A and LS2080A.

    Acked-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Scott Wood <oss@buserror.net>
    [will: renamed read macro to reflect that it's not usually unstable]
    Signed-off-by: Will Deacon <will.deacon@arm.com>
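    The read side of the workaround amounts to a bounded retry loop; a sketch of such a reread helper (the retry cap keeps a pathological timer from hanging the caller):

        #define __fsl_a008585_read_reg(reg) ({			\
        	u64 _old, _new;					\
        	int _retries = 200;				\
        							\
        	do {						\
        		_old = read_sysreg(reg);		\
        		_new = read_sysreg(reg);		\
        		_retries--;				\
        	} while (unlikely(_old != _new) && _retries);	\
        							\
        	WARN_ON_ONCE(!_retries);			\
        	_new;						\
        })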
| * arm64: Correctly bounds check virt_addr_valid (Laura Abbott, 2016-09-22; 1 file, -2/+6)

    virt_addr_valid is supposed to return true if and only if virt_to_page returns a valid page structure. The current macro does math on whatever address is given and passes that to pfn_valid to verify. vmalloc and module addresses can generate a pfn that 'happens' to be valid. Fix this by only performing the pfn_valid check on addresses that have the potential to be valid.

    Acked-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Laura Abbott <labbott@redhat.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
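    In other words, the macro gains a range test before the pfn lookup; a sketch (helper name illustrative, not necessarily the in-tree spelling):

        /*
         * Only linear-map addresses can have a struct page; reject vmalloc
         * and module addresses before translating to a pfn.
         */
        #define _virt_addr_is_linear(kaddr)	((u64)(kaddr) >= PAGE_OFFSET)
        #define virt_addr_valid(kaddr)		\
        	(_virt_addr_is_linear(kaddr) && pfn_valid(__pa(kaddr) >> PAGE_SHIFT))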
| * arm64: migrate exception table users off module.h and onto extable.h (Paul Gortmaker, 2016-09-20; 3 files, -3/+3)

    These files were only including module.h for exception table related functions. We've now separated that content out into its own file "extable.h", so now move over to that and avoid all the extra header content in module.h that we don't really need to compile these files.

    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Acked-by: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Will Deacon <will.deacon@arm.com>
    Cc: linux-arm-kernel@lists.infradead.org
    Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * arm64: pmu: Hoist pmu platform device name (Jeremy Linton, 2016-09-16; 1 file, -1/+1)

    Move the PMU name into a common header file so it may be referenced by other users.

    Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * arm64: pmu: Probe default hw/cache counters (Jeremy Linton, 2016-09-16; 1 file, -4/+41)

    ARMv8 machines can identify the micro/arch defined counters that are available on a machine. Add all these counters to the default armv8 perf map. At run-time disable the counters which are not available on the given PMU.

    Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
    Acked-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * arm64: pmu: add fallback probe table (Mark Salter, 2016-09-16; 1 file, -1/+12)

    In preparation for ACPI support, add a pmu_probe_info table to the arm_pmu_device_probe() call. This table gets used when probing in the absence of a devicetree node for the PMU.

    Signed-off-by: Mark Salter <msalter@redhat.com>
    Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * arm64: Improve kprobes test for atomic sequence (David A. Long, 2016-09-15; 1 file, -25/+23)

    Kprobes searches backwards a finite number of instructions to determine if there is an attempt to probe a load/store exclusive sequence. It stops when it hits the maximum number of instructions or a load or store exclusive. However, this means it can run up past the beginning of the function and start looking at literal constants. This has been shown to cause a false positive and block insertion of the probe. To fix this, further limit the backwards search to stop if it hits a symbol address from kallsyms. The presumption is that this is the entry point to this code (particularly for the common case of placing probes at the beginning of functions).

    This also improves efficiency by not searching code that is not part of the function. There may be some possibility that the label might not denote the entry path to the probed instruction, but the likelihood seems low, and this is just another example of how the kprobes user really needs to be careful about what they are doing.

    Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
    Signed-off-by: David A. Long <dave.long@linaro.org>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
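    A sketch of the bounded backwards scan (function and enum names are illustrative; the real code lives in the arm64 kprobes instruction decoder):

        static enum kprobe_insn check_atomic_sequence(kprobe_opcode_t *addr)
        {
        	unsigned long size = 0, offset = 0;
        	kprobe_opcode_t *scan_end;

        	/* stop at the function entry reported by kallsyms */
        	if (!kallsyms_lookup_size_offset((unsigned long)addr, &size, &offset))
        		return INSN_REJECTED;
        	scan_end = addr - (offset / sizeof(kprobe_opcode_t));

        	while (addr > scan_end) {
        		u32 insn = le32_to_cpu(*addr);

        		if (aarch64_insn_is_store_ex(insn))
        			break;			/* previous sequence already closed */
        		if (aarch64_insn_is_load_ex(insn))
        			return INSN_REJECTED;	/* probe lands inside ldxr/stxr region */
        		addr--;
        	}
        	return INSN_GOOD;
        }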
| * arm64/kvm: use alternative auto-nop (Mark Rutland, 2016-09-12; 2 files, -12/+4)

    Make use of the new alternative_if and alternative_else_nop_endif and get rid of our open-coded NOP sleds, making the code simpler to read.

    Note that for __kvm_call_hyp the branch to __vhe_hyp_call has been moved out of the alternative sequence, and in the default case there will be four additional NOPs executed.

    Cc: Marc Zyngier <marc.zyngier@arm.com>
    Cc: kvmarm@lists.cs.columbia.edu
    Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * arm64: use alternative auto-nop (Mark Rutland, 2016-09-12; 3 files, -24/+8)

    Make use of the new alternative_if and alternative_else_nop_endif and get rid of our homebrew NOP sleds, making the code simpler to read.

    Note that for cpu_do_switch_mm the ret has been moved out of the alternative sequence, and in the default case there will be three additional NOPs executed.

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: James Morse <james.morse@arm.com>
    Cc: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * arm64: alternative: add auto-nop infrastructure (Mark Rutland, 2016-09-12; 1 file, -17/+53)

    In some cases, one side of an alternative sequence is simply a number of NOPs used to balance the other side. Keeping track of this manually is tedious, and the presence of large chains of NOPs makes the code more painful to read than necessary.

    To ameliorate matters, this patch adds a new alternative_else_nop_endif, which automatically balances an alternative sequence with a trivial NOP sled.

    In many cases, we would like a NOP-sled in the default case, and instructions patched in when a feature is present. To enable the NOPs to be generated automatically for this case, this patch also adds a new alternative_if, and updates alternative_else and alternative_endif to work with either alternative_if or alternative_if_not.

    Cc: Andre Przywara <andre.przywara@arm.com>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Dave Martin <dave.martin@arm.com>
    Cc: James Morse <james.morse@arm.com>
    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    [will: use new nops macro to generate nop sequences]
    Signed-off-by: Will Deacon <will.deacon@arm.com>
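    Usage then looks like the following assembly sketch (the capability name is a placeholder):

        alternative_if ARM64_HAS_SOME_FEATURE		// hypothetical cpucap
        	mrs	x0, tpidr_el2			// patched in when the feature is present
        alternative_else_nop_endif			// default case becomes an auto-generated NOP sled

    versus the pre-existing form, where the default sequence is written out first:

        alternative_if_not ARM64_HAS_SOME_FEATURE
        	mrs	x0, tpidr_el1
        alternative_else
        	mrs	x0, tpidr_el2
        alternative_endif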
| * arm64: lse: convert lse alternatives NOP padding to use __nops (Will Deacon, 2016-09-09; 3 files, -56/+39)

    The LSE atomics are implemented using alternative code sequences of different lengths, and explicit NOP padding is used to ensure the patching works correctly. This patch converts the bulk of the LSE code over to using the __nops macro, which makes it slightly clearer as to what is going on and also consolidates all of the padding at the end of the various sequences.

    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * arm64: barriers: introduce nops and __nops macros for NOP sequences (Will Deacon, 2016-09-09; 2 files, -0/+12)

    NOP sequences tend to get used for padding out alternative sections and uarch-specific pipeline flushes in errata workarounds. This patch adds macros for generating these sequences both as inline asm blocks and as strings suitable for embedding in other asm blocks directly.

    Signed-off-by: Will Deacon <will.deacon@arm.com>
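    The pair is tiny; a sketch matching the description above:

        /* string form, for embedding in larger asm blocks */
        #define __nops(n)	".rept	" #n "\nnop\n.endr\n"
        /* standalone inline-asm form */
        #define nops(n)		asm volatile(__nops(n))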
| * arm64: sysreg: replace open-coded mrs_s/msr_s with {read,write}_sysreg_s (Will Deacon, 2016-09-09; 2 files, -5/+16)

    Similar to our {read,write}_sysreg accessors for architected, named system registers, this patch introduces {read,write}_sysreg_s variants that can take arbitrary sys_reg() output and therefore access IMPDEF registers or registers that are unsupported by binutils.

    Reviewed-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
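    A sketch of the variants, assuming mrs_s/msr_s are the pre-existing asm helpers that hand-encode arbitrary system-register operands:

        #define read_sysreg_s(r) ({					\
        	u64 __val;						\
        	asm volatile("mrs_s %0, " __stringify(r) : "=r" (__val)); \
        	__val;							\
        })

        #define write_sysreg_s(v, r) do {				\
        	u64 __val = (u64)v;					\
        	asm volatile("msr_s " __stringify(r) ", %0"		\
        		     : : "r" (__val));				\
        } while (0)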
| * arm64: Remove shadowed asm-generic headers (Robin Murphy, 2016-09-09; 1 file, -4/+0)

    We've grown our own versions of bug.h, ftrace.h, pci.h and topology.h, so generating the generic ones as well is unnecessary and a potential source of build hiccups. At the very least, having them present has confused my source-indexing tool, and that simply will not do.

    Signed-off-by: Robin Murphy <robin.murphy@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * arm64: Work around systems with mismatched cache line sizes (Suzuki K Poulose, 2016-09-09; 7 files, -3/+71)

    Systems with differing CPU i-cache/d-cache line sizes can cause problems with software cache management when execution is migrated from one CPU to another. Usually, the application reads the cache size on a CPU and then uses that length to perform cache operations. However, if it gets migrated to another CPU with a smaller cache line size, things could go completely wrong. To prevent such cases, always use the smallest cache line size among the CPUs. The kernel CPU feature infrastructure already keeps track of the safe value for all CPUID registers including CTR. This patch works around the problem as follows:

    - For the kernel, dynamically patch it to read the cache size from the system-wide copy of CTR_EL0.
    - For applications, trap read accesses to CTR_EL0 (by clearing SCTLR.UCT) and emulate the mrs instruction to return the system-wide safe value of CTR_EL0 (see the sketch after the tags below).

    For faster access (i.e. avoiding a lookup of the system-wide value of CTR_EL0 via read_system_reg), we keep track of the pointer to the table entry for CTR_EL0 in the CPU feature infrastructure.

    Cc: Mark Rutland <mark.rutland@arm.com>
    Cc: Andre Przywara <andre.przywara@arm.com>
    Cc: Will Deacon <will.deacon@arm.com>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
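    The userspace side amounts to a small trap handler; a sketch (the Rt field offset follows the ARMv8 ESR_ELx ISS encoding for system-register traps):

        static int ctr_read_handler(struct pt_regs *regs, unsigned int esr)
        {
        	/* Rt, the destination register of the trapped "mrs xN, ctr_el0" */
        	int rt = (esr >> 5) & 0x1f;

        	/* return the system-wide safe CTR_EL0 value, not this CPU's */
        	regs->regs[rt] = arm64_ftr_reg_ctrel0.sys_val;
        	regs->pc += 4;		/* step over the emulated instruction */
        	return 0;
        }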
| * arm64: Refactor sysinstr exception handling (Suzuki K Poulose, 2016-09-09; 2 files, -35/+114)

    Right now we trap some of the user space data cache operations based on a few errata (ARM 819472, 826319, 827319 and 824069). We need to trap userspace access to CTR_EL0 if we detect mismatched cache line sizes. Since both these traps share the EC, refactor the handler a little to make it more reader-friendly.

    Cc: Mark Rutland <mark.rutland@arm.com>
    Cc: Will Deacon <will.deacon@arm.com>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Acked-by: Andre Przywara <andre.przywara@arm.com>
    Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * arm64: Introduce raw_{d,i}cache_line_size (Suzuki K Poulose, 2016-09-09; 3 files, -6/+22)

    On systems with mismatched i/d cache min line sizes, we need to use the smallest size possible across all CPUs. This will be done by fetching the system-wide safe value from the CPU feature infrastructure. However, some special users (e.g. kexec, hibernate) need the line size on the current CPU (rather than the system-wide value), either when the system-wide value may not be accessible or when the caller is guaranteed to execute without migration. Provide another helper which will fetch the cache line size on the current CPU.

    Cc: Mark Rutland <mark.rutland@arm.com>
    Cc: Will Deacon <will.deacon@arm.com>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Acked-by: James Morse <james.morse@arm.com>
    Reviewed-by: Geoff Levand <geoff@infradead.org>
    Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
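    The per-CPU helper is the familiar CTR_EL0 decode; a sketch of the d-side assembler macro:

        .macro	raw_dcache_line_size, reg, tmp
        mrs	\tmp, ctr_el0			// read this CPU's cache type register
        ubfm	\tmp, \tmp, #16, #19		// extract DminLine (log2 of words)
        mov	\reg, #4			// bytes per word
        lsl	\reg, \reg, \tmp		// line size in bytes
        .endm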
| * arm64: alternative: Add support for patching adrp instructions (Suzuki K Poulose, 2016-09-09; 1 file, -0/+15)

    adrp uses a PC-relative offset to the 4K page containing a symbol. If it appears in alternative code that gets patched in, we should adjust the offset to reflect the address where it will be run from. This patch adds support for fixing up the offset for adrp instructions.

    Cc: Will Deacon <will.deacon@arm.com>
    Cc: Marc Zyngier <marc.zyngier@arm.com>
    Cc: Andre Przywara <andre.przywara@arm.com>
    Cc: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * arm64: insn: Add helpers for adrp offsets (Suzuki K Poulose, 2016-09-09; 2 files, -1/+23)

    Adds helpers for decoding/encoding the PC-relative addresses for adrp. This will be used for handling dynamic patching of 'adrp' instructions in alternative code patching.

    Cc: Mark Rutland <mark.rutland@arm.com>
    Cc: Will Deacon <will.deacon@arm.com>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
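    The adrp encoding packs a signed 21-bit page offset into immlo (bits 30:29) and immhi (bits 23:5); a standalone decode sketch (the in-tree helpers go through the generic immediate encode/decode routines instead):

        static inline s64 adrp_get_offset(u32 insn)
        {
        	s64 imm = ((insn >> 29) & 0x3) |		/* immlo */
        		  (((insn >> 5) & 0x7ffff) << 2);	/* immhi */

        	imm = (imm << 43) >> 43;	/* sign-extend the 21-bit field */
        	return imm << 12;		/* scale to a byte offset (4K pages) */
        }

    Fixing up a relocated adrp is then a matter of recomputing the target page relative to the new PC and re-encoding the offset.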
| * arm64: alternative: Disallow patching instructions using literals (Suzuki K Poulose, 2016-09-09; 1 file, -0/+6)

    The alternative code patching doesn't check if the replaced instruction uses a PC-relative literal. This could cause silent corruption in the instruction stream, as the instruction will be executed from a different address than what it was compiled for. Catch all such cases.

    Cc: Marc Zyngier <marc.zyngier@arm.com>
    Cc: Andre Przywara <andre.przywara@arm.com>
    Cc: Mark Rutland <mark.rutland@arm.com>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Suggested-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * arm64: Rearrange CPU errata workaround checks (Suzuki K Poulose, 2016-09-09; 4 files, -15/+29)

    Right now we run through the workaround checks on a CPU from __cpuinfo_store_cpu(). There are some problems with that:

    1) We initialise the system-wide CPU feature registers only after the boot CPU updates its cpuinfo. Now, if a workaround depends on the variance of a CPU ID feature (e.g. a check for cache line size mismatch), we have no way of performing it cleanly for the boot CPU.

    2) It is out of place, invoked from __cpuinfo_store_cpu() in cpuinfo.c, which is not an obvious place for it.

    This patch rearranges the CPU-specific capability (aka workaround) checks:

    1) At the moment we use verify_local_cpu_capabilities() to check if a new CPU has all the system-advertised features. Use this for the secondary CPUs to perform the workaround check. For that, rename verify_local_cpu_capabilities() => check_local_cpu_capabilities(), which: if the system-wide capabilities haven't been initialised (i.e. the CPU is activated at boot), updates the system-wide detected workarounds; otherwise (i.e. a CPU hotplugged in later), verifies that this CPU conforms to the system-wide capabilities.

    2) The boot CPU updates the workarounds from smp_prepare_boot_cpu(), after we have initialised the system-wide CPU feature values.

    Cc: Mark Rutland <mark.rutland@arm.com>
    Cc: Andre Przywara <andre.przywara@arm.com>
    Cc: Will Deacon <will.deacon@arm.com>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * arm64: Use consistent naming for errata handling (Suzuki K Poulose, 2016-09-09; 4 files, -6/+6)

    This is a cosmetic change to rename the functions dealing with the errata workarounds to be more consistent with their naming.

    1) check_local_cpu_errata() => update_cpu_errata_workarounds(): check_local_cpu_errata() actually updates the system's errata workarounds, so rename it to reflect the same.

    2) verify_local_cpu_errata() => verify_local_cpu_errata_workarounds(): use errata_workarounds instead of _errata.

    Cc: Mark Rutland <mark.rutland@arm.com>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Acked-by: Andre Przywara <andre.przywara@arm.com>
    Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * arm64: Set the safe value for L1 icache policy (Suzuki K Poulose, 2016-09-09; 2 files, -3/+4)

    Right now we use 0 as the safe value for CTR_EL0:L1Ip, which is not a defined encoding at the moment. The safer value for L1Ip should be the weakest of the policies, which happens to be AIVIVT. While at it, fix the comment about safe_val.

    Cc: Mark Rutland <mark.rutland@arm.com>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * arm64/numa: remove the limitation that cpu0 must bind to node0 (Zhen Lei, 2016-09-09; 2 files, -6/+11)

    1. Remove the old binding code.
    2. Read the nid of cpu0 from the dts.
    3. Fall back to nid 0 for cpu0 when numa=off is set in bootargs.

    Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * arm64/numa: remove some useless code (Zhen Lei, 2016-09-09; 1 file, -4/+0)

    At the point where the deleted code is executed, only the bit for cpu0 is set in cpu_possible_mask, so only set_cpu_numa_node(0, NUMA_NO_NODE) would run, and map_cpu_to_node(0, 0) is called soon afterwards anyway. This code can therefore be safely removed.

    Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * arm64/numa: support HAVE_SETUP_PER_CPU_AREA (Zhen Lei, 2016-09-09; 2 files, -0/+60)

    Allocate each percpu area from its local NUMA node. Without this patch, all percpu areas are allocated from the node that cpu0 belongs to.

    Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * arm64: numa: Use pr_fmt() (Kefeng Wang, 2016-09-09; 1 file, -19/+18)

    Use pr_fmt to prefix kernel output, and remove a duplicated "NUMA turned off" message.

    Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * arm64/numa: avoid inconsistent information being printed (Zhen Lei, 2016-09-09; 2 files, -4/+6)

    numa_init() may return an error because of a NUMA configuration error, in which case printing "No NUMA configuration found" is inaccurate. The specific configuration error should instead be printed immediately by the branch that detects it.

    Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * arm64: perf: move to common attr_group fields (Mark Rutland, 2016-09-09; 1 file, -12/+24)

    By using a common attr_groups array, the common arm_pmu code can set up common files (e.g. cpumask) for us in subsequent patches.

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Cc: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * arm64: simplify contextidr_thread_switch (Mark Rutland, 2016-09-09; 1 file, -6/+3)

    When CONFIG_PID_IN_CONTEXTIDR is not selected, we use an empty stub definition of contextidr_thread_switch(). As everything we rely upon exists regardless of CONFIG_PID_IN_CONTEXTIDR, we don't strictly require an empty stub.

    By using IS_ENABLED() rather than ifdeffery, we avoid duplication, and get compiler coverage on all the code even when CONFIG_PID_IN_CONTEXTIDR is not selected and the code is optimised away.

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
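    After the change the function reads as plain C, with the compiler eliding the body when the option is off; a sketch:

        static inline void contextidr_thread_switch(struct task_struct *next)
        {
        	if (!IS_ENABLED(CONFIG_PID_IN_CONTEXTIDR))
        		return;

        	write_sysreg(task_pid_nr(next), contextidr_el1);
        	isb();
        }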
| * arm64: simplify sysreg manipulation (Mark Rutland, 2016-09-09; 9 files, -52/+44)

    A while back we added {read,write}_sysreg accessors to handle accesses to system registers, without the usual boilerplate asm volatile, temporary variable, etc. This patch makes use of these across arm64 to make code shorter and clearer. For sequences with a trailing ISB, the existing isb() macro is also used so that asm blocks can be removed entirely.

    A few uses of inline assembly for msr/mrs are left as-is. Those manipulating sp_el0 for the current thread_info value have special clobber requirements.

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
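    For reference, the accessors being rolled out look roughly like this:

        #define read_sysreg(r) ({					\
        	u64 __val;						\
        	asm volatile("mrs %0, " __stringify(r) : "=r" (__val));	\
        	__val;							\
        })

        #define write_sysreg(v, r) do {					\
        	u64 __val = (u64)v;					\
        	asm volatile("msr " __stringify(r) ", %0"		\
        		     : : "r" (__val));				\
        } while (0)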
| * arm64/kvm: use {read,write}_sysreg() (Mark Rutland, 2016-09-09; 3 files, -28/+15)

    A while back we added {read,write}_sysreg accessors to handle accesses to system registers, without the usual boilerplate asm volatile, temporary variable, etc. This patch makes use of these in the arm64 KVM code to make the code shorter and clearer.

    At the same time, a comment style violation next to a system register access is fixed up in reset_pmcr, and comments describing whether operations are reads or writes are removed as this is now painfully obvious.

    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Marc Zyngier <marc.zyngier@arm.com>
    Cc: Will Deacon <will.deacon@arm.com>
    Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * arm64: dcc: simplify accessors (Mark Rutland, 2016-09-09; 1 file, -10/+4)

    A while back we added {read,write}_sysreg accessors to handle accesses to system registers, without the usual boilerplate asm volatile, temporary variable, etc. This patch makes use of these in the arm64 DCC accessors to make the code shorter and clearer.

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * arm64: arch_timer: simplify accessors (Mark Rutland, 2016-09-09; 1 file, -27/+14)

    A while back we added {read,write}_sysreg accessors to handle accesses to system registers, without the usual boilerplate asm volatile, temporary variable, etc. This patch makes use of these in the arm64 arch timer accessors to make the code shorter and clearer.

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Marc Zyngier <marc.zyngier@arm.com>
    Cc: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * arm64: sysreg: allow write_sysreg to use XZR (Mark Rutland, 2016-09-09; 1 file, -2/+6)

    Currently write_sysreg has to allocate a temporary register to write zero to a system register, which is unfortunate given that the MSR instruction accepts XZR as an operand. Allow XZR to be used when appropriate by fiddling with the assembly constraints.

    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Marc Zyngier <marc.zyngier@arm.com>
    Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
    Reviewed-by: Robin Murphy <robin.murphy@arm.com>
    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
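    The trick is the "rZ" constraint together with an %x operand template, which lets GCC substitute the zero register when the value folds to zero; a sketch of the tweaked accessor:

        #define write_sysreg(v, r) do {					\
        	u64 __val = (u64)v;					\
        	asm volatile("msr " __stringify(r) ", %x0"		\
        		     : : "rZ" (__val));				\
        } while (0)

        /* write_sysreg(0, tpidr_el0) can now emit "msr tpidr_el0, xzr" */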
| * arm64/io: Allow I/O writes to use {W,X}ZR (Robin Murphy, 2016-09-08; 1 file, -4/+4)

    When zeroing an I/O location, the current accessors are forced to allocate a temporary register to store the zero for the write. By tweaking the assembly constraints, we can allow the compiler to use the zero register directly in such cases, and save some juggling. Compiling a representative kernel configuration with GCC 6 shows that 2.3KB worth of code can be wasted just on that!

        text		data	bss		dec		hex	filename
        13316776	3248256	18176769	34741801	2121e29	vmlinux.o.new
        13319140	3248256	18176769	34744165	2122765	vmlinux.o.old

    Acked-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Robin Murphy <robin.murphy@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * arm64: Use static keys for CPU features (Catalin Marinas, 2016-09-07; 3 files, -3/+19)

    This patch adds static keys transparently for all the cpu_hwcaps features by implementing an array of default-false static keys and enabling them when detected. The cpus_have_cap() check uses the static keys if the feature being checked is a constant, otherwise the compiler generates the bitmap test.

    Because of the early call to static_branch_enable() via check_local_cpu_errata() -> update_cpu_capabilities(), the jump labels are initialised in cpuinfo_store_boot_cpu().

    Cc: Will Deacon <will.deacon@arm.com>
    Cc: Suzuki K. Poulose <Suzuki.Poulose@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
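    The dispatch on constant capability numbers can be sketched as:

        extern struct static_key_false cpu_hwcap_keys[ARM64_NCAPS];

        static __always_inline bool cpus_have_cap(unsigned int num)
        {
        	if (num >= ARM64_NCAPS)
        		return false;
        	if (__builtin_constant_p(num))	/* compile-time cap: use a jump label */
        		return static_branch_unlikely(&cpu_hwcap_keys[num]);
        	return test_bit(num, cpu_hwcaps);	/* runtime cap: bitmap test */
        }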
| * arm64: mm: drop fixup_init() and mm.h (Kefeng Wang, 2016-09-06; 5 files, -21/+7)

    mm.h contains only fixup_init(), which is called only from free_initmem(), so move the code from fixup_init() into free_initmem() and drop both fixup_init() and mm.h.

    Acked-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * arm64: ftrace: add save_stack_trace_regs() (Pratyush Anand, 2016-09-05; 1 file, -0/+21)

    Currently, enabling stacktrace of a kprobe event generates a warning:

        echo stacktrace > /sys/kernel/debug/tracing/trace_options
        echo "p xhci_irq" > /sys/kernel/debug/tracing/kprobe_events
        echo 1 > /sys/kernel/debug/tracing/events/kprobes/enable

        save_stack_trace_regs() not implemented yet.
        ------------[ cut here ]------------
        WARNING: CPU: 1 PID: 0 at ../kernel/stacktrace.c:74 save_stack_trace_regs+0x3c/0x48
        Modules linked in:
        CPU: 1 PID: 0 Comm: swapper/1 Not tainted 4.8.0-rc4-dirty #5128
        Hardware name: ARM Juno development board (r1) (DT)
        task: ffff800975dd1900 task.stack: ffff800975ddc000
        PC is at save_stack_trace_regs+0x3c/0x48
        LR is at save_stack_trace_regs+0x3c/0x48
        pc : [<ffff000008126c64>] lr : [<ffff000008126c64>] pstate: 600003c5
        sp : ffff80097ef52c00
        Call trace:
         save_stack_trace_regs+0x3c/0x48
         __ftrace_trace_stack+0x168/0x208
         trace_buffer_unlock_commit_regs+0x5c/0x7c
         kprobe_trace_func+0x308/0x3d8
         kprobe_dispatcher+0x58/0x60
         kprobe_breakpoint_handler+0xbc/0x18c
         brk_handler+0x50/0x90
         do_debug_exception+0x50/0xbc

    This patch implements save_stack_trace_regs(), so that the stacktrace of a kprobe event can be obtained. After this patch, there is no warning and we can see the stacktrace for kprobe events in the trace buffer:

        more /sys/kernel/debug/tracing/trace
        <idle>-0 [004] d.h. 1356.000496: p_xhci_irq_0: (xhci_irq+0x0/0x9ac)
        <idle>-0 [004] d.h. 1356.000497: <stack trace>
         => xhci_irq
         => __handle_irq_event_percpu
         => handle_irq_event_percpu
         => handle_irq_event
         => handle_fasteoi_irq
         => generic_handle_irq
         => __handle_domain_irq
         => gic_handle_irq
         => el1_irq
         => arch_cpu_idle
         => default_idle_call
         => cpu_startup_entry
         => secondary_start_kernel
         =>

    Tested-by: David A. Long <dave.long@linaro.org>
    Reviewed-by: James Morse <james.morse@arm.com>
    Signed-off-by: Pratyush Anand <panand@redhat.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
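    The implementation seeds a stackframe from the trapped registers and reuses the existing walker; a sketch following the arm64 stacktrace.c pattern (save_trace and struct stack_trace_data are the file-local helpers assumed here):

        void save_stack_trace_regs(struct pt_regs *regs, struct stack_trace *trace)
        {
        	struct stack_trace_data data;
        	struct stackframe frame;

        	data.trace = trace;
        	data.skip = trace->skip;
        	data.no_sched_functions = 0;

        	/* start the unwind from the trapped context, not from here */
        	frame.fp = regs->regs[29];
        	frame.sp = regs->sp;
        	frame.pc = regs->pc;

        	walk_stackframe(current, &frame, save_trace, &data);
        	if (trace->nr_entries < trace->max_entries)
        		trace->entries[trace->nr_entries++] = ULONG_MAX;
        }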
| * arm64: kernel: re-export _cpu_resume() from sleep.S (Ard Biesheuvel, 2016-09-05; 1 file, -1/+1)

    Commit b5fe242972ef ("arm64: kernel: fix style issues in sleep.S") changed the linkage of _cpu_resume() to local, even though the symbol is also referenced from hibernate.c. So revert this change.

    Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * arm64: Drop generic xlate_dev_mem_{k,}ptr() (James Morse, 2016-09-05; 1 file, -11/+0)

    The code that provides /dev/mem uses xlate_dev_mem_{k,}ptr() to avoid making a cachable mapping of a non-cachable area on ia64. On arm64 we do this via phys_mem_access_prot() instead, but provide dummy versions of xlate_dev_mem_{k,}ptr(). These are the same as those in asm-generic/io.h, which we include from asm/io.h.

    Signed-off-by: James Morse <james.morse@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * arm64: debug: report TRAP_TRACE instead of TRAP_HWBRPT for singlestep (Will Deacon, 2016-09-02; 1 file, -2/+2)

    Single-step traps to userspace (e.g. via ptrace) are expected to use TRAP_TRACE for the si_code field of the siginfo, as opposed to the TRAP_HWBRPT that we report currently. Fix the reported value, which has no effect on existing and legacy builds of GDB.

    Reported-by: Yao Qi <yao.qi@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * arm64: head.S: document the use of callee saved registers (Ard Biesheuvel, 2016-09-02; 1 file, -0/+10)

    Now that the only remaining occurrences of the use of callee saved registers are on the primary boot path, add a comment to the code describing which register is used for what.

    Reviewed-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * arm64: head.S: use ordinary stack frame for __primary_switched() (Ard Biesheuvel, 2016-09-02; 1 file, -9/+9)

    Instead of stashing the value of the link register in x28 before setting up the stack and calling into C code, create an ordinary PCS compatible stack frame so that we can push the return address onto the stack.

    Since exception handlers require a stack as well, assign the stack pointer register before installing the vector table. Note that this accounts for the difference between THREAD_START_SP and THREAD_SIZE, given that the stack pointer is always decremented before calling into any C code.

    Reviewed-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Will Deacon <will.deacon@arm.com>