path: root/arch
Commit message  (Author, Date, Files changed, Lines -/+)
*   Merge branch 'for-linus' of git://git.linaro.org/people/rmk/linux-arm  (Linus Torvalds, 2013-05-03, 94 files, -1210/+2378)

      Pull ARM updates from Russell King:
       "The major items included in here are:

        - MCPM, multi-cluster power management, part of the infrastructure
          required for ARM's big.LITTLE support.

        - A rework of the ARM KVM code to allow re-use by ARM64.

        - Error handling cleanups of the IS_ERR_OR_NULL() madness and fixes
          of that stuff for arch/arm.

        - Preparatory patches for Cortex-M3 support from Uwe Kleine-König.

       There is also a set of three patches in here from Hugh/Catalin to
       address freeing of inappropriate page tables on LPAE.  You already
       have these from akpm, but they were already part of my tree at the
       time he sent them, so unfortunately they'll end up with duplicate
       commits."

      * 'for-linus' of git://git.linaro.org/people/rmk/linux-arm: (77 commits)
        ARM: EXYNOS: remove unnecessary use of IS_ERR_VALUE()
        ARM: IMX: remove unnecessary use of IS_ERR_VALUE()
        ARM: OMAP: use consistent error checking
        ARM: cleanup: OMAP hwmod error checking
        ARM: 7709/1: mcpm: Add explicit AFLAGS to support v6/v7 multiplatform kernels
        ARM: 7700/2: Make cpu_init() notrace
        ARM: 7702/1: Set the page table freeing ceiling to TASK_SIZE
        ARM: 7701/1: mm: Allow arch code to control the user page table ceiling
        ARM: 7703/1: Disable preemption in broadcast_tlb*_a15_erratum()
        ARM: mcpm: provide an interface to set the SMP ops at run time
        ARM: mcpm: generic SMP secondary bringup and hotplug support
        ARM: mcpm_head.S: vlock-based first man election
        ARM: mcpm: Add baremetal voting mutexes
        ARM: mcpm: introduce helpers for platform coherency exit/setup
        ARM: mcpm: introduce the CPU/cluster power API
        ARM: multi-cluster PM: secondary kernel entry code
        ARM: cacheflush: add synchronization helpers for mixed cache state accesses
        ARM: cpu hotplug: remove majority of cache flushing from platforms
        ARM: smp: flush L1 cache in cpu_die()
        ARM: tegra: remove tegra specific cpu_disable()
        ...
| *   Merge branch 'cleanup' into for-linus  (Russell King, 2013-05-02, 19 files, -60/+51)

      Conflicts:
        arch/arm/plat-omap/dmtimer.c
| | * ARM: EXYNOS: remove unnecessary use of IS_ERR_VALUE()  (Russell King, 2013-05-02, 1 file, -1/+1)

      s5p_register_gpio_interrupt() returns 0 or positive for success, and
      negative values for errors, so just use the standard >= 0 test.

      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
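      To make the pattern concrete, a minimal sketch in C; the surrounding
      driver context here is hypothetical, only the test itself is from the
      commit:

        #include <linux/printk.h>

        /* Declared in the Samsung platform headers: returns a
         * non-negative IRQ number on success, a negative errno on
         * failure. */
        extern int s5p_register_gpio_interrupt(int gpio);

        static int example_request_gpio_irq(int gpio)
        {
                int ret = s5p_register_gpio_interrupt(gpio);

                /* Before: "if (!IS_ERR_VALUE(ret))" -- IS_ERR_VALUE()
                 * is meant for unsigned longs that may encode an error
                 * pointer, not for plain int status returns. */
                if (ret >= 0)                   /* the standard test */
                        pr_info("got IRQ %d\n", ret);

                return ret;
        }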
| | * ARM: IMX: remove unnecessary use of IS_ERR_VALUE()  (Russell King, 2013-05-02, 1 file, -1/+1)

      device_register() returns negative values for errors, and zero for
      success.  There's no need to obfuscate the code with IS_ERR_VALUE().

      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| | * ARM: OMAP: use consistent error checking  (Russell King, 2013-05-02, 8 files, -13/+13)

      Consistently check errors using the usual method used in the kernel
      for much of its history.  For instance:

        int gpmc_cs_set_timings(int cs, const struct gpmc_timings *t)
        {
                int div;
                div = gpmc_calc_divider(t->sync_clk);
                if (div < 0)
                        return div;

        static int gpmc_set_async_mode(int cs, struct gpmc_timings *t)
        {
                ...
                return gpmc_cs_set_timings(cs, t);

        .....
        ret = gpmc_set_async_mode(gpmc_onenand_data->cs, &t);
        if (IS_ERR_VALUE(ret))
                return ret;

      So, gpmc_cs_set_timings() thinks any negative return value is an
      error, but where we check that in higher levels, only a limited range
      are errors...

      There is only _one_ use of IS_ERR_VALUE() in arch/arm which is really
      appropriate, and that is in arch/arm/include/asm/syscall.h:

        static inline long syscall_get_error(struct task_struct *task,
                                             struct pt_regs *regs)
        {
                unsigned long error = regs->ARM_r0;
                return IS_ERR_VALUE(error) ? error : 0;
        }

      because this function really does have to differentiate between error
      return values and addresses which look like negative numbers (eg,
      from mmap()).

      So, here's a patch to remove them from OMAP, except for the above.

      Acked-by: Tony Lindgren <tony@atomide.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| | * ARM: cleanup: OMAP hwmod error checking  (Russell King, 2013-05-02, 1 file, -7/+4)

      omap_hwmod_lookup() only returns NULL on error, never an error
      pointer.  Checking the returned pointer using IS_ERR_OR_NULL() is
      needless overhead.  Use a simple !ptr check instead.

      OMAP devices (oh->od) always have a valid platform device attached
      (see omap_device_alloc()) so there's no point validating the platform
      device pointer (we will have already oopsed long before if this is
      not the case here.)

      Lastly, oh->od is only ever NULL or a valid omap device pointer -
      'oh' comes from the statically declared hwmod tables, and the pointer
      is only filled in by omap_device_alloc() at a point where the omap
      device pointer must be valid.

      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| | * ARM: cleanup: pwrdm_can_ever_lose_context() checking  (Russell King, 2013-02-24, 1 file, -1/+1)

      pwrdm_can_ever_lose_context() is only ever called from the OMAP GPIO
      code, and only with a pointer returned from omap_hwmod_get_pwrdm().
      omap_hwmod_get_pwrdm() only ever returns NULL on error, so using
      IS_ERR_OR_NULL() to validate the passed pointer is silly.  Use a
      simpler !ptr check instead.

      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| | * ARM: cleanup: debugfs error handling  (Russell King, 2013-02-24, 1 file, -3/+3)

      Debugfs functions return NULL when they fail, or an error pointer
      when not configured.  The intention behind the error pointer is that
      it appears as a valid pointer to the caller, and so the caller
      continues in spite of debugfs not being available.  Debugfs failure
      should only ever be checked with (!ptr) and not the IS_ERR*()
      functions.

      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
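      A minimal sketch of that rule (the directory name and variable are
      illustrative):

        #include <linux/debugfs.h>
        #include <linux/errno.h>

        static struct dentry *pm_dbg_dir;       /* illustrative name */

        static int __init pm_dbg_init(void)
        {
                /* Wrong here would be IS_ERR() or IS_ERR_OR_NULL():
                 * that makes the caller bail out on kernels where
                 * debugfs is compiled out. */
                pm_dbg_dir = debugfs_create_dir("pm_debug", NULL);

                /* Correct: a real failure is NULL; the error pointer
                 * returned when debugfs is not configured deliberately
                 * looks valid so callers carry on. */
                if (!pm_dbg_dir)
                        return -ENOMEM;

                return 0;
        }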
| | * ARM: cleanup: clk_get_sys() error handling  (Russell King, 2013-02-24, 1 file, -1/+1)

      Fix clk_get_sys() error handling; IS_ERR() should be used rather than
      IS_ERR_OR_NULL() to check for errors.

      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| | * ARM: cleanup: regulator_get() error handling  (Russell King, 2013-02-24, 1 file, -3/+3)

      regulator_get() does not return NULL as an error value.  Even when it
      does return an error, the code as written falls out of the error path
      while returning zero (indicating no failure).  Fix this, and use the
      more correct IS_ERR() macro to check for errors.

      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| | * ARM: cleanup: clk_get() error handling  (Russell King, 2013-02-24, 1 file, -5/+5)

      Use the correct IS_ERR() to determine if clk_get() returned an error.
      Set timer->fclk to be an error value initially, and check everywhere
      using IS_ERR().  This keeps the range of valid values for 'struct
      clk' consistent.

      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
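      A sketch of the pattern, with a hypothetical timer structure standing
      in for the OMAP dmtimer code: keep the clk pointer in the error-
      pointer state whenever it does not hold a valid clock, so IS_ERR()
      is the one validity test used everywhere:

        #include <linux/clk.h>
        #include <linux/err.h>

        struct my_timer {                       /* hypothetical */
                struct clk *fclk;
        };

        static int my_timer_probe(struct device *dev, struct my_timer *timer)
        {
                /* "no clock yet" initial state, never NULL */
                timer->fclk = ERR_PTR(-ENODEV);

                timer->fclk = clk_get(dev, "fck");
                if (IS_ERR(timer->fclk))        /* clk_get() never returns NULL */
                        return PTR_ERR(timer->fclk);

                return 0;
        }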
| | * ARM: cleanup: soc_device_register() error checking  (Russell King, 2013-02-24, 3 files, -12/+6)

      soc_device_register() never returns NULL, it only ever returns an
      error pointer or a valid pointer.  Use the right function (IS_ERR())
      to check this.

      soc_device_to_device() only ever returns &soc_dev->dev, and so can
      never return an error or NULL if the pointer passed into it was
      valid, so there's no point checking its return.

      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| | * ARM: cleanup gate_vma initialization  (Russell King, 2013-02-23, 1 file, -6/+7)

      There's no need to have code initializing this by hand; it's more
      efficient to initialize the constant structure members directly.

      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| | * ARM: cleanup undefined instruction entry code  (Russell King, 2013-02-23, 1 file, -7/+6)

      We don't need to keep reloading the thread info into r10 - we can do
      this once and keep the value cached in the register.  Also, schedule
      some instructions better so that the pipeline doesn't stall after a
      load in the neon code.

      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| *   Merge branches 'devel-stable', 'entry', 'fixes', 'mach-types', 'misc' and 'smp-hotplug' into for-linus  (Russell King, 2013-05-02, 268 files, -1982/+2660)
| | | | | | *   Merge commit '73053d973' into smp-hotplug  (Russell King, 2013-05-02, 1 file, -4/+5)

      This is to fix a merge problem with mach-highbank/hotplug.c, which
      git silently resolves, but wrongly.  This commit contains the correct
      resolution.

      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| | | | | | * ARM: cpu hotplug: remove majority of cache flushing from platforms  (Russell King, 2013-04-18, 12 files, -35/+0)

      Remove the majority of cache flushing calls from the individual
      platform files.  This is now handled by the core code.

      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| | | | | | * ARM: smp: flush L1 cache in cpu_die()  (Russell King, 2013-04-18, 1 file, -4/+38)

      Flush the L1 cache for the CPU which is going down in cpu_die() so
      that we don't end up with all platforms doing this.  This ensures
      that any cache lines we own are pushed out before the cache becomes
      inaccessible.

      We may end up subsequently creating some dirty cache lines - for
      example, with the complete() call, but this update must become
      visible to other CPUs before __cpu_die() can proceed.  Subsequent
      accesses from the platform's cpu_die() function should _not_ matter.

      Also place a mb() after the complete() call to ensure that this is
      visible to other CPUs.

      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
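      A rough sketch of the ordering described above (kernel context; the
      exact flush primitive and ops names in the real patch may differ from
      those used here):

        void __ref cpu_die(void)
        {
                unsigned int cpu = smp_processor_id();

                idle_task_exit();
                local_irq_disable();

                /* Push out any dirty L1 lines we own while the cache is
                 * still guaranteed accessible. */
                flush_cache_louis();            /* assumed flush helper */

                /* complete() may dirty fresh lines; the mb() orders that
                 * update so __cpu_die() on another CPU observes it before
                 * proceeding. */
                complete(&cpu_died);
                mb();

                /* Platform-specific low-level death follows; its accesses
                 * no longer matter for cache coherency. */
                smp_ops.cpu_die(cpu);
        }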
| | | | | | * ARM: tegra: remove tegra specific cpu_disable()  (Russell King, 2013-04-18, 3 files, -11/+0)

      The tegra cpu_disable() function is the same as the generic version
      in arch/arm/kernel/smp.c.  Therefore, it can be removed.

      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| | | | | * ARM: 7702/1: Set the page table freeing ceiling to TASK_SIZE  (Catalin Marinas, 2013-04-25, 1 file, -0/+9)

      ARM processors with LPAE enabled use 3 levels of page tables, with an
      entry in the top level (pgd) covering 1GB of virtual space.  Because
      of the branch relocation limitations on ARM, the loadable modules are
      mapped 16MB below PAGE_OFFSET, making the corresponding 1GB pgd
      shared between kernel modules and user space.

      If free_pgtables() is called with the default ceiling 0,
      free_pgd_range() (and subsequently called functions) also frees the
      page table shared between user space and kernel modules (which is
      normally handled by the ARM-specific pgd_free() function).  This
      patch defines the ARM USER_PGTABLES_CEILING to TASK_SIZE when
      CONFIG_ARM_LPAE is enabled.

      Note that the pgd_free() function already checks the presence of the
      shared pmd page allocated by pgd_alloc() and frees it, though with
      ceiling 0 this wasn't necessary.

      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: <stable@vger.kernel.org> # 3.3+
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
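      In concrete terms the override amounts to something like the
      following (a sketch; the exact header placement is not reproduced
      from the patch):

        /* With LPAE, stop free_pgtables() short of the pgd entry that is
         * shared with kernel modules; pgd_free() handles that one. */
        #ifdef CONFIG_ARM_LPAE
        #define USER_PGTABLES_CEILING   TASK_SIZE
        #endif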
| | | | | * ARM: 7695/1: mvebu: Enable pj4b on LPAE compilations  (Gregory CLEMENT, 2013-04-17, 1 file, -1/+2)

      pj4b CPUs are LPAE capable, so enable them in LPAE compilations.

      Signed-off-by: Lior Amsalem <alior@marvell.com>
      Tested-by: Franklin <flin@marvell.com>
      Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| | | | | * ARM: 7693/1: mm: clean-up in order to reduce to call kmap_high_get()  (Joonsoo Kim, 2013-04-17, 2 files, -13/+17)

      In kmap_atomic(), kmap_high_get() is invoked to check for an
      already-mapped area.  In __flush_dcache_page() and
      dma_cache_maint_page(), we explicitly call kmap_high_get() before
      kmap_atomic() when cache_is_vipt(), so kmap_high_get() can be invoked
      twice.  This is a useless operation, so remove one.

      v2: change cache_is_vipt() to cache_is_vipt_nonaliasing() in order to
      be self-documented

      Acked-by: Nicolas Pitre <nico@linaro.org>
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| | | | | * ARM: 7691/1: mm: kill unused TLB_CAN_READ_FROM_L1_CACHE and use ALT_SMP instead  (Will Deacon, 2013-04-03, 5 files, -7/+7)

      Many ARMv7 cores have hardware page table walkers that can read the
      L1 cache.  This is discoverable from the ID_MMFR3 register, although
      this can be expensive to access from the low-level set_pte functions
      and is a pain to cache, particularly with multi-cluster systems.

      A useful observation is that the multi-processing extensions for
      ARMv7 require coherent table walks, meaning that we can make use of
      ALT_SMP patching in proc-v7-* to patch away the cache flush safely
      for these cores.

      Reported-by: Albin Tonnerre <Albin.Tonnerre@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| | | | | * ARM: 7688/1: add support for context tracking subsystem  (Kevin Hilman, 2013-04-03, 5 files, -0/+34)

      commit 91d1aa43 (context_tracking: New context tracking susbsystem)
      generalized parts of the RCU userspace extended quiescent state into
      the context tracking subsystem.  Context tracking is then used to
      implement adaptive tickless (a.k.a. extended nohz).

      To support the new context tracking subsystem on ARM, the user/kernel
      boundary transitions need to be instrumented.

      For exceptions and IRQs in usermode, the existing usr_entry macro is
      used to instrument the user->kernel transition.  For the return to
      usermode path, the ret_to_user* path is instrumented.  Using the
      usr_entry macro, this covers interrupts in userspace, data abort and
      prefetch abort exceptions in userspace as well as undefined
      exceptions in userspace (which is where FP emulation and VFP are
      handled.)

      For syscalls, the slow return path is covered by instrumenting the
      ret_to_user path.  In addition, the syscall entry point is
      instrumented which covers the user->kernel transition for both fast
      and slow syscalls, and an additional instrumentation point is added
      for the fast syscall return path (ret_fast_syscall).

      Cc: Mats Liljegren <mats.liljegren@enea.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Kevin Hilman <khilman@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| | | | | * ARM: 7687/1: atomics: don't use exclusives for atomic64 read/set with LPAE  (Will Deacon, 2013-04-03, 1 file, -0/+24)

      To ease page table updates with 64-bit descriptors, CPUs implementing
      LPAE are required to implement ldrd/strd as atomic operations.  This
      patch uses these accessors instead of the exclusive variants when
      performing atomic64_{read,set} on LPAE systems.

      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
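      On LPAE, an atomic64 read then reduces to a single doubleword load; a
      sketch of the accessor (kernel-internal, close to but not necessarily
      identical to the patch):

        /* ldrd is single-copy atomic on LPAE-capable CPUs, so no
         * ldrexd/strexd loop is needed for a plain read. */
        static inline long long atomic64_read(const atomic64_t *v)
        {
                long long result;

                __asm__ __volatile__("@ atomic64_read\n"
        "       ldrd    %0, %H0, [%1]"
                : "=&r" (result)
                : "r" (&v->counter), "Qo" (v->counter)
                );

                return result;
        }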
| | | | | * ARM: 7683/1: pci: add a align_resource hook  (Thomas Petazzoni, 2013-04-03, 2 files, -0/+17)

      The PCI specification says that an I/O region must be aligned on a
      4 KB boundary, and a memory region aligned on a 1 MB boundary.

      However, the Marvell PCIe interfaces rely on address decoding windows
      (which associate a range of physical addresses with a given device).
      For PCIe memory windows, those windows are defined with a 1 MB
      granularity (which matches the PCI specs), but PCIe I/O windows can
      only be defined with a 64 KB granularity, so they have to be 64 KB
      aligned.  We therefore need to tell the PCI core about this special
      alignment requirement.

      The PCI core already calls pcibios_align_resource() in the ARM PCI
      core, specifically for such purposes.  So this patch extends the ARM
      PCI core so that it calls a ->align_resource() hook registered by the
      PCI driver, exactly like the existing ->map_irq() and ->swizzle()
      hooks.  A particular PCI driver can register an align_resource() hook
      and do its own specific alignment, depending on the specific
      constraints of the underlying hardware.

      Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
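      A hedged sketch of such a hook for the 64 KB I/O constraint described
      above; the signature mirrors pcibios_align_resource(), and the driver
      name is illustrative:

        #include <linux/pci.h>
        #include <linux/ioport.h>
        #include <linux/sizes.h>

        /* Hypothetical Marvell-style hook: force I/O window alignment up
         * from the PCI-mandated 4 KB to the hardware's 64 KB decoding
         * granularity; leave memory windows alone. */
        static resource_size_t mvebu_pcie_align_resource(struct pci_dev *dev,
                                                const struct resource *res,
                                                resource_size_t start,
                                                resource_size_t size,
                                                resource_size_t align)
        {
                if (res->flags & IORESOURCE_IO)
                        return round_up(start, SZ_64K);

                return start;
        }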
| | | | | * ARM: 7676/1: fix a wrong value returned from CALLER_ADDRn  (Keun-O Park, 2013-03-19, 1 file, -2/+3)

      This makes return_address() return a correct value for CALLER_ADDRn.
      To have a correct value from CALLER_ADDRn, we need to fix three
      points.

      * unwind_frame() does not update frame->lr but frame->pc for
        backtrace.  So frame->pc is meaningful for backtrace.

      * data.level should be adjusted by adding 2 additional iteration
        levels.  With the current +1 level adjustment, the result of
        CALLER_ADDR1 will be the same return address as CALLER_ADDR0.

      * The initialization of data.addr to NULL is needed.  When
        unwind_frame() fails right after data.level reaches zero, the
        routine returns data.addr, which otherwise holds an uninitialized
        garbage value.

      Signed-off-by: Sahara <keun-o.park@windriver.com>
      Reviewed-by: Dave Martin <dave.martin@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| | | | | * ARM: 7672/1: uncompress debug support for multiplatform build  (Shawn Guo, 2013-03-15, 4 files, -0/+25)

      Instead of giving zero support of uncompress debug for multiplatform
      build, the patch turns uncompress debug into one part of DEBUG_LL
      support.  When DEBUG_LL is turned on for a particular platform,
      uncompress debug works too for that platform.

      OMAP and Tegra are exceptions here.  OMAP low-level debug code places
      data in the .data section, and that is not allowed in the
      decompressor.  And the Tegra code references a variable that is
      available only in the kernel, not in the decompressor.  That's why
      the Kconfig symbol DEBUG_UNCOMPRESS controlling multiplatform
      uncompress debug support is defined with !DEBUG_OMAP2PLUS_UART &&
      !DEBUG_TEGRA_UART.

      It creates arch/arm/boot/compressed/debug.S with
      CONFIG_DEBUG_LL_INCLUDE included there, and implements a generic
      putc() using those macros, which will be built when DEBUG_UNCOMPRESS
      is defined.

      Signed-off-by: Shawn Guo <shawn.guo@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| | | | | * ARM: 7671/1: use Kconfig to select uncompress.h  (Shawn Guo, 2013-03-15, 3 files, -7/+9)

      Following the approach handling DEBUG_LL inclusion, the patch creates
      a Kconfig symbol CONFIG_UNCOMPRESS_INCLUDE for choosing the correct
      uncompress header.  For a traditional build, mach/uncompress.h will
      be included in arch/arm/boot/compressed/misc.c.  For a multiplatform
      build, debug/uncompress.h, which contains a suite of empty functions,
      will be used.  In this way, a platform with a particular uncompress.h
      implementation could choose its own uncompress.h with this Kconfig
      option.

      Signed-off-by: Shawn Guo <shawn.guo@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| | | | * ARM: Update mach-types  (Russell King, 2013-03-22, 1 file, -594/+397)

      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| | | * ARM: 7700/2: Make cpu_init() notrace  (Jon Medhurst, 2013-04-26, 1 file, -1/+1)

      On resume from CPU power down, any trace hooks enabled in cpu_init()
      will get called before that function has done set_my_cpu_offset(), so
      any use of per-cpu variables by trace hook code will cause bad things
      to happen.  Prevent this by marking the function notrace.

      This fixes lockups/crashes seen when enabling the function tracer on
      TC2 with the not-yet-mainlined cpuidle driver.

      Signed-off-by: Jon Medhurst <tixy@linaro.org>
      Acked-by: Nicolas Pitre <nico@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
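      The fix itself is a one-word annotation; roughly:

        #include <linux/compiler.h>     /* notrace */

        /* Sketch: notrace keeps ftrace entry hooks (which may touch
         * per-cpu variables) out of cpu_init(), which runs on the
         * resume-from-powerdown path before set_my_cpu_offset() has made
         * per-cpu access safe. */
        void notrace cpu_init(void)
        {
                /* ... per-CPU offset, stacks and mode setup ... */
        }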
| | | * ARM: 7703/1: Disable preemption in broadcast_tlb*_a15_erratum()  (Catalin Marinas, 2013-04-25, 1 file, -4/+5)

      Commit 93dc688 (ARM: 7684/1: errata: Workaround for Cortex-A15
      erratum 798181 (TLBI/DSB operations)) introduces calls to
      smp_processor_id() and smp_call_function_many() with preemption
      enabled.  This patch disables preemption and also optimises the
      smp_processor_id() call in broadcast_tlb_mm_a15_erratum().  The
      broadcast_tlb_a15_erratum() function is changed to use
      smp_call_function(), which disables preemption.

      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Reported-by: Geoff Levand <geoff@infradead.org>
      Reported-by: Nicolas Pitre <nicolas.pitre@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
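      The general shape of the fix, as a sketch of the generic kernel
      pattern rather than the literal diff:

        #include <linux/smp.h>
        #include <linux/preempt.h>

        static void broadcast_tlb_mm_sketch(struct mm_struct *mm)
        {
                int this_cpu;

                /* get_cpu() disables preemption and returns the CPU id
                 * in one step, so the id cannot go stale (by migration)
                 * before it is used. */
                this_cpu = get_cpu();

                /* ... build the target cpumask and IPI other CPUs ... */

                put_cpu();              /* re-enable preemption */
        }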
| | * ARM: entry: move disable_irq_notrace into svc_exit  (Russell King, 2013-04-03, 2 files, -18/+6)

      All svc exit paths need IRQs off.  Rather than placing this before
      every user of svc_exit, combine it into this macro.

      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| | * ARM: entry: move IRQ tracing exit into svc_exit  (Russell King, 2013-04-03, 2 files, -28/+31)

      The IRQ tracing exit path is much the same between all SVC mode
      exits, so move this into the svc_exit macro.  Use a macro parameter
      to identify the IRQ case, which is the only different case there is.

      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| | * ARM: entry-common: get rid of unnecessary ifdefs  (Russell King, 2013-04-03, 1 file, -4/+1)

      The contents of asm_trace_hardirqs_on are already conditional on
      CONFIG_TRACE_IRQFLAGS.  There's little point making the use of the
      macro conditional as well.  Get rid of these ifdefs to make the code
      easier to read.

      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| * ARM: 7709/1: mcpm: Add explicit AFLAGS to support v6/v7 multiplatform kernels  (Dave Martin, 2013-05-02, 1 file, -0/+2)

      The full mcpm layer is not likely to be relevant to v6 based
      platforms, so a multiplatform kernel won't use that code if booted on
      v6 hardware.  This patch modifies the AFLAGS for the affected mcpm .S
      files to specify armv7-a explicitly for that code.

      Signed-off-by: Dave Martin <dave.martin@linaro.org>
      Acked-by: Arnd Bergmann <arnd@arndb.de>
      Acked-by: Nicolas Pitre <nico@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| *   Merge branch 'mcpm' of git://git.linaro.org/people/nico/linux into devel-stable  (Russell King, 2013-04-25, 152 files, -350/+1479)
| | * ARM: mcpm: provide an interface to set the SMP ops at run time  (Nicolas Pitre, 2013-04-24, 2 files, -1/+8)

      This is cleaner than exporting the mcpm_smp_ops structure.

      Signed-off-by: Nicolas Pitre <nico@linaro.org>
      Acked-by: Jon Medhurst <tixy@linaro.org>
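      Sketched usage, assuming the interface is a simple setter as the
      description suggests (the platform name is hypothetical; the real
      declaration lives in the MCPM header):

        static void __init myplat_smp_init_ops(void)
        {
                /* Install the MCPM-provided smp_ops for secondary
                 * bringup and CPU hotplug, instead of exporting and
                 * referencing the mcpm_smp_ops structure directly. */
                mcpm_smp_set_ops();
        }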
| | * ARM: mcpm: generic SMP secondary bringup and hotplug support  (Nicolas Pitre, 2013-04-24, 2 files, -1/+88)

      Now that the cluster power API is in place, we can use it for SMP
      secondary bringup and CPU hotplug in a generic fashion.

      Signed-off-by: Nicolas Pitre <nico@linaro.org>
      Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
      Reviewed-by: Will Deacon <will.deacon@arm.com>
| | * ARM: mcpm_head.S: vlock-based first man election  (Dave Martin, 2013-04-24, 3 files, -6/+38)

      Instead of requiring the first man to be elected in advance (which
      can be suboptimal in some situations), this patch uses a per-cluster
      mutex to co-ordinate selection of the first man.

      This should also make it more feasible to reuse this code path for
      asynchronous cluster resume (as in CPUidle scenarios).

      We must ensure that the vlock data doesn't share a cacheline with
      anything else, or dirty cache eviction could corrupt it.

      Signed-off-by: Dave Martin <dave.martin@linaro.org>
      Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
      Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
      Reviewed-by: Will Deacon <will.deacon@arm.com>
| | * ARM: mcpm: Add baremetal voting mutexes  (Dave Martin, 2013-04-24, 2 files, -0/+137)

      This patch adds a simple low-level voting mutex implementation to be
      used to arbitrate during first man selection when no load/store
      exclusive instructions are usable.

      For want of a better name, these are called "vlocks".  (I was tempted
      to call them ballot locks, but "block" is way too confusing an
      abbreviation...)

      There is no function to wait for the lock to be released, and no
      vlock_lock() function since we don't need these at the moment.  These
      could straightforwardly be added if vlocks get used for other
      purposes.

      For architectural correctness even Strongly-Ordered memory accesses
      require barriers in order to guarantee that multiple CPUs have a
      coherent view of the ordering of memory accesses.  Whether or not
      this matters depends on hardware implementation details of the memory
      system.  Since the purpose of this code is to provide a clean,
      generic locking mechanism with no platform-specific dependencies, the
      barriers should be present to avoid unpleasant surprises on future
      platforms.

      Note:

      * When taking the lock, we don't care about implicit background
        memory operations and other signalling which may be pending,
        because those are not part of the critical section anyway.  A DMB
        is sufficient to ensure correctly observed ordering of the explicit
        memory accesses in vlock_trylock.

      * No barrier is required after checking the election result, because
        the result is determined by the store to VLOCK_OWNER_OFFSET and is
        already globally observed due to the barriers in voting_end.  This
        means that global agreement on the winner is guaranteed, even
        before the winner is known locally.

      Signed-off-by: Dave Martin <dave.martin@linaro.org>
      Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
      Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
      Reviewed-by: Will Deacon <will.deacon@arm.com>
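      A C model of the voting scheme, to make the algorithm concrete.  This
      is illustrative only: the real implementation is assembler (vlock.S),
      the offsets such as VLOCK_OWNER_OFFSET are assembly-level constants,
      and dmb() here stands in for the ARM DMB barrier instruction:

        #include <stdbool.h>

        #define VLOCK_OWNER_NONE  0xff          /* assumed sentinel */
        #define NR_CPUS           4

        struct vlock {                          /* C model of the data */
                volatile unsigned char voting[NR_CPUS];
                volatile unsigned char owner;
        };

        extern void dmb(void);                  /* barrier stand-in */

        static bool vlock_trylock(struct vlock *v, unsigned int cpu)
        {
                v->voting[cpu] = 1;             /* announce candidacy */
                dmb();

                if (v->owner != VLOCK_OWNER_NONE) {
                        v->voting[cpu] = 0;     /* someone already won */
                        dmb();
                        return false;
                }

                v->owner = cpu;                 /* cast our vote */
                dmb();
                v->voting[cpu] = 0;             /* voting_end */
                dmb();

                /* Wait until nobody is still voting, then check result. */
                for (unsigned int i = 0; i < NR_CPUS; i++)
                        while (v->voting[i])
                                ;

                return v->owner == cpu;         /* did we win? */
        }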
| | * ARM: mcpm: introduce helpers for platform coherency exit/setup  (Dave Martin, 2013-04-24, 4 files, -2/+330)

      This provides helper methods to coordinate between CPUs coming down
      and CPUs going up, as well as documentation on the used algorithms,
      so that cluster teardown and setup operations are not done for a
      cluster simultaneously.

      For use in the power_down() implementation:
        * __mcpm_cpu_going_down(unsigned int cluster, unsigned int cpu)
        * __mcpm_outbound_enter_critical(unsigned int cluster)
        * __mcpm_outbound_leave_critical(unsigned int cluster)
        * __mcpm_cpu_down(unsigned int cluster, unsigned int cpu)

      The power_up_setup() helper should do platform-specific setup in
      preparation for turning the CPU on, such as invalidating local caches
      or entering coherency.  It must be assembler for now, since it must
      run before the MMU can be switched on.  It is passed the affinity
      level for which initialization should be performed.

      Because the mcpm_sync_struct content is looked up and modified with
      the cache enabled or disabled depending on the code path, it is
      crucial to always ensure proper cache maintenance to update main
      memory right away.  The sync_cache_*() helpers are used to that end.

      Also, in order to prevent a cached writer from interfering with an
      adjacent non-cached writer, we ensure each state variable is located
      in a separate cache line.

      Thanks to Nicolas Pitre and Achin Gupta for the help with this patch.

      Signed-off-by: Dave Martin <dave.martin@linaro.org>
      Signed-off-by: Nicolas Pitre <nico@linaro.org>
      Reviewed-by: Will Deacon <will.deacon@arm.com>
| | * ARM: mcpm: introduce the CPU/cluster power API  (Nicolas Pitre, 2013-04-24, 2 files, -0/+183)

      This is the basic API used to handle the powering up/down of
      individual CPUs in a (multi-)cluster system.  The platform-specific
      backend implementation has the responsibility to also handle the
      cluster-level power when the first/last CPU in a cluster is brought
      up/down.

      Signed-off-by: Nicolas Pitre <nico@linaro.org>
      Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
      Reviewed-by: Will Deacon <will.deacon@arm.com>
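      A sketch of how a platform backend could plug into this API; the ops
      structure and registration call are taken from the MCPM header
      (arch/arm/include/asm/mcpm.h), while the platform name and bodies are
      hypothetical:

        #include <asm/mcpm.h>

        static int myplat_power_up(unsigned int cpu, unsigned int cluster)
        {
                /* If this is the first CPU coming up in 'cluster', the
                 * backend must also power the cluster up (caches,
                 * coherency) before the CPU itself. */
                return 0;
        }

        static void myplat_power_down(void)
        {
                /* Runs on the dying CPU; if it is the last CPU in its
                 * cluster, the backend also tears the cluster down. */
        }

        static const struct mcpm_platform_ops myplat_ops = {
                .power_up       = myplat_power_up,
                .power_down     = myplat_power_down,
        };

        static int __init myplat_mcpm_init(void)
        {
                return mcpm_platform_register(&myplat_ops);
        }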
| | * ARM: multi-cluster PM: secondary kernel entry code  (Nicolas Pitre, 2013-04-24, 5 files, -0/+159)

      CPUs in cluster-based systems, such as big.LITTLE, have special needs
      when entering the kernel due to a hotplug event, or when resuming
      from a deep sleep mode.

      This is vectorized so multiple CPUs can enter the kernel in parallel
      without serialization.  The mcpm prefix stands for "multi cluster
      power management"; however, this is usable on single-cluster systems
      as well.  Only the basic structure is introduced here.  This will be
      extended with later patches.

      In order not to complicate things more than necessary, the planned
      work to make runtime-adjusted MPIDR-based indexing and dynamic memory
      allocation for cluster states is postponed to a later cycle.  The
      MAX_NR_CLUSTERS and MAX_CPUS_PER_CLUSTER static definitions should be
      sufficient for those systems expected to be available in the near
      future.

      Signed-off-by: Nicolas Pitre <nico@linaro.org>
      Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
      Reviewed-by: Will Deacon <will.deacon@arm.com>
| | * ARM: cacheflush: add synchronization helpers for mixed cache state accesses  (Nicolas Pitre, 2013-04-24, 1 file, -0/+75)

      Algorithms used by the MCPM layer rely on state variables which are
      accessed while the cache is either active or inactive, depending on
      the code path and the active state.

      This patch introduces generic cache maintenance helpers to provide
      the necessary cache synchronization for such state variables to
      always hit main memory in an ordered way.

      Signed-off-by: Nicolas Pitre <nico@linaro.org>
      Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
      Acked-by: Dave Martin <dave.martin@linaro.org>
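      Sketched usage of those helpers (the helper names come from the
      commit; the state structure is illustrative): a writer pushes its
      update to main memory after storing, a reader discards any stale
      cached copy before loading, so cached and non-cached observers agree:

        #include <asm/cacheflush.h>     /* sync_cache_*() helpers */

        struct mcpm_state_sketch {
                unsigned int cpu_state; /* read with cache on *and* off */
        };

        static void set_state(struct mcpm_state_sketch *s, unsigned int val)
        {
                s->cpu_state = val;
                sync_cache_w(&s->cpu_state);    /* clean to main memory */
        }

        static unsigned int get_state(struct mcpm_state_sketch *s)
        {
                sync_cache_r(&s->cpu_state);    /* invalidate stale copy */
                return s->cpu_state;
        }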
| *   Merge branch 'kvm-arm-fixes' of git://github.com/columbia/linux-kvm-arm into devel-stable  (Russell King, 2013-03-15, 19 files, -387/+585)
| | * ARM: KVM: Fix length of mmio access  (Marc Zyngier, 2013-03-07, 1 file, -3/+4)

      Instead of hardcoding the maximum MMIO access to be 4 bytes, compare
      it to sizeof(unsigned long), which will do the right thing on both
      32- and 64-bit systems.  Same thing for sign extension.

      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Christoffer Dall <cdall@cs.columbia.edu>
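      The sign-extension half of the change, sketched (function and
      parameter names are illustrative, not the KVM internals):

        /* Sign-extend a narrow MMIO read into a full register, keyed off
         * sizeof(unsigned long) rather than a hardcoded 32-bit width. */
        static unsigned long mmio_sign_extend(unsigned long data,
                                              unsigned int len)
        {
                if (len < sizeof(unsigned long)) {
                        unsigned long mask = 1UL << (len * 8 - 1);

                        data = (data ^ mask) - mask;    /* standard trick */
                }
                return data;
        }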
| | * ARM: KVM: sanitize freeing of HYP page tables  (Marc Zyngier, 2013-03-07, 1 file, -18/+26)

      Instead of trying to free everything from PAGE_OFFSET to the top of
      memory, use the virt_addr_valid macro to check the upper limit.  Also
      do the same for the vmalloc region where the IO mappings are
      allocated.

      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Christoffer Dall <cdall@cs.columbia.edu>
| | * ARM: KVM: move kvm_handle_wfi to handle_exit.c  (Marc Zyngier, 2013-03-07, 3 files, -17/+19)

      It has little to do in emulate.c these days...

      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
| | * ARM: KVM: change kvm_tlb_flush_vmid to kvm_tlb_flush_vmid_ipa  (Marc Zyngier, 2013-03-07, 3 files, -8/+11)

      v8 is capable of invalidating Stage-2 by IPA, but v7 is not.  Change
      kvm_tlb_flush_vmid() to take an IPA parameter, which is then ignored
      by the invalidation code (which nukes the whole TLB as it always
      did).  This allows v8 to implement a more optimized strategy.

      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>