path: root/arch/arm/kernel/head.S
Commit message (Author, Date; files changed, lines -/+)
* Merge branch 'bsym' into for-next (Russell King, 2015-06-12; 1 file, -4/+4)

    Conflicts:
        arch/arm/kernel/head.S

| * ARM: replace BSYM() with badr assembly macro (Russell King, 2015-05-08; 1 file, -4/+4)

    BSYM() was invented to allow us to work around a problem with the
    assembler, where local symbols resolved by the assembler for the 'adr'
    instruction did not take account of their ISA. Since we don't want
    BSYM() used elsewhere, replace BSYM() with a new macro 'badr', which
    is like the 'adr' pseudo-op, but with the BSYM() mechanics integrated
    into it. This ensures that the BSYM()-ification is only used in
    conjunction with 'adr'.

    Acked-by: Dave Martin <Dave.Martin@arm.com>
    Acked-by: Nicolas Pitre <nico@linaro.org>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

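    The problem badr solves can be modelled roughly in C (a hypothetical
    sketch, not the kernel macro; names and the address are invented): on
    a Thumb-2 kernel, an address used as a register branch target needs
    bit 0 set to keep the CPU in Thumb state, and 'adr' on an
    assembler-resolved local label did not set it.

        #include <stdint.h>
        #include <stdio.h>

        /* sketch: what BSYM()/badr guarantee for a Thumb code address */
        uint32_t thumb_target(uint32_t addr)
        {
            return addr | 1;   /* bit 0 set: bx/blx stays in Thumb state */
        }

        int main(void)
        {
            uint32_t label = 0x80081234;   /* hypothetical code address */
            printf("branch target: %#x\n", (unsigned)thumb_target(label));
            return 0;
        }
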
*-. Merge branches 'arnd-fixes', 'clk', 'misc', 'v7' and 'fixes' into for-next (Russell King, 2015-06-12; 1 file, -12/+32)

| | * ARM: redo TTBR setup code for LPAE (Russell King, 2015-06-02; 1 file, -11/+31)

    Re-engineer the LPAE TTBR setup code. Rather than passing some shifted
    address in order to fit in a CPU register, pass either a full physical
    address (in the case of r4, r5 for TTBR0) or a PFN (for TTBR1). This
    removes the ARCH_PGD_SHIFT hack, and the last dangerous user of
    cpu_set_ttbr() in the secondary CPU startup code path (which was there
    to re-set TTBR1 to the appropriate high physical address space on
    Keystone2).

    Tested-by: Murali Karicheri <m-karicheri2@ti.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

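    The new calling convention is easy to model in C (an illustrative
    sketch with assumed names; the real code passes these values in CPU
    registers):

        #include <stdint.h>

        #define PAGE_SHIFT 12

        /* TTBR1 travels as a PFN, which always fits one 32-bit register */
        uint32_t ttbr1_pfn(uint64_t phys)
        {
            return (uint32_t)(phys >> PAGE_SHIFT);
        }

        /* the receiving end widens it back to a full physical address */
        uint64_t ttbr1_phys(uint32_t pfn)
        {
            return (uint64_t)pfn << PAGE_SHIFT;
        }
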
| * ARM: 8359/1: correct secondary_startup_arm mode (Yingjoe Chen, 2015-06-02; 1 file, -1/+1)

    secondary_startup_arm is used as the ARM-mode secondary startup
    function when the kernel is compiled in THUMB mode; however, the label
    itself is still in .thumb mode. readelf shows:

        160979: c020a581   120 FUNC    GLOBAL DEFAULT    2 secondary_startup_arm

    Make sure the label is in ARM mode as well.

    Signed-off-by: Yingjoe Chen <yingjoe.chen@mediatek.com>
    Tested-by: Matthias Brugger <matthias.bgg@gmail.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* ARM: 8314/1: replace PROCINFO embedded branch with relative offset (Ard Biesheuvel, 2015-03-28; 1 file, -7/+7)

    This patch replaces the 'branch to setup()' instructions embedded in
    the PROCINFO structs with the offset to that setup function relative
    to the base of the struct. This preserves the position independent
    nature of that field, but uses a data item rather than an instruction.

    This is mainly done to prevent linker failures on large kernels, where
    the setup function is out of reach for the branch.

    Acked-by: Nicolas Pitre <nico@linaro.org>
    Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

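    The scheme reduces to base-plus-offset arithmetic, sketched here in C
    with a hypothetical, simplified struct layout (the real proc_info_list
    has more fields):

        #include <stdint.h>

        struct proc_info {              /* illustrative fields only */
            uint32_t cpu_val;
            uint32_t cpu_mask;
            int32_t  setup_offset;      /* was: an embedded branch insn */
        };

        typedef void (*setup_fn)(void);

        setup_fn proc_setup(struct proc_info *info)
        {
            /* still position independent: resolved against the struct
             * base at run time, but no branch range limit applies */
            return (setup_fn)(uintptr_t)((char *)info + info->setup_offset);
        }
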
* ARM: 8302/1: Add a secondary_startup that assumes ARM mode (Stephen Boyd, 2015-02-10; 1 file, -0/+7)

    Some platforms always enter the kernel in ARM mode even if the kernel
    is compiled for THUMB2. Add a small wrapper on top of
    secondary_startup() that switches into THUMB2 mode.

    Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
    Acked-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* ARM: 8291/1: replace magic number with PAGE_SHIFT macro in fixup_pv code (Masahiro Yamada, 2015-01-21; 1 file, -1/+1)

    This line converts PHYS_OFFSET into PHYS_PFN_OFFSET. It is better to
    use PAGE_SHIFT rather than the magic number 12.

    Signed-off-by: Masahiro Yamada <yamada.m@jp.panasonic.com>
    Acked-by: Nicolas Pitre <nico@linaro.org>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* ARM: convert all "mov.* pc, reg" to "bx reg" for ARMv6+ (Russell King, 2014-07-18; 1 file, -9/+9)

    ARMv6 and greater introduced a new instruction ("bx") which can be
    used to return from function calls. Recent CPUs perform better when
    the "bx lr" instruction is used rather than the "mov pc, lr"
    instruction, and this sequence is strongly recommended by the ARM
    architecture manual (section A.4.1.1).

    We provide a new macro "ret" with all its variants for the condition
    code, which will resolve to the appropriate instruction. Rather than
    doing this piecemeal, and missing some instances, change all the
    "mov pc" instances to use the new macro, with the exception of the
    "movs" instruction and the kprobes code. This allows us to detect the
    "mov pc, lr" case and fix it up - and also gives us the possibility
    of deploying this for other registers depending on the CPU selection.

    Reported-by: Will Deacon <will.deacon@arm.com>
    Tested-by: Stephen Warren <swarren@nvidia.com> # Tegra Jetson TK1
    Tested-by: Robert Jarzmik <robert.jarzmik@free.fr> # mioa701_bootresume.S
    Tested-by: Andrew Lunn <andrew@lunn.ch> # Kirkwood
    Tested-by: Shawn Guo <shawn.guo@freescale.com>
    Tested-by: Tony Lindgren <tony@atomide.com> # OMAPs
    Tested-by: Gregory CLEMENT <gregory.clement@free-electrons.com> # Armada XP, 375, 385
    Acked-by: Sekhar Nori <nsekhar@ti.com> # DaVinci
    Acked-by: Christoffer Dall <christoffer.dall@linaro.org> # kvm/hyp
    Acked-by: Haojian Zhuang <haojian.zhuang@gmail.com> # PXA3xx
    Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> # Xen
    Tested-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> # ARMv7M
    Tested-by: Simon Horman <horms+renesas@verge.net.au> # Shmobile
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* Merge branches 'alignment', 'fixes', 'l2c' (early part) and 'misc' into for-next (Russell King, 2014-06-05; 1 file, -1/+1)

| * ARM: 8028/1: move __fixup_smp out of init section (Rob Herring, 2014-05-26; 1 file, -1/+1)

    With large kernel builds such as allyesconfig exceeding maximum
    relative branch offsets, the init section will be too far away to
    branch to directly. This causes veneers to be added by the linker, but
    veneers don't work before the MMU is enabled. Fix this by moving
    __fixup_smp to the .head.text section as it is not very big.

    Signed-off-by: Rob Herring <robh@kernel.org>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* | ARM: 8033/1: fix big endian __pv_phys_pfn_offset size related issue (Victor Kamensky, 2014-04-22; 1 file, -1/+1)

    Commit e26a9e00afc482b971afcaef1db8c9034d4d6d7c ("ARM: Better
    virt_to_page() handling") replaced __pv_phys_offset with
    __pv_phys_pfn_offset. Note that the size of __pv_phys_offset was a
    quad, but the size of __pv_phys_pfn_offset is a word. The instruction
    that updates __pv_phys_offset (whose address is in r6) had to update
    the low word of __pv_phys_offset, so it used the #LOW_OFFSET macro as
    the store offset. Now that __pv_phys_pfn_offset is word sized, there
    should be no difference between little endian and big endian - i.e. no
    offset should be used when __pv_phys_pfn_offset is stored. Note that
    for a little endian image the proposed change is a no-op, since in the
    little endian case #LOW_OFFSET is defined as 0 anyway.

    Reported-by: Taras Kondratiuk <taras.kondratiuk@linaro.org>
    Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org>
    Acked-by: Nicolas Pitre <nico@linaro.org>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

*---. Merge branches 'amba', 'fixes', 'misc', 'mmci', 'unstable/omap-dma' and 'unstable/sa11x0' into for-next (Russell King, 2014-04-04; 1 file, -9/+10)

| | * | ARM: Better virt_to_page() handling (Russell King, 2014-04-03; 1 file, -8/+9)

    virt_to_page() is incredibly inefficient when virt-to-phys patching is
    enabled. This is because we end up with this calculation:

        page = &mem_map[asm virt_to_phys(addr) >> 12 - __pv_phys_offset >> 12]

    in assembly. The asm virt_to_phys() is equivalent to this operation:

        addr - PAGE_OFFSET + __pv_phys_offset

    and we can see that because this is assembly, the compiler has no
    chance to optimise some of that away. This should reduce down to:

        page = &mem_map[(addr - PAGE_OFFSET) >> 12]

    for the common cases. Permit the compiler to make this optimisation
    by giving it more of the information it needs - do this by providing
    a virt_to_pfn() macro.

    Another issue which makes this more complex is that __pv_phys_offset
    is a 64-bit type on all platforms. This is needlessly wasteful - if
    we store the physical offset as a PFN, we can save a lot of work
    having to deal with 64-bit values, which sometimes ends up producing
    incredibly horrid code:

        a4c:   e3009000   movw   r9, #0
                          a4c: R_ARM_MOVW_ABS_NC   __pv_phys_offset
        a50:   e3409000   movt   r9, #0            ; r9 = &__pv_phys_offset
                          a50: R_ARM_MOVT_ABS      __pv_phys_offset
        a54:   e3002000   movw   r2, #0
                          a54: R_ARM_MOVW_ABS_NC   __pv_phys_offset
        a58:   e3402000   movt   r2, #0            ; r2 = &__pv_phys_offset
                          a58: R_ARM_MOVT_ABS      __pv_phys_offset
        a5c:   e5999004   ldr    r9, [r9, #4]      ; r9 = high word of __pv_phys_offset
        a60:   e3001000   movw   r1, #0
                          a60: R_ARM_MOVW_ABS_NC   mem_map
        a64:   e592c000   ldr    ip, [r2]          ; ip = low word of __pv_phys_offset

    Reviewed-by: Nicolas Pitre <nico@linaro.org>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

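    The shape of the resulting macro can be sketched as follows (constants
    invented for illustration; the real definitions live in the ARM
    memory.h headers):

        #include <stdio.h>

        #define PAGE_SHIFT       12
        #define PAGE_OFFSET      0xC0000000UL                 /* illustrative */
        #define PHYS_PFN_OFFSET  (0x80000000UL >> PAGE_SHIFT) /* illustrative */

        /* with a virt_to_pfn() the compiler can fold the constants */
        #define virt_to_pfn(kaddr) \
            ((((unsigned long)(kaddr) - PAGE_OFFSET) >> PAGE_SHIFT) + \
             PHYS_PFN_OFFSET)

        int main(void)
        {
            unsigned long va = 0xC0100000UL;
            /* reduces to (va - PAGE_OFFSET) >> 12 plus a link-time constant */
            printf("pfn = %#lx\n", virt_to_pfn(va));
            return 0;
        }
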
| * | ARM: 7980/1: kernel: improve error message when LPAE config doesn't match CPU (Thomas Petazzoni, 2014-02-21; 1 file, -1/+1)

    Currently, when the kernel is configured with LPAE support, but the
    CPU doesn't support it, the error message is fairly cryptic:

        Error: unrecognized/unsupported processor variant (0x561f5811).

    This message is normally shown when there is an issue when comparing
    the processor ID (CP15 0, c0, c0) with the values/masks described in
    proc-v7.S. However, the same message is displayed when LPAE support is
    enabled in the kernel configuration, but not available in the CPU,
    after looking at ID_MMFR0 (CP15 0, c0, c1, 4). Having the same error
    message is highly misleading.

    This commit improves this by showing a different error message when
    this situation occurs:

        Error: Kernel with LPAE support, but CPU does not support LPAE.

    Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* | ARM: 7947/1: Make pgtbl macro more robust (Christopher Covington, 2014-01-28; 1 file, -1/+2)

    The pgtbl macro couldn't handle the specific (TEXT_OFFSET -
    PG_DIR_SIZE) value that the combination of MSM platforms and LPAE
    created:

        head.S:163: Error: invalid constant (203000) after fixup

    Regardless of whether this combination of configuration options will
    work on currently supported platforms at run time, make it at least
    assemble properly.

    Signed-off-by: Christopher Covington <cov@codeaurora.org>
    Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

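    The "invalid constant" error comes from the ARM data-processing
    immediate encoding: an immediate must be an 8-bit value rotated right
    by an even amount. A quick C checker (illustrative, not kernel code)
    shows 0x203000 cannot be encoded, since its set bits span more than
    8 bit positions:

        #include <stdint.h>
        #include <stdio.h>

        /* can v be encoded as an ARM data-processing immediate
         * (8-bit value rotated right by an even amount)? */
        int arm_valid_imm(uint32_t v)
        {
            for (int r = 0; r < 32; r += 2) {
                /* rotating v left by r undoes a rotate-right-by-r encoding */
                uint32_t rot = r ? (v << r) | (v >> (32 - r)) : v;
                if (rot <= 0xff)
                    return 1;
            }
            return 0;
        }

        int main(void)
        {
            printf("%d\n", arm_valid_imm(0x203000)); /* 0: bits span 12..21 */
            return 0;
        }
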
* | ARM: fix asm/memory.h build error (Russell King, 2013-12-13; 1 file, -1/+1)

    Jason Gunthorpe reports a build failure when ARM_PATCH_PHYS_VIRT is
    not defined:

        In file included from arch/arm/include/asm/page.h:163:0,
                         from include/linux/mm_types.h:16,
                         from include/linux/sched.h:24,
                         from arch/arm/kernel/asm-offsets.c:13:
        arch/arm/include/asm/memory.h: In function '__virt_to_phys':
        arch/arm/include/asm/memory.h:244:40: error: 'PHYS_OFFSET' undeclared (first use in this function)
        arch/arm/include/asm/memory.h:244:40: note: each undeclared identifier is reported only once for each function it appears in
        arch/arm/include/asm/memory.h: In function '__phys_to_virt':
        arch/arm/include/asm/memory.h:249:13: error: 'PHYS_OFFSET' undeclared (first use in this function)

    Fixes: ca5a45c06cd4 ("ARM: mm: use phys_addr_t appropriately in p2v and v2p conversions")
    Tested-By: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* ARM: 7883/1: fix mov to mvn conversion in case of 64 bit phys_addr_t and BE (Victor Kamensky, 2013-11-14; 1 file, -1/+5)

    Fix the patching code to convert a mov instruction into mvn in the
    case of CONFIG_ARCH_PHYS_ADDR_T_64BIT and CONFIG_ARM_PATCH_PHYS_VIRT.
    In the BE case, store the proper bits into r0 so the byte-swapped
    instruction can be modified correctly.

    Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org>
    Reviewed-by: R Sricharan <r.sricharan@ti.com>
    Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* ARM: 7881/1: __fixup_smp read of SCU config should do byteswap in BE case (Victor Kamensky, 2013-11-14; 1 file, -0/+1)

    Commit bc41b8724f24b9a27d1dcc6c974b8f686b38d554 ("ARM: 7846/1: Update
    SMP_ON_UP code to detect A9MPCore with 1 CPU devices") added a read of
    the SCU config register into the __fixup_smp function. Such a read
    must be followed by a byteswap if the kernel runs in BE mode.

    Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org>
    Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

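    Conceptually, the fix amounts to the following (a C sketch; the actual
    change is a single byteswap instruction in assembly, and the register
    offset here is an assumption based on the A9 SCU layout):

        #include <stdint.h>

        uint32_t read_scu_config(volatile uint32_t *scu_base)
        {
            uint32_t v = scu_base[1];  /* SCU Configuration reg, offset 0x04 */
        #ifdef CONFIG_CPU_ENDIAN_BE8
            v = __builtin_bswap32(v);  /* undo the BE view of LE device data */
        #endif
            return v;
        }
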
* Merge branch 'devel-stable' into for-next (Russell King, 2013-11-12; 1 file, -16/+66)

    Conflicts:
        arch/arm/include/asm/atomic.h
        arch/arm/include/asm/hardirq.h
        arch/arm/kernel/smp.c

| * Merge branch 'baserock/bjdooks/312-rc4/be/core-v3' of git://git.baserock.org/delta/linux into devel-stable (Russell King, 2013-10-30; 1 file, -4/+22)

    Conflicts:
        arch/arm/kernel/head.S

    This series has been well tested and it would be great to get this
    merged now.

    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

| | * ARM: set BE8 if LE in head code (Ben Dooks, 2013-10-19; 1 file, -0/+4)

    If we are booting in LE and compiled for BE8, then add code to set the
    state to BE8. Since the instruction stream is always LE, we do not
    need to do anything special to the instruction.

    Also ensure that the secondary processors are started in the same
    mode.

    Note, we do add about 20 bytes to the kernel image, but it seems
    easier to do this than adding another configuration to change.

    Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
    Reviewed-by: Dave Martin <Dave.Martin@arm.com>
    Tested-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>

| | * ARM: fixup_pv_table bug when CPU_ENDIAN_BE8 (Ben Dooks, 2013-10-19; 1 file, -0/+8)

    The fixup_pv_table assumes that the instructions are in the same
    endian configuration as the data, but when the CPU is running in BE8
    the instructions stay in little-endian format.

    Make sure, if CONFIG_CPU_ENDIAN_BE8 is set, that we do all the
    alterations to the instructions taking into account that LDR/STR will
    be swapping the data endian-ness.

    Since the code is only modifying a byte, we avoid dual-swapping the
    data, and just change the bits we clear and ORR in (in the case where
    the code is not thumb2).

    For thumb2, we add the necessary rev16 instructions to ensure that the
    instructions are processed in the correct format, as it was easier
    than re-writing the code to contain a mask and shift.

    Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
    Reviewed-by: Dave Martin <Dave.Martin@arm.com>
    Tested-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>

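    A C sketch of the non-thumb2 case described above (the field masks are
    illustrative; the real code patches the 8-bit immediate of add/sub/mov
    instructions): because instructions stay little-endian under BE8, a
    word fetched with a big-endian data load appears byte-swapped, so the
    bits to clear and ORR in move to the opposite end of the word.

        #include <stdint.h>

        /* patch the low 8-bit immediate of an ARM instruction in memory.
         * Instructions are always LE; under BE8 a 32-bit data load sees
         * them byte-reversed, so the immediate lands in the top byte. */
        void patch_imm8(uint32_t *insn, uint32_t imm8)
        {
        #ifdef CONFIG_CPU_ENDIAN_BE8
            /* swapping the mask/ORR constants, as the commit does,
             * avoids byteswapping the whole word twice */
            *insn = (*insn & ~0xff000000u) | (imm8 << 24);
        #else
            *insn = (*insn & ~0x000000ffu) | imm8;
        #endif
        }
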
| * | ARM: 7870/1: head: Fix the missing underscore in __ARMEB__ macro and .align keyword (Sricharan R, 2013-10-29; 1 file, -1/+2)

    Commit f52bb722547f43caeaecbcc62db9f3c3b80ead9b ("ARM: mm: Correct
    virt_to_phys patching for 64 bit physical addresses") introduced a
    __ARMEB__ macro usage in a new place, but missed the second
    underscore. So correct it here.

    Also, an explicit .align keyword is needed for a label with a .long
    data-type to be aligned on a 4 byte boundary. Otherwise this can cause
    a problem for thumb2 builds. So add it here.

    Signed-off-by: Sricharan R <r.sricharan@ti.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

| * | ARM: mm: Correct virt_to_phys patching for 64 bit physical addresses (Sricharan R, 2013-10-11; 1 file, -16/+47)

    The current phys_to_virt patching mechanism works only for 32 bit
    physical addresses, and this patch extends the idea to 64 bit physical
    addresses.

    The 64 bit v2p patching mechanism patches the higher 8 bits of the
    physical address with a constant using a 'mov' instruction, and the
    lower 32 bits are patched using 'add'. While this is correct, on those
    platforms where the lowmem addressable physical memory spans across
    the 4GB boundary, a carry bit can be produced as a result of the
    addition of the lower 32 bits. This has to be taken into account and
    added into the upper bits. The patched __pv_offset and va are added in
    the lower 32 bits, where __pv_offset can be in two's complement form
    when PA_START < VA_START, and that can result in a false carry bit.

    e.g.
        1) PA = 0x80000000; VA = 0xC0000000
           __pv_offset = PA - VA = 0xC0000000 (2's complement)

        2) PA = 0x2 80000000; VA = 0xC0000000
           __pv_offset = PA - VA = 0x1 C0000000

    So adding __pv_offset + VA should never result in a true overflow for
    (1). So in order to differentiate between a true carry, __pv_offset is
    extended to 64 bits and the upper 32 bits will have 0xffffffff if
    __pv_offset is in two's complement form. So 'mvn #0' is inserted
    instead of 'mov' while patching, for the same reason. Since the mov,
    add and sub instructions are to be patched with different constants
    inside the same stub, the rotation field of the opcode is used to
    differentiate between them.

    So the above examples for v2p translation become, for VA=0xC0000000:

        1) PA[63:32] = 0xffffffff
           PA[31:0]  = VA + 0xC0000000 --> results in a carry
           PA[63:32] = PA[63:32] + carry
           PA[63:0]  = 0x0 80000000

        2) PA[63:32] = 0x1
           PA[31:0]  = VA + 0xC0000000 --> results in a carry
           PA[63:32] = PA[63:32] + carry
           PA[63:0]  = 0x2 80000000

    The above ideas were suggested by Nicolas Pitre <nico@linaro.org> as
    part of the review of the first and second versions of the subject
    patch.

    There is no corresponding change on the phys_to_virt() side, because
    computations on the upper 32 bits would be discarded anyway.

    Cc: Russell King <linux@arm.linux.org.uk>
    Reviewed-by: Nicolas Pitre <nico@linaro.org>
    Signed-off-by: Sricharan R <r.sricharan@ti.com>
    Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>

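    The carry handling above can be checked against a short C model of the
    patched instruction sequence (illustrative only; the real work is done
    by the patched mov/mvn, add and adc instructions):

        #include <stdint.h>
        #include <stdio.h>

        uint64_t virt_to_phys64(uint32_t va, uint64_t pv_offset)
        {
            uint32_t lo    = va + (uint32_t)pv_offset;  /* patched 'add' */
            uint32_t carry = lo < va;                   /* true carry out */
            /* patched 'mov' (or 'mvn #0' for 2's complement offsets),
             * then the carry folded in as by 'adc' */
            uint32_t hi    = (uint32_t)(pv_offset >> 32) + carry;
            return ((uint64_t)hi << 32) | lo;
        }

        int main(void)
        {
            /* example 1: PA = 0x80000000, VA = 0xC0000000 */
            printf("%#llx\n", (unsigned long long)
                   virt_to_phys64(0xC0000000u, 0xffffffffc0000000ull));
            /* example 2: PA = 0x2 80000000, VA = 0xC0000000 */
            printf("%#llx\n", (unsigned long long)
                   virt_to_phys64(0xC0000000u, 0x1c0000000ull));
            return 0;
        }

    Run as-is, this prints 0x80000000 and 0x280000000, matching worked
    examples (1) and (2) above.
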
* / ARM: 7846/1: Update SMP_ON_UP code to detect A9MPCore with 1 CPU devices (Santosh Shilimkar, 2013-10-03; 1 file, -1/+20)

    The generic code is well equipped to differentiate between SMP and UP
    configurations. However, there are some devices which use the
    Cortex-A9 MP core IP with 1 CPU as the configuration. To let these
    SOCs co-exist in a CONFIG_SMP=y build by leveraging the SMP_ON_UP
    support, we need to additionally check the number of cores in a
    Cortex-A9 MPCore configuration. Without such a check in place, the
    startup code tries to execute the ALT_SMP() set of instructions, which
    leads to CPU faults.

    The issue was spotted on TI's Aegis device and this patch now makes
    the device work with omap2plus_defconfig, which enables SMP by
    default. The change is kept limited to only the Cortex-A9 MPCore
    detection code. Note that if any future SoC *does* use 0x0 as the
    PERIPH_BASE, then the SCU address check code needs to be #ifdef'd for
    the Aegis platform.

    Acked-by: Sricharan R <r.sricharan@ti.com>
    Signed-off-by: Vaibhav Bedia <vaibhav.bedia@ti.com>
    Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

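    The additional check reduces to reading the core count from the SCU (a
    hedged C sketch; the SCU Configuration register layout, with the core
    count minus one in bits [1:0] at offset 0x04, is assumed from ARM's
    Cortex-A9 MPCore documentation):

        #include <stdint.h>

        /* is this an A9 MPCore configured with a single CPU? If so, the
         * SMP_ON_UP fixups must be applied as on a true UP system. */
        int a9_mpcore_is_up(volatile uint32_t *scu_base)
        {
            uint32_t ncores = (scu_base[1] & 0x3) + 1;
            return ncores == 1;
        }
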
* ARM: Add .text annotations where required after __CPUINIT removal (Russell King, 2013-08-01; 1 file, -0/+1)

    Commit 8bd26e3a7 (arm: delete __cpuinit/__CPUINIT usage from all ARM
    users) caused some code to leak into sections which are discarded
    through the removal of __CPUINIT annotations. Add appropriate .text
    annotations to bring these back into the kernel text.

    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* arm: delete __cpuinit/__CPUINIT usage from all ARM users (Paul Gortmaker, 2013-07-15; 1 file, -1/+0)

    The __cpuinit type of throwaway sections might have made sense some
    time ago when RAM was more constrained, but now the savings do not
    offset the cost and complications. For example, the fix in commit
    5e427ec2d0 ("x86: Fix bit corruption at CPU resume time") is a good
    example of the nasty type of bugs that can be created with improper
    use of the various __init prefixes.

    After a discussion on LKML[1] it was decided that cpuinit should go
    the way of devinit and be phased out. Once all the users are gone, we
    can then finally remove the macros themselves from linux/init.h.

    Note that some harmless section mismatch warnings may result, since
    notify_cpu_starting() and cpu_up() are arch independent (kernel/cpu.c)
    and are flagged as __cpuinit -- so if we remove the __cpuinit from the
    arch specific callers, we will also get section mismatch warnings. As
    an intermediate step, we intend to turn the linux/init.h cpuinit
    related content into no-ops as early as possible, since that will get
    rid of these warnings. In any case, they are temporary and harmless.

    This removes all the ARM uses of the __cpuinit macros from C code, and
    all __CPUINIT from assembly code. It also had two ".previous" section
    statements that were paired off against __CPUINIT (aka .section
    ".cpuinit.text") that also get removed here.

    [1] https://lkml.org/lkml/2013/5/20/589

    Cc: Russell King <linux@arm.linux.org.uk>
    Cc: Will Deacon <will.deacon@arm.com>
    Cc: linux-arm-kernel@lists.infradead.org
    Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>

* ARM: LPAE: accomodate >32-bit addresses for page table base (Cyril Chemparathy, 2013-05-30; 1 file, -6/+4)

    This patch redefines the early boot time use of the R4 register to
    steal a few low order bits (ARCH_PGD_SHIFT bits) on LPAE systems. This
    allows for up to 38-bit physical addresses.

    Signed-off-by: Cyril Chemparathy <cyril@ti.com>
    Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
    Acked-by: Nicolas Pitre <nico@linaro.org>
    Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
    Tested-by: Subash Patel <subash.rp@samsung.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>

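    The bit-stealing trick in C form (the ARCH_PGD_SHIFT value below is an
    assumption consistent with the stated 38-bit limit; the pgdir's
    natural alignment guarantees the stolen low bits are zero):

        #include <stdint.h>

        #define ARCH_PGD_SHIFT 6   /* assumed: 32 + 6 = 38 usable PA bits */

        uint32_t pack_pgdir(uint64_t pgd_phys)
        {
            /* low ARCH_PGD_SHIFT bits are zero by alignment: lossless */
            return (uint32_t)(pgd_phys >> ARCH_PGD_SHIFT);
        }

        uint64_t unpack_pgdir(uint32_t r4)
        {
            return (uint64_t)r4 << ARCH_PGD_SHIFT;
        }
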
* ARM: 7690/1: mm: fix CONFIG_LPAE typos (Paul Bolle, 2013-04-03; 1 file, -1/+1)

    CONFIG_LPAE doesn't exist: the correct option is CONFIG_ARM_LPAE, so
    fix up the two typos under arch/arm/.

    The fix to head.S is slightly scary, but this is just for setting up
    an early io-mapping for the serial port when running on a big-endian,
    LPAE system. Since these systems don't exist in the wild (at least, I
    have no access to one outside of kvmtool, which doesn't provide a
    serial port suitable for earlyprintk), then we can revisit the code
    later if it causes any problems.

    Signed-off-by: Paul Bolle <pebolle@tiscali.nl>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* ARM: 7657/1: head: fix swapper and idmap population with LPAE and big-endian (Will Deacon, 2013-03-03; 1 file, -4/+22)

    The LPAE page table format uses 64-bit descriptors, so we need to take
    endianness into account when populating the swapper and idmap tables
    during early initialisation.

    This patch ensures that we store the two words making up each page
    table entry in the correct order when running big-endian.

    Cc: <stable@vger.kernel.org>
    Acked-by: Catalin Marinas <catalin.marinas@arm.com>
    Tested-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* Merge branch 'for-rmk/virt/hyp-boot/fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/will/linux into fixes (Russell King, 2013-01-19; 1 file, -1/+1)

| * ARM: virt: boot secondary CPUs through the right entry point (Marc Zyngier, 2013-01-10; 1 file, -1/+1)

    Secondary CPUs should use the __hyp_stub_install_secondary entry
    point, so boot mode inconsistencies can be detected.

    Cc: <stable@vger.kernel.org>
    Acked-by: Dave Martin <dave.martin@linaro.org>
    Reported-by: Ian Molton <ian.molton@collabora.co.uk>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>

* | ARM: 7628/1: head.S: map one extra section for the ATAG/DTB area (Nicolas Pitre, 2013-01-16; 1 file, -0/+3)

    We currently use a temporary 1MB section aligned to a 1MB boundary for
    mapping the provided device tree until the final page table is
    created. However, if the device tree happens to cross that 1MB
    boundary, the end of it remains unmapped and the kernel crashes when
    it attempts to access it. Given no restriction on the location of that
    DTB, it could end up with only a few bytes mapped at the end of a
    section.

    Solve this issue by mapping two consecutive sections.

    Signed-off-by: Nicolas Pitre <nico@linaro.org>
    Tested-by: Sascha Hauer <s.hauer@pengutronix.de>
    Tested-by: Tomasz Figa <t.figa@samsung.com>
    Cc: stable@vger.kernel.org
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

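    The failure mode is easy to reproduce with section arithmetic (a C
    sketch with invented addresses):

        #include <stdio.h>

        #define SECTION_SIZE 0x100000UL   /* 1MB sections */

        int main(void)
        {
            unsigned long dtb = 0x800fff00UL, len = 0x400;  /* hypothetical */
            unsigned long first = dtb & ~(SECTION_SIZE - 1);
            unsigned long last  = (dtb + len - 1) & ~(SECTION_SIZE - 1);
            /* first != last here: a single-section mapping leaves the
             * DTB tail unmapped; mapping two consecutive sections
             * unconditionally covers both cases */
            printf("%#lx %#lx\n", first, last);
            return 0;
        }
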
* Merge branch 'fixes' into for-linus (Russell King, 2012-10-11; 1 file, -2/+2)

    Conflicts:
        arch/arm/kernel/smp.c

| * ARM: move debug macros to common location (Rob Herring, 2012-09-14; 1 file, -2/+2)

    Based on a suggestion by Russell King, create a common location for
    debug macros and select the included debug macro file using a config
    option.

    Signed-off-by: Rob Herring <rob.herring@calxeda.com>
    Cc: Russell King <linux@arm.linux.org.uk>

* | ARM: virt: allow the kernel to be entered in HYP mode (Dave Martin, 2012-09-19; 1 file, -3/+11)

    This patch does two things:

      * Ensure that asynchronous aborts are masked at kernel entry. The
        bootloader should be masking these anyway, but this reduces the
        damage window just in case it doesn't.

      * Enter svc mode via exception return to ensure that CPU state is
        properly serialised. This does not matter when switching from an
        ordinary privileged mode ("PL1" modes in ARMv7-AR rev C parlance),
        but it potentially does matter when switching from another
        privileged mode such as hyp mode.

    This should allow the kernel to boot safely either from svc mode or
    hyp mode, even if no support for use of the ARM Virtualization
    Extensions is built into the kernel.

    Signed-off-by: Dave Martin <dave.martin@linaro.org>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

* ARM: 7439/1: head.S: simplify initial page table mapping (Nicolas Pitre, 2012-07-09; 1 file, -36/+23)

    Let's map the initial RAM up to the end of the kernel .bss instead of
    the strict kernel image area. This simplifies the code as the kernel
    image only needs to be handled specially in the XIP case. That covers
    the legacy ATAG location as well.

    Signed-off-by: Nicolas Pitre <nico@linaro.org>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* ARM: 7363/1: DEBUG_LL: limit early mapping to the minimum (Nicolas Pitre, 2012-05-04; 1 file, -8/+1)

    There is just no point mapping up to 512MB for a serial port. A single
    1MB entry is more than sufficient for all users. This will create less
    interference for the following debugging patch.

    Signed-off-by: Nicolas Pitre <nico@linaro.org>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* ARM: 7338/1: add support for early console output via semihosting (Nicolas Pitre, 2012-03-24; 1 file, -4/+4)

    This is a very simple method for code running in an emulator, or under
    the supervision of a debugger, to use I/O facilities on the
    controlling host.

    Tested with OpenOCD, and ARM's Fast Models. Details on semihosting can
    be found in chapter 8 of DUI0203I_rvct_developer_guide.pdf from ARM
    Ltd.

    Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* ARM: move CP15 definitions to separate header file (Russell King, 2012-03-24; 1 file, -1/+1)

    Avoid namespace conflicts with drivers over the CP15 definitions by
    moving CP15 related prototypes and definitions to a private header
    file.

    Acked-by: Stephen Warren <swarren@nvidia.com>
    Tested-by: Stephen Warren <swarren@nvidia.com> [Tegra]
    Acked-by: H Hartley Sweeten <hsweeten@visionengravers.com>
    Tested-by: H Hartley Sweeten <hsweeten@visionengravers.com> [EP93xx]
    Acked-by: Nicolas Pitre <nico@linaro.org>
    Acked-by: Kukjin Kim <kgene.kim@samsung.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* ARM: 7275/1: LPAE: Check the CPU support for the long descriptor format (Catalin Marinas, 2012-01-13; 1 file, -0/+8)

    This patch adds a check for the presence of the LPAE feature during
    the CPU initialisation. If not present, it reports an error when
    CONFIG_DEBUG_LL is enabled.

    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    Acked-by: Nicolas Pitre <nico@linaro.org>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* Merge branch 'for-rmk' of git://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/linux into devel-stable (Russell King, 2011-12-08; 1 file, -2/+45)

    Conflicts:
        arch/arm/mm/ioremap.c

| * ARM: LPAE: MMU setup for the 3-level page table format (Catalin Marinas, 2011-12-08; 1 file, -2/+43)

    This patch adds the MMU initialisation for the LPAE page table format.
    The swapper_pg_dir size with LPAE is 5 rather than 4 pages. A new
    proc-v7-3level.S file contains the TTB initialisation, context switch
    and PTE setting code with the LPAE. The TTBRx split is based on the
    PAGE_OFFSET with TTBR1 used for the kernel mappings. The 36-bit
    mappings (supersections) and a few other memory types in mmu.c are
    conditionally compiled.

    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

| * ARM: LPAE: add ISBs around MMU enabling code (Will Deacon, 2011-12-08; 1 file, -0/+2)

    Before we enable the MMU, we must ensure that the TTBR registers
    contain sane values. After the MMU has been enabled, we jump to the
    *virtual* address of the following function, so we also need to
    ensure that the SCTLR write has taken effect.

    This patch adds ISB instructions around the SCTLR write to ensure the
    visibility of the above.

    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

* | Merge branch 'kexec/idmap' of git://git.kernel.org/pub/scm/linux/kernel/git/will/linux into devel-stable (Russell King, 2011-12-06; 1 file, -8/+10)

| * ARM: SMP: use idmap_pgd for mapping MMU enable during secondary booting (Will Deacon, 2011-12-06; 1 file, -1/+3)

    The ARM SMP booting code allocates a temporary set of page tables
    containing an identity mapping of the kernel image and provides this
    to secondary CPUs for initial booting.

    In reality, we only need to include the __turn_mmu_on function in the
    identity mapping since the rest of the kernel is executing from
    virtual addresses after this point.

    This patch adds __turn_mmu_on to the .idmap.text section, allowing the
    SMP booting code to use the idmap_pgd directly and not have to
    populate its own set of page tables. As a result of this patch, we can
    make the identity_mapping_add function static (since it is only used
    within mm/idmap.c) and also remove the identity_mapping_del function.
    The identity map population is moved to an early initcall so that it
    is setup in time for secondary CPU bringup.

    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>

| * ARM: head.S: only include __turn_mmu_on in the initial identity mapping (Will Deacon, 2011-12-06; 1 file, -7/+7)

    __create_page_tables identity maps the region of memory from
    __enable_mmu to the end of __turn_mmu_on. In preparation for including
    __turn_mmu_on in the .idmap.text section, this patch modifies the
    identity mapping so that it only includes the __turn_mmu_on code.

    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>

* | ARM: 7150/1: Allow kernel unaligned accesses on ARMv6+ processors (Catalin Marinas, 2011-11-08; 1 file, -1/+1)

    Recent gcc versions generate unaligned accesses by default on ARMv6
    and later processors. This patch ensures that the SCTLR.A bit is
    always cleared on such processors to avoid the kernel trapping before
    alignment_init() is called.

    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    Tested-by: John Linn <John.Linn@xilinx.com>
    Acked-by: Nicolas Pitre <nico@linaro.org>
    Cc: stable@vger.kernel.org
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

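    In effect (a C sketch; the real change adjusts the literal masks
    applied to SCTLR in head.S, and SCTLR.A being bit 1 follows the ARM
    architecture manual):

        #include <stdint.h>

        #define SCTLR_A (1u << 1)   /* alignment-check enable */

        uint32_t sctlr_for_v6plus(uint32_t sctlr)
        {
            /* gcc may emit unaligned loads/stores on ARMv6+; keep them
             * from trapping before alignment_init() runs */
            return sctlr & ~SCTLR_A;
        }
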
* Merge branch 'devel-stable' of http://ftp.arm.linux.org.uk/pub/linux/arm/kernel/git-cur/linux-2.6-arm (Linus Torvalds, 2011-10-28; 1 file, -2/+2)

    * 'devel-stable' of http://ftp.arm.linux.org.uk/pub/linux/arm/kernel/git-cur/linux-2.6-arm: (178 commits)
        ARM: 7139/1: fix compilation with CONFIG_ARM_ATAG_DTB_COMPAT and large TEXT_OFFSET
        ARM: gic, local timers: use the request_percpu_irq() interface
        ARM: gic: consolidate PPI handling
        ARM: switch from NO_MACH_MEMORY_H to NEED_MACH_MEMORY_H
        ARM: mach-s5p64x0: remove mach/memory.h
        ARM: mach-s3c64xx: remove mach/memory.h
        ARM: plat-mxc: remove mach/memory.h
        ARM: mach-prima2: remove mach/memory.h
        ARM: mach-zynq: remove mach/memory.h
        ARM: mach-bcmring: remove mach/memory.h
        ARM: mach-davinci: remove mach/memory.h
        ARM: mach-pxa: remove mach/memory.h
        ARM: mach-ixp4xx: remove mach/memory.h
        ARM: mach-h720x: remove mach/memory.h
        ARM: mach-vt8500: remove mach/memory.h
        ARM: mach-s5pc100: remove mach/memory.h
        ARM: mach-tegra: remove mach/memory.h
        ARM: plat-tcc: remove mach/memory.h
        ARM: mach-mmp: remove mach/memory.h
        ARM: mach-cns3xxx: remove mach/memory.h
        ...

    Fix up mostly pretty trivial conflicts in:
        - arch/arm/Kconfig
        - arch/arm/include/asm/localtimer.h
        - arch/arm/kernel/Makefile
        - arch/arm/mach-shmobile/board-ap4evb.c
        - arch/arm/mach-u300/core.c
        - arch/arm/mm/dma-mapping.c
        - arch/arm/mm/proc-v7.S
        - arch/arm/plat-omap/Kconfig

    largely due to some CONFIG option renaming (ie CONFIG_PM_SLEEP ->
    CONFIG_ARM_CPU_SUSPEND for the arm-specific suspend code etc) and the
    addition of NEED_MACH_MEMORY_H next to HAVE_IDE.