path: root/arch
* Merge branch 'devel-stable' into for-linus
    Russell King, 2012-01-05 (425 files changed, -4049/+3243)
    Conflicts:
        arch/arm/kernel/setup.c
        arch/arm/mach-shmobile/board-kota2.c
| * ARM: 7269/1: mach-sa1100: fix sched_clock breakage
    Linus Walleij, 2012-01-05 (1 file changed, -1/+1)
    Fixed up a simple typo in the runtime sched_clock conversion so we
    compile again.
    Cc: Kristoffer Ericson <kristoffer.ericson@gmail.com>
    Acked-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| * Merge branch 'vmalloc' of git://git.linaro.org/people/nico/linux into devel-stable
    Russell King, 2012-01-04 (6 files changed, -17/+0)
| | * Revert "ARM: move VMALLOC_END down temporarily for shmobile"
    Nicolas Pitre, 2012-01-03 (1 file changed, -7/+0)
    This reverts commit 0af362f8440a78b970d5f215e234420fa87d0f3f as
    shmobile is not using a non-standard memory layout anymore.
    Signed-off-by: Nicolas Pitre <nico@linaro.org>
| | * ARM: mach-shmobile: use standard 2MiB coherent DMA memory size
    Magnus Damm, 2012-01-03 (5 files changed, -10/+0)
    The 158MiB memory area was used to support HD resolution multimedia
    workloads using the same legacy memory allocating solution as on SH.
    There are no in-tree kernel dependencies on the 158MiB setting, and
    future development should reserve and allocate memory using some other
    method like for instance CMA.
    Signed-off-by: Magnus Damm <damm@opensource.se>
    Signed-off-by: Nicolas Pitre <nico@linaro.org>
| * | ARM: 7236/1: vic: always use simple ops
    Jamie Iles, 2012-01-03 (1 file changed, -1/+1)
    Now that irq_domain_simple_ops are available for non-DT users, use them
    in the VIC driver so that we don't get a NULL dereference in
    irq_domain_to_irq() when registering the domain.
    Cc: Linus Walleij <linus.walleij@stericsson.com>
    Signed-off-by: Jamie Iles <jamie@jamieiles.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| * | Merge branch 'arm/common-kconfig-refactor+for-rmk' of git://git.linaro.org/people/dmart/linux-2.6-arm into devel-stable
    Russell King, 2011-12-19 (9 files changed, -12/+56)
| | * | imx6q: Remove unconditional dependency on l2x0 L2 cache support
    Dave Martin, 2011-12-19 (1 file changed, -1/+0)
    The i.MX6 Quad SoC will work without the l2x0 L2 cache controller
    support built into the kernel, so this patch removes the dependency on
    CACHE_L2X0. This makes the l2x0 support optional, so that it can be
    turned off when desired for debugging purposes etc.
    Since SOC_IMX6Q already depends on ARCH_IMX_V6_V7 and ARCH_IMX_V6_V7
    selects MIGHT_HAVE_CACHE_L2X0, there is no need to select that option
    explicitly from SOC_IMX6Q. Thanks to Shawn Guo for this suggestion. [1]
    [1] http://lists.infradead.org/pipermail/linux-arm-kernel/2011-November/074602.html
    Acked-by: Shawn Guo <shawn.guo@linaro.org>
    Tested-by: Shawn Guo <shawn.guo@linaro.org>
    Acked-by: Sascha Hauer <s.hauer@pengutronix.de>
| | * | highbank: Unconditionally require l2x0 L2 cache controller support
    Dave Martin, 2011-12-19 (1 file changed, -1/+1)
    If running in the Normal World on a TrustZone-enabled SoC, Linux does
    not have complete control over the L2 cache controller configuration.
    The kernel cannot work reliably on such platforms without the l2x0
    cache support code built in. This patch unconditionally enables l2x0
    support for the Highbank SoC. Thanks to Rob Herring for this
    suggestion. [1]
    [1] http://lists.infradead.org/pipermail/linux-arm-kernel/2011-November/074495.html
    Signed-off-by: Dave Martin <dave.martin@linaro.org>
    Acked-by: Rob Herring <rob.herring@calxeda.com>
| | * | omap4: Unconditionally require l2x0 L2 cache controller support
    Dave Martin, 2011-12-19 (1 file changed, -1/+1)
    If running in the Normal World on a TrustZone-enabled SoC, Linux does
    not have complete control over the L2 cache controller configuration.
    The kernel cannot work reliably on such platforms without the l2x0
    cache support code built in. This patch unconditionally enables l2x0
    support for the OMAP4 SoCs. Thanks to Rob Herring for this
    suggestion. [1]
    [1] http://lists.infradead.org/pipermail/linux-arm-kernel/2011-November/074495.html
    Signed-off-by: Dave Martin <dave.martin@linaro.org>
    Acked-by: Tony Lindgren <tony@atomide.com>
| | * | ARM: SMP: Refactor Kconfig to be more maintainable
    Dave Martin, 2011-12-19 (7 files changed, -4/+23)
    Making SMP depend on (huge list of MACH_ and ARCH_ configs) is
    bothersome to maintain and likely to lead to merge conflicts. This
    patch moves the knowledge of which platforms are SMP-capable to the
    individual machines. To enable this, a new HAVE_SMP config option is
    introduced to allow machines to indicate that they can run in a SMP
    configuration.
    Signed-off-by: Dave Martin <dave.martin@linaro.org>
    Acked-by: Linus Walleij <linus.walleij@linaro.org> (for nomadik, ux500)
    Acked-by: Tony Lindgren <tony@atomide.com> (for omap)
    Acked-by: Kukjin Kim <kgene.kim@samsung.com> (for exynos)
    Acked-by: Sascha Hauer <s.hauer@pengutronix.de> (for imx)
    Acked-by: Olof Johansson <olof@lixom.net> (for tegra)
| | * | ARM: l2x0/pl310: Refactor Kconfig to be more maintainable
    Dave Martin, 2011-12-19 (7 files changed, -7/+33)
    Making CACHE_L2X0 depend on (huge list of MACH_ and ARCH_ configs) is
    bothersome to maintain and likely to lead to merge conflicts. This
    patch moves the knowledge of which platforms have a L2x0 or PL310
    cache controller to the individual machines. To enable this, a new
    MIGHT_HAVE_CACHE_L2X0 config option is introduced to allow machines to
    indicate that they may have such a cache controller independently of
    each other.
    Boards/SoCs which cannot reliably operate without the L2 cache
    controller support will need to select CACHE_L2X0 directly from their
    own Kconfigs instead. This applies to some TrustZone-enabled boards
    where Linux runs in the Normal World, for example.
    Signed-off-by: Dave Martin <dave.martin@linaro.org>
    Acked-by: Anton Vorontsov <cbouatmailru@gmail.com> (for cns3xxx)
    Acked-by: Tony Lindgren <tony@atomide.com> (for omap)
    Acked-by: Shawn Guo <shawn.guo@linaro.org> (for imx)
    Acked-by: Kukjin Kim <kgene.kim@samsung.com> (for exynos)
    Acked-by: Sascha Hauer <s.hauer@pengutronix.de> (for imx)
    Acked-by: Olof Johansson <olof@lixom.net> (for tegra)
| * | | ARM: 7233/1: ux500: remove overlapping iotable entries
    Linus Walleij, 2011-12-19 (2 files changed, -6/+4)
    The overlapping iotable mapping entries for the ux500 Cortex A9 SCU,
    CPU control and TWD are no longer accepted by the kernel. Remove the
    overlaps so the machine boots again.
    Cc: Srinidhi Kasagar <srinidhi.kasagar@stericsson.com>
    Cc: Rabin Vincent <rabin.vincent@stericsson.com>
    Reported-by: Daniel Lezcano <daniel.lezcano@linaro.org>
    Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
    Signed-off-by: Linus Walleij <linus.walleij@stericsson.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| * | | ARM: 7205/2: sched_clock: allow sched_clock to be selected at runtime
    Marc Zyngier, 2011-12-19 (19 files changed, -435/+161)
    sched_clock() is yet another blocker on the road to the single image.
    This patch implements an idea by Russell King:
    http://www.spinics.net/lists/linux-omap/msg49561.html
    Instead of asking the platform to implement both sched_clock() itself
    and the rollover callback, simply register a read() function, and let
    the ARM code care about sched_clock() itself, the conversion to ns and
    the rollover. sched_clock() uses this read() function as an
    indirection to the platform code. If the platform doesn't provide a
    read(), the code falls back to the jiffy counter (just like the
    default sched_clock).
    This allow some simplifications and possibly some footprint gain when
    multiple platforms are compiled in. Among the drawbacks, the removal
    of the *_fixed_sched_clock optimization which could negatively impact
    some platforms (sa1100, tegra, versatile and omap).
    Tested on 11MPCore, OMAP4 and Tegra.
    Cc: Imre Kaloz <kaloz@openwrt.org>
    Cc: Eric Miao <eric.y.miao@gmail.com>
    Cc: Colin Cross <ccross@android.com>
    Cc: Erik Gilling <konkers@android.com>
    Cc: Olof Johansson <olof@lixom.net>
    Cc: Sascha Hauer <kernel@pengutronix.de>
    Cc: Alessandro Rubini <rubini@unipv.it>
    Cc: STEricsson <STEricsson_nomadik_linux@list.st.com>
    Cc: Lennert Buytenhek <kernel@wantstofly.org>
    Cc: Ben Dooks <ben-linux@fluff.org>
    Tested-by: Jamie Iles <jamie@jamieiles.com>
    Tested-by: Tony Lindgren <tony@atomide.com>
    Tested-by: Kyungmin Park <kyungmin.park@samsung.com>
    Acked-by: Linus Walleij <linus.walleij@linaro.org>
    Acked-by: Nicolas Pitre <nico@linaro.org>
    Acked-by: Krzysztof Halasa <khc@pm.waw.pl>
    Acked-by: Kukjin Kim <kgene.kim@samsung.com>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
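    A minimal sketch of what a platform's timer code looks like under this
    scheme. This is not taken from the patch: the setup_sched_clock() name
    and signature, the register address and the clock rate are assumed here
    for illustration only.

        #include <linux/init.h>
        #include <linux/io.h>
        #include <asm/sched_clock.h>

        #define MY_COUNTER  IOMEM(0xfe001000)       /* hypothetical free-running counter */

        static u32 notrace my_read_sched_clock(void)
        {
                return readl_relaxed(MY_COUNTER);   /* raw 32-bit counter value */
        }

        static void __init my_timer_init(void)
        {
                /* The core ARM code converts the counter to nanoseconds and
                 * handles rollover; the platform only states width and rate. */
                setup_sched_clock(my_read_sched_clock, 32, 24000000); /* 32 bits @ 24 MHz */
        }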
| * | | ARM: kexec: use soft_restart for branching to the reboot buffer
    Will Deacon, 2011-12-12 (1 file changed, -12/+3)
    Now that there is a common way to reset the machine, let's use it
    instead of reinventing the wheel in the kexec backend.
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * | | ARM: stop: execute platform callback from cpu_stop code
    Will Deacon, 2011-12-12 (2 files changed, -1/+5)
    Sending IPI_CPU_STOP to a CPU causes it to execute a busy cpu_relax
    loop forever. This makes it impossible to kexec successfully on an SMP
    system since the secondary CPUs do not reset. This patch adds a
    callback to platform_cpu_kill, defined when CONFIG_HOTPLUG_CPU=y, from
    the ipi_cpu_stop handling code. This function currently just returns 1
    on all platforms that define it but allows them to do something more
    sophisticated in the future.
    Signed-off-by: Will Deacon <will.deacon@arm.com>
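    As a sketch of the hook this adds, in the trivial form the commit
    describes (the point is that a platform can later do something stronger
    here, such as holding the stopped CPU in reset):

        #ifdef CONFIG_HOTPLUG_CPU
        /* Reached from the IPI_CPU_STOP path; returning 1 simply reports
         * that the stopped CPU is considered dead. */
        int platform_cpu_kill(unsigned int cpu)
        {
                return 1;
        }
        #endif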
| * | | ARM: reset: implement soft_restart for jumping to a physical address
    Will Deacon, 2011-12-12 (1 file changed, -10/+40)
    Tools such as kexec and CPU hotplug require a way to reset the
    processor and branch to some code in physical space. This requires
    various bits of jiggery pokery with the caches and MMU which, when it
    goes wrong, tends to lock up the system. This patch fleshes out the
    soft_restart implementation so that it branches to the reset code
    using the identity mapping. This requires us to change to a temporary
    stack, held within the kernel image as a static array, to avoid
    conflicting with the new view of memory.
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * | | ARM: lib: add call_with_stack function for safely changing stack
    Will Deacon, 2011-12-12 (2 files changed, -1/+46)
    When disabling the MMU, it is necessary to take out a 1:1 identity map
    of the reset code so that it can safely be executed with and without
    the MMU active. To avoid the situation where the physical address of
    the reset code aliases with the virtual address of the active stack
    (which cannot be included in the 1:1 mapping), it is desirable to
    change to a new stack at a location which is less likely to alias.
    This code adds a new lib function, call_with_stack:
    void call_with_stack(void (*fn)(void *), void *arg, void *sp);
    which changes the stack to point at the sp parameter, before invoking
    fn(arg) with the new stack selected.
    Reviewed-by: Nicolas Pitre <nicolas.pitre@linaro.org>
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Dave Martin <dave.martin@linaro.org>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
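    A hedged sketch of how the helper fits together with the soft_restart()
    change above; the stack array size, the _soft_restart() callee and the
    wrapper name are illustrative, not the exact mainline code:

        #include <linux/kernel.h>   /* ARRAY_SIZE */
        #include <linux/types.h>

        extern void call_with_stack(void (*fn)(void *), void *arg, void *sp);

        static u64 soft_restart_stack[16];     /* temporary stack inside the kernel image */

        static void _soft_restart(void *addr)  /* runs on the temporary stack */
        {
                /* ...flush caches, switch to the identity map, disable the
                 * MMU, then jump to the physical address held in addr... */
        }

        static void soft_restart_sketch(unsigned long addr)
        {
                /* ARM stacks grow downwards, so pass the end of the array as sp. */
                void *sp = soft_restart_stack + ARRAY_SIZE(soft_restart_stack);

                call_with_stack(_soft_restart, (void *)addr, sp);
        }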
| * | | ARM: 7183/1: vic: register the VIC for ST-modified VIC's
    Jamie Iles, 2011-12-11 (1 file changed, -2/+3)
    When probing the VIC, the ST variant has a different probing method to
    account for the extra interrupts which meant we didn't previously call
    vic_register() which registered the irq_domain.
    Acked-by: Linus Walleij <linus.walleij@stericsson.com>
    Cc: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Jamie Iles <jamie@jamieiles.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| * | | Merge branch 'for-rmk' of git://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/linux into devel-stable
    Russell King, 2011-12-08 (32 files changed, -322/+1199)
    Conflicts:
        arch/arm/mm/ioremap.c
| | * | | ARM: LPAE: Add the Kconfig entries
    Catalin Marinas, 2011-12-08 (2 files changed, -1/+18)
    This patch adds the ARM_LPAE and ARCH_PHYS_ADDR_T_64BIT Kconfig
    entries allowing LPAE support to be compiled into the kernel.
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
| | * | | ARM: LPAE: mark memory banks with start > ULONG_MAX as highmem
    Will Deacon, 2011-12-08 (1 file changed, -1/+15)
    Memory banks living outside of the 32-bit physical address space do
    not have a 1:1 pa <-> va mapping and therefore the __va macro may
    wrap. This patch ensures that such banks are marked as highmem so that
    the kernel doesn't try to split them up when it sees that the wrapped
    virtual address overlaps the vmalloc space.
    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    Acked-by: Nicolas Pitre <nico@linaro.org>
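    The argument in code form, as an illustrative helper (the name and
    placement are assumptions, only the ULONG_MAX comparison comes from the
    commit): on 32-bit ARM, ULONG_MAX is 0xffffffff, so a bank that starts
    above it can never have a linear __va() mapping and must be highmem.

        #include <linux/kernel.h>
        #include <linux/types.h>

        static bool bank_must_be_highmem(phys_addr_t start)
        {
        #ifdef CONFIG_ARM_LPAE
                /* No 1:1 kernel mapping is possible: __va() would wrap. */
                if (start > ULONG_MAX)
                        return true;
        #endif
                return false;  /* otherwise the usual vmalloc-limit test decides */
        }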
| | * | | ARM: LPAE: Add identity mapping support for the 3-level page table format
    Catalin Marinas, 2011-12-08 (1 file changed, -1/+27)
    With LPAE, the pgd is a separate page table with entries pointing to
    the pmd. The identity_mapping_add() function needs to ensure that the
    pgd is populated before populating the pmd level. The do..while blocks
    now loop over the pmd in order to have the same implementation for the
    two page table formats. The pmd_addr_end() definition has been removed
    and the generic one used instead. The pmd clean-up is done in the
    pgd_free() function.
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
| | * | | ARM: LPAE: Add context switching support
    Catalin Marinas, 2011-12-08 (1 file changed, -2/+17)
    With LPAE, TTBRx registers are 64-bit. The ASID is stored in TTBR0
    rather than a separate Context ID register. This patch makes the
    necessary changes to handle context switching on LPAE.
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
| | * | | ARM: LPAE: Add fault handling support
    Catalin Marinas, 2011-12-08 (6 files changed, -5/+104)
    The DFSR and IFSR register format is different when LPAE is enabled.
    In addition, DFSR and IFSR have similar definitions for the fault
    type. This modifies the fault code to correctly handle the new format.
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
| | * | | ARM: LPAE: Invalidate the TLB before freeing the PMD
    Catalin Marinas, 2011-12-08 (1 file changed, -1/+10)
    Similar to the PTE freeing, this patch introduced __pmd_free_tlb()
    which invalidates the TLB before freeing a PMD page. This is needed
    because on newer processors the entry in the upper page table may be
    cached by the TLB and point to random data after the PMD has been
    freed.
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
| | * | | ARM: LPAE: MMU setup for the 3-level page table format
    Catalin Marinas, 2011-12-08 (5 files changed, -12/+243)
    This patch adds the MMU initialisation for the LPAE page table format.
    The swapper_pg_dir size with LPAE is 5 rather than 4 pages. A new
    proc-v7-3level.S file contains the TTB initialisation, context switch
    and PTE setting code with the LPAE. The TTBRx split is based on the
    PAGE_OFFSET with TTBR1 used for the kernel mappings. The 36-bit
    mappings (supersections) and a few other memory types in mmu.c are
    conditionally compiled.
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
| | * | | ARM: LPAE: Page table maintenance for the 3-level format
    Catalin Marinas, 2011-12-08 (5 files changed, -7/+150)
    This patch modifies the pgd/pmd/pte manipulation functions to support
    the 3-level page table format. Since there is no need for an 'ext'
    argument to cpu_set_pte_ext(), this patch conditionally defines a
    different prototype for this function when CONFIG_ARM_LPAE. The patch
    also introduces the L_PGD_SWAPPER flag to mark pgd entries pointing to
    pmd tables pre-allocated in the swapper_pg_dir and avoid trying to
    free them at run-time. This flag is 0 with the classic page table
    format.
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
| | * | | ARM: LPAE: Introduce the 3-level page table format definitions
    Catalin Marinas, 2011-12-08 (6 files changed, -0/+261)
    This patch introduces the pgtable-3level*.h files with definitions
    specific to the LPAE page table format (3 levels of page tables). Each
    table is 4KB and has 512 64-bit entries. An entry can point to a
    40-bit physical address. The young, write and exec software bits share
    the corresponding hardware bits (negated). Other software bits use
    spare bits in the PTE. The patch also changes some variable types from
    unsigned long or int to pteval_t or pgprot_t.
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
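    The geometry described above, written out as constants. The names are
    illustrative, but the values follow directly from the commit text
    (4KB tables holding 512 entries of 8 bytes, 4KB pages):

        #define LPAE_PAGE_SHIFT      12                     /* 4KB pages                   */
        #define LPAE_PTRS_PER_TABLE  512                    /* 4096 bytes / 8-byte entries */
        #define LPAE_PMD_SHIFT       (LPAE_PAGE_SHIFT + 9)  /* 512 entries -> 9 bits -> 21 */
        #define LPAE_PGD_SHIFT       (LPAE_PMD_SHIFT + 9)   /* 512 entries -> 9 bits -> 30 */
        /* 12 + 9 + 9 = 30 bits per top-level entry (1GB each), while each
         * 64-bit descriptor has room for a physical output address of up
         * to 40 bits. */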
| | * | | ARM: LPAE: add ISBs around MMU enabling code
    Will Deacon, 2011-12-08 (4 files changed, -0/+16)
    Before we enable the MMU, we must ensure that the TTBR registers
    contain sane values. After the MMU has been enabled, we jump to the
    *virtual* address of the following function, so we also need to ensure
    that the SCTLR write has taken effect. This patch adds ISB
    instructions around the SCTLR write to ensure the visibility of the
    above.
    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
| | * | | ARM: LPAE: Factor out classic-MMU specific code into proc-v7-2level.S
    Catalin Marinas, 2011-12-08 (2 files changed, -149/+174)
    This patch modifies the proc-v7.S file so that it only contains code
    shared between classic MMU and LPAE. The non-common code is factored
    out into a separate file.
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
| | * | | ARM: LPAE: Move the FSR definitions to separate files
    Catalin Marinas, 2011-12-08 (3 files changed, -93/+100)
    The FSR structure is different with LPAE and this patch moves the
    classic MMU specific definition to a separate fsr-2level.c file that
    is included in fault.c. It also moves the fsr_fs and FSR bits to the
    fault.h file.
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
| | * | | ARM: LPAE: Move page table maintenance macros to pgtable-2level.h
    Catalin Marinas, 2011-12-08 (2 files changed, -38/+41)
    The page table maintenance macros need to be duplicated between the
    classic and the LPAE MMU so this patch moves those that are not common
    to the pgtable-2level.h file.
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
| | * | | ARM: pgtable: switch to use pgtable-nopud.h
    Russell King, 2011-12-08 (3 files changed, -11/+15)
    Nick Piggin noted upon introducing 4level-fixup.h:
    | Add a temporary "fallback" header so architectures can run with
    | the 4level pagetables patch without modification. All architectures
    | should be converted to use the folding headers (include/asm-generic/
    | pgtable-nop?d.h) as soon as possible, and the fallback header removed.
    This makes ARM compliant with this statement.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
| | * | | ARM: pgtable: Fix compiler warning in ioremap.c introduced by nopud
    Catalin Marinas, 2011-12-08 (1 file changed, -12/+19)
    With the arch/arm code conversion to pgtable-nopud.h, the section and
    supersection (un|re)map code triggers compiler warnings on UP systems.
    This is caused by pmd_offset() being given a pgd_t argument rather
    than a pud_t one. This patch makes the necessary conversion with the
    assumption that the pud is folded into the pgd. The page table setting
    code only loops over the pmd which is enough with the classic page
    tables. This code is not compiled when LPAE is enabled.
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
| * | | | Merge branch 'kexec/idmap' of git://git.kernel.org/pub/scm/linux/kernel/git/will/linux into devel-stable
    Russell King, 2011-12-06 (32 files changed, -89/+140)
| | * | | ARM: SMP: use idmap_pgd for mapping MMU enable during secondary booting
    Will Deacon, 2011-12-06 (4 files changed, -66/+6)
    The ARM SMP booting code allocates a temporary set of page tables
    containing an identity mapping of the kernel image and provides this
    to secondary CPUs for initial booting. In reality, we only need to
    include the __turn_mmu_on function in the identity mapping since the
    rest of the kernel is executing from virtual addresses after this
    point. This patch adds __turn_mmu_on to the .idmap.text section,
    allowing the SMP booting code to use the idmap_pgd directly and not
    have to populate its own set of page table.
    As a result of this patch, we can make the identity_mapping_add
    function static (since it is only used within mm/idmap.c) and also
    remove the identity_mapping_del function. The identity map population
    is moved to an early initcall so that it is setup in time for
    secondary CPU bringup.
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | * | | ARM: head.S: only include __turn_mmu_on in the initial identity mapping
    Will Deacon, 2011-12-06 (1 file changed, -7/+7)
    __create_page_tables identity maps the region of memory from
    __enable_mmu to the end of __turn_mmu_on. In preparation for including
    __turn_mmu_on in the .idmap.text section, this patch modifies the
    identity mapping so that it only includes the __turn_mmu_on code.
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | * | | ARM: idmap: use idmap_pgd when setting up mm for reboot
    Will Deacon, 2011-12-06 (1 file changed, -9/+10)
    For soft-rebooting a system, it is necessary to map the MMU-off code
    with an identity mapping so that execution can continue safely once
    the MMU has been switched off. Currently, switch_mm_for_reboot takes
    out a 1:1 mapping from 0x0 to TASK_SIZE during reboot in the hope that
    the reset code lives at a physical address corresponding to a
    userspace virtual address.
    This patch modifies the code so that we switch to the idmap_pgd
    tables, which contain a 1:1 mapping of the cpu_reset code. This has
    the advantage of only remapping the code that we need and also means
    we don't need to worry about allocating a pgd from an atomic context
    in the case that the physical address of the cpu_reset code aliases
    with the virtual space used by the kernel.
    Acked-by: Dave Martin <dave.martin@linaro.org>
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
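    A minimal sketch of the resulting switch, using the usual ARM mm
    helpers; treat it as an illustration of the idea rather than the exact
    patch:

        #include <linux/mm_types.h>   /* init_mm */
        #include <asm/idmap.h>        /* idmap_pgd */
        #include <asm/proc-fns.h>     /* cpu_switch_mm() */
        #include <asm/tlbflush.h>

        static void switch_to_idmap_for_reboot(void)
        {
                cpu_switch_mm(idmap_pgd, &init_mm); /* activate the 1:1 tables      */
                local_flush_tlb_all();              /* drop stale user translations */
        }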
| | * | | ARM: proc-*.S: place cpu_reset functions into .idmap.text section
    Will Deacon, 2011-12-06 (24 files changed, -0/+72)
    The CPU reset functions disable the MMU and therefore must be executed
    with an identity mapping in place. This patch places the CPU reset
    functions into the .idmap.text section, causing the idmap code to
    include them as part of the identity mapping.
    Acked-by: Dave Martin <dave.martin@linaro.org>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | * | | ARM: suspend: use idmap_pgd instead of suspend_pgd
    Will Deacon, 2011-12-06 (2 files changed, -15/+5)
    The ARM CPU suspend code requires cpu_resume_mmu to be identity mapped
    in order to re-enable the MMU when coming out of suspend. Currently,
    this is accomplished by maintaining a suspend_pgd with the relevant
    mapping put in place at init time. This patch replaces the use of
    suspend_pgd with the new idmap_pgd. cpu_resume_mmu is placed in the
    .idmap.text section so that it is included in the identity map.
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Acked-by: Dave Martin <dave.martin@linaro.org>
    Tested-by: Lorenzo Pieralisi <Lorenzo.Pieralisi@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | * | | ARM: idmap: populate identity map pgd at init time using .init.text
    Will Deacon, 2011-12-06 (5 files changed, -3/+51)
    When disabling and re-enabling the MMU, it is necessary to take out an
    identity mapping for the code that manipulates the SCTLR in order to
    avoid it disappearing from under our feet. This is useful when soft
    rebooting and returning from CPU suspend.
    This patch allocates a set of page tables during boot and populates
    them with an identity mapping for the .idmap.text section. This means
    that users of the identity map do not need to manage their own pgd and
    can instead annotate their functions with __idmap or, in the case of
    assembly code, place them in the correct section.
    Acked-by: Dave Martin <dave.martin@linaro.org>
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Tested-by: Lorenzo Pieralisi <Lorenzo.Pieralisi@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
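    What a user of the identity map looks like after this change, as a
    sketch. The __idmap marker is the annotation the commit describes
    (assumed here to expand to a section attribute); the assembly
    equivalent is wrapping the code in .pushsection .idmap.text /
    .popsection.

        /* Assumed definition, along the lines the commit describes. */
        #define __idmap __attribute__((__section__(".idmap.text")))

        static void __idmap my_mmu_off_helper(unsigned long phys_entry)
        {
                /* This code runs across the MMU on/off transition, so it
                 * (and anything it calls) must live in .idmap.text. No
                 * private pgd management is needed: the boot-time
                 * idmap_pgd already covers this section. */
        }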
| * | | | ARM: 7194/1: OMAP: Fix build after a merge between v3.2-rc4 and ARM restart changes
    Tony Lindgren, 2011-12-06 (1 file changed, -1/+2)
    ARM restart changes needed changes to common.h to make it local. This
    conflicted with v3.2-rc4 DSS related hwmod changes that git mergetool
    was not able to handle.
    Signed-off-by: Tony Lindgren <tony@atomide.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| * | | | ARM: 7192/1: OMAP: Fix build error for omap1_defconfig
    Tony Lindgren, 2011-12-06 (1 file changed, -0/+1)
    Otherwise we get the following error:
    In function 'omap_init_consistent_dma_size':
    error: implicit declaration of function 'init_consistent_dma_size'
    Signed-off-by: Tony Lindgren <tony@atomide.com>
    Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| * | | | Merge branch 'vmalloc' of git://git.linaro.org/people/nico/linux into devel-stable
    Russell King, 2011-12-06 (93 files changed, -1398/+146)
| | * | | ARM: move VMALLOC_END down temporarily for shmobile
    Nicolas Pitre, 2011-11-27 (1 file changed, -0/+7)
    THIS IS A TEMPORARY HACK. The purpose of this is _only_ to avoid a
    regression on an existing machine while a better fix is implemented.
    On shmobile the consistent DMA memory area was set to 158MB in commit
    28f0721a79 with no explanation. The documented size for this area
    should vary between 2MB and 14MB, and none of the other ARM targets
    exceed that. The included #warning is therefore meant to be noisy on
    purpose to get shmobile maintainers attention and this commit reverted
    once this consistent DMA size conflict is resolved.
    Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
    Cc: Magnus Damm <damm@opensource.se>
    Cc: Paul Mundt <lethal@linux-sh.org>
| | * | | ARM: big removal of now unused vmalloc.h files
    Nicolas Pitre, 2011-11-27 (59 files changed, -866/+0)
    Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
| | * | | ARM: add generic ioremap optimization by reusing static mappings
    Nicolas Pitre, 2011-11-27 (3 files changed, -25/+64)
    Now that we have all the static mappings from iotable_init() located
    in the vmalloc area, it is trivial to optimize ioremap by reusing
    those static mappings when the requested physical area fits in one of
    them, and so in a generic way for all platforms.
    Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
    Tested-by: Stephen Warren <swarren@nvidia.com>
    Tested-by: Kevin Hilman <khilman@ti.com>
    Tested-by: Jamie Iles <jamie@jamieiles.com>
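    A hedged sketch of the idea; the vm_struct walk and the
    VM_ARM_STATIC_MAPPING flag are shown for illustration, not as the
    exact mainline code. If the requested physical window fits entirely
    inside one of the static mappings, ioremap() can simply return an
    offset into it:

        #include <linux/vmalloc.h>

        static void __iomem *try_reuse_static_mapping(phys_addr_t paddr, size_t size)
        {
                struct vm_struct *area;

                for (area = vmlist; area; area = area->next) {
                        if (!(area->flags & VM_ARM_STATIC_MAPPING))
                                continue;
                        if (paddr < area->phys_addr ||
                            paddr + size > area->phys_addr + area->size)
                                continue;
                        /* Fits: reuse the existing mapping at the right offset. */
                        return (void __iomem *)(area->addr + (paddr - area->phys_addr));
                }
                return NULL;  /* caller falls back to building a new mapping */
        }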
| | * | | ARM: simplify __iounmap() when dealing with section based mapping
    Nicolas Pitre, 2011-11-27 (1 file changed, -11/+9)
    Firstly, there is no need to have a double pointer here as we're only
    walking the vmlist and not modifying it. Secondly, for the same
    reason, we don't need a write lock but only a read lock here, since
    the lock only protects the coherency of the list nothing else. Lastly,
    the reason for holding a lock is not what the comment says, so let's
    remove that misleading piece of information.
    Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
| | * | | ARM: move iotable mappings within the vmalloc region
    Nicolas Pitre, 2011-11-27 (2 files changed, -21/+36)
    In order to remove the build time variation between different SOCs
    with regards to VMALLOC_END, the iotable mappings are now allocated
    inside the vmalloc region. This allows for VMALLOC_END to be identical
    across all machines.
    The value for VMALLOC_END is now set to 0xff000000 which is right
    where the consistent DMA area starts. To accommodate all static
    mappings on machines with possible highmem usage, the default vmalloc
    area size is changed to 240 MB so that VMALLOC_START is no higher than
    0xf0000000 by default.
    Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
    Tested-by: Stephen Warren <swarren@nvidia.com>
    Tested-by: Kevin Hilman <khilman@ti.com>
    Tested-by: Jamie Iles <jamie@jamieiles.com>