path: arch/arm/mm/proc-v7-3level.S

Commit history for this file, newest first. Each entry gives the commit subject, author, date, and the lines changed in this file (-deleted/+added).

* ARM: 9388/2: mm: Type-annotate all per-processor assembly routines  (Linus Walleij, 2024-04-29, 1 file, -4/+4)

  Type tag the remaining per-processor assembly using the CFI symbol macros, in addition to those that were previously tagged for cache maintenance calls. This will be used to finally provide proper C prototypes for all these calls as well so that CFI can be made to work.

  Tested-by: Kees Cook <keescook@chromium.org>
  Acked-by: Arnd Bergmann <arnd@arndb.de>
  Reviewed-by: Sami Tolvanen <samitolvanen@google.com>
  Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
  Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>

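  One concrete payoff, sketched in C (the declaration below is an assumption for illustration, not taken from the commit; cpu_v7_set_pte_ext is one of the routines defined in this file, and the stand-in types are simplified):

      /* Simplified stand-ins for the kernel's pte types (LPAE ptes are 64-bit). */
      typedef unsigned long long pteval_t;
      typedef struct { pteval_t pte; } pte_t;

      /* With the assembly entry point type-annotated, the C side can declare it
       * with a real prototype, and CFI can check indirect calls made through a
       * function pointer of this type. */
      void cpu_v7_set_pte_ext(pte_t *ptep, pte_t pte, unsigned int ext);
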
* treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 333  (Thomas Gleixner, 2019-06-05, 1 file, -13/+1)

  Based on 1 normalized pattern(s):

      this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details you should have received a copy of the gnu general public license along with this program if not write to the free software foundation inc 59 temple place suite 330 boston ma 02111 1307 usa

  extracted by the scancode license scanner the SPDX license identifier

      GPL-2.0-only

  has been chosen to replace the boilerplate/reference in 136 file(s).

  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Reviewed-by: Alexios Zavras <alexios.zavras@intel.com>
  Reviewed-by: Allison Randal <allison@lohutok.net>
  Cc: linux-spdx@vger.kernel.org
  Link: https://lkml.kernel.org/r/20190530000436.384967451@linutronix.de
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

* ARM: 8690/1: lpae: build TTB control register value from scratch in v7_ttb_setup  (Hoeun Ryu, 2017-08-29, 1 file, -2/+1)

  Reading TTBCR in the early boot stage might return the previous kernel's configuration, especially in the kexec case. For example, the first (normal) kernel may have run with PHYS_OFFSET <= PAGE_OFFSET while the second (crash) kernel runs with PHYS_OFFSET > PAGE_OFFSET, which can happen because the layout depends on the area reserved for the crash kernel. Reading TTBCR and OR-ing other bit fields into that value is therefore risky, because TTBCR has no defined reset value to rely on; build the register value from scratch instead.

  Suggested-by: Robin Murphy <robin.murphy@arm.com>
  Signed-off-by: Hoeun Ryu <hoeun.ryu@gmail.com>
  Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>

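  A C sketch of the hazard (illustration only; the real v7_ttb_setup is assembly, and apart from T1SZ sitting at bits [18:16] and EAE at bit 31 of the LPAE TTBCR, the details here are assumptions):

      #include <stdint.h>

      #define TTB_EAE   (1u << 31)               /* enable long-descriptor format */

      /* Risky: inherits whatever stale bits the previous kernel left in TTBCR. */
      static uint32_t ttbcr_read_modify_write(uint32_t old_ttbcr, uint32_t t1sz)
      {
              return old_ttbcr | TTB_EAE | (t1sz << 16);
      }

      /* Safe: every field is stated explicitly, independent of prior state. */
      static uint32_t ttbcr_from_scratch(uint32_t t1sz)
      {
              return TTB_EAE | (t1sz << 16);
      }
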
* ARM: redo TTBR setup code for LPAE  (Russell King, 2015-06-02, 1 file, -9/+5)

  Re-engineer the LPAE TTBR setup code. Rather than passing some shifted address in order to fit in a CPU register, pass either a full physical address (in the case of r4, r5 for TTBR0) or a PFN (for TTBR1). This removes the ARCH_PGD_SHIFT hack, and the last dangerous user of cpu_set_ttbr() in the secondary CPU startup code path (which was there to re-set TTBR1 to the appropriate high physical address space on Keystone2).

  Tested-by: Murali Karicheri <m-karicheri2@ti.com>
  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* ARM: 8164/1: mm: clear SCTLR.HA instead of setting it for LPAE  (Will Deacon, 2014-09-25, 1 file, -2/+2)

  SCTLR.HA (hardware access flag) is deprecated and not actually implemented by any CPUs. Furthermore, it can confuse cr_alignment checks where the whole value of SCTLR is compared against the value sitting in the hardware, since the bit is actually RAZ/WI and will not match the saved cr_alignment value.

  Acked-by: Catalin Marinas <catalin.marinas@arm.com>
  Signed-off-by: Will Deacon <will.deacon@arm.com>
  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* ARM: 8132/1: LPAE: drop wrong carry flag correction after adding TTBR1_OFFSET  (Konstantin Khlebnikov, 2014-09-02, 1 file, -1/+0)

  In commit 7fb00c2fca4b6c58be521eb3676cf4b4ba8dde3b ("ARM: 8114/1: LPAE: load upper bits of early TTBR0/TTBR1"), the part which fixes the carry when adding TTBR1_OFFSET to TTBR1 was wrong:

      addls   ttbr1, ttbr1, #TTBR1_OFFSET
      adcls   tmp, tmp, #0

  addls doesn't update the flags, so adcls adds the carry left over from the cmp above:

      cmp     ttbr1, tmp              @ PHYS_OFFSET > PAGE_OFFSET?

  The condition 'ls' means the carry flag is clear or the zero flag is set, so only one case is affected: PHYS_OFFSET == PAGE_OFFSET. It seems safer to remove this fixup. The bug has been here for ages and nobody complained; let's fix it separately.

  Reported-and-Tested-by: Jassi Brar <jassisinghbrar@gmail.com>
  Signed-off-by: Konstantin Khlebnikov <k.khlebnikov@samsung.com>
  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* ARM: 8114/1: LPAE: load upper bits of early TTBR0/TTBR1  (Konstantin Khlebnikov, 2014-08-09, 1 file, -4/+3)

  This patch fixes booting when the idmap pgd lies above 4 GiB. Commit 4756dcbfd37 had mostly fixed this, but it failed to load the upper bits. This also fixes adding TTBR1_OFFSET to TTBR1: if the lower part overflows, the carry must be added to the upper part.

  Signed-off-by: Konstantin Khlebnikov <k.khlebnikov@samsung.com>
  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

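  A small C illustration of the carry handling (the real code is ARM assembly using an adds/adc pair; this function is a sketch, not kernel code):

      #include <stdint.h>

      /* Add a small offset to a 64-bit TTBR value held as two 32-bit halves.
       * If the low half overflows, the carry must propagate into the high
       * half, otherwise the upper bits of the table base are lost. */
      static void ttbr_add_offset(uint32_t *lo, uint32_t *hi, uint32_t offset)
      {
              uint32_t old_lo = *lo;

              *lo += offset;
              if (*lo < old_lo)       /* unsigned wrap-around means a carry out */
                      *hi += 1;
      }
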
* ARM: 8109/1: mm: Modify pte_write and pmd_write logic for LPAE  (Steven Capper, 2014-07-24, 1 file, -2/+7)

  For LPAE, we have the following means for encoding writable or dirty ptes:

                                   L_PTE_DIRTY   L_PTE_RDONLY
      !pte_dirty && !pte_write          0              1
      !pte_dirty &&  pte_write          0              1
       pte_dirty && !pte_write          1              1
       pte_dirty &&  pte_write          1              0

  So we can't distinguish between writeable clean ptes and read only ptes. This can cause problems with ptes being incorrectly flagged as read only when they are writeable but not dirty.

  This patch renumbers L_PTE_RDONLY from AP[2] to a software bit #58, and adds additional logic to set AP[2] whenever the pte is read only or not dirty. That way we can distinguish between clean writeable ptes and read only ptes.

  HugeTLB pages will use this new logic automatically. We need to add some logic to Transparent HugePages to ensure that they correctly interpret the revised pgprot permissions (L_PTE_RDONLY has moved and no longer matches PMD_SECT_AP2). In the process of revising THP, the names of the PMD software bits have been prefixed with L_ to make them easier to distinguish from their hardware bit counterparts.

  Signed-off-by: Steve Capper <steve.capper@linaro.org>
  Reviewed-by: Will Deacon <will.deacon@arm.com>
  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

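  A C sketch of the derived hardware bit (illustration only; AP[2] at descriptor bit 7 and L_PTE_DIRTY at bit 55 are assumptions for this sketch, and the helper is a simplified stand-in, not the kernel's code):

      #include <stdint.h>

      typedef uint64_t pteval_t;

      #define L_PTE_DIRTY    ((pteval_t)1 << 55)   /* software dirty bit          */
      #define L_PTE_RDONLY   ((pteval_t)1 << 58)   /* software read-only (bit 58) */
      #define PTE_AP2        ((pteval_t)1 << 7)    /* hardware AP[2]: read-only   */

      /* Only a pte that is both writable and dirty becomes hardware-writable;
       * a clean writable pte stays read-only in hardware so the first write
       * faults and the kernel can mark it dirty. */
      static pteval_t set_hw_ap2(pteval_t pte)
      {
              if ((pte & L_PTE_RDONLY) || !(pte & L_PTE_DIRTY))
                      return pte | PTE_AP2;
              return pte & ~PTE_AP2;
      }
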
* ARM: convert all "mov.* pc, reg" to "bx reg" for ARMv6+  (Russell King, 2014-07-18, 1 file, -2/+3)

  ARMv6 and greater introduced a new instruction ("bx") which can be used to return from function calls. Recent CPUs perform better when the "bx lr" instruction is used rather than the "mov pc, lr" instruction, and this sequence is strongly recommended to be used by the ARM architecture manual (section A.4.1.1).

  We provide a new macro "ret" with all its variants for the condition code which will resolve to the appropriate instruction.

  Rather than doing this piecemeal, and miss some instances, change all the "mov pc" instances to use the new macro, with the exception of the "movs" instruction and the kprobes code. This allows us to detect the "mov pc, lr" case and fix it up - and also gives us the possibility of deploying this for other registers depending on the CPU selection.

  Reported-by: Will Deacon <will.deacon@arm.com>
  Tested-by: Stephen Warren <swarren@nvidia.com> # Tegra Jetson TK1
  Tested-by: Robert Jarzmik <robert.jarzmik@free.fr> # mioa701_bootresume.S
  Tested-by: Andrew Lunn <andrew@lunn.ch> # Kirkwood
  Tested-by: Shawn Guo <shawn.guo@freescale.com>
  Tested-by: Tony Lindgren <tony@atomide.com> # OMAPs
  Tested-by: Gregory CLEMENT <gregory.clement@free-electrons.com> # Armada XP, 375, 385
  Acked-by: Sekhar Nori <nsekhar@ti.com> # DaVinci
  Acked-by: Christoffer Dall <christoffer.dall@linaro.org> # kvm/hyp
  Acked-by: Haojian Zhuang <haojian.zhuang@gmail.com> # PXA3xx
  Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> # Xen
  Tested-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> # ARMv7M
  Tested-by: Simon Horman <horms+renesas@verge.net.au> # Shmobile
  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* ARM: 8037/1: mm: support big-endian page tables  (Jianguo Wu, 2014-04-25, 1 file, -5/+13)

  With LPAE and big-endian enabled on a HiSilicon board, and memory specified as mem=384M mem=512M@7680M, we get a bad page state:

      Freeing unused kernel memory: 180K (c0466000 - c0493000)
      BUG: Bad page state in process init  pfn:fa442
      page:c7749840 count:0 mapcount:-1 mapping: (null) index:0x0
      page flags: 0x40000400(reserved)
      Modules linked in:
      CPU: 0 PID: 1 Comm: init Not tainted 3.10.27+ #66
      [<c000f5f0>] (unwind_backtrace+0x0/0x11c) from [<c000cbc4>] (show_stack+0x10/0x14)
      [<c000cbc4>] (show_stack+0x10/0x14) from [<c009e448>] (bad_page+0xd4/0x104)
      [<c009e448>] (bad_page+0xd4/0x104) from [<c009e520>] (free_pages_prepare+0xa8/0x14c)
      [<c009e520>] (free_pages_prepare+0xa8/0x14c) from [<c009f8ec>] (free_hot_cold_page+0x18/0xf0)
      [<c009f8ec>] (free_hot_cold_page+0x18/0xf0) from [<c00b5444>] (handle_pte_fault+0xcf4/0xdc8)
      [<c00b5444>] (handle_pte_fault+0xcf4/0xdc8) from [<c00b6458>] (handle_mm_fault+0xf4/0x120)
      [<c00b6458>] (handle_mm_fault+0xf4/0x120) from [<c0013754>] (do_page_fault+0xfc/0x354)
      [<c0013754>] (do_page_fault+0xfc/0x354) from [<c0008400>] (do_DataAbort+0x2c/0x90)
      [<c0008400>] (do_DataAbort+0x2c/0x90) from [<c0008fb4>] (__dabt_usr+0x34/0x40)

  The bad pfn:fa442 is not system memory (mem=384M mem=512M@7680M). After debugging, I found that in the page fault handler we read back a wrong pfn from a pte we had just set, as follows:

      do_anonymous_page()
      {
              ...
              set_pte_at(mm, address, page_table, entry);

              /* debug code */
              pfn = pte_pfn(entry);
              pr_info("pfn:0x%lx, pte:0x%llx\n", pfn, pte_val(entry));

              /* read out the pte just set */
              new_pte = pte_offset_map(pmd, address);
              new_pfn = pte_pfn(*new_pte);
              pr_info("new pfn:0x%lx, new pte:0x%llx\n", new_pfn, pte_val(*new_pte));
              ...
      }

      pfn:     0x1fa4f5, pte: 0xc00001fa4f575f
      new pfn: 0xfa4f5,  new pte: 0xc00000fa4f5f5f    // new pfn/pte is wrong

  The bug happens in cpu_v7_set_pte_ext(ptep, pte): an LPAE PTE is a 64-bit quantity, passed to cpu_v7_set_pte_ext in the r2 and r3 registers. On an LE kernel, r2 contains the LSB of the PTE and r3 the MSB. On a BE kernel, the assignment is reversed. Unfortunately, the current code always assumes the LE case, leading to corruption of the PTE when clearing/setting bits.

  This patch fixes the issue much like it has already been done in the cpu_v7_switch_mm case.

  CC stable <stable@vger.kernel.org>
  Signed-off-by: Jianguo Wu <wujianguo@huawei.com>
  Acked-by: Marc Zyngier <marc.zyngier@arm.com>
  Acked-by: Will Deacon <will.deacon@arm.com>
  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* ARM: 7784/1: mm: ensure SMP alternates assemble to exactly 4 bytes with Thumb-2  (Will Deacon, 2013-07-22, 1 file, -1/+1)

  Commit ae8a8b9553bd ("ARM: 7691/1: mm: kill unused TLB_CAN_READ_FROM_L1_CACHE and use ALT_SMP instead") added early function returns for page table cache flushing operations on ARMv7 SMP CPUs. Unfortunately, when targeting Thumb-2, these `mov pc, lr' sequences assemble to 2 bytes, which can lead to corruption of the instruction stream after code patching.

  This patch fixes the alternates to use wide (32-bit) instructions for Thumb-2, therefore ensuring that the patching code works correctly.

  Cc: <stable@vger.kernel.org>
  Signed-off-by: Will Deacon <will.deacon@arm.com>
  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* arm: delete __cpuinit/__CPUINIT usage from all ARM users  (Paul Gortmaker, 2013-07-15, 1 file, -4/+0)

  The __cpuinit type of throwaway sections might have made sense some time ago when RAM was more constrained, but now the savings do not offset the cost and complications. For example, the fix in commit 5e427ec2d0 ("x86: Fix bit corruption at CPU resume time") is a good example of the nasty type of bugs that can be created with improper use of the various __init prefixes.

  After a discussion on LKML[1] it was decided that cpuinit should go the way of devinit and be phased out. Once all the users are gone, we can then finally remove the macros themselves from linux/init.h.

  Note that some harmless section mismatch warnings may result, since notify_cpu_starting() and cpu_up() are arch independent (kernel/cpu.c) and are flagged as __cpuinit -- so if we remove the __cpuinit from the arch specific callers, we will also get section mismatch warnings. As an intermediate step, we intend to turn the linux/init.h cpuinit related content into no-ops as early as possible, since that will get rid of these warnings. In any case, they are temporary and harmless.

  This removes all the ARM uses of the __cpuinit macros from C code, and all __CPUINIT from assembly code. It also had two ".previous" section statements that were paired off against __CPUINIT (aka .section ".cpuinit.text") that also get removed here.

  [1] https://lkml.org/lkml/2013/5/20/589

  Cc: Russell King <linux@arm.linux.org.uk>
  Cc: Will Deacon <will.deacon@arm.com>
  Cc: linux-arm-kernel@lists.infradead.org
  Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>

* ARM: LPAE: accommodate >32-bit addresses for page table base  (Cyril Chemparathy, 2013-05-30, 1 file, -0/+8)

  This patch redefines the early boot time use of the R4 register to steal a few low order bits (ARCH_PGD_SHIFT bits) on LPAE systems. This allows for up to 38-bit physical addresses.

  Signed-off-by: Cyril Chemparathy <cyril@ti.com>
  Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
  Acked-by: Nicolas Pitre <nico@linaro.org>
  Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
  Tested-by: Subash Patel <subash.rp@samsung.com>
  Signed-off-by: Will Deacon <will.deacon@arm.com>

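  The packing trick, sketched in C (illustration only; ARCH_PGD_SHIFT = 6 is an assumption chosen so that 32 + 6 = 38 bits of physical address fit):

      #include <stdint.h>

      #define ARCH_PGD_SHIFT  6

      /* The pgd base is aligned so its low ARCH_PGD_SHIFT bits are zero;
       * shifting it down lets a >32-bit physical address travel in a single
       * 32-bit register, and the freed low bits can carry other information. */
      static uint32_t pack_pgd(uint64_t pgd_phys)
      {
              return (uint32_t)(pgd_phys >> ARCH_PGD_SHIFT);
      }

      static uint64_t unpack_pgd(uint32_t packed)
      {
              return (uint64_t)packed << ARCH_PGD_SHIFT;
      }
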
* ARM: LPAE: factor out T1SZ and TTBR1 computations  (Cyril Chemparathy, 2013-05-30, 1 file, -21/+8)

  This patch moves the TTBR1 offset calculation and the T1SZ calculation out of the TTB setup assembly code. This should not affect functionality in any way, but improves code readability as well as readability of subsequent patches in this series.

  Signed-off-by: Cyril Chemparathy <cyril@ti.com>
  Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
  Acked-by: Nicolas Pitre <nico@linaro.org>
  Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
  Tested-by: Subash Patel <subash.rp@samsung.com>
  Signed-off-by: Will Deacon <will.deacon@arm.com>

* ARM: LPAE: use phys_addr_t in switch_mm()  (Cyril Chemparathy, 2013-05-30, 1 file, -4/+12)

  This patch modifies the switch_mm() processor functions to use phys_addr_t. On LPAE systems, we now honor the upper 32-bits of the physical address that is being passed in, and program these into TTBR as expected.

  Signed-off-by: Cyril Chemparathy <cyril@ti.com>
  Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
  Reviewed-by: Nicolas Pitre <nico@linaro.org>
  Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
  Tested-by: Subash Patel <subash.rp@samsung.com>
  [will: fixed up conflict in 3-level switch_mm with big-endian changes]
  Signed-off-by: Will Deacon <will.deacon@arm.com>

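  The shape of the interface change, sketched as C declarations (simplified stand-in types; an assumption for illustration, not the literal kernel headers):

      typedef unsigned long long phys_addr_t;   /* 64-bit when LPAE is enabled */
      struct mm_struct;

      /* Before: a 32-bit parameter silently truncates a >4 GiB pgd address.   */
      void cpu_v7_switch_mm_old(unsigned long pgd_phys, struct mm_struct *mm);

      /* After: the full physical address reaches the TTBR programming code.   */
      void cpu_v7_switch_mm(phys_addr_t pgd_phys, struct mm_struct *mm);
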
* ARM: 7691/1: mm: kill unused TLB_CAN_READ_FROM_L1_CACHE and use ALT_SMP instead  (Will Deacon, 2013-04-03, 1 file, -1/+2)

  Many ARMv7 cores have hardware page table walkers that can read the L1 cache. This is discoverable from the ID_MMFR3 register, although this can be expensive to access from the low-level set_pte functions and is a pain to cache, particularly with multi-cluster systems.

  A useful observation is that the multi-processing extensions for ARMv7 require coherent table walks, meaning that we can make use of ALT_SMP patching in proc-v7-* to patch away the cache flush safely for these cores.

  Reported-by: Albin Tonnerre <Albin.Tonnerre@arm.com>
  Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
  Signed-off-by: Will Deacon <will.deacon@arm.com>
  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* ARM: 7652/1: mm: fix missing use of 'asid' to get asid value from mm->context.id  (Ben Dooks, 2013-03-03, 1 file, -1/+1)

  Fix missing use of the asid macro when getting the ASID from the mm->context.id field.

  Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* ARM: 7650/1: mm: replace direct access to mm->context.id with new macro  (Ben Dooks, 2013-02-16, 1 file, -1/+1)

  The mmid macro is meant to be used to get the mm->context.id data from the mm structure, but it seems to have been missed in a couple of files.

  Acked-by: Will Deacon <will.deacon@arm.com>
  Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* ARM: mm: introduce present, faulting entries for PAGE_NONE  (Will Deacon, 2012-11-09, 1 file, -0/+3)

  PROT_NONE mappings apply the page protection attributes defined by _P000, which translate to PAGE_NONE for ARM. These attributes specify an XN, RDONLY pte that is inaccessible to userspace. However, on kernels configured without support for domains, such a pte *is* accessible to the kernel and can be read via get_user, allowing tasks to read PROT_NONE pages via syscalls such as read/write over a pipe.

  This patch introduces a new software pte flag, L_PTE_NONE, that is set to identify faulting, present entries.

  Signed-off-by: Will Deacon <will.deacon@arm.com>

* ARM: mm: introduce L_PTE_VALID for page table entries  (Will Deacon, 2012-11-09, 1 file, -1/+1)

  For long-descriptor translation table formats, the ARMv7 architecture defines the last two bits of the second- and third-level descriptors to be:

      x0b - Invalid
      01b - Block (second-level), Reserved (third-level)
      11b - Table (second-level), Page (third-level)

  This allows us to define L_PTE_PRESENT as (3 << 0) and use this value to create ptes directly. However, when determining whether a given pte value is present in the low-level page table accessors, we only need to check the least significant bit of the descriptor, allowing us to write faulting, present entries which are required for PROT_NONE mappings.

  This patch introduces L_PTE_VALID, which can be used to test whether a pte should fault, and updates the low-level page table accessors accordingly.

  Signed-off-by: Will Deacon <will.deacon@arm.com>

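  The present/valid distinction in a few lines of C (illustration only; the helpers are simplified stand-ins for the kernel's accessors, but the bit values follow the descriptor encoding quoted above):

      #include <stdint.h>

      typedef uint64_t pteval_t;

      #define L_PTE_VALID     ((pteval_t)1 << 0)   /* hardware walks the entry     */
      #define L_PTE_PRESENT   ((pteval_t)3 << 0)   /* bits 1:0 = 11b for a page    */

      /* Present: the kernel considers the mapping to exist (bit 1 still set). */
      static int pte_present(pteval_t pte) { return (pte & L_PTE_PRESENT) != 0; }

      /* Valid: the hardware will translate through it without faulting. */
      static int pte_valid(pteval_t pte)   { return (pte & L_PTE_VALID) != 0; }

      /* A PROT_NONE-style entry: clear only bit 0, so the pte still looks
       * present to the kernel but every hardware access faults. */
      static pteval_t pte_mkfaulting(pteval_t pte) { return pte & ~L_PTE_VALID; }
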
* ARM: LPAE: MMU setup for the 3-level page table format  (Catalin Marinas, 2011-12-08, 1 file, -0/+150)

  This patch adds the MMU initialisation for the LPAE page table format. The swapper_pg_dir size with LPAE is 5 rather than 4 pages. A new proc-v7-3level.S file contains the TTB initialisation, context switch and PTE setting code for LPAE. The TTBRx split is based on PAGE_OFFSET, with TTBR1 used for the kernel mappings. The 36-bit mappings (supersections) and a few other memory types in mmu.c are conditionally compiled.

  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

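  How the PAGE_OFFSET-based split maps onto TTBCR.T1SZ, as a hedged C sketch (the arithmetic illustrates the idea; the kernel derives the value with preprocessor macros rather than a function like this):

      #include <stdint.h>

      /* With LPAE, addresses at or above the TTBR1 boundary are translated via
       * TTBR1; the region it covers spans 2^(32 - T1SZ) bytes at the top of the
       * 32-bit address space.  Placing the boundary at PAGE_OFFSET sends all
       * kernel mappings through TTBR1. */
      static unsigned int t1sz_for_page_offset(uint32_t page_offset)
      {
              uint64_t span = (1ULL << 32) - page_offset;  /* e.g. 1 GiB for a 3G/1G split */
              unsigned int bits = 0;

              while ((1ULL << bits) < span)
                      bits++;
              return 32 - bits;   /* PAGE_OFFSET = 0xC0000000 gives T1SZ = 2 */
      }
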