author     Scott Wood <scottwood@freescale.com>  2013-10-12 02:22:38 +0200
committer  Scott Wood <scottwood@freescale.com>  2014-01-10 00:52:19 +0100
commit     28efc35fe68dacbddc4b12c2fa8f2df1593a4ad3 (patch)
tree       f4565fcf8b9f1a905a0b3a0e977741092cba7921 /arch/powerpc/include/asm/mmu.h
parent     powerpc: add barrier after writing kernel PTE (diff)
powerpc/e6500: TLB miss handler with hardware tablewalk support
There are a few things that make the existing hw tablewalk handlers
unsuitable for e6500:

 - Indirect entries go in TLB1 (though the resulting direct entries go in
   TLB0).

 - It has threads, but no "tlbsrx." -- so we need a spinlock and a normal
   "tlbsx".  Because we need this lock, hardware tablewalk is mandatory on
   e6500 unless we want to add spinlock+tlbsx to the normal bolted TLB
   miss handler.

 - TLB1 has no HES (nor next-victim hint) so we need software round robin
   (see the sketch after this message; TODO: integrate this round robin
   data with hugetlb/KVM)

 - The existing tablewalk handlers map half of a page table at a time,
   because IBM hardware has a fixed 1MiB indirect page size.  e6500 has
   variable-size indirect entries, with a minimum of 2MiB.  So we can't do
   the half-page indirect mapping, and even if we could it would be less
   efficient than mapping the full page.

 - Like on e5500, the linear mapping is bolted, so we don't need the
   overhead of supporting nested tlb misses.

Note that hardware tablewalk does not work in rev1 of e6500.  We do not
expect to support e6500 rev1 in mainline Linux.

Signed-off-by: Scott Wood <scottwood@freescale.com>
Cc: Mihai Caraman <mihai.caraman@freescale.com>
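Because e6500 lacks "tlbsrx." (search and reserve), the miss handler must
hold a lock across the search-and-write window so two hardware threads on
the same core cannot race, and because TLB1 has no hardware next-victim
hint, victim selection falls to a software round-robin counter.  The
following user-space C sketch models that lock-plus-round-robin pattern.
It is illustrative only, not the kernel's actual assembly: the names
tlb1_lock, tlb1_refill, and NUM_TLB1_ENTRIES are made up here, a pthread
mutex stands in for the kernel's raw spinlock, and an array lookup stands
in for tlbsx/tlbwe.

#include <pthread.h>
#include <stdio.h>

#define NUM_TLB1_ENTRIES 64   /* assumed TLB1 size; the real value may differ */

static pthread_mutex_t tlb1_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned int tlb1_next_victim;            /* software round-robin pointer */
static unsigned long tlb1_entry[NUM_TLB1_ENTRIES];

/* Stand-in for "tlbsx": search TLB1 for an entry matching the address. */
static int tlb1_search(unsigned long ea)
{
    for (int i = 0; i < NUM_TLB1_ENTRIES; i++)
        if (tlb1_entry[i] == ea)
            return i;
    return -1;
}

/*
 * Model of the miss path: without "tlbsrx." the search and the write
 * are not atomic across the core's threads, so both sit under the lock.
 */
static void tlb1_refill(unsigned long ea)
{
    pthread_mutex_lock(&tlb1_lock);
    if (tlb1_search(ea) < 0) {
        unsigned int esel = tlb1_next_victim;

        /* No HES/next-victim hint: advance a round-robin counter. */
        tlb1_next_victim = (tlb1_next_victim + 1) % NUM_TLB1_ENTRIES;
        tlb1_entry[esel] = ea;   /* stand-in for MAS setup + "tlbwe" */
    }
    pthread_mutex_unlock(&tlb1_lock);
}

int main(void)
{
    tlb1_refill(0x10000000UL);
    tlb1_refill(0x20000000UL);
    printf("next victim slot: %u\n", tlb1_next_victim);
    return 0;
}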
Diffstat (limited to 'arch/powerpc/include/asm/mmu.h')
-rw-r--r--  arch/powerpc/include/asm/mmu.h | 21
1 file changed, 11 insertions(+), 10 deletions(-)
diff --git a/arch/powerpc/include/asm/mmu.h b/arch/powerpc/include/asm/mmu.h
index 691fd8aca939..f8d1d6dcf7db 100644
--- a/arch/powerpc/include/asm/mmu.h
+++ b/arch/powerpc/include/asm/mmu.h
@@ -180,16 +180,17 @@ static inline void assert_pte_locked(struct mm_struct *mm, unsigned long addr)
#define MMU_PAGE_64K_AP 3 /* "Admixed pages" (hash64 only) */
#define MMU_PAGE_256K 4
#define MMU_PAGE_1M 5
-#define MMU_PAGE_4M 6
-#define MMU_PAGE_8M 7
-#define MMU_PAGE_16M 8
-#define MMU_PAGE_64M 9
-#define MMU_PAGE_256M 10
-#define MMU_PAGE_1G 11
-#define MMU_PAGE_16G 12
-#define MMU_PAGE_64G 13
-
-#define MMU_PAGE_COUNT 14
+#define MMU_PAGE_2M 6
+#define MMU_PAGE_4M 7
+#define MMU_PAGE_8M 8
+#define MMU_PAGE_16M 9
+#define MMU_PAGE_64M 10
+#define MMU_PAGE_256M 11
+#define MMU_PAGE_1G 12
+#define MMU_PAGE_16G 13
+#define MMU_PAGE_64G 14
+
+#define MMU_PAGE_COUNT 15
#if defined(CONFIG_PPC_STD_MMU_64)
/* 64-bit classic hash table MMU */
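The hunk above gives e6500's 2MiB minimum indirect page size its own
MMU_PAGE_2M slot; renumbering every later size is why ten existing lines
change.  These constants index the kernel's per-page-size tables such as
mmu_psize_defs[].  A minimal sketch of that indexing follows, with a
simplified stand-in for struct mmu_psize_def (the real structure in
arch/powerpc has more fields, and only the .shift field is assumed here):

#include <stdio.h>

/* Simplified stand-in for the kernel's page-size descriptor. */
struct mmu_psize_def {
    unsigned int shift;   /* log2 of the page size in bytes */
};

#define MMU_PAGE_1M    5
#define MMU_PAGE_2M    6      /* new index added by this patch */
#define MMU_PAGE_COUNT 15

static struct mmu_psize_def mmu_psize_defs[MMU_PAGE_COUNT] = {
    [MMU_PAGE_1M] = { .shift = 20 },
    [MMU_PAGE_2M] = { .shift = 21 },  /* e6500's minimum indirect size */
};

int main(void)
{
    printf("2M indirect entries cover %lu bytes\n",
           1UL << mmu_psize_defs[MMU_PAGE_2M].shift);
    return 0;
}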