path: root/arch/i386/mm
Commit message | Author | Age | Files | Lines
* i386: move mm | Thomas Gleixner | 2007-10-11 | 13 | -3602/+0
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* i386: prepare shared mm/Makefile | Thomas Gleixner | 2007-10-11 | 2 | -10/+15
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* i386: prepare shared mm/fault.c | Thomas Gleixner | 2007-10-11 | 2 | -1/+1
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* i386: prepare shared mm/pageattr.c | Thomas Gleixner | 2007-10-11 | 2 | -1/+1
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* i386: prepare shared mm/highmem.c | Thomas Gleixner | 2007-10-11 | 2 | -1/+1
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* i386: prepare shared mm/pgtable.c | Thomas Gleixner | 2007-10-11 | 2 | -1/+1
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* i386: prepare shared mm/extable.c | Thomas Gleixner | 2007-10-11 | 2 | -1/+1
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* i386: prepare shared mm/boot_ioremap.c | Thomas Gleixner | 2007-10-11 | 1 | -0/+0
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* i386: prepare shared mm/mmap.c | Thomas Gleixner | 2007-10-11 | 2 | -1/+1
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* i386: prepare shared mm/init.c | Thomas Gleixner | 2007-10-11 | 2 | -1/+1
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* i386: prepare shared mm/discontig.c | Thomas Gleixner | 2007-10-11 | 2 | -1/+1
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* i386: prepare shared mm/ioremap.c | Thomas Gleixner | 2007-10-11 | 2 | -2/+2
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* revert "highmem: catch illegal nesting"Andrew Morton2007-09-121-4/+3
| | | | | | | | | | | | | | | | | | | | | | | | | | Revert commit 656dad312fb41ed95ef08325e9df9bece3aacbbb Author: Ingo Molnar <mingo@elte.hu> Date: Sat Feb 10 01:46:36 2007 -0800 [PATCH] highmem: catch illegal nesting Catch illegally nested kmap_atomic()s even if the page that is mapped by the 'inner' instance is from lowmem. This avoids spuriously zapped kmap-atomic ptes and turns hard to find crashes into clear asserts at the bug site. Problem is, a get_zeroed_page(GFP_KERNEL) from interrupt context will trigger this check if non-irq code on this CPU holds a KM_USER0 mapping. But that get_zeroed_page() will never be altering the kmap slot anyway due to the GFP_KERNEL. Cc: Christoph Lameter <clameter@sgi.com> Cc: Ingo Molnar <mingo@elte.hu> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* hugepage: fix broken check for offset alignment in hugepage mappings | David Gibson | 2007-08-31 | 1 | -1/+1
    For hugepage mappings, the file offset, like the address and size, needs
    to be aligned to the size of a hugepage.

    In commit 68589bc353037f233fe510ad9ff432338c95db66, the check for this
    was moved into prepare_hugepage_range() along with the address and size
    checks.  But since BenH's rework of the get_unmapped_area() paths
    leading up to commit 4b1d89290b62bb2db476c94c82cf7442aab440c8,
    prepare_hugepage_range() is only called for MAP_FIXED mappings, not for
    other mappings.  This means we're no longer ever checking for an aligned
    offset - I've confirmed that mmap() will (apparently) succeed with a
    misaligned offset on both powerpc and i386 at least.

    This patch restores the check, removing it from prepare_hugepage_range()
    and putting it back into hugetlbfs_file_mmap().  I'm putting it there,
    rather than in the get_unmapped_area() path, so it only needs to go in
    one place, rather than separately in the half-dozen or so arch-specific
    implementations of hugetlb_get_unmapped_area().

    Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
    Cc: Adam Litke <agl@us.ibm.com>
    Cc: Andi Kleen <ak@suse.de>
    Cc: "David S. Miller" <davem@davemloft.net>
    Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
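
The restored check is tiny. A minimal sketch of the offset test described above, assuming the 2007-era HPAGE_MASK constant and that vm_pgoff is counted in PAGE_SIZE units:

        /* Sketch only: reject a file offset that is not hugepage-aligned.
         * ~HPAGE_MASK >> PAGE_SHIFT selects the offset bits below the
         * hugepage size; any of them set means the offset is misaligned. */
        static int hugetlb_offset_aligned(struct vm_area_struct *vma)
        {
                if (vma->vm_pgoff & (~HPAGE_MASK >> PAGE_SHIFT))
                        return -EINVAL;
                return 0;
        }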
* Fix lazy mode vmalloc synchronization for paravirt | Zachary Amsden | 2007-08-22 | 1 | -2/+3
    Touching vmalloc memory in the middle of a lazy mode update can generate
    a kernel PDE update, which must be flushed immediately.  The fix is to
    leave lazy mode when doing a vmalloc sync.

    Signed-off-by: Zachary Amsden <zach@vmware.com>
    Acked-by: Jeremy Fitzhardinge <jeremy@goop.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* x86: Disable CLFLUSH support again | Andi Kleen | 2007-08-12 | 1 | -1/+1
    It turns out CLFLUSH support is still not complete; we flush the wrong
    pages.  Again disable it for the release.

    Noticed by Jan Beulich, who then also noticed a stupid typo later.

    Signed-off-by: Andi Kleen <ak@suse.de>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* Replace CONFIG_SOFTWARE_SUSPEND with CONFIG_HIBERNATION | Rafael J. Wysocki | 2007-07-30 | 1 | -1/+1
    Replace CONFIG_SOFTWARE_SUSPEND with CONFIG_HIBERNATION to avoid
    confusion (among other things, with CONFIG_SUSPEND introduced in the
    next patch).

    Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* Revert most of "x86: Fix alternatives and kprobes to remap write-protected kernel text" | Linus Torvalds | 2007-07-26 | 1 | -3/+11
    This reverts most of commit 19d36ccdc34f5ed444f8a6af0cbfdb6790eb1177.

    The way to handle DEBUG_RODATA interactions with KPROBES and CPU hotplug
    is to just not mark the text as being write-protected in the first
    place.  Both of those facilities depend on rewriting instructions.

    Having "helpful" debug facilities that just cause more problems is not
    being helpful.  It just adds complexity and bugs.  Not worth it.

    Reported-by: Rafael J. Wysocki <rjw@sisk.pl>
    Cc: Andi Kleen <ak@suse.de>
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* ACPI: Kconfig: remove CONFIG_ACPI_SLEEP from source | Len Brown | 2007-07-25 | 1 | -1/+1
    As it was a synonym for (CONFIG_ACPI && CONFIG_X86), the ifdefs for it
    were more clutter than they were worth.

    For ia64, just add a few stubs in anticipation of future S3 or S4
    support.

    Signed-off-by: Len Brown <len.brown@intel.com>
* x86: Fix alternatives and kprobes to remap write-protected kernel text | Andi Kleen | 2007-07-22 | 1 | -11/+3
    Reenable kprobes and alternative patching when the kernel text is write
    protected by DEBUG_RODATA.

    Add a general utility function to change write protected text.  The new
    function remaps the code using vmap to write it and takes care of CPU
    synchronization.  It also does CLFLUSH to make icache recovery faster.

    There are some limitations on when the function can be used; see the
    comment.

    This is a newer version that also changes the paravirt_ops code.
    text_poke also supports multi byte patching now.

    Contains bug fixes from Zach Amsden and suggestions from Mathieu
    Desnoyers.

    Cc: Jan Beulich <jbeulich@novell.com>
    Cc: Jeremy Fitzhardinge <jeremy@goop.org>
    Cc: Mathieu Desnoyers <compudj@krystal.dyndns.org>
    Cc: Zach Amsden <zach@vmware.com>
    Signed-off-by: Andi Kleen <ak@suse.de>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
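
A minimal sketch of the vmap-based patching technique this entry describes, not the kernel's actual text_poke(); the helper names (clflush_cache_range, sync_core) are assumed from the x86 tree of roughly that era, and error handling is omitted:

        /* Sketch: create a temporary writable alias of the (read-only)
         * text pages, write through the alias, then synchronize. */
        static void poke_text_sketch(void *addr, const void *opcode, size_t len)
        {
                struct page *pages[2];
                void *vaddr;

                pages[0] = virt_to_page(addr);
                pages[1] = virt_to_page(addr + PAGE_SIZE);

                /* Map the two text pages at a fresh, writable address. */
                vaddr = vmap(pages, 2, VM_MAP, PAGE_KERNEL);
                if (!vaddr)
                        return;

                memcpy(vaddr + offset_in_page(addr), opcode, len);
                clflush_cache_range(vaddr + offset_in_page(addr), len);
                vunmap(vaddr);
                sync_core();    /* serialize before executing patched code */
        }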
* x86: i386-show-unhandled-signals-v3 | Masoud Asgharifard Sharbiani | 2007-07-22 | 1 | -0/+10
    This patch makes the i386 behave the same way that x86_64 does when a
    segfault happens: a line gets printed to the kernel log, so that tools
    that need to check for failures can behave more uniformly between the
    two architectures.  The output can be disabled by setting the
    debug.show_unhandled_signals sysctl variable to 0 (or by doing
    echo 0 > /proc/sys/debug/exception-trace).

    Also, all of the lines being printed are now using printk_ratelimit()
    to deny the ability of DoS from a local user with a program like the
    following:

        main()
        {
                while (1)
                        if (!fork())
                                *(int *)0 = 0;
        }

    This new revision also includes the fix that Andrew did which got rid
    of the new sysctl that was added to the system in earlier versions of
    this.  Also, the 'show-unhandled-signals' sysctl has been renamed back
    to the old 'exception-trace' to avoid breakage of people's scripts.

    AK: Enabling by default for i386 will be likely controversial, but
    let's see what happens
    AK: Really folks, before complaining just fix your segfaults
    AK: I bet this will find a lot of silent issues

    Signed-off-by: Masoud Sharbiani <masouds@google.com>
    Signed-off-by: Andi Kleen <ak@suse.de>

    [ Personally, I've found the complaints useful on x86-64, so I'm all
      for this.  That said, I wonder if we could do it more prettily..
      - Linus ]

    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
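
A hedged sketch of the rate-limited report described above; the message format and field names follow the x86_64 version of the time but are recalled, not quoted from the patch:

        /* Sketch: log at most a few segfault lines per interval, so a
         * crashing fork loop cannot flood the kernel log. */
        if (show_unhandled_signals && unhandled_signal(tsk, SIGSEGV) &&
            printk_ratelimit())
                printk(KERN_INFO
                       "%s[%d]: segfault at %08lx eip %08lx esp %08lx error %lx\n",
                       tsk->comm, tsk->pid, address,
                       regs->eip, regs->esp, error_code);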
* i386: fix iounmap's use of vm_struct's size field | Jeremy Fitzhardinge | 2007-07-22 | 1 | -1/+1
    get_vm_area always returns an area with an adjacent guard page.  That
    guard page is included in vm_struct.size.  iounmap uses vm_struct.size
    to determine how much address space needs to have change_page_attr
    applied to it, which will BUG if applied to the guard page.

    This patch adds a helper function - get_vm_area_size() in
    linux/vmalloc.h - to return the actual size of a vm area, and uses it
    to make iounmap do the right thing.  There are probably other places
    which should be using get_vm_area_size().

    Thanks to Dave Young <hidave.darkstar@gmail.com> for debugging the
    problem.

    [ Andi, it wasn't clear to me whether x86_64 needs the same fix. ]

    Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
    Cc: Dave Young <hidave.darkstar@gmail.com>
    Cc: Chuck Ebbert <cebbert@redhat.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Andi Kleen <ak@suse.de>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
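
The helper itself is a one-liner; a sketch consistent with the description (exactly one guard page counted in vm_struct.size):

        /* Sketch of get_vm_area_size(): vm_struct.size includes the
         * guard page appended by get_vm_area(), so subtract it to get
         * the usable size of the mapping. */
        static inline size_t get_vm_area_size(const struct vm_struct *area)
        {
                return area->size - PAGE_SIZE;
        }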
* i386: pgd_{c,d}tor() static | Adrian Bunk | 2007-07-22 | 1 | -3/+3
    pgd_{c,d}tor() can now become static.

    Signed-off-by: Adrian Bunk <bunk@stusta.de>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Andi Kleen <ak@suse.de>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* i386: minor nx handling adjustment | Jan Beulich | 2007-07-22 | 1 | -3/+4
    Constrain __supported_pte_mask and NX handling to just the PAE kernel.

    Signed-off-by: Jan Beulich <jbeulich@novell.com>
    Signed-off-by: Andi Kleen <ak@suse.de>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* x86: Always flush pages in change_page_attr | Andi Kleen | 2007-07-22 | 1 | -3/+17
    Fix a bug introduced with the CLFLUSH changes: we must always flush
    pages changed in cpa(), not just when they are reverted.

    Reenable CLFLUSH usage with that now (it was temporarily disabled
    for .22).

    Add some BUG_ONs.

    Contains fixes from Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>

    Signed-off-by: Andi Kleen <ak@suse.de>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* mm: Remove slab destructors from kmem_cache_create(). | Paul Mundt | 2007-07-20 | 1 | -2/+1
    Slab destructors were no longer supported after Christoph's
    c59def9f222d44bb7e2f0a559f2906191a0862d7 change.  They've been BUGs for
    both slab and slub, and slob never supported them either.

    This rips out support for the dtor pointer from kmem_cache_create()
    completely and fixes up every single callsite in the kernel (there were
    about 224, not including the slab allocator definitions themselves, or
    the documentation references).

    Signed-off-by: Paul Mundt <lethal@linux-sh.org>
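
A before/after sketch of the callsite fixup this describes; the cache name, size and ctor are placeholders, and the six-argument form matches the pre-patch 2007 signature:

        /* Before: kmem_cache_create() still accepted a destructor
         * pointer, which by this point had to be NULL anyway. */
        cache = kmem_cache_create("example", size, 0, SLAB_HWCACHE_ALIGN,
                                  example_ctor, NULL /* dtor */);

        /* After this commit: the dtor parameter is gone entirely. */
        cache = kmem_cache_create("example", size, 0, SLAB_HWCACHE_ALIGN,
                                  example_ctor);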
* mm: fault feedback #2 | Nick Piggin | 2007-07-19 | 1 | -12/+11
    This patch completes Linus's wish that the fault return codes be made
    into bit flags, which I agree makes everything nicer.  This requires
    all handle_mm_fault callers to be modified (possibly the modifications
    should go further and do things like fault accounting in
    handle_mm_fault -- however that would be for another patch).

    [akpm@linux-foundation.org: fix alpha build]
    [akpm@linux-foundation.org: fix s390 build]
    [akpm@linux-foundation.org: fix sparc build]
    [akpm@linux-foundation.org: fix sparc64 build]
    [akpm@linux-foundation.org: fix ia64 build]
    Signed-off-by: Nick Piggin <npiggin@suse.de>
    Cc: Richard Henderson <rth@twiddle.net>
    Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
    Cc: Russell King <rmk@arm.linux.org.uk>
    Cc: Ian Molton <spyro@f2s.com>
    Cc: Bryan Wu <bryan.wu@analog.com>
    Cc: Mikael Starvik <starvik@axis.com>
    Cc: David Howells <dhowells@redhat.com>
    Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
    Cc: "Luck, Tony" <tony.luck@intel.com>
    Cc: Hirokazu Takata <takata@linux-m32r.org>
    Cc: Geert Uytterhoeven <geert@linux-m68k.org>
    Cc: Roman Zippel <zippel@linux-m68k.org>
    Cc: Greg Ungerer <gerg@uclinux.org>
    Cc: Matthew Wilcox <willy@debian.org>
    Cc: Paul Mackerras <paulus@samba.org>
    Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
    Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
    Cc: Paul Mundt <lethal@linux-sh.org>
    Cc: Kazumoto Kojima <kkojima@rr.iij4u.or.jp>
    Cc: Richard Curnow <rc@rc0.org.uk>
    Cc: William Lee Irwin III <wli@holomorphy.com>
    Cc: "David S. Miller" <davem@davemloft.net>
    Cc: Jeff Dike <jdike@addtoit.com>
    Cc: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it>
    Cc: Miles Bader <uclinux-v850@lsi.nec.co.jp>
    Cc: Chris Zankel <chris@zankel.net>
    Acked-by: Kyle McMartin <kyle@mcmartin.ca>
    Acked-by: Haavard Skinnemoen <hskinnemoen@atmel.com>
    Acked-by: Ralf Baechle <ralf@linux-mips.org>
    Acked-by: Andi Kleen <ak@muc.de>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

    [ Still apparently needs some ARM and PPC loving - Linus ]

    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
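
To make the bit-flag scheme concrete, a sketch of a converted handle_mm_fault() caller; the VM_FAULT_* names follow the convention this commit introduces, and the out_of_memory/do_sigbus labels stand in for the arch fault handler's existing error paths:

        fault = handle_mm_fault(mm, vma, address, write);
        if (unlikely(fault & VM_FAULT_ERROR)) {
                if (fault & VM_FAULT_OOM)
                        goto out_of_memory;     /* error paths elsewhere */
                else if (fault & VM_FAULT_SIGBUS)
                        goto do_sigbus;
                BUG();
        }
        /* Bit flags let the same return value carry extra feedback: */
        if (fault & VM_FAULT_MAJOR)
                tsk->maj_flt++;         /* I/O was needed */
        else
                tsk->min_flt++;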
* paravirt: export __supported_pte_mask | Jeremy Fitzhardinge | 2007-07-18 | 1 | -0/+1
    __supported_pte_mask is needed when constructing pte values.  Xen
    device drivers need to do this to make mappings of foreign pages (ie,
    pages granted to us by other domains).

    Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
* paravirt: add an "mm" argument to alloc_pt | Jeremy Fitzhardinge | 2007-07-18 | 2 | -2/+2
    It's useful to know which mm is allocating a pagetable.  Xen uses this
    to determine whether the pagetable being added to is pinned or not.

    Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
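
A sketch of the interface change, reduced to the one hook of interest; the exact paravirt_ops member layout is assumed from context, not quoted:

        /* Before: void (*alloc_pt)(u32 pfn);
         * After: the mm is passed along, so a backend such as Xen can
         * check whether the pagetable receiving the page is pinned. */
        void (*alloc_pt)(struct mm_struct *mm, u32 pfn);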
* Allow DEBUG_RODATA and KPROBES to co-exist | Arjan van de Ven | 2007-06-22 | 1 | -1/+2
    Do not mark the kernel text read-only if KPROBES is in the kernel;
    kprobes needs to hot-patch the kernel text to insert its
    instrumentation.  In this case, only mark the .rodata segment as read
    only.

    Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
    Tested-by: S. P. Prasanna <prasanna@in.ibm.com>
    Cc: Andi Kleen <ak@suse.de>
    Cc: William Cohen <wcohen@redhat.com>
    Cc: Ian McDonald <ian.mcdonald@jandi.co.nz>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* x86: change_page_attr bandaids | Andi Kleen | 2007-06-20 | 1 | -12/+18
    - Disable CLFLUSH again; it is still broken.  Always do WBINVD.
    - Always flush in the i386 case, not only when there are deferred
      pages.

    These are both brute-force inefficient fixes, to be improved next
    release cycle.

    The changes to i386 are a little more extensive than strictly needed
    (some dead code added), but it is more similar to the x86-64 version
    now and the dead code will be used soon.

    Signed-off-by: Andi Kleen <ak@suse.de>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* enable interrupts in user path of page fault. | Steven Rostedt | 2007-06-08 | 1 | -0/+5
    This is a minor fix, but what is currently there is essentially wrong.

    In do_page_fault, if the faulting address from user code happens to be
    in kernel address space (int *p = (int *)-1; *p = 0xbed;) then the
    do_page_fault handler will jump over the local_irq_enable with the

        goto bad_area_nosemaphore;

    But the first line there sees this is user code and goes through the
    process of sending SIGSEGV to the user task.  This whole time
    interrupts are disabled and the task can not be preempted by a higher
    priority task.

    This patch always enables interrupts in the user path of
    bad_area_nosemaphore.

    Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
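
A sketch of the fixed user path, assuming the 2007-era i386 handler layout where bit 2 of error_code (value 4) means the fault came from user mode:

        /* bad_area_nosemaphore, user-mode case (sketch): */
        if (error_code & 4) {
                /* Safe to enable interrupts now that CR2 has been read;
                 * signal delivery may take a while and the task should
                 * stay preemptible during it. */
                local_irq_enable();

                tsk->thread.cr2 = address;
                force_sig(SIGSEGV, tsk);
                return;
        }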
* Detach sched.h from mm.h | Alexey Dobriyan | 2007-05-21 | 1 | -0/+1
    First thing mm.h does is including sched.h solely for can_do_mlock()
    inline function which has "current" dereference inside.  By dealing
    with can_do_mlock() mm.h can be detached from sched.h which is good.
    See below, why.

    This patch
    a) removes unconditional inclusion of sched.h from mm.h
    b) makes can_do_mlock() normal function in mm/mlock.c
    c) exports can_do_mlock() to not break compilation
    d) adds sched.h inclusions back to files that were getting it
       indirectly.
    e) adds less bloated headers to some files (asm/signal.h, jiffies.h)
       that were getting them indirectly

    Net result is:
    a) mm.h users would get less code to open, read, preprocess, parse, ...
       if they don't need sched.h
    b) sched.h stops being dependency for significant number of files:
       on x86_64 allmodconfig touching sched.h results in recompile of
       4083 files, after patch it's only 3744 (-8.3%).

    Cross-compile tested on

        all arm defconfigs, all mips defconfigs, all powerpc defconfigs,
        alpha alpha-up arm i386 i386-up i386-defconfig i386-allnoconfig
        ia64 ia64-up m68k mips parisc parisc-up powerpc powerpc-up s390
        s390-up sparc sparc-up sparc64 sparc64-up um-x86_64 x86_64
        x86_64-up x86_64-defconfig x86_64-allnoconfig

    as well as my two usual configs.

    Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
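
Item (b) above moves a small predicate out of line; a sketch of the resulting mm/mlock.c function, recalled from that era rather than quoted:

        int can_do_mlock(void)
        {
                if (capable(CAP_IPC_LOCK))
                        return 1;
                if (current->signal->rlim[RLIMIT_MEMLOCK].rlim_cur != 0)
                        return 1;
                return 0;
        }
        EXPORT_SYMBOL(can_do_mlock);    /* (c): keep modular users building */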
* x86: Fix discontigmem + non-HIGHMEM compile | Linus Torvalds | 2007-05-16 | 1 | -6/+3
    It's not necessarily a very sane configuration, but people running
    "make randconfig" noticed it wouldn't compile.  This fixes some obvious
    problems in discontig.c to allow a clean compile.

    Acked-by: andrew hendry <andrew.hendry@gmail.com>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* SLUB: i386 support | Christoph Lameter | 2007-05-12 | 2 | -16/+17
    SLUB cannot run on i386 at this point because i386 uses the
    page->private and page->index field of slab pages for the pgd cache.

    Make SLUB run on i386 by replacing the pgd slab cache with a quicklist.
    Limit the changes as much as possible.  Leave the improvised linked
    list in place etc etc.

    This has been working here for a couple of weeks now.

    Acked-by: William Lee Irwin III <wli@holomorphy.com>
    Signed-off-by: Christoph Lameter <clameter@sgi.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
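
For context, a usage sketch of the quicklist API that replaces the pgd slab cache here; the list index and the ctor/dtor names are placeholders, and the signatures are recalled from include/linux/quicklist.h of that era:

        #include <linux/quicklist.h>

        #define QUICK_PGD 0     /* quicklist index used for pgds (assumption) */

        pgd_t *pgd_alloc_sketch(struct mm_struct *mm)
        {
                /* Pages on the per-CPU quicklist keep their constructed
                 * state, so a freed pgd can be reused without rebuilding
                 * its kernel mappings. */
                return quicklist_alloc(QUICK_PGD, GFP_KERNEL, pgd_ctor);
        }

        void pgd_free_sketch(pgd_t *pgd)
        {
                quicklist_free(QUICK_PGD, pgd_dtor, pgd);
        }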
* header cleaning: don't include smp_lock.h when not used | Randy Dunlap | 2007-05-08 | 2 | -2/+0
    Remove includes of <linux/smp_lock.h> where it is not used/needed.
    Suggested by Al Viro.

    Builds cleanly on x86_64, i386, alpha, ia64, powerpc, sparc, sparc64,
    and arm (all 59 defconfigs).

    Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* move die notifier handling to common code | Christoph Hellwig | 2007-05-08 | 1 | -1/+2
    This patch moves the die notifier handling to common code.  Previously
    various architectures had exactly the same code for it.  Note that the
    new code is compiled unconditionally; this should be understood as an
    appeal to the other architecture maintainers to implement support for
    it as well (aka sprinkling a notify_die or two in the proper place).

    arm had a notify_die that did something totally different; I renamed it
    to arm_notify_die as part of the patch and made it static to the file
    it's declared and used in.

    avr32 used to pass slightly less information through this interface and
    I brought it into line with the other architectures.

    [akpm@linux-foundation.org: build fix]
    [akpm@linux-foundation.org: fix vmalloc_sync_all bustage]
    [bryan.wu@analog.com: fix vmalloc_sync_all in nommu]
    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Cc: <linux-arch@vger.kernel.org>
    Cc: Russell King <rmk@arm.linux.org.uk>
    Signed-off-by: Bryan Wu <bryan.wu@analog.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* use SLAB_PANIC flag cleanup | Akinobu Mita | 2007-05-08 | 1 | -7/+2
    Use SLAB_PANIC and delete duplicated panic().

    Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
    Cc: Ian Molton <spyro@f2s.com>
    Cc: David Howells <dhowells@redhat.com>
    Cc: Andi Kleen <ak@suse.de>
    Cc: Paul Mackerras <paulus@samba.org>
    Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Cc: Paul Mundt <lethal@linux-sh.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
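
A before/after sketch of the cleanup; names and sizes are placeholders, and the ctor/dtor arguments match the six-argument kmem_cache_create() of this date:

        /* Before: failure handling duplicated at the callsite. */
        pgd_cache = kmem_cache_create("pgd", PTRS_PER_PGD * sizeof(pgd_t),
                                      0, 0, pgd_ctor, NULL);
        if (!pgd_cache)
                panic("pgtable_cache_init(): cannot create pgd cache");

        /* After: SLAB_PANIC makes the allocator panic on failure itself,
         * so the explicit check and panic() disappear. */
        pgd_cache = kmem_cache_create("pgd", PTRS_PER_PGD * sizeof(pgd_t),
                                      0, SLAB_PANIC, pgd_ctor, NULL);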
* get_unmapped_area handles MAP_FIXED on i386 | Benjamin Herrenschmidt | 2007-05-07 | 1 | -0/+6
    Handle MAP_FIXED in i386 hugetlb_get_unmapped_area(), just call
    prepare_hugepage_range.

    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Acked-by: William Irwin <bill.irwin@oracle.com>
    Cc: Andi Kleen <ak@suse.de>
    Cc: Adam Litke <agl@us.ibm.com>
    Cc: David Gibson <david@gibson.dropbear.id.au>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* Revert "[PATCH] x86: __pa and __pa_symbol address space separation"Linus Torvalds2007-05-071-8/+7
| | | | | | | | | | | | | | | | | | | | | | | This was broken. It adds complexity, for no good reason. Rather than separate __pa() and __pa_symbol(), we should deprecate __pa_symbol(), and preferably __pa() too - and just use "virt_to_phys()" instead, which is more readable and has nicer semantics. However, right now, just undo the separation, and make __pa_symbol() be the exact same as __pa(). That fixes the bugs this patch introduced, and we can do the fairly obvious cleanups later. Do the new __phys_addr() function (which is now the actual workhorse for the unified __pa()/__pa_symbol()) as a real external function, that way all the potential issues with compile/link-time optimizations of constant symbol addresses go away, and we can also, if we choose to, add more sanity-checking of the argument. Cc: Eric W. Biederman <ebiederm@xmission.com> Cc: Vivek Goyal <vgoyal@in.ibm.com> Cc: Andi Kleen <ak@suse.de> Cc: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* [PATCH] i386: PARAVIRT: flush lazy mmu updates on kunmap_atomic | Jeremy Fitzhardinge | 2007-05-02 | 1 | -0/+1
    kunmap_atomic should flush any pending lazy mmu updates, mainly to be
    consistent with kmap_atomic, and to preserve its normal behaviour.

    Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
    Signed-off-by: Andi Kleen <ak@suse.de>
* [PATCH] i386: PARAVIRT: add kmap_atomic_pte for mapping highpte pages | Jeremy Fitzhardinge | 2007-05-02 | 1 | -2/+7
    Xen and VMI both have special requirements when mapping a highmem pte
    page into the kernel address space.  These can be dealt with by adding
    a new kmap_atomic_pte() function for mapping highptes, and hooking it
    into the paravirt_ops infrastructure.

    Xen specifically wants to map the pte page RO, so this patch exposes a
    helper function, kmap_atomic_prot, which maps the page with the
    specified page protections.

    This also adds a kmap_flush_unused() function to clear out the cached
    kmap mappings.  Xen needs this to clear out any potential stray RW
    mappings of pages which will become part of a pagetable.

    [ Zach - vmi.c will need some attention after this patch.  It wasn't
      immediately obvious to me what needs to be done. ]

    Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
    Signed-off-by: Andi Kleen <ak@suse.de>
    Cc: Zachary Amsden <zach@vmware.com>
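
A sketch of how the pieces fit together; the km_type slot argument matches the 2007-era i386 highmem API, and PAGE_KERNEL_RO as the Xen protection is an assumption from the description above:

        /* Sketch: kmap_atomic_pte maps a (possibly highmem) pte page.
         * A backend like Xen supplies a read-only protection so guest
         * pagetables are never writable through the mapping. */
        void *kmap_atomic_pte_sketch(struct page *page, enum km_type type)
        {
                return kmap_atomic_prot(page, type, PAGE_KERNEL_RO);
        }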
* [PATCH] i386: PARAVIRT: Allow paravirt backend to choose kernel PMD sharing | Jeremy Fitzhardinge | 2007-05-02 | 4 | -24/+89
    Normally when running in PAE mode, the 4th PMD maps the kernel address
    space, which can be shared among all processes (since they all need the
    same kernel mappings).

    Xen, however, does not allow guests to have the kernel pmd shared
    between page tables, so parameterize pgtable.c to allow both modes of
    operation.

    There are several side-effects of this.  One is that vmalloc will
    update the kernel address space mappings, and those updates need to be
    propagated into all processes if the kernel mappings are not
    intrinsically shared.  In the non-PAE case, this is done by maintaining
    a pgd_list of all processes; this list is used when all process
    pagetables must be updated.  pgd_list is threaded via otherwise unused
    entries in the page structure for the pgd, which means that the pgd
    must be page-sized for this to work.

    Normally the PAE pgd is only 4x64 byte entries large, but Xen requires
    the PAE pgd to be page aligned anyway, so this patch forces the pgd to
    be page aligned+sized when the kernel pmd is unshared, to accommodate
    both these requirements.

    Also, since there may be several distinct kernel pmds (if the
    user/kernel split is below 3G), there's no point in allocating them
    from a slab cache; they're just allocated with get_free_page and
    initialized appropriately.  (Of course they could be cached if there is
    just a single kernel pmd - which is the default with a 3G user/kernel
    split - but it doesn't seem worthwhile to add yet another case into
    this code.)

    [ Many thanks to wli for review comments. ]

    Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
    Signed-off-by: William Lee Irwin III <wli@holomorphy.com>
    Signed-off-by: Andi Kleen <ak@suse.de>
    Cc: Zachary Amsden <zach@vmware.com>
    Cc: Christoph Lameter <clameter@sgi.com>
    Acked-by: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
* [PATCH] i386: PARAVIRT: Hooks to set up initial pagetable | Jeremy Fitzhardinge | 2007-05-02 | 1 | -47/+91
    This patch introduces paravirt_ops hooks to control how the kernel's
    initial pagetable is set up.

    In the case of a native boot, the very early bootstrap code creates a
    simple non-PAE pagetable to map the kernel and physical memory.  When
    the VM subsystem is initialized, it creates a proper pagetable which
    respects the PAE mode, large pages, etc.

    When booting under a hypervisor, there are many possibilities for what
    paging environment the hypervisor establishes for the guest kernel, so
    the construction of the kernel's pagetable depends on the hypervisor.

    In the case of Xen, the hypervisor boots the kernel with a fully
    constructed pagetable, which is already using PAE if necessary.  Also,
    Xen requires particular care when constructing pagetables to make sure
    all pagetables are always mapped read-only.

    In order to make this easier, the kernel's initial pagetable
    construction has been changed to only allocate and initialize a
    pagetable page if there's no page already present in the pagetable.
    This allows the Xen paravirt backend to make a copy of the
    hypervisor-provided pagetable, allowing the kernel to establish any
    more mappings it needs while keeping the existing ones.

    A slightly subtle point which is worth highlighting here is that Xen
    requires all kernel mappings to share the same pte_t pages between all
    pagetables, so that updating a kernel page's mapping in one pagetable
    is reflected in all other pagetables.  This makes it possible to
    allocate a page and attach it to a pagetable without having to
    explicitly enumerate that page's mapping in all pagetables.

    And:

    +From: "Eric W. Biederman" <ebiederm@xmission.com>

    If we don't set the leaf page table entries, it is quite possible that
    we will inherit an incorrect page table entry from the initial boot
    page table setup in head.S.  So we need to redo the effort here, so we
    pick up PSE, PGE and the like.

    Hypervisors like Xen require that their page tables be read-only, which
    is slightly incompatible with our low identity mappings; however, I
    discussed this with Jeremy and he has modified the Xen early set_pte
    function to avoid problems in this area.

    Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
    Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
    Signed-off-by: Andi Kleen <ak@suse.de>
    Acked-by: William Irwin <bill.irwin@oracle.com>
    Cc: Ingo Molnar <mingo@elte.hu>
* [PATCH] i386: Relocate VDSO ELF headers to match mapped location with COMPAT_VDSO | Jeremy Fitzhardinge | 2007-05-02 | 1 | -6/+0
    Some versions of libc can't deal with a VDSO which doesn't have its ELF
    headers matching its mapped address.  COMPAT_VDSO maps the VDSO at a
    specific system-wide fixed address.  Previously this was all done at
    build time, on the grounds that the fixed VDSO address is always at the
    top of the address space.  However, a hypervisor may reserve some of
    that address space, pushing the fixmap address down.

    This patch does the adjustment dynamically at runtime, depending on the
    runtime location of the VDSO fixmap.

    [ Patch has been through several hands: Jan Beulich wrote the original
      version; Zach reworked it, and Jeremy converted it to relocate phdrs
      as well as sections. ]

    Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
    Signed-off-by: Andi Kleen <ak@suse.de>
    Cc: Zachary Amsden <zach@vmware.com>
    Cc: "Jan Beulich" <JBeulich@novell.com>
    Cc: Eric W. Biederman <ebiederm@xmission.com>
    Cc: Andi Kleen <ak@suse.de>
    Cc: Ingo Molnar <mingo@elte.hu>
    Cc: Roland McGrath <roland@redhat.com>
* [PATCH] x86: tighten kernel image page access rights | Jan Beulich | 2007-05-02 | 1 | -6/+19
    On x86-64, kernel memory freed after init can be entirely unmapped
    instead of just getting 'poisoned' by overwriting with a debug pattern.

    On i386 and x86-64 (under CONFIG_DEBUG_RODATA), kernel text and bug
    table can also be write-protected.

    Compared to the first version, this one prevents re-creating deleted
    mappings in the kernel image range on x86-64, if those got removed
    previously.  This, together with the original changes, prevents
    temporarily having inconsistent mappings when cacheability attributes
    are being changed on such pages (e.g. from AGP code).  While on i386
    such duplicate mappings don't exist, the same change is done there,
    too, both for consistency and because checking pte_present() before
    using various other pte_XXX functions is a requirement anyway.  At
    once, i386 code gets adjusted to use pte_huge() instead of open coding
    this.

    AK: split out cpa() changes

    Signed-off-by: Jan Beulich <jbeulich@novell.com>
    Signed-off-by: Andi Kleen <ak@suse.de>
* [PATCH] x86: Improve handling of kernel mappings in change_page_attr | Jan Beulich | 2007-05-02 | 1 | -2/+2
    Fix various broken corner cases in i386 and x86-64 change_page_attr.

    AK: split off from tighten kernel image access rights

    Signed-off-by: Jan Beulich <jbeulich@novell.com>
    Signed-off-by: Andi Kleen <ak@suse.de>
* [PATCH] x86: __pa and __pa_symbol address space separation | Vivek Goyal | 2007-05-02 | 1 | -7/+8
    Currently __pa_symbol is for use with symbols in the kernel address map
    and __pa is for use with pointers into the physical memory map.  But
    the code is implemented so you can usually interchange the two.

    __pa, which is much more common, can be implemented much more cheaply
    if it doesn't have to worry about any other kernel address spaces.
    This is especially true with a relocatable kernel, as __pa_symbol needs
    to perform an extra variable read to resolve the address.

    There is a third macro that is added for the vsyscall data,
    __pa_vsymbol, for finding the physical addresses of vsyscall pages.

    Most of this patch is simply sorting through the references to __pa or
    __pa_symbol and using the proper one.  A little of it is continuing to
    use a physical address when we have it instead of recalculating it
    several times.

    swapper_pgd is now NULL.  leave_mm now uses init_mm.pgd and init_mm.pgd
    is initialized at boot (instead of compile time) to the physmem virtual
    mapping of init_level4_pgd.  The physical address changed.

    Except for the EMPTY_ZERO page, all of the remaining references to
    __pa_symbol appear to be during kernel initialization.  So this should
    reduce the cost of __pa in the common case, even on a relocated kernel.

    As this is technically a semantic change we need to be on the lookout
    for anything I missed.  But it works for me (tm).

    Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
    Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com>
    Signed-off-by: Andi Kleen <ak@suse.de>
* [PATCH] i386: adjustments to page table dump during oops (v4) | Jan Beulich | 2007-05-02 | 1 | -20/+35
    - make the page table contents printing PAE capable
    - make sure the address stored in current->thread.cr2 is unmodified
      from what was read from CR2
    - don't call oops_may_print() multiple times, when one time suffices
    - print pte even in highpte case, as long as the pte page isn't
      actually in high memory (which is specifically the case for all page
      tables covering kernel space)

    (Changes to v3: Use sizeof()*2 rather than the suggested sizeof()*4 for
    printing width, use fixed 16-nibble width for PAE, and also apply the
    max_low_pfn range check to the middle level lookup on PAE.)

    Signed-off-by: Jan Beulich <jbeulich@novell.com>
    Signed-off-by: Andi Kleen <ak@suse.de>
* [PATCH] Proper fix for highmem kmap_atomic functions for VMI for 2.6.21 | Zachary Amsden | 2007-04-09 | 1 | -0/+2
    Since lazy MMU batching mode still allows interrupts to enter, it is
    possible for interrupt handlers to try to use kmap_atomic, which fails
    when lazy mode is active, since the PTE update to highmem will be
    delayed.  The best workaround is to issue an explicit flush in
    kmap_atomic_functions case; this is the only way nested PTE updates can
    happen in the interrupt handler.

    Thanks to Jeremy Fitzhardinge for noting the bug and suggestions on a
    fix.

    This patch gets reverted again when we start 2.6.22 and the bug gets
    fixed differently.

    Signed-off-by: Zachary Amsden <zach@vmware.com>
    Cc: Andi Kleen <ak@muc.de>
    Cc: Jeremy Fitzhardinge <jeremy@goop.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>