author     Zachary Amsden <zach@vmware.com>                       2007-04-09 01:04:01 +0200
committer  Linus Torvalds <torvalds@woody.linux-foundation.org>   2007-04-09 04:47:55 +0200
commit     49f19710512c825aaea73b9207b3a848027cda1d (patch)
tree       06da31bd9a84273e12aa43f536f90eb8146ff92e /include/asm-generic/pgtable.h
parent     [PATCH] fuse: validate rootmode mount option (diff)
[PATCH] Proper fix for highmem kmap_atomic functions for VMI for 2.6.21
Since lazy MMU batching mode still allows interrupts to enter, it is
possible for an interrupt handler to try to use kmap_atomic, which fails
while lazy mode is active because the PTE update for the highmem mapping
is delayed. The best workaround is to issue an explicit flush in the
kmap_atomic functions themselves, as they are the only way nested PTE
updates can happen in an interrupt handler.
Thanks to Jeremy Fitzhardinge for noting the bug and suggesting a fix.
This patch will be reverted again once the 2.6.22 cycle opens and the bug
is fixed differently.
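
The workaround described above amounts to flushing the pending lazy-MMU batch from inside the atomic kmap path itself. The following is a minimal sketch of that idea, not the actual arch/i386/mm/highmem.c change from this series; helpers such as kmap_pte, kmap_prot and FIX_KMAP_BEGIN are assumed from the i386 highmem code of that era.

/*
 * Sketch only: map a highmem page into a per-CPU fixmap slot and
 * immediately flush any batched PTE updates, so the mapping is
 * usable even when called from an interrupt while lazy MMU mode
 * is active.
 */
void *kmap_atomic_sketch(struct page *page, enum km_type type)
{
	enum fixed_addresses idx = type + KM_TYPE_NR * smp_processor_id();
	unsigned long vaddr;

	pagefault_disable();
	if (!PageHighMem(page))
		return page_address(page);

	vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
	set_pte(kmap_pte - idx, mk_pte(page, kmap_prot));
	/* Force the batched PTE update out now - the point of this fix. */
	arch_flush_lazy_mmu_mode();

	return (void *)vaddr;
}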
Signed-off-by: Zachary Amsden <zach@vmware.com>
Cc: Andi Kleen <ak@muc.de>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'include/asm-generic/pgtable.h')
-rw-r--r--   include/asm-generic/pgtable.h   2
1 files changed, 2 insertions, 0 deletions
diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
index 00c23433b39f..6d7e279b1490 100644
--- a/include/asm-generic/pgtable.h
+++ b/include/asm-generic/pgtable.h
@@ -180,6 +180,7 @@ static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addres
 #ifndef __HAVE_ARCH_ENTER_LAZY_MMU_MODE
 #define arch_enter_lazy_mmu_mode()	do {} while (0)
 #define arch_leave_lazy_mmu_mode()	do {} while (0)
+#define arch_flush_lazy_mmu_mode()	do {} while (0)
 #endif

 /*
@@ -193,6 +194,7 @@ static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addres
 #ifndef __HAVE_ARCH_ENTER_LAZY_CPU_MODE
 #define arch_enter_lazy_cpu_mode()	do {} while (0)
 #define arch_leave_lazy_cpu_mode()	do {} while (0)
+#define arch_flush_lazy_cpu_mode()	do {} while (0)
 #endif

 /*
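
For context, the two #ifndef blocks above only supply no-op fallbacks; an architecture that actually batches PTE or CPU state updates is expected to define the __HAVE_ARCH_* guard in its own pgtable.h and provide real hooks. A hypothetical override could look like the following; the my_arch_* names are placeholders, not real kernel symbols.

#define __HAVE_ARCH_ENTER_LAZY_MMU_MODE

/* Hypothetical arch-provided hooks; a real port (e.g. i386 paravirt/VMI)
 * routes these to its PTE-batching machinery. */
void my_arch_enter_lazy_mmu(void);
void my_arch_leave_lazy_mmu(void);
void my_arch_flush_lazy_mmu(void);

#define arch_enter_lazy_mmu_mode()	my_arch_enter_lazy_mmu()
#define arch_leave_lazy_mmu_mode()	my_arch_leave_lazy_mmu()
#define arch_flush_lazy_mmu_mode()	my_arch_flush_lazy_mmu()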