author     Horms <horms@verge.net.au>         2006-12-12 09:49:03 +0100
committer  Tony Luck <tony.luck@intel.com>    2006-12-12 19:11:00 +0100
commit     45a98fc622ae700eed34eb2be00743910d50dbe1 (patch)
tree       e5e5279c25582a7d26c37af189330318fe0f42dd /arch/ia64/kernel/crash_dump.c
parent     [IA64] Do not call SN_SAL_SET_CPU_NUMBER twice on cpu 0 (diff)
download   linux-45a98fc622ae700eed34eb2be00743910d50dbe1.tar.xz
           linux-45a98fc622ae700eed34eb2be00743910d50dbe1.zip
[IA64] CONFIG_KEXEC/CONFIG_CRASH_DUMP permutations
Actually, on reflection I think that there is a good case for
keeping the options separate. I am thinking particularly of people
who want a very small crashdump kernel and thus don't want to compile
in kexec.
The patch below should fix things up so that all valid combinations of
KEXEC, CRASH_DUMP and VMCORE compile cleanly - VMCORE depends on
CRASH_DUMP, which is why I said valid combinations. In a nutshell,
it just untangles unrelated code and switches around a few defines.
Please note that it creates a new file, arch/ia64/kernel/crash_dump.c.
This is in keeping with the i386 implementation.
Signed-off-by: Simon Horman <horms@verge.net.au>
Signed-off-by: Tony Luck <tony.luck@intel.com>
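
The separation described above can be pictured with a small sketch, assuming
hypothetical guard placement rather than the actual hunks of this patch: code
used only to launch a kexec kernel stays behind CONFIG_KEXEC, while code used
only when running as the dump-capture kernel sits behind CONFIG_CRASH_DUMP, so
a minimal crashdump kernel can enable the latter without compiling in the former.

/*
 * Sketch only: illustrative guard placement, not the real hunks of this
 * patch. machine_kexec_setup() is a made-up name standing in for the
 * kexec-launch code; copy_oldmem_page() is the dump-capture helper that
 * the new file below provides.
 */
#ifdef CONFIG_KEXEC
/* Needed only to launch a new kernel with kexec. */
void machine_kexec_setup(void);
#endif

#ifdef CONFIG_CRASH_DUMP
/* Needed only when running as the dump-capture (crashdump) kernel. */
ssize_t copy_oldmem_page(unsigned long pfn, char *buf,
			 size_t csize, unsigned long offset, int userbuf);
#endif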
Diffstat (limited to 'arch/ia64/kernel/crash_dump.c')
-rw-r--r--   arch/ia64/kernel/crash_dump.c   48
1 file changed, 48 insertions, 0 deletions
diff --git a/arch/ia64/kernel/crash_dump.c b/arch/ia64/kernel/crash_dump.c
new file mode 100644
index 000000000000..83b8c91c1408
--- /dev/null
+++ b/arch/ia64/kernel/crash_dump.c
@@ -0,0 +1,48 @@
+/*
+ * kernel/crash_dump.c - Memory preserving reboot related code.
+ *
+ * Created by: Simon Horman <horms@verge.net.au>
+ * Original code moved from kernel/crash.c
+ * Original code comment copied from the i386 version of this file
+ */
+
+#include <linux/errno.h>
+#include <linux/types.h>
+
+#include <linux/uaccess.h>
+
+/**
+ * copy_oldmem_page - copy one page from "oldmem"
+ * @pfn: page frame number to be copied
+ * @buf: target memory address for the copy; this can be in kernel address
+ *	space or user address space (see @userbuf)
+ * @csize: number of bytes to copy
+ * @offset: offset in bytes into the page (based on pfn) to begin the copy
+ * @userbuf: if set, @buf is in user address space, use copy_to_user(),
+ *	otherwise @buf is in kernel address space, use memcpy().
+ *
+ * Copy a page from "oldmem". For this page, there is no pte mapped
+ * in the current kernel. We stitch up a pte, similar to kmap_atomic.
+ *
+ * Calling copy_to_user() in atomic context is not desirable. Hence first
+ * copying the data to a pre-allocated kernel page and then copying to user
+ * space in non-atomic context.
+ */
+ssize_t
+copy_oldmem_page(unsigned long pfn, char *buf,
+	size_t csize, unsigned long offset, int userbuf)
+{
+	void *vaddr;
+
+	if (!csize)
+		return 0;
+	vaddr = __va(pfn<<PAGE_SHIFT);
+	if (userbuf) {
+		if (copy_to_user(buf, (vaddr + offset), csize)) {
+			return -EFAULT;
+		}
+	} else
+		memcpy(buf, (vaddr + offset), csize);
+	return csize;
+}
+
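
For readers following the dump path, the sketch below shows roughly how a
kernel-space caller might use copy_oldmem_page(); the helper name
read_crash_page() and its calling context are assumptions for illustration
(the actual consumer in the tree is the /proc/vmcore read machinery), not
part of this patch.

/*
 * Illustrative sketch only: read_crash_page() is hypothetical and not part
 * of this patch. It copies up to one page of old-kernel memory into a
 * kernel buffer by calling copy_oldmem_page() with userbuf == 0.
 */
#include <linux/types.h>
#include <asm/page.h>

static ssize_t read_crash_page(unsigned long pfn, char *kbuf, size_t len)
{
	/* copy_oldmem_page() copies at most one page, starting at @offset */
	if (len > PAGE_SIZE)
		len = PAGE_SIZE;

	/* offset 0, userbuf == 0: the destination is a kernel-space buffer */
	return copy_oldmem_page(pfn, kbuf, len, 0, 0);
}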