|  |  |  |
|---|---|---|
| author | Paul Mundt <lethal@linux-sh.org> | 2010-01-13 04:51:40 +0100 |
| committer | Paul Mundt <lethal@linux-sh.org> | 2010-01-13 04:51:40 +0100 |
| commit | 0ea820cf9bf58f735ed40ec67947159c4f170012 (patch) | |
| tree | 77320006b4dded5804c678c1a869571be5c0b95f /arch/sh/kernel/process_32.c | |
| parent | sh: Use SLAB_PANIC for thread_info slab cache. (diff) | |
| download | linux-0ea820cf9bf58f735ed40ec67947159c4f170012.tar.xz linux-0ea820cf9bf58f735ed40ec67947159c4f170012.zip | |
sh: Move over to dynamically allocated FPU context.
This follows the x86 xstate changes and implements a task_xstate slab
cache that is dynamically sized to match one of hard FP/soft FP/FPU-less.
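The sizing idea can be shown with a short sketch (illustrative only; the function name and the `xstate_size`, `sh_fpu_*_struct`, and `union thread_xstate` identifiers follow the commit's wording rather than quoting the patch): the cache is created once at boot with an object size picked from the FPU configuration, so FPU-less parts never allocate a context area at all.

```c
/*
 * Illustrative sketch, not the literal patch: a boot-time slab cache whose
 * object size matches the detected FPU configuration. Symbol and type names
 * are assumptions based on the commit message.
 */
#include <linux/init.h>
#include <linux/slab.h>
#include <asm/processor.h>

static struct kmem_cache *task_xstate_cachep;
static unsigned int xstate_size;

void __init init_task_xstate_cache(void)
{
#if defined(CONFIG_SH_FPU)
	xstate_size = sizeof(struct sh_fpu_hard_struct);	/* hard FP */
#elif defined(CONFIG_SH_FPU_EMU)
	xstate_size = sizeof(struct sh_fpu_soft_struct);	/* soft FP (math emulation) */
#else
	xstate_size = 0;					/* FPU-less */
#endif

	if (!xstate_size)
		return;	/* FPU-less parts never need a context area */

	task_xstate_cachep = kmem_cache_create("task_xstate", xstate_size,
					       __alignof__(union thread_xstate),
					       SLAB_PANIC, NULL);
}
```

Each task then carries only a pointer to its xstate object instead of embedding the largest possible FPU layout in thread_struct.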
This also tidies up and consolidates some of the SH-2A/SH-4 FPU
fragmentation. The FPU state restorers are now defined in common code, and
the init_fpu()/fpu_init() mess has been reworked to follow the x86 convention.
The fpu_init() register initialization has been replaced by xstate setup
followed by writing out to hardware via the standard restore path.
As init_fpu() now performs a slab allocation, a secondary, lighter-weight
restorer is also introduced for the context-switch path.
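A hedged sketch of that split, with helper bodies paraphrased rather than copied from the patch: init_fpu() can sleep on the slab allocation, so it stays out of the switch path, while the lighter __fpu_state_restore() used from __switch_to() assumes the xstate area already exists and only reloads the hardware registers.

```c
/*
 * Paraphrased sketch of the two restore paths; identifiers such as
 * restore_fpu(), TS_USEDFPU and thread.xstate reflect the arch/sh code of
 * this era but are reproduced from memory, not from the patch itself.
 */
#include <linux/sched.h>
#include <linux/slab.h>
#include <asm/fpu.h>

extern struct kmem_cache *task_xstate_cachep;

/*
 * Heavy path: first FPU use by a task. Performs a slab allocation and may
 * sleep, so it is never called from __switch_to().
 */
int init_fpu(struct task_struct *tsk)
{
	if (tsk->thread.xstate)
		return 0;

	tsk->thread.xstate = kmem_cache_zalloc(task_xstate_cachep, GFP_KERNEL);
	if (!tsk->thread.xstate)
		return -ENOMEM;

	return 0;
}

/*
 * Light path: called from __switch_to() when fpu_counter suggests the FPU
 * will be needed soon. No allocation; just reload the registers from the
 * already-present xstate area.
 */
static inline void __fpu_state_restore(void)
{
	struct task_struct *tsk = current;

	restore_fpu(tsk);				/* xstate -> FPU registers */
	task_thread_info(tsk)->status |= TS_USEDFPU;	/* FPU now live for this task */
	tsk->fpu_counter++;
}
```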
In the future the DSP state will be rolled in here, too.
More work remains for math emulation and the SH-5 FPU, which presently
uses its own special (UP-only) interfaces.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Diffstat (limited to 'arch/sh/kernel/process_32.c')
-rw-r--r-- | arch/sh/kernel/process_32.c | 6 |
1 file changed, 4 insertions(+), 2 deletions(-)
```diff
diff --git a/arch/sh/kernel/process_32.c b/arch/sh/kernel/process_32.c
index c4361402ec5e..03de6573aa76 100644
--- a/arch/sh/kernel/process_32.c
+++ b/arch/sh/kernel/process_32.c
@@ -156,6 +156,8 @@ void start_thread(struct pt_regs *regs, unsigned long new_pc,
 	regs->sr = SR_FD;
 	regs->pc = new_pc;
 	regs->regs[15] = new_sp;
+
+	free_thread_xstate(current);
 }
 EXPORT_SYMBOL(start_thread);
 
@@ -316,7 +318,7 @@ __switch_to(struct task_struct *prev, struct task_struct *next)
 
 	/* we're going to use this soon, after a few expensive things */
 	if (next->fpu_counter > 5)
-		prefetch(&next_t->fpu.hard);
+		prefetch(next_t->xstate);
 
 #ifdef CONFIG_MMU
 	/*
@@ -353,7 +355,7 @@ __switch_to(struct task_struct *prev, struct task_struct *next)
 	 * chances of needing FPU soon are obviously high now
 	 */
 	if (next->fpu_counter > 5)
-		fpu_state_restore(task_pt_regs(next));
+		__fpu_state_restore();
 
 	return prev;
 }
```
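In process_32.c itself the change is small: start_thread(), the exec path, now calls free_thread_xstate(current) so a newly exec'd program starts without an FPU context and only acquires one on first FPU use, and the context-switch fast path prefetches and restores through the new per-task xstate pointer rather than the old fpu.hard area embedded in thread_struct.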