author     Paul Burton <paul.burton@imgtec.com>      2014-01-27 16:23:11 +0100
committer  Ralf Baechle <ralf@linux-mips.org>        2014-03-26 23:09:10 +0100
commit     1db1af84d6df99a8e5d6ddea8c7b5c1327c9a620 (patch)
tree       72865fad6fa4bcfabe94ae3642a58bf28a533181 /arch/mips/include/asm/msa.h
parent     MIPS: Detect the MSA ASE (diff)
download   linux-1db1af84d6df99a8e5d6ddea8c7b5c1327c9a620.tar.xz
           linux-1db1af84d6df99a8e5d6ddea8c7b5c1327c9a620.zip
MIPS: Basic MSA context switching support
This patch adds support for context switching the MSA vector registers.
These 128-bit vector registers are aliased with the FP registers - an
FP register accesses the least significant bits of the vector register
with which it is aliased (i.e. the register with the same index). Due to
both this & the requirement that the scalar FPU must be 64-bit (FR=1) if
enabled at the same time as MSA, the kernel will enable MSA & scalar FP
together for tasks which use MSA. If we restore the MSA vector
context then we might as well enable the scalar FPU, since the reason it
was left disabled was to allow for lazy FP context restoring - but we
just restored the FP context, as it is a subset of the vector context. If
we restore the FP context and have previously used MSA then we have to
restore the whole vector context anyway (see the comment in
enable_restore_fp_context for details), so similarly we might as well
enable MSA.
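To make the aliasing concrete, here is a minimal user-space sketch - not
kernel code from this patch, and the union layout and name are illustrative
assumptions - of a 64-bit FP register occupying the least significant bits
of the 128-bit vector register with the same index:

/*
 * Illustrative only - not part of this patch. Shows how a 64-bit scalar
 * FP value can alias the least significant bits of a 128-bit vector
 * register with the same index. The kernel's real register storage (and
 * its endianness handling) lives in the MIPS FPU context structures.
 */
#include <stdint.h>
#include <stdio.h>

union vec_fp_reg {			/* hypothetical name */
	uint64_t fp;			/* scalar FP view: low 64 bits      */
	uint64_t vec[2];		/* MSA view: full 128-bit register  */
};

int main(void)
{
	union vec_fp_reg r = { .vec = { 0x1111111111111111ULL,
					0x2222222222222222ULL } };

	/*
	 * Writing the FP view only touches the low half of the vector
	 * register (assuming a little-endian layout for this sketch).
	 */
	r.fp = 0xdeadbeefULL;
	printf("vec = %016llx %016llx\n",
	       (unsigned long long)r.vec[1], (unsigned long long)r.vec[0]);
	return 0;
}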
Thus if a task does not use MSA then it will continue to behave as it
did without this patch - the scalar FP context will be saved & restored
as usual. But once a task executes an MSA instruction, the full vector
context will be saved & restored from then on.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6431/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
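The restore-side policy described in the message can be summarised in code.
The following is a simplified sketch, not the kernel's actual
enable_restore_fp_context() (which lives elsewhere in this series);
__restore_scalar_fp() and __enable_scalar_fp() are hypothetical stand-ins
for the FP-side helpers, while thread_msa_context_live(), restore_msa() and
enable_msa() are the helpers added by this patch:

/*
 * Sketch only: the real decision logic in enable_restore_fp_context()
 * handles more cases (ownership, emulation, etc.).
 */
static void sketch_restore_fp_context(struct task_struct *tsk)
{
	if (thread_msa_context_live()) {
		/*
		 * The task has used MSA: its scalar FP registers are the
		 * low bits of its vector registers, so restoring the
		 * vector context also restores the FP context. Enable
		 * both units together.
		 */
		restore_msa(tsk);
		enable_msa();
		__enable_scalar_fp();		/* hypothetical FR=1 FPU enable */
	} else {
		/* Task has never used MSA: plain lazy scalar FP restore. */
		__restore_scalar_fp(tsk);	/* hypothetical scalar FP restore */
		__enable_scalar_fp();
	}
}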
Diffstat (limited to 'arch/mips/include/asm/msa.h')
-rw-r--r--   arch/mips/include/asm/msa.h   28
1 files changed, 28 insertions, 0 deletions
diff --git a/arch/mips/include/asm/msa.h b/arch/mips/include/asm/msa.h
index 0a614fb83348..a2aba6c3ec05 100644
--- a/arch/mips/include/asm/msa.h
+++ b/arch/mips/include/asm/msa.h
@@ -12,6 +12,9 @@
 
 #include <asm/mipsregs.h>
 
+extern void _save_msa(struct task_struct *);
+extern void _restore_msa(struct task_struct *);
+
 static inline void enable_msa(void)
 {
 	if (cpu_has_msa) {
@@ -36,6 +39,31 @@ static inline int is_msa_enabled(void)
 	return read_c0_config5() & MIPS_CONF5_MSAEN;
 }
 
+static inline int thread_msa_context_live(void)
+{
+	/*
+	 * Check cpu_has_msa only if it's a constant. This will allow the
+	 * compiler to optimise out code for CPUs without MSA without adding
+	 * an extra redundant check for CPUs with MSA.
+	 */
+	if (__builtin_constant_p(cpu_has_msa) && !cpu_has_msa)
+		return 0;
+
+	return test_thread_flag(TIF_MSA_CTX_LIVE);
+}
+
+static inline void save_msa(struct task_struct *t)
+{
+	if (cpu_has_msa)
+		_save_msa(t);
+}
+
+static inline void restore_msa(struct task_struct *t)
+{
+	if (cpu_has_msa)
+		_restore_msa(t);
+}
+
 #ifdef TOOLCHAIN_SUPPORTS_MSA
 
 #define __BUILD_MSA_CTL_REG(name, cs)				\
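For context, a hedged sketch of how the save side might use the new helpers
during a context switch; the real switch_to()/resume changes live in other
files of this patch and are not shown in this header, and __save_scalar_fp()
is a hypothetical stand-in for the scalar FP save helper:

/*
 * Sketch only: illustrates the intended call pattern, not the actual
 * arch/mips switch_to() plumbing.
 */
static inline void sketch_save_context(struct task_struct *prev)
{
	if (thread_msa_context_live())
		save_msa(prev);		/* save the full 128-bit vector context */
	else
		__save_scalar_fp(prev);	/* hypothetical scalar FP save */
}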