author | Arnd Bergmann <arnd@arndb.de> | 2006-01-05 15:05:29 +0100
---|---|---
committer | Paul Mackerras <paulus@samba.org> | 2006-01-09 05:44:57 +0100
commit | 2fb9d2063626374dd8a2514b3a730facac8235d8 |
tree | b410dcdbc5aee656c37951be36951130450549e7 | /include/asm-powerpc/spu.h
parent | [PATCH] spufs: fix allocation on 64k pages |
[PATCH] spufs: set irq affinity for running threads
So far, all SPU-triggered interrupts always end up on
the first SMT thread, which is a bad solution.
This patch sets the interrupt affinity to the CPU
the thread was last running on when it enters
execution on an SPU. This should result in a
significant reduction in IPI calls and better cache
locality for SPE-thread-specific data.
Signed-off-by: Arnd Bergmann <arndb@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Diffstat (limited to 'include/asm-powerpc/spu.h')
-rw-r--r-- | include/asm-powerpc/spu.h | 1 |
1 file changed, 1 insertion(+), 0 deletions(-)
```diff
diff --git a/include/asm-powerpc/spu.h b/include/asm-powerpc/spu.h
index 692aa60e9903..38bacf2f6e0c 100644
--- a/include/asm-powerpc/spu.h
+++ b/include/asm-powerpc/spu.h
@@ -147,6 +147,7 @@ struct spu *spu_alloc(void);
 void spu_free(struct spu *spu);
 int spu_irq_class_0_bottom(struct spu *spu);
 int spu_irq_class_1_bottom(struct spu *spu);
+void spu_irq_setaffinity(struct spu *spu, int cpu);
 
 extern struct spufs_calls {
 	asmlinkage long (*create_thread)(const char __user *name,
```
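This header change only adds the declaration; the actual routing logic lives in the cell platform code, which is not part of this hunk. As a rough illustration of the intended usage described in the commit message, a caller entering SPU execution would look up the current CPU and hand it to the new interface. This is a minimal sketch under that assumption, not the actual spufs code; the helper name `spu_bind_irqs_to_current_cpu` is made up here.

```c
#include <linux/smp.h>	/* raw_smp_processor_id() */
#include <asm/spu.h>	/* struct spu, spu_irq_setaffinity() declared by this patch */

/*
 * Hypothetical caller sketch: when a thread is about to start running
 * on an SPU, route the SPU's interrupts to the CPU the thread last ran
 * on, instead of letting every interrupt land on the first SMT thread.
 */
static void spu_bind_irqs_to_current_cpu(struct spu *spu)
{
	int cpu = raw_smp_processor_id();	/* CPU we are entering SPU execution from */

	spu_irq_setaffinity(spu, cpu);
}
```

Targeting the last-run CPU is what avoids a cross-CPU IPI for each SPU interrupt and keeps the interrupt handler working on data that is likely still in the local cache.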