author	Stafford Horne <shorne@gmail.com>	2016-03-21 08:16:46 +0100
committer	Stafford Horne <shorne@gmail.com>	2017-02-24 20:14:36 +0100
commit	f5d45dc9116b17ee830d3425ece1e9485c9bab88 (patch)
tree	1ad140d3860d795bf9e425d1aa5d34faf0514c22 /arch/openrisc/include
parent	openrisc: Add optimized memset (diff)
openrisc: Add optimized memcpy routine
The generic memcpy routine provided in the kernel only does byte copies. Using word copies we can lower boot time and the cycles spent in memcpy quite significantly. Booting on my de0 nano I see boot times go from 7.2 to 5.6 seconds. The average cycles spent in memcpy during boot go from 6467 to 1887.

I tested several algorithms (see code in previous patch mails). The implementations tested and their average cycles:

 - Word Copies + Loop Unrolls + Non Aligned   1882
 - Word Copies + Loop Unrolls                 1887
 - Word Copies                                2441
 - Byte Copies + Loop Unrolls                 6467
 - Byte Copies                                7600

In the end I went with Word Copies + Loop Unrolls as it provides the best tradeoff between simplicity and boot speedup.

Signed-off-by: Stafford Horne <shorne@gmail.com>
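To make the approach concrete, below is a minimal C sketch of the "word copies + loop unrolls" idea described in the commit message. It is not the actual arch/openrisc/lib/memcpy.c from this series; the function name word_unrolled_memcpy and the 16-bytes-per-iteration unroll factor are illustrative choices only.

/* Sketch only: illustrates word copies with loop unrolling, not the
 * real OpenRISC implementation. When source and destination share
 * 4-byte alignment, copy four 32-bit words per iteration, then finish
 * the tail (or any unaligned buffer) one byte at a time.
 */
#include <stddef.h>
#include <stdint.h>

void *word_unrolled_memcpy(void *dest, const void *src, size_t n)
{
	uint8_t *d = dest;
	const uint8_t *s = src;

	/* Fast path only when both pointers are word aligned. */
	if ((((uintptr_t)d | (uintptr_t)s) & 3) == 0) {
		uint32_t *dw = (uint32_t *)d;
		const uint32_t *sw = (const uint32_t *)s;

		/* Unrolled: 4 words (16 bytes) per loop iteration. */
		while (n >= 16) {
			dw[0] = sw[0];
			dw[1] = sw[1];
			dw[2] = sw[2];
			dw[3] = sw[3];
			dw += 4;
			sw += 4;
			n -= 16;
		}
		d = (uint8_t *)dw;
		s = (const uint8_t *)sw;
	}

	/* Byte-copy the remainder. */
	while (n--)
		*d++ = *s++;

	return dest;
}

The fast path is taken only when both pointers share 4-byte alignment; everything else falls back to byte copies, which keeps the sketch simple while still showing where the cycle savings in the numbers above come from.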
Diffstat (limited to 'arch/openrisc/include')
-rw-r--r--	arch/openrisc/include/asm/string.h	3
1 file changed, 3 insertions(+), 0 deletions(-)
diff --git a/arch/openrisc/include/asm/string.h b/arch/openrisc/include/asm/string.h
index 33470d4d6948..64939ccd7531 100644
--- a/arch/openrisc/include/asm/string.h
+++ b/arch/openrisc/include/asm/string.h
@@ -4,4 +4,7 @@
 #define __HAVE_ARCH_MEMSET
 extern void *memset(void *s, int c, __kernel_size_t n);
 
+#define __HAVE_ARCH_MEMCPY
+extern void *memcpy(void *dest, __const void *src, __kernel_size_t n);
+
 #endif /* __ASM_OPENRISC_STRING_H */
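For context on why defining __HAVE_ARCH_MEMCPY in <asm/string.h> is enough to switch implementations: the generic fallback in lib/string.c is guarded by that macro and copies one byte per iteration, roughly as paraphrased below (not the verbatim kernel source).

/* Paraphrase of the generic fallback in lib/string.c: it is compiled
 * only when the architecture does not provide its own memcpy, i.e.
 * when __HAVE_ARCH_MEMCPY is not defined by <asm/string.h>.
 */
#ifndef __HAVE_ARCH_MEMCPY
void *memcpy(void *dest, const void *src, size_t count)
{
	char *tmp = dest;
	const char *s = src;

	/* One byte per iteration: the slow path this patch replaces. */
	while (count--)
		*tmp++ = *s++;
	return dest;
}
#endif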