author: Jungseok Lee <jungseoklee85@gmail.com> 2014-12-02 18:49:24 +0100
committer: Will Deacon <will.deacon@arm.com> 2014-12-03 11:19:35 +0100
commit: e4f88d833bec29b8e6fadc1b2488f0c6370935e1
tree: e4062286dd04734147b5901d3d1e86bd7cacdb1c /arch/arm64/include
parent: arm64: compat: align cacheflush syscall with arch/arm
arm64: Implement support for read-mostly sections
By grouping together data that is mostly read, we can avoid unnecessary cache line bouncing. Other architectures, such as ARM and x86, have adopted the same idea.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Jungseok Lee <jungseoklee85@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Diffstat (limited to 'arch/arm64/include')
-rw-r--r-- arch/arm64/include/asm/cache.h | 2 ++
1 file changed, 2 insertions(+), 0 deletions(-)
diff --git a/arch/arm64/include/asm/cache.h b/arch/arm64/include/asm/cache.h
index 88cc05b5f3ac..bde449936e2f 100644
--- a/arch/arm64/include/asm/cache.h
+++ b/arch/arm64/include/asm/cache.h
@@ -32,6 +32,8 @@
#ifndef __ASSEMBLY__
+#define __read_mostly __attribute__((__section__(".data..read_mostly")))
+
static inline int cache_line_size(void)
{
u32 cwg = cache_type_cwg();