author     David Woodhouse <David.Woodhouse@intel.com>  2012-12-03 17:25:40 +0100
committer  David Woodhouse <David.Woodhouse@intel.com>  2012-12-06 02:22:31 +0100
commit     cf66bb93e0f75e0a4ba1ec070692618fa028e994 (patch)
tree       0ae48658adb29f50bdd85a94cbb84670a234f441 /include/math-emu
parent     vfs: clear to the end of the buffer on partial buffer reads (diff)
download   linux-cf66bb93e0f75e0a4ba1ec070692618fa028e994.tar.xz
           linux-cf66bb93e0f75e0a4ba1ec070692618fa028e994.zip
byteorder: allow arch to opt to use GCC intrinsics for byteswapping
Since GCC 4.4, there have been __builtin_bswap32() and __builtin_bswap64() intrinsics. A __builtin_bswap16() came a little later (4.6 for PowerPC, 4.8 for other platforms).

By using these instead of the inline assembler that most architectures have in their __arch_swabXX() macros, we let the compiler see what's actually happening. The resulting code should be at least as good, and much *better* in the cases where it can be combined with a nearby load or store, using a load-and-byteswap or store-and-byteswap instruction (e.g. lwbrx/stwbrx on PowerPC, movbe on Atom).

When GCC is sufficiently recent *and* the architecture opts in to using the intrinsics by setting CONFIG_ARCH_USE_BUILTIN_BSWAP, they will be used in preference to the __arch_swabXX() macros. An architecture which does not set ARCH_USE_BUILTIN_BSWAP will continue to use its own hand-crafted macros.

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
Acked-by: H. Peter Anvin <hpa@linux.intel.com>
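To illustrate the opt-in pattern the message describes, here is a minimal, self-contained userspace sketch rather than the kernel's actual headers: a hand-written byteswap fallback next to a macro that prefers __builtin_bswap32() when a hypothetical MY_USE_BUILTIN_BSWAP switch (standing in for CONFIG_ARCH_USE_BUILTIN_BSWAP) is defined and the compiler is new enough. All my_* names are invented for this example.

    /*
     * Sketch of "use the GCC intrinsic if the arch opts in, else keep the
     * hand-crafted macro". Names are hypothetical; this is not the kernel code.
     */
    #include <stdint.h>
    #include <stdio.h>

    /* Hand-crafted fallback, analogous to an __arch_swab32() macro. */
    static inline uint32_t my_arch_swab32(uint32_t x)
    {
    	return ((x & 0x000000ffU) << 24) |
    	       ((x & 0x0000ff00U) <<  8) |
    	       ((x & 0x00ff0000U) >>  8) |
    	       ((x & 0xff000000U) >> 24);
    }

    /*
     * Prefer the intrinsic only when the "architecture" opts in and GCC is
     * at least 4.4 (when __builtin_bswap32() appeared); otherwise fall back
     * to the open-coded shifts above.
     */
    #if defined(MY_USE_BUILTIN_BSWAP) && defined(__GNUC__) && \
    	(__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 4))
    #define my_swab32(x) __builtin_bswap32(x)
    #else
    #define my_swab32(x) my_arch_swab32(x)
    #endif

    int main(void)
    {
    	uint32_t v = 0x12345678U;

    	printf("%#x -> %#x\n", v, my_swab32(v));
    	return 0;
    }

Built with -DMY_USE_BUILTIN_BSWAP on a sufficiently recent GCC, my_swab32() expands to the intrinsic, which the compiler is free to combine with an adjacent load or store on targets that have load-and-byteswap or store-and-byteswap instructions; without the define, the open-coded fallback is used, mirroring an architecture that does not set ARCH_USE_BUILTIN_BSWAP.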
Diffstat (limited to 'include/math-emu')
0 files changed, 0 insertions, 0 deletions