path: root/arch/arm/lib/memzero.S
Commit message | Author | Age | Files | Lines
* ARM: 8745/1: get rid of __memzero() | Nicolas Pitre | 2018-01-21 | 1 | -137/+0
  The __memzero assembly code is almost identical to memset's except for two orr instructions. The runtime performance of __memzero(p, n) and memset(p, 0, n) is accordingly almost identical.

  However, the memset() macro used to guard against a zero length and to call __memzero at compile time when the fill value is a constant zero interferes with compiler optimizations.

  Arnd found that the test against a zero length brings up some new warnings with gcc v8: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=82103

  And successively removing the test against a zero length and the call-to-__memzero optimization produces the following kernel sizes for defconfig with gcc 6:

      text     data    bss      dec     hex filename
  12248142  6278960 413588 18940690 1210312 vmlinux.orig
  12244474  6278960 413588 18937022 120f4be vmlinux.no_zero_test
  12239160  6278960 413588 18931708 120dffc vmlinux.no_memzero

  So it is probably not worth keeping __memzero around given that the compiler can do a better job at inlining trivial memset(p,0,n) on its own. And the memset code already handles a zero length just fine.

  Suggested-by: Arnd Bergmann <arnd@arndb.de>
  Signed-off-by: Nicolas Pitre <nico@linaro.org>
  Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Acked-by: Arnd Bergmann <arnd@arndb.de>
  Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
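  [Editor's note] The "two orr instructions" the message refers to are the ones memset() needs to replicate its fill byte across a full 32-bit word before it can store word-at-a-time; __memzero() never needed them because its fill pattern is a constant zero. A minimal sketch of that prologue difference follows; register roles follow the usual ARM calling convention, and the instructions are illustrative rather than the exact historical code:

	@ memset(p, v, n): r0 = buffer, r1 = fill value, r2 = length
	@ The fill byte must be replicated across a 32-bit word before
	@ word-at-a-time stores can be used:
	orr	r1, r1, r1, lsl #8	@ byte -> halfword
	orr	r1, r1, r1, lsl #16	@ halfword -> word
	@ ... word/multi-word stores of r1 follow ...

	@ __memzero(p, n): r0 = buffer, r1 = length
	@ No replication needed; the fill pattern is simply a constant zero:
	mov	r2, #0
	@ ... word/multi-word stores of r2 follow ...

  When the fill value at a memset(p, 0, n) call site is a constant zero, the compiler can drop the replication and often inline the stores outright, which is why removing the __memzero special case costs essentially nothing.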
* Revert "arm: move exports to definitions"Russell King2016-11-231-2/+0
  This reverts commit 4dd1837d7589f468ed109556513f476e7a7f9121.

  Moving the exports for assembly code into the assembly files breaks KSYM trimming, but also breaks modversions.

  While fixing the KSYM trimming is trivial, fixing modversions brings us to a technically worse position than we had prior to the above change:

  - We end up with the prototype definitions divorced from everything else, which means that adding or removing assembly-level ksym exports becomes more fragile:

    * if adding a new assembly ksym export, a missed prototype in asm-prototypes.h results in a successful build if no module in the selected configuration makes use of the symbol.

    * when removing a ksym export, asm-prototypes.h will get forgotten; with armksyms.c, you'll get a build error if you forget to touch the file.

  - We end up with the same amount of include files and prototypes, they're just in a header file instead of a .c file with their exports.

  As for lines of code, we don't get much of a size reduction:
    (original commit)            47 files changed, 131 insertions(+), 208 deletions(-)
    (fix for ksyms trimming)      7 files changed,  18 insertions(+),   5 deletions(-)
    (two fixes for modversions)   1 file changed,   34 insertions(+)
                                  3 files changed,    7 insertions(+),   2 deletions(-)
  which results in a net total of only 25 lines deleted.

  As there does not seem to be much benefit from this change of approach, revert the change.

  Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
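  [Editor's note] The arrangement being reverted (introduced by the "arm: move exports to definitions" commit below, the +2 lines in this file) put the export next to the definition in the .S file instead of in arch/arm/kernel/armksyms.c. A rough sketch of that shape, assuming the asm-generic export macros that series relied on; the body here is a placeholder:

	#include <linux/linkage.h>
	#include <asm/assembler.h>
	#include <asm/export.h>

	ENTRY(__memzero)
		@ ... zeroing code, unchanged by that commit ...
		ret	lr
	ENDPROC(__memzero)
	EXPORT_SYMBOL(__memzero)	@ the export lives beside the definition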
* arm: move exports to definitions | Al Viro | 2016-08-08 | 1 | -0/+2
  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* ARM: 8223/1: Add unwinding support for __memzero function | Lin Yongting | 2014-11-27 | 1 | -0/+12
  The __memzero function never had unwinding annotations added. Currently, when a fault occurs because __memzero accesses an invalid pointer, the backtrace shown will stop at __memzero or at some completely unrelated function.

  Add unwinding annotations in the hope of getting a more useful backtrace in the following cases:

  1. die on accessing an invalid pointer in __memzero
  2. kprobe trapped at any instruction within __memzero
  3. interrupted at any instruction within __memzero

  Signed-off-by: Lin Yongting <linyongting@gmail.com>
  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
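  [Editor's note] The annotations referred to are the ARM EHABI unwind directives, which kernel assembly wraps in the UNWIND() macro from asm/unwind.h so they compile away when CONFIG_ARM_UNWIND is off. A hedged sketch of the pattern; the register list and the trivial byte loop are illustrative, not the actual patch:

	#include <linux/linkage.h>
	#include <asm/unwind.h>

	ENTRY(__memzero)
	UNWIND(	.fnstart		)	@ open an unwind region
		stmfd	sp!, {r4, lr}		@ r4 pushed only so .save has something to describe
	UNWIND(	.save	{r4, lr}	)	@ tell the unwinder what was pushed
		mov	r2, #0
	1:	subs	r1, r1, #1		@ trivial byte-at-a-time zeroing loop
		blo	2f			@ done when the count is exhausted
		strb	r2, [r0], #1
		b	1b
	2:	ldmfd	sp!, {r4, pc}
	UNWIND(	.fnend			)	@ close the unwind region
	ENDPROC(__memzero)

  With the region described this way, the unwinder can step out of __memzero instead of stopping there or guessing, which is what the three cases above are about.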
* ARM: convert all "mov.* pc, reg" to "bx reg" for ARMv6+ | Russell King | 2014-07-18 | 1 | -1/+1
  ARMv6 and greater introduced a new instruction ("bx") which can be used to return from function calls. Recent CPUs perform better when the "bx lr" instruction is used rather than the "mov pc, lr" instruction, and this sequence is strongly recommended to be used by the ARM architecture manual (section A.4.1.1).

  We provide a new macro "ret" with all its variants for the condition code which will resolve to the appropriate instruction.

  Rather than doing this piecemeal, and miss some instances, change all the "mov pc" instances to use the new macro, with the exception of the "movs" instruction and the kprobes code. This allows us to detect the "mov pc, lr" case and fix it up - and also gives us the possibility of deploying this for other registers depending on the CPU selection.

  Reported-by: Will Deacon <will.deacon@arm.com>
  Tested-by: Stephen Warren <swarren@nvidia.com> # Tegra Jetson TK1
  Tested-by: Robert Jarzmik <robert.jarzmik@free.fr> # mioa701_bootresume.S
  Tested-by: Andrew Lunn <andrew@lunn.ch> # Kirkwood
  Tested-by: Shawn Guo <shawn.guo@freescale.com>
  Tested-by: Tony Lindgren <tony@atomide.com> # OMAPs
  Tested-by: Gregory CLEMENT <gregory.clement@free-electrons.com> # Armada XP, 375, 385
  Acked-by: Sekhar Nori <nsekhar@ti.com> # DaVinci
  Acked-by: Christoffer Dall <christoffer.dall@linaro.org> # kvm/hyp
  Acked-by: Haojian Zhuang <haojian.zhuang@gmail.com> # PXA3xx
  Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> # Xen
  Tested-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> # ARMv7M
  Tested-by: Simon Horman <horms+renesas@verge.net.au> # Shmobile
  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
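  [Editor's note] Condensed, the "ret" macro from arch/arm/include/asm/assembler.h behaves roughly as below; the real version also generates every condition-code variant, and this is a simplified sketch rather than a verbatim copy:

	.macro	ret, reg
	#if __LINUX_ARM_ARCH__ < 6
		mov	pc, \reg		@ pre-ARMv6: no bx, keep the mov form
	#else
		.ifeqs	"\reg", "lr"
		bx	\reg			@ ARMv6+: preferred return sequence
		.else
		mov	pc, \reg		@ other registers keep the mov form
		.endif
	#endif
	.endm

	@ Call sites in files like memzero.S then change from
	@	mov	pc, lr
	@ to
	@	ret	lr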
* [ARM] 5227/1: Add the ENDPROC declarations to the .S files | Catalin Marinas | 2008-09-01 | 1 | -0/+1
  This declaration specifies the "function" type and size for various assembly functions, mainly needed for generating the correct branch instructions in Thumb-2.

  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
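  [Editor's note] ENTRY()/ENDPROC() come from include/linux/linkage.h; ENDPROC() marks the symbol as a function and records its size, which is what lets the Thumb-2 toolchain emit the right interworking branches to it. The resulting shape of this file, roughly (comments paraphrase what the macros do):

	ENTRY(__memzero)		@ .globl __memzero plus the label itself
		@ ... zeroing code ...
		mov	pc, lr
	ENDPROC(__memzero)		@ marks __memzero as a function symbol and
					@ records its size for the linker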
* [ARM] cache align memset and memzero | Nicolas Pitre | 2008-06-22 | 1 | -0/+44
  This is a natural extension following the previous patch. Non-Feroceon based targets are unchanged.

  Signed-off-by: Nicolas Pitre <nico@marvell.com>
  Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
* [ARM] Remove LOADREGS macro | Russell King | 2006-06-25 | 1 | -1/+1
  As for RETINSTR, LOADREGS is a left-over from the 26-bit days. Remove it.

  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* [ARM] Remove RETINSTR macro | Russell King | 2006-06-25 | 1 | -1/+1
  RETINSTR is a left-over from the days when we had 26-bit and 32-bit CPU support integrated into the same tree. Since this is no longer the case, we can now remove RETINSTR.

  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* Linux-2.6.12-rc2 (tag: v2.6.12-rc2) | Linus Torvalds | 2005-04-17 | 1 | -0/+80
Initial git repository build. I'm not bothering with the full history, even though we have it. We can create a separate "historical" git archive of that later if we want to, and in the meantime it's about 3.2GB when imported into git - space that would just make the early git days unnecessarily complicated, when we don't have a lot of good infrastructure for it. Let it rip!