path: root/arch
* Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiederm/user-namespace (Linus Torvalds, 2016-12-14; 6 files, -9/+9)

    Pull namespace updates from Eric Biederman:

    "After a lot of discussion and work we have finally reached a basic understanding of what is necessary to make unprivileged mounts safe in the presence of EVM and IMA xattrs, which the last commit in this series reflects. While technically it is a revert, the comments it adds are important for keeping people from getting confused in the future. Clearing up that confusion allows us to seriously work on unprivileged mounts of fuse in the next development cycle.

    The rest of the fixes in this set are in the intersection of user namespaces, ptrace, and exec. I started with the first fix, which started a feedback cycle of finding additional issues during review and fixing them, culminating in a fix for a bug that has been present since at least Linux v1.0.

    Potentially these fixes were candidates for being merged during the rc cycle, and are certainly backport candidates, but enough little things turned up during review and testing that I decided they should be handled as part of the normal development process just to be certain there were not any great surprises when it came time to backport some of these fixes"

    * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiederm/user-namespace:
      Revert "evm: Translate user/group ids relative to s_user_ns when computing HMAC"
      exec: Ensure mm->user_ns contains the execed files
      ptrace: Don't allow accessing an undumpable mm
      ptrace: Capture the ptracer's creds not PT_PTRACE_CAP
      mm: Add a user_ns owner to mm_struct and fix ptrace permission checks
| * ptrace: Don't allow accessing an undumpable mm (Eric W. Biederman, 2016-11-22; 6 files, -9/+9)

    It is the reasonable expectation that if an executable file is not readable, there will be no way for a user without special privileges to read the file. This is enforced in ptrace_attach, but if ptrace is already attached before exec there is no enforcement for read-only executables.

    As the only way to read such an mm is through access_process_vm, spin a variant called ptrace_access_vm that will fail if the target process is not being ptraced by the current process, or if the current process did not have sufficient privileges when ptracing began to read the target process's mm.

    In the ptrace implementations replace access_process_vm by ptrace_access_vm. There remain several ptrace sites that still use access_process_vm, as they are reading the target executable's instructions (for kernel consumption) or register stacks. As such it does not appear necessary to add a permission check to those calls.

    This bug has always existed in Linux.

    Fixes: v1.0
    Cc: stable@vger.kernel.org
    Reported-by: Andy Lutomirski <luto@amacapital.net>
    Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
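    The gating described above can be sketched roughly as follows. This is an illustrative, self-contained model of the idea only; every type and helper name below is a stand-in, not the kernel's ptrace_access_vm or access_process_vm:

        #include <stdio.h>
        #include <string.h>

        /* Illustrative stand-ins; not kernel types. */
        struct task {
            const struct task *ptracer;   /* who is tracing this task, if anyone */
            int ptracer_may_read;         /* privilege captured when tracing began */
            char mem[64];                 /* pretend address space */
        };

        /* Plain accessor: copies from the target's "memory" unconditionally. */
        static int access_vm(struct task *t, unsigned long addr, void *buf, int len)
        {
            memcpy(buf, t->mem + addr, len);
            return len;
        }

        /* Gated variant in the spirit of the commit: refuse unless the caller is
         * the target's ptracer and had sufficient privileges at attach time. */
        static int ptrace_access_vm_sketch(const struct task *caller, struct task *target,
                                           unsigned long addr, void *buf, int len)
        {
            if (target->ptracer != caller || !target->ptracer_may_read)
                return 0;                 /* no bytes copied */
            return access_vm(target, addr, buf, len);
        }

        int main(void)
        {
            struct task tracer = {0}, target = {0};
            char out[8] = {0};

            strcpy(target.mem, "secret");
            target.ptracer = &tracer;
            target.ptracer_may_read = 0;  /* attached, but without read privilege */

            printf("copied %d bytes\n",
                   ptrace_access_vm_sketch(&tracer, &target, 0, out, 6)); /* prints 0 */
            return 0;
        }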
* | Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6 (Linus Torvalds, 2016-12-14; 43 files, -1359/+7337)

    Pull crypto updates from Herbert Xu:
    "Here is the crypto update for 4.10:

    API:
     - add skcipher walk interface
     - add asynchronous compression (acomp) interface
     - fix algif_aead AIO handling of zero buffer

    Algorithms:
     - fix unaligned access in poly1305
     - fix DRBG output to large buffers

    Drivers:
     - add support for iMX6UL to caam
     - fix givenc descriptors (used by IPsec) in caam
     - accelerated SHA256/SHA512 for ARM64 from OpenSSL
     - add SSE CRCT10DIF and CRC32 to ARM/ARM64
     - add AEAD support to Chelsio chcr
     - add Armada 8K support to omap-rng"

    * 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (148 commits)
      crypto: testmgr - fix overlap in chunked tests again
      crypto: arm/crc32 - accelerated support based on x86 SSE implementation
      crypto: arm64/crc32 - accelerated support based on x86 SSE implementation
      crypto: arm/crct10dif - port x86 SSE implementation to ARM
      crypto: arm64/crct10dif - port x86 SSE implementation to arm64
      crypto: testmgr - add/enhance test cases for CRC-T10DIF
      crypto: testmgr - avoid overlap in chunked tests
      crypto: chcr - checking for IS_ERR() instead of NULL
      crypto: caam - check caam_emi_slow instead of re-lookup platform
      crypto: algif_aead - fix AIO handling of zero buffer
      crypto: aes-ce - Make aes_simd_algs static
      crypto: algif_skcipher - set error code when kcalloc fails
      crypto: caam - make caamalg_desc a proper module
      crypto: caam - pass key buffers with typesafe pointers
      crypto: arm64/aes-ce-ccm - Fix AEAD decryption length
      MAINTAINERS: add crypto headers to crypto entry
      crypt: doc - remove misleading mention of async API
      crypto: doc - fix header file name
      crypto: api - fix comment typo
      crypto: skcipher - Add separate walker for AEAD decryption
      ...
| * | crypto: arm/crc32 - accelerated support based on x86 SSE implementation (Ard Biesheuvel, 2016-12-07; 4 files, -0/+555)

    This is a combination of the Intel algorithm implemented using SSE and PCLMULQDQ instructions from arch/x86/crypto/crc32-pclmul_asm.S, and the new CRC32 extensions introduced for both 32-bit and 64-bit ARM in version 8 of the architecture. Two versions of the above combo are provided, one for CRC32 and one for CRC32C.

    The PMULL/NEON algorithm is faster, but operates on blocks of at least 64 bytes, and on multiples of 16 bytes only. For the remaining input, or for all input on systems that lack the PMULL 64x64->128 instructions, the CRC32 instructions will be used.

    Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * | crypto: arm64/crc32 - accelerated support based on x86 SSE implementation (Ard Biesheuvel, 2016-12-07; 4 files, -0/+487)

    This is a combination of the Intel algorithm implemented using SSE and PCLMULQDQ instructions from arch/x86/crypto/crc32-pclmul_asm.S, and the new CRC32 extensions introduced for both 32-bit and 64-bit ARM in version 8 of the architecture. Two versions of the above combo are provided, one for CRC32 and one for CRC32C.

    The PMULL/NEON algorithm is faster, but operates on blocks of at least 64 bytes, and on multiples of 16 bytes only. For the remaining input, or for all input on systems that lack the PMULL 64x64->128 instructions, the CRC32 instructions will be used.

    Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
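    A rough model of the bulk/tail split described in these two entries is sketched below. It is illustrative only: both back ends are stand-ins implemented with a portable bitwise CRC32, not the PMULL/NEON and CRC32-instruction routines from the actual driver:

        #include <stddef.h>
        #include <stdint.h>
        #include <stdio.h>

        /* Portable bitwise CRC32 used here to model both back ends. */
        static uint32_t crc32_bitwise(uint32_t crc, const uint8_t *p, size_t len)
        {
            crc = ~crc;
            while (len--) {
                crc ^= *p++;
                for (int k = 0; k < 8; k++)
                    crc = (crc >> 1) ^ (0xEDB88320u & -(crc & 1));
            }
            return ~crc;
        }

        /* "PMULL" stand-in: only called for >= 64 bytes and multiples of 16. */
        static uint32_t crc32_pmull_block(uint32_t crc, const uint8_t *p, size_t len)
        {
            return crc32_bitwise(crc, p, len);
        }

        static uint32_t crc32_combined(uint32_t crc, const uint8_t *p, size_t len,
                                       int have_pmull)
        {
            if (have_pmull && len >= 64) {
                size_t bulk = len & ~(size_t)15;     /* largest multiple of 16 bytes */
                crc = crc32_pmull_block(crc, p, bulk);
                p += bulk;
                len -= bulk;
            }
            /* leftover tail, or the whole buffer when PMULL is not available */
            return crc32_bitwise(crc, p, len);
        }

        int main(void)
        {
            uint8_t buf[100];
            for (int i = 0; i < 100; i++)
                buf[i] = (uint8_t)i;
            /* same result with or without the "PMULL" bulk path */
            printf("%08x %08x\n", crc32_combined(0, buf, sizeof(buf), 1),
                                  crc32_combined(0, buf, sizeof(buf), 0));
            return 0;
        }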
| * | crypto: arm/crct10dif - port x86 SSE implementation to ARM (Ard Biesheuvel, 2016-12-07; 4 files, -0/+535)

    This is a transliteration of the Intel algorithm implemented using SSE and PCLMULQDQ instructions that resides in the file arch/x86/crypto/crct10dif-pcl-asm_64.S, but simplified to only operate on buffers that are 16 byte aligned (but of any size).

    Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * | crypto: arm64/crct10dif - port x86 SSE implementation to arm64 (Ard Biesheuvel, 2016-12-07; 4 files, -0/+495)

    This is a transliteration of the Intel algorithm implemented using SSE and PCLMULQDQ instructions that resides in the file arch/x86/crypto/crct10dif-pcl-asm_64.S, but simplified to only operate on buffers that are 16 byte aligned (but of any size).

    Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * | crypto: aes-ce - Make aes_simd_algs static (Herbert Xu, 2016-12-01; 1 file, -1/+1)

    The variable aes_simd_algs should be static. In fact if it isn't it causes build errors when multiple copies of aes-ce-glue.c are built into the kernel.

    Fixes: da40e7a4ba4d ("crypto: aes-ce - Convert to skcipher")
    Reported-by: kbuild test robot <fengguang.wu@intel.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * | crypto: arm64/aes-ce-ccm - Fix AEAD decryption length (Herbert Xu, 2016-12-01; 1 file, -2/+2)

    This patch fixes the ARM64 CE CCM implementation decryption by using skcipher_walk_aead_decrypt instead of skcipher_walk_aead, which ensures the correct length is used when doing the walk.

    Fixes: cf2c0fe74084 ("crypto: aes-ce-ccm - Use skcipher walk interface")
    Reported-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * | crypto: arm/aesbs - fix brokenness after skcipher conversion (Ard Biesheuvel, 2016-11-30; 1 file, -1/+1)

    The CBC encryption routine should use the encryption round keys, not the decryption round keys.

    Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * | crypto: arm64/aes-ce-ctr - fix skcipher conversion (Ard Biesheuvel, 2016-11-30; 1 file, -0/+1)

    Fix a missing statement that got lost in the skcipher conversion of the CTR transform.

    Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * | crypto: arm/aes-ce - fix broken monolithic build (Ard Biesheuvel, 2016-11-30; 1 file, -1/+1)

    When building the arm64 kernel with both CONFIG_CRYPTO_AES_ARM64_CE_BLK=y and CONFIG_CRYPTO_AES_ARM64_NEON_BLK=y configured, the build breaks with the following error:

        arch/arm64/crypto/aes-neon-blk.o:(.bss+0x0): multiple definition of `aes_simd_algs'
        arch/arm64/crypto/aes-ce-blk.o:(.bss+0x0): first defined here

    Fix this by making aes_simd_algs 'static'.

    Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * | crypto: arm/aes - Add missing SIMD select for aesbs (Herbert Xu, 2016-11-30; 1 file, -3/+3)

    This patch adds one more missing SIMD select for AES_ARM_BS. It also changes selects on ALGAPI to BLKCIPHER.

    Fixes: 211f41af534a ("crypto: aesbs - Convert to skcipher")
    Reported-by: Horia Geantă <horia.geanta@nxp.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * | crypto: arm/aes - Select SIMD in Kconfig (Herbert Xu, 2016-11-29; 2 files, -3/+3)

    The skcipher conversion for ARM missed the select on CRYPTO_SIMD, causing build failures if SIMD was not otherwise enabled.

    Fixes: da40e7a4ba4d ("crypto: aes-ce - Convert to skcipher")
    Fixes: 211f41af534a ("crypto: aesbs - Convert to skcipher")
    Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * | crypto: arm64/sha2 - add generated .S files to .gitignore (Ard Biesheuvel, 2016-11-29; 1 file, -0/+2)

    Add the files that are generated by the recently merged OpenSSL SHA-256/512 implementation to .gitignore so Git disregards them when showing untracked files.

    Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * | crypto: aesbs - Convert to skcipher (Herbert Xu, 2016-11-28; 1 file, -228/+152)

    This patch converts aesbs over to the skcipher interface.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * | crypto: aes-ce - Convert to skcipher (Herbert Xu, 2016-11-28; 1 file, -233/+157)

    This patch converts aes-ce over to the skcipher interface.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * | crypto: arm64/aes - Convert to skcipher (Herbert Xu, 2016-11-28; 1 file, -224/+158)

    This patch converts arm64/aes over to the skcipher interface.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * | crypto: aesni - Convert to skcipher (Herbert Xu, 2016-11-28; 2 files, -569/+343)

    This patch converts aesni (including fpu) over to the skcipher interface. The LRW implementation has been removed as the generic LRW code can now be used directly on top of the accelerated ECB implementation.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * | crypto: glue_helper - Add skcipher xts helpers (Herbert Xu, 2016-11-28; 2 files, -2/+111)

    This patch adds xts helpers that use the skcipher interface rather than blkcipher. This will be used by aesni_intel.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * | crypto: aes-ce-ccm - Use skcipher walk interface (Herbert Xu, 2016-11-28; 1 file, -37/+13)

    This patch makes use of the new skcipher walk interface instead of the obsolete blkcipher walk interface.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * | crypto: crc32c-vpmsum - Rename CRYPT_CRC32C_VPMSUM option (Jean Delvare, 2016-11-28; 1 file, -1/+1)

    For consistency with the other 246 kernel configuration options, rename CRYPT_CRC32C_VPMSUM to CRYPTO_CRC32C_VPMSUM.

    Signed-off-by: Jean Delvare <jdelvare@suse.de>
    Cc: Anton Blanchard <anton@samba.org>
    Cc: Herbert Xu <herbert@gondor.apana.org.au>
    Acked-by: Anton Blanchard <anton@samba.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * | crypto: arm64/sha2 - integrate OpenSSL implementations of SHA256/SHA512 (Ard Biesheuvel, 2016-11-28; 7 files, -0/+4228)

    This integrates both the accelerated scalar and the NEON implementations of SHA-224/256 as well as SHA-384/512 from the OpenSSL project.

    Relative performance compared to the respective generic C versions:

                    | SHA256-scalar | SHA256-NEON* | SHA512 |
        ------------+---------------+--------------+--------+
         Cortex-A53 |     1.63x     |    1.63x     |  2.34x |
         Cortex-A57 |     1.43x     |    1.59x     |  1.95x |
         Cortex-A73 |     1.26x     |    1.56x     |    ?   |

    The core crypto code was authored by Andy Polyakov of the OpenSSL project, in collaboration with whom the upstream code was adapted so that this module can be built from the same version of sha512-armv8.pl.

    The version in this patch was taken from OpenSSL commit 32bbb62ea634 ("sha/asm/sha512-armv8.pl: fix big-endian support in __KERNEL__ case.")

    * The core SHA algorithm is fundamentally sequential, but there is a secondary transformation involved, called the schedule update, which can be performed independently. The NEON version of SHA-224/SHA-256 only implements this part of the algorithm using NEON instructions; the sequential part is always done using scalar instructions.

    Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * | crypto: sha-mb - Fix total_len for correct hash when larger than 512MB (Greg Tucker, 2016-11-17; 6 files, -6/+6)

    Current multi-buffer hash implementations have a restriction on the total length of a hash job to 512MB. Hashing larger buffers will result in an incorrect hash. This extends the limit to 2^62 - 1.

    Signed-off-by: Greg Tucker <greg.b.tucker@intel.com>
    Acked-by: Tim Chen <tim.c.chen@linux.intel.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * | crypto: arm/aes-ce - fix for big endian (Ard Biesheuvel, 2016-10-21; 1 file, -0/+5)

    The AES key schedule generation is mostly endian agnostic, with the exception of the rotation and the incorporation of the round constant at the start of each round. So implement a big endian specific version of that part to make the whole routine big endian compatible.

    Fixes: 86464859cc77 ("crypto: arm - AES in ECB/CBC/CTR/XTS modes using ARMv8 Crypto Extensions")
    Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
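    As a hedged illustration of why that step is endian sensitive (not the kernel code, and SubWord is omitted for brevity): rotating the previous key word and folding in the round constant needs different shifts, and a different round-constant byte position, depending on the byte order the 32-bit word was loaded with:

        #include <stdint.h>
        #include <stdio.h>

        /* Little-endian-loaded word: the "first" key byte is the LSB. */
        static uint32_t rot_rcon_le(uint32_t w, uint8_t rcon)
        {
            return ((w >> 8) | (w << 24)) ^ rcon;                   /* rcon into the LSB */
        }

        /* Big-endian-loaded word: the "first" key byte is the MSB. */
        static uint32_t rot_rcon_be(uint32_t w, uint8_t rcon)
        {
            return ((w << 8) | (w >> 24)) ^ ((uint32_t)rcon << 24); /* rcon into the MSB */
        }

        int main(void)
        {
            /* same four key bytes 01 02 03 04, loaded in the two byte orders */
            printf("%08x %08x\n", rot_rcon_le(0x04030201u, 0x1b),
                                  rot_rcon_be(0x01020304u, 0x1b));
            return 0;
        }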
| * | crypto: arm64/aes-xts-ce: fix for big endian (Ard Biesheuvel, 2016-10-21; 2 files, -1/+3)

    Emit the XTS tweak literal constants in the appropriate order for a single 128-bit scalar literal load.

    Fixes: 49788fe2a128 ("arm64/crypto: AES-ECB/CBC/CTR/XTS using ARMv8 NEON and Crypto Extensions")
    Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * | crypto: arm64/aes-neon - fix for big endian (Ard Biesheuvel, 2016-10-21; 1 file, -10/+15)

    The AES implementation using pure NEON instructions relies on the generic AES key schedule generation routines, which store the round keys as arrays of 32-bit quantities stored in memory using native endianness. This means we should refer to these round keys using 4x4 loads rather than 16x1 loads.

    In addition, the ShiftRows tables are loaded using a single scalar load, which is also affected by endianness, so emit these tables in the correct order depending on whether we are building for big endian or not.

    Fixes: 49788fe2a128 ("arm64/crypto: AES-ECB/CBC/CTR/XTS using ARMv8 NEON and Crypto Extensions")
    Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * | crypto: arm64/aes-ccm-ce: fix for big endian (Ard Biesheuvel, 2016-10-21; 1 file, -26/+27)

    The AES-CCM implementation that uses ARMv8 Crypto Extensions instructions refers to the AES round keys as pairs of 64-bit quantities, which causes failures when building the code for big endian. In addition, it byte swaps the input counter unconditionally, while this is only required for little endian builds. So fix both issues.

    Fixes: 12ac3efe74f8 ("arm64/crypto: use crypto instructions to generate AES key schedule")
    Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * | crypto: arm64/sha2-ce - fix for big endian (Ard Biesheuvel, 2016-10-21; 1 file, -2/+2)

    The SHA256 digest is an array of 8 32-bit quantities, so we should refer to them as such in order for this code to work correctly when built for big endian. So replace 16 byte scalar loads and stores with 4x32 vector ones where appropriate.

    Fixes: 6ba6c74dfc6b ("arm64/crypto: SHA-224/SHA-256 using ARMv8 Crypto Extensions")
    Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * | crypto: arm64/sha1-ce - fix for big endian (Ard Biesheuvel, 2016-10-21; 1 file, -2/+2)

    The SHA1 digest is an array of 5 32-bit quantities, so we should refer to them as such in order for this code to work correctly when built for big endian. So replace 16 byte scalar loads and stores with 4x4 vector ones where appropriate.

    Fixes: 2c98833a42cd ("arm64/crypto: SHA-1 using ARMv8 Crypto Extensions")
    Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * | crypto: arm64/ghash-ce - fix for big endian (Ard Biesheuvel, 2016-10-21; 1 file, -3/+3)

    The GHASH key and digest are both pairs of 64-bit quantities, but the GHASH code does not always refer to them as such, causing failures when built for big endian. So replace the 16x1 loads and stores with 2x8 ones.

    Fixes: b913a6404ce2 ("arm64/crypto: improve performance of GHASH algorithm")
    Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * | crypto: arm64/aes-ce - fix for big endian (Ard Biesheuvel, 2016-10-21; 1 file, -10/+15)

    The core AES cipher implementation that uses ARMv8 Crypto Extensions instructions erroneously loads the round keys as 64-bit quantities, which causes the algorithm to fail when built for big endian. In addition, the key schedule generation routine fails to take endianness into account as well, when combining the input key with the round constants. So fix both issues.

    Fixes: 12ac3efe74f8 ("arm64/crypto: use crypto instructions to generate AES key schedule")
    Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * | arm64: dts: marvell: add TRNG description for Armada 8K CP (Romain Perier, 2016-10-21; 2 files, -0/+16)

    This commit adds the devicetree description of the SafeXcel IP-76 TRNG found in the two Armada CP110.

    Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* | | Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/trivial (Linus Torvalds, 2016-12-14; 6 files, -7/+7)

    Pull trivial updates from Jiri Kosina.

    * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/trivial:
      NTB: correct ntb_spad_count comment typo
      misc: ibmasm: fix typo in error message
      Remove references to dead make variable LINUX_INCLUDE
      Remove last traces of ikconfig.h
      treewide: Fix printk() message errors
      Documentation/device-mapper: s/getsize/getsz/
| * | | Remove references to dead make variable LINUX_INCLUDE (Paul Bolle, 2016-12-14; 2 files, -2/+2)

    Commit 4fd06960f120 ("Use the new x86 setup code for i386") introduced a reference to the make variable LINUX_INCLUDE. That reference got moved around a bit and copied twice, and now there are three references to it.

    There has never been a definition of that variable. (Presumably that is because it started out as a mistyped reference to LINUXINCLUDE.) So this reference has always been an empty string. Let's remove it before it spreads any further.

    Signed-off-by: Paul Bolle <pebolle@tiscali.nl>
    Signed-off-by: Jiri Kosina <jkosina@suse.cz>
| * | | treewide: Fix printk() message errors (Masanari Iida, 2016-12-14; 4 files, -5/+5)

    This patch fixes spelling typos in printk and kconfig.

    Signed-off-by: Masanari Iida <standby24x7@gmail.com>
    Acked-by: Randy Dunlap <rdunlap@infradead.org>
    Signed-off-by: Jiri Kosina <jkosina@suse.cz>
* | | | Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux (Linus Torvalds, 2016-12-14; 80 files, -471/+1730)

    Pull arm64 updates from Catalin Marinas:

     - struct thread_info moved off-stack (also touching include/linux/thread_info.h and include/linux/restart_block.h)
     - cpus_have_cap() reworked to avoid __builtin_constant_p() for static key use (also touching drivers/irqchip/irq-gic-v3.c)
     - uprobes support (currently only for native 64-bit tasks)
     - Emulation of kernel Privileged Access Never (PAN) using TTBR0_EL1 switching to a reserved page table
     - CPU capacity information passing via DT or sysfs (used by the scheduler)
     - support for systems without FP/SIMD (IOW, kernel avoids touching these registers; there is no soft-float ABI, nor kernel emulation for AArch64 FP/SIMD)
     - handling of hardware watchpoint with unaligned addresses, varied lengths and offsets from base
     - use of the page table contiguous hint for kernel mappings
     - hugetlb fixes for sizes involving the contiguous hint
     - remove unnecessary I-cache invalidation in flush_cache_range()
     - CNTHCTL_EL2 access fix for CPUs with VHE support (ARMv8.1)
     - boot-time checks for writable+executable kernel mappings
     - simplify asm/opcodes.h and avoid including the 32-bit ARM counterpart and make the arm64 kernel headers self-consistent (Xen headers patch merged separately)
     - Workaround for broken .inst support in certain binutils versions

    * tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (60 commits)
      arm64: Disable PAN on uaccess_enable()
      arm64: Work around broken .inst when defective gas is detected
      arm64: Add detection code for broken .inst support in binutils
      arm64: Remove reference to asm/opcodes.h
      arm64: Get rid of asm/opcodes.h
      arm64: smp: Prevent raw_smp_processor_id() recursion
      arm64: head.S: Fix CNTHCTL_EL2 access on VHE system
      arm64: Remove I-cache invalidation from flush_cache_range()
      arm64: Enable HIBERNATION in defconfig
      arm64: Enable CONFIG_ARM64_SW_TTBR0_PAN
      arm64: xen: Enable user access before a privcmd hvc call
      arm64: Handle faults caused by inadvertent user access with PAN enabled
      arm64: Disable TTBR0_EL1 during normal kernel execution
      arm64: Introduce uaccess_{disable,enable} functionality based on TTBR0_EL1
      arm64: Factor out TTBR0_EL1 post-update workaround into a specific asm macro
      arm64: Factor out PAN enabling/disabling into separate uaccess_* macros
      arm64: Update the synchronous external abort fault description
      selftests: arm64: add test for unaligned/inexact watchpoint handling
      arm64: Allow hw watchpoint of length 3,5,6 and 7
      arm64: hw_breakpoint: Handle inexact watchpoint addresses
      ...
| * | | | arm64: Disable PAN on uaccess_enable() (Marc Zyngier, 2016-12-12; 1 file, -1/+1)

    Commit 4b65a5db3627 ("arm64: Introduce uaccess_{disable,enable} functionality based on TTBR0_EL1") added conditional user access enable/disable. Unfortunately, a typo prevents the PAN bit from being cleared for user access functions. Restore the PAN functionality by adding the missing '!'.

    Fixes: 4b65a5db3627 ("arm64: Introduce uaccess_{disable,enable} functionality based on TTBR0_EL1")
    Reported-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
| * | | | arm64: Work around broken .inst when defective gas is detected (Marc Zyngier, 2016-12-06; 1 file, -4/+25)

    .inst being largely broken with older binutils, it'd be better not to emit it altogether when detecting such a configuration (as it leads to all kinds of horrors when using alternatives).

    Generalize the __emit_inst macro and use it extensively in asm/sysreg.h, and make it generate a .long when a broken gas is detected. The disassembly will be crap, but at least we can write semi-sane code.

    Acked-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
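    A hedged sketch of the mechanism described above (illustrative, not a copy of the kernel's sysreg.h; it targets an arm64 assembler, since .inst is an ARM-specific directive):

        /* Two-level stringification so macro arguments expand before being quoted. */
        #define __stringify_1(x)  #x
        #define __stringify(x)    __stringify_1(x)

        /* With a broken gas, fall back to emitting the raw opcode word via .long;
         * the disassembly is unreadable but the build succeeds. */
        #ifdef CONFIG_BROKEN_GAS_INST
        #define __emit_inst(x)    ".long " __stringify(x) "\n\t"
        #else
        #define __emit_inst(x)    ".inst " __stringify(x) "\n\t"
        #endif

        /* Example use: emit an AArch64 NOP (encoding 0xd503201f) by opcode. */
        static inline void emit_nop_example(void)
        {
            asm volatile(__emit_inst(0xd503201f));
        }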
| * | | | arm64: Add detection code for broken .inst support in binutils (Marc Zyngier, 2016-12-06; 1 file, -2/+8)

    Binutils versions up to (and including) 2.25 have a pathological behaviour when it comes to mixing the .inst directive and arithmetic involving labels. The assembler complains about non-constant expressions and compilation stops pretty quickly.

    In order to detect this and work around it, let's add a bit of detection code that will set the CONFIG_BROKEN_GAS_INST option should a broken gas be detected.

    Acked-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
| * | | | arm64: Remove reference to asm/opcodes.h (Marc Zyngier, 2016-12-05; 1 file, -2/+0)

    The asm/opcodes.h file is now gone, but probes.h still references it for no obvious reason. Removing the #include directive fixes the compilation.

    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
| * | | | arm64: Get rid of asm/opcodes.h (Marc Zyngier, 2016-12-02; 4 files, -13/+14)

    opcodes.h drags in a lot of definitions from the 32-bit port, most of which are not required at all. Clean things up a bit by moving the bare minimum of what is required next to the actual users, and drop the include file.

    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
| * | | | arm64: smp: Prevent raw_smp_processor_id() recursion (Robin Murphy, 2016-12-02; 1 file, -1/+3)

    Under CONFIG_DEBUG_PREEMPT=y, this_cpu_ptr() ends up calling back into raw_smp_processor_id(), resulting in some hilariously catastrophic infinite recursion. In the normal case, we have:

        #define this_cpu_ptr(ptr) raw_cpu_ptr(ptr)

    and everything is dandy. However for CONFIG_DEBUG_PREEMPT, this_cpu_ptr() is defined in terms of my_cpu_offset, wherein the fun begins:

        #define my_cpu_offset per_cpu_offset(smp_processor_id())
        ...
        #define smp_processor_id() debug_smp_processor_id()
        ...
        notrace unsigned int debug_smp_processor_id(void)
        {
                return check_preemption_disabled("smp_processor_id", "");
        ...
        notrace static unsigned int check_preemption_disabled(const char *what1,
                                                              const char *what2)
        {
                int this_cpu = raw_smp_processor_id();

    and bang. Use raw_cpu_ptr() directly to avoid that.

    Fixes: 57c82954e77f ("arm64: make cpu number a percpu variable")
    Reported-by: Marek Szyprowski <m.szyprowski@samsung.com>
    Acked-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Robin Murphy <robin.murphy@arm.com>
    Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
| * | | | Merge Will Deacon's for-next/perf branch into for-next/core (Catalin Marinas, 2016-11-29; 3 files, -42/+124)

    * will/for-next/perf:
      selftests: arm64: add test for unaligned/inexact watchpoint handling
      arm64: Allow hw watchpoint of length 3,5,6 and 7
      arm64: hw_breakpoint: Handle inexact watchpoint addresses
      arm64: Allow hw watchpoint at varied offset from base address
      hw_breakpoint: Allow watchpoint of length 3,5,6 and 7
| | * | | | arm64: Allow hw watchpoint of length 3,5,6 and 7 (Pratyush Anand, 2016-11-18; 2 files, -0/+40)

    Since arm64 can support a watchpoint at any byte offset within a double-word limit, now support the other lengths within that range as well.

    Signed-off-by: Pratyush Anand <panand@redhat.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | * | | | arm64: hw_breakpoint: Handle inexact watchpoint addresses (Pavel Labath, 2016-11-18; 1 file, -27/+69)

    Arm64 hardware does not always report a watchpoint hit address that matches one of the watchpoints set. It can also report an address "near" the watchpoint if a single instruction accesses both watched and unwatched addresses. There is no straight-forward way, short of disassembling the offending instruction, to map that address back to the watchpoint.

    Previously, when the hardware reported a watchpoint hit on an address that did not match our watchpoint (this happens in case of instructions which access large chunks of memory such as "stp") the process would enter a loop where we would be continually resuming it (because we did not recognise that watchpoint hit) and it would keep hitting the watchpoint again and again. The tracing process would never get notified of the watchpoint hit.

    This commit fixes the problem by looking at the watchpoints near the address reported by the hardware. If the address does not exactly match one of the watchpoints we have set, it attributes the hit to the nearest watchpoint we have. This heuristic is a bit dodgy, but I don't think we can do much more, given the hardware limitations.

    Signed-off-by: Pavel Labath <labath@google.com>
    [panand: reworked to rebase on his patches]
    Signed-off-by: Pratyush Anand <panand@redhat.com>
    [will: use __ffs instead of ffs - 1]
    Signed-off-by: Will Deacon <will.deacon@arm.com>
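    The attribution heuristic described above amounts to a nearest-match search; a minimal illustrative sketch (not the kernel's hw_breakpoint code):

        #include <stdint.h>
        #include <stdio.h>

        /* If the reported fault address does not exactly match any programmed
         * watchpoint, attribute the hit to the closest one we have. */
        static int nearest_watchpoint(uint64_t fault_addr,
                                      const uint64_t *wp_addr, int nr_wps)
        {
            int best = -1;
            uint64_t best_dist = UINT64_MAX;

            for (int i = 0; i < nr_wps; i++) {
                uint64_t dist = fault_addr > wp_addr[i] ? fault_addr - wp_addr[i]
                                                        : wp_addr[i] - fault_addr;
                if (dist < best_dist) {
                    best_dist = dist;
                    best = i;
                }
            }
            return best;    /* index of the nearest watchpoint, or -1 if none set */
        }

        int main(void)
        {
            /* e.g. an stp near a watched range reports 0x1000 while the
             * watchpoint was actually programmed at 0x1008 */
            uint64_t wps[] = { 0x2000, 0x1008 };
            printf("attributed to watchpoint %d\n",
                   nearest_watchpoint(0x1000, wps, 2));   /* prints 1 */
            return 0;
        }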
| | * | | | arm64: Allow hw watchpoint at varied offset from base address (Pratyush Anand, 2016-11-18; 3 files, -28/+28)

    ARM64 hardware supports a watchpoint at any double-word-aligned address. However, it can select any consecutive bytes from offset 0 to 7 relative to that base address. For example, if the base address is programmed as 0x420030 and the byte select is 0x1C, then accesses to 0x420032, 0x420033 and 0x420034 will generate a watchpoint exception (see the worked sketch below).

    Currently, we do not have such modularity. We can only program byte, halfword, word and double-word access exceptions from a base address. This patch adds support to overcome the above limitations.

    Signed-off-by: Pratyush Anand <panand@redhat.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
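    The commit's own example can be checked with a small worked sketch of the byte-select scheme (illustrative, not kernel code):

        #include <stdint.h>
        #include <stdio.h>

        /* A watchpoint is programmed on a double-word-aligned base, and a bit
         * mask selects which of the 8 bytes after that base are watched.  With
         * base 0x420030 and mask 0x1C (bits 2,3,4), bytes 0x420032..0x420034 trap. */
        int main(void)
        {
            uint64_t base = 0x420030;   /* must be 8-byte aligned */
            uint8_t  bas  = 0x1C;       /* byte-address-select bit mask */

            for (int i = 0; i < 8; i++)
                if (bas & (1u << i))
                    printf("watched byte: 0x%llx\n",
                           (unsigned long long)(base + i));
            return 0;
        }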
| * | | | | arm64: head.S: Fix CNTHCTL_EL2 access on VHE system (Jintack, 2016-11-29; 1 file, -1/+12)

    Bit positions in CNTHCTL_EL2 change depending on the HCR_EL2.E2H bit: EL1PCEN and EL1PCTEN are bits 1 and 0 when E2H is not set, but bits 11 and 10 respectively when E2H is set. The current code unintentionally sets the wrong bits in CNTHCTL_EL2 with E2H set.

    In fact, we don't need to set those two bits, which allow EL1 and EL0 to access the physical timer and counter respectively, if E2H and TGE are set for the host kernel. They will be configured later as necessary. First, we don't need to configure those bits for EL1, since the host kernel runs in EL2; it is a hypervisor's responsibility to configure them before entering a VM, which runs in EL0 and EL1. Second, EL0 accesses are configured at a later stage of the boot process.

    Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
    Acked-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
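    A small illustrative sketch of the bit-position dependency described above (not kernel code; the bit numbers follow the commit text):

        #include <stdint.h>
        #include <stdio.h>

        /* EL1PCTEN/EL1PCEN sit at bits 0/1 of CNTHCTL_EL2 when HCR_EL2.E2H is
         * clear, but move to bits 10/11 when E2H is set. */
        static uint64_t cnthctl_el1pcten(int e2h) { return 1ULL << (e2h ? 10 : 0); }
        static uint64_t cnthctl_el1pcen(int e2h)  { return 1ULL << (e2h ? 11 : 1); }

        int main(void)
        {
            for (int e2h = 0; e2h <= 1; e2h++)
                printf("E2H=%d: EL1PCTEN=%#llx EL1PCEN=%#llx\n", e2h,
                       (unsigned long long)cnthctl_el1pcten(e2h),
                       (unsigned long long)cnthctl_el1pcen(e2h));
            return 0;
        }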
| * | | | | arm64: Remove I-cache invalidation from flush_cache_range() (Catalin Marinas, 2016-11-23; 2 files, -8/+5)

    The flush_cache_range() function (similarly for flush_cache_page()) is called when the kernel is changing an existing VA->PA mapping range to either a new PA or to different attributes. Since ARMv8 has PIPT-like D-caches, this function does not need to perform any D-cache maintenance. The I-cache maintenance is already handled via set_pte_at() and flush_cache_range() cannot anyway guarantee that there are no cache lines left after invalidation due to the speculative loads.

    This patch makes flush_cache_range() a no-op.

    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
| * | | | | arm64: Enable HIBERNATION in defconfig (Catalin Marinas, 2016-11-23; 1 file, -0/+1)

    This patch adds CONFIG_HIBERNATION to the arm64 defconfig.

    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>