Commit message / Author / Files / Lines
2013-05-31reiserfs: fix problems with chowning setuid file w/ xattrsJeff Mahoney2-1/+16
reiserfs_chown_xattrs() takes the iattr struct passed into ->setattr and uses it to iterate over all the attrs associated with a file to change ownership of xattrs (and transfer quota associated with the xattr files). When the setuid bit is cleared during chown, ATTR_MODE and iattr->ia_mode are passed to all the xattrs as well. This means that the xattr directory will have S_IFREG added to its mode bits. This has been prevented in practice by a missing IS_PRIVATE check in reiserfs_acl_chmod, which caused a double-lock to occur while holding the write lock. Since the file system was completely locked up, the writeout of the corrupted mode never happened. This patch temporarily clears everything but ATTR_UID|ATTR_GID for the calls to reiserfs_setattr and adds the missing IS_PRIVATE check. Signed-off-by: Jeff Mahoney <jeffm@suse.com> Signed-off-by: Jan Kara <jack@suse.cz>
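A minimal sketch of the masking idea described above, assuming a per-xattr callback along these lines (the helper name chown_one_xattr is illustrative, not necessarily the upstream one):

    static int chown_one_xattr(struct dentry *dentry, void *data)
    {
            struct iattr *attrs = data;
            unsigned int ia_valid = attrs->ia_valid;
            int err;

            /*
             * This is an fs-internal xattr file: forward only the ownership
             * change so that mode bits such as S_IFREG can never leak into
             * the private xattr directory or the xattr files themselves.
             */
            attrs->ia_valid &= (ATTR_UID | ATTR_GID);
            err = reiserfs_setattr(dentry, attrs);
            attrs->ia_valid = ia_valid;

            return err;
    }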
2013-05-31reiserfs: fix spurious multiple-fill in reiserfs_readdir_dentryJeff Mahoney1-0/+2
After sleeping for filldir(), we check to see if the file system has changed and, if so, re-search. The next_pos pointer is updated but its value isn't pushed into the key used for the search itself. As a result, the search returns the same item that the last cycle of the loop did and filldir() is called multiple times with the same data. The end result is that the buffer can contain the same name multiple times. This can be returned to userspace or used internally in the xattr code where it can manifest with the following warning: jdm-20004 reiserfs_delete_xattrs: Couldn't delete all xattrs (-2) reiserfs_for_each_xattr uses reiserfs_readdir_dentry to iterate over the xattr names and ends up trying to unlink the same name twice. The second attempt fails with -ENOENT and the error is returned. At some point I'll need to add support into reiserfsck to remove the orphaned directories left behind when this occurs. The fix is to push the value into the key before re-searching. Signed-off-by: Jeff Mahoney <jeffm@suse.com> Signed-off-by: Jan Kara <jack@suse.cz>
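The fix itself is tiny; a sketch of the idea, using the usual reiserfs key helpers (the surrounding variable names are taken from reiserfs_readdir_dentry and treated as approximate):

    /* filldir() may have slept; if the item moved, search again, but
     * start from the position already consumed, not the stale key. */
    if (item_moved(&tmp_ih, &path_to_entry)) {
            set_cpu_key_k_offset(&pos_key, next_pos);
            goto research;
    }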
2013-05-31arm64: don't kill the kernel on a bad esr from el0Mark Rutland1-3/+9
Rather than completely killing the kernel if we receive an esr value we can't deal with in the el0 handlers, send the process a SIGILL and log the esr value in the hope that we can debug it. If we receive a bad esr from el1, we'll die() as before. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Cc: stable@vger.kernel.org
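Roughly what the handler now does instead of unconditionally killing the kernel; a sketch in the style of arch/arm64/kernel/traps.c, not the literal patch:

    if (user_mode(regs)) {
            siginfo_t info;

            pr_info("%s[%d]: unhandled exception, esr 0x%08x\n",
                    current->comm, task_pid_nr(current), esr);

            info.si_signo = SIGILL;
            info.si_errno = 0;
            info.si_code  = ILL_ILLOPC;
            info.si_addr  = (void __user *)instruction_pointer(regs);

            force_sig_info(SIGILL, &info, current);
            return;
    }

    die("Oops - bad mode", regs, 0);        /* esr from el1: fatal as before */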
2013-05-31arm64: treat unhandled compat el0 traps as undefMark Rutland1-0/+10
Currently, if a compat process reads or writes from/to a disabled cp15/cp14 register, the trap is not handled by the el0_sync_compat handler, and the kernel will head to bad_mode, where it will die(), and oops(). For 64 bit processes, disabled system register accesses are currently treated as unhandled instructions. This patch modifies entry.S to treat these unhandled traps as undefined instructions, sending a SIGILL to userspace. This gives processes a chance to handle this and stop using inaccessible registers, and prevents further issues in the kernel as a result of the die(). Reported-by: Johannes Jensen <Johannes.Jensen@arm.com> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-05-31drm/qxl: fix build warnings on 32-bitDave Airlie2-6/+7
Just the usual printk related warnings. Signed-off-by: Dave Airlie <airlied@redhat.com>
2013-05-31x86: Allow FPU to be used at interrupt time even with eagerfpuPekka Riikonen1-9/+5
With the addition of eagerfpu, irq_fpu_usable() now returns false negatives, especially in the case of ksoftirqd and an interrupted idle task, two common cases for FPU use, for example in networking/crypto. With eagerfpu=off FPU use is possible in those contexts. This is because of the eagerfpu check in interrupted_kernel_fpu_idle():

    ...
     * For now, with eagerfpu we will return interrupted kernel FPU
     * state as not-idle. TBD: Ideally we can change the return value
     * to something like __thread_has_fpu(current). But we need to
     * be careful of doing __thread_clear_has_fpu() before saving
     * the FPU etc for supporting nested uses etc. For now, take
     * the simple route!
    ...
    if (use_eager_fpu())
            return 0;

As eagerfpu is automatically "on" on those CPUs that also have features like AES-NI, this patch changes the eagerfpu check to return 1 in case kernel_fpu_begin() has not been called yet. Once it has been called, __thread_has_fpu() will start returning 0. Notice that with eagerfpu __thread_has_fpu() is always true initially. FPU use is thus always possible no matter what task is under us, unless the state has already been saved with kernel_fpu_begin(). [ hpa: this is a performance regression, not a correctness regression, but since it can be quite serious on CPUs which need encryption at interrupt time I am marking this for urgent/stable. ] Signed-off-by: Pekka Riikonen <priikone@iki.fi> Link: http://lkml.kernel.org/r/alpine.GSO.2.00.1305131356320.18@git.silcnet.org Cc: <stable@vger.kernel.org> v3.7+ Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
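The described change boils down to something like this in interrupted_kernel_fpu_idle() (a sketch of the logic, not necessarily the exact upstream hunk):

    static inline bool interrupted_kernel_fpu_idle(void)
    {
            /*
             * With eagerfpu the FPU is usable from interrupt context until
             * kernel_fpu_begin() has claimed it for the interrupted task;
             * __thread_has_fpu(current) flips to false exactly at that point.
             */
            if (use_eager_fpu())
                    return __thread_has_fpu(current);

            return !__thread_has_fpu(current) &&
                    (read_cr0() & X86_CR0_TS);
    }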
2013-05-31x86, crc32-pclmul: Fix build with older binutilsJan Beulich2-3/+73
binutils prior to 2.18 (e.g. the ones found on SLE10) don't support assembling PEXTRD, so a macro based approach like the one for PCLMULQDQ in the same file should be used. This requires making the helper macros capable of recognizing 32-bit general purpose register operands. [ hpa: tagging for stable as it is a low risk build fix ] Signed-off-by: Jan Beulich <jbeulich@suse.com> Link: http://lkml.kernel.org/r/51A6142A02000078000D99D8@nat28.tlf.novell.com Cc: Alexander Boyko <alexander_boyko@xyratex.com> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Huang Ying <ying.huang@intel.com> Cc: <stable@vger.kernel.org> v3.9 Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2013-05-31xfs: rework remote attr CRCsDave Chinner4-158/+247
Note: this changes the on-disk remote attribute format. I assert that this is OK to do as CRCs are marked experimental and the first kernel it is included in has not yet reached release. Further, the userspace utilities are still evolving and so anyone using this stuff right now is a developer or tester using volatile filesystems for testing this feature. Hence changing the format right now to save longer term pain is the right thing to do. The fundamental change is to move from a header per extent in the attribute to a header per filesystem block in the attribute. This means there are more header blocks and the parsing of the attribute data is slightly more complex, but it has the advantage that we always know the size of the attribute on disk based on the length of the data it contains. This is where the header-per-extent method has problems. We don't know the size of the attribute on disk without first knowing how many extents are used to hold it. And we can't tell from a mapping lookup, either, because remote attributes can be allocated contiguously with other attribute blocks and so there is no obvious way of determining the actual size of the attribute on disk short of walking and mapping buffers. The problem with this approach is that if we map a buffer incorrectly (e.g. we make the last buffer for the attribute data too long), we then get buffer cache lookup failure when we map it correctly. i.e. we get a size mismatch on lookup. This is not necessarily fatal, but it's a cache coherency problem that can lead to returning the wrong data to userspace or writing the wrong data to disk. And debug kernels will assert fail if this occurs. I found lots of niggly little problems trying to fix this issue on a 4k block size filesystem, finally getting it to pass with lots of fixes. The thing is, 1024 byte filesystems still failed, and it was getting really complex handling all the corner cases that were showing up. And there were clearly more that I hadn't found yet. It is complex, fragile code, and if we don't fix it now, it will be complex, fragile code forever more. Hence the simple fix is to add a header to each filesystem block. This gives us the same relationship between the attribute data length and the number of blocks on disk as we have without CRCs - it's a linear mapping and doesn't require us to guess anything. It is simple to implement, too - the remote block count calculated at lookup time can be used by the remote attribute set/get/remove code without modification for both CRC and non-CRC filesystems. The world becomes sane again. Because the copy-in and copy-out now need to iterate over each filesystem block, I moved them into helper functions so we separate the block mapping and buffer manipulations from the attribute data and CRC header manipulations. The code becomes much clearer as a result, and it is a lot easier to understand and debug. It also appears to be much more robust - once it worked on 4k block size filesystems, it has worked without failure on 1k block size filesystems, too. Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Ben Myers <bpm@sgi.com> Signed-off-by: Ben Myers <bpm@sgi.com> (cherry picked from commit ad1858d77771172e08016890f0eb2faedec3ecee)
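The payoff of the header-per-block format is that the on-disk size becomes a pure function of the value length; roughly (variable names illustrative):

    /* one struct xfs_attr3_rmt_hdr at the start of every filesystem block */
    int hdrsize = sizeof(struct xfs_attr3_rmt_hdr);
    int span    = mp->m_sb.sb_blocksize - hdrsize;  /* payload per block */
    int nblks   = (valuelen + span - 1) / span;     /* blocks on disk    */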
2013-05-31xfs: fully initialise temp leaf in xfs_attr3_leaf_compactDave Chinner1-16/+26
xfs_attr3_leaf_compact() uses a temporary buffer for compacting the entries in a leaf. It copies the original buffer into the temporary buffer, then zeros the original buffer completely. It then copies the entries back into the original buffer. However, the original buffer has not been correctly initialised, and so the movement of the entries goes horribly wrong. Make sure the zeroed destination buffer is fully initialised, and once we've set up the destination incore header appropriately, write it back to the buffer before starting to move entries around. While debugging this, the _d/_s prefixes weren't sufficient to remind me what buffer was what, so rename them all to _src/_dst. Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Ben Myers <bpm@sgi.com> Signed-off-by: Ben Myers <bpm@sgi.com> (cherry picked from commit d4c712bcf26a25c2b67c90e44e0b74c7993b5334)
2013-05-31xfs: fully initialise temp leaf in xfs_attr3_leaf_unbalanceDave Chinner1-3/+13
xfs_attr3_leaf_unbalance() uses a temporary buffer for recombining the entries in two leaves when the destination leaf requires compaction. The temporary buffer ends up being copied back over the original destination buffer, so the header in the temporary buffer needs to contain all the information that is in the destination buffer. To make sure the temporary buffer is fully initialised, once we've set up the temporary incore header appropriately, write it back to the temporary buffer before starting to move entries around. Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Ben Myers <bpm@sgi.com> Signed-off-by: Ben Myers <bpm@sgi.com> (cherry picked from commit 8517de2a81da830f5d90da66b4799f4040c76dc9)
2013-05-31xfs: correctly map remote attr buffers during removalDave Chinner1-9/+18
If we don't map the buffers correctly (same as for get/set operations) then the incore buffer lookup will fail. If a block number matches but a length is wrong, then debug kernels will ASSERT fail in _xfs_buf_find() due to the length mismatch. Ensure that we map the buffers correctly by basing the length of the buffer on the attribute data length rather than the remote block count. Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Ben Myers <bpm@sgi.com> Signed-off-by: Ben Myers <bpm@sgi.com> (cherry picked from commit 6863ef8449f1908c19f43db572e4474f24a1e9da)
2013-05-31xfs: remote attribute tail zeroing does too muchDave Chinner1-19/+18
When the attribute data does not fill the entire remote block, we zero the remaining part of the buffer. This, however, needs to take into account that the buffer has a header, and so the offset where zeroing starts and the length of zeroing need to take this into account. Otherwise we end up with zeros over the end of the attribute value when CRCs are enabled. While there, make sure we only ask to map an extent that covers the remaining range of the attribute, rather than asking every time for the full length of remote data. If the remote attribute blocks are contiguous with other parts of the attribute tree, it will map those blocks as well and we can potentially zero them incorrectly. We can also get buffer size mismatches when trying to read or remove the remote attribute, and this can lead to not finding the correct buffer when looking it up in cache. Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Ben Myers <bpm@sgi.com> Signed-off-by: Ben Myers <bpm@sgi.com> (cherry picked from commit 4af3644c9a53eb2f1ecf69cc53576561b64be4c6)
2013-05-31xfs: remote attribute read too shortDave Chinner1-6/+9
Reading a maximally sized remote attribute fails when CRCs are enabled with this verification error:

    XFS (vdb): remote attribute header does not match required off/len/owner)

There are two reasons for this, the first being that the length of the buffer being read is determined from the args->rmtblkcnt which doesn't take into account CRC headers. Hence the mapped length ends up being too short and so we need to calculate it directly from the value length. The second is that the byte count of valid data within a buffer is capped by the length of the data and so doesn't take into account that the buffer might be longer due to headers. Hence we need to calculate the data space in the buffer first before calculating the actual byte count of data. Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Ben Myers <bpm@sgi.com> Signed-off-by: Ben Myers <bpm@sgi.com> (cherry picked from commit 913e96bc292e1bb248854686c79d6545ef3ee720)
2013-05-31xfs: remote attribute allocation may be contiguousDave Chinner1-1/+5
When CRCs are enabled, there may be multiple allocations made if the headers cause a length overflow. This, however, does not mean that the number of headers required increases, as the second and subsequent extents may be contiguous with the previous extent. Hence when we map the extents to write the attribute data, we may end up with fewer extents than allocations made. Hence the assertion that we consume the number of headers we calculated in the allocation loop is incorrect and needs to be removed. Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Ben Myers <bpm@sgi.com> Signed-off-by: Ben Myers <bpm@sgi.com> (cherry picked from commit 90253cf142469a40f89f989904abf0a1e500e1a6)
2013-05-31xfs: fix dir3 freespace block corruptionDave Chinner2-7/+7
When the directory freespace index grows to a second block (2017 4k data blocks in the directory), the initialisation of the second new block header goes wrong. The write verifier fires a corruption error indicating that the block number in the header is zero. This was being tripped by xfs/110. The problem is that the initialisation of the new block is done just fine in xfs_dir3_free_get_buf(), but the caller then uses a dir v2 structure to zero on-disk header fields that xfs_dir3_free_get_buf() has already zeroed. These lined up with the block number in the dir v3 header format. While looking at this, I noticed that struct xfs_dir3_free_hdr had 4 bytes of padding in it that wasn't defined as padding or being zeroed by the initialisation. Add a pad field declaration and fully zero the on-disk and in-core headers in xfs_dir3_free_get_buf() so that this is never an issue in the future. Note that this doesn't change the on-disk layout, just makes the 32 bits of padding in the layout explicit. Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Ben Myers <bpm@sgi.com> Signed-off-by: Ben Myers <bpm@sgi.com> (cherry picked from commit 5ae6e6a401957698f2bd8c9f4a86d86d02199fea)
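After the change the on-disk free-space header looks roughly like this, with the 32-bit hole declared (and zeroed) rather than left implicit:

    struct xfs_dir3_free_hdr {
            struct xfs_dir3_blk_hdr hdr;
            __be32                  firstdb;        /* db of first entry */
            __be32                  nvalid;         /* count of valid entries */
            __be32                  nused;          /* count of used entries */
            __be32                  pad;            /* 64 bit alignment */
    };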
2013-05-31xfs: disable swap extents ioctl on CRC enabled filesystemsDave Chinner1-0/+8
Currently, swapping extents from one inode to another is a simple act of switching data and attribute forks from one inode to another. This, unfortunately, is no longer so simple with CRC enabled filesystems, as there is owner information embedded into the BMBT blocks that are swapped between inodes. Hence swapping the forks between inodes results in the inodes having mapping blocks that point to the wrong owner and hence are considered corrupt. To fix this we need an extent tree block or record based swap algorithm so that the BMBT block owner information can be updated atomically in the swap transaction. This is a significant piece of new work, so for the moment simply don't allow swap extent operations to succeed on CRC enabled filesystems. Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Ben Myers <bpm@sgi.com> Reviewed-by: Brian Foster <bfoster@redhat.com> Signed-off-by: Ben Myers <bpm@sgi.com> (cherry picked from commit 02f75405a75eadfb072609f6bf839e027de6a29a)
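The stop-gap itself is just an early bail-out in the swap-extents path, along these lines (error label hypothetical):

    /*
     * BMBT blocks carry owner information on v5 (CRC) filesystems, so a
     * plain fork swap would leave every swapped block with the wrong owner.
     */
    if (xfs_sb_version_hascrc(&mp->m_sb)) {
            error = XFS_ERROR(EINVAL);
            goto out_unlock;
    }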
2013-05-31xfs: add fsgeom flag for v5 superblock support.Dave Chinner2-1/+4
Currently userspace has no way of determining that a filesystem is CRC enabled. Add a flag to the XFS_IOC_FSGEOMETRY ioctl output to indicate that the filesystem has v5 superblock support enabled. This will allow xfs_info to correctly report the state of the filesystem. Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Eric Sandeen <sandeen@redhat.com> Reviewed-by: Brian Foster <bfoster@redhat.com> Signed-off-by: Ben Myers <bpm@sgi.com> (cherry picked from commit 74137fff067961c9aca1e14d073805c3de8549bd)
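Conceptually it is a single new geometry flag, reported when the superblock has v5/CRC support; a sketch (the flag value shown is an assumption):

    #define XFS_FSOP_GEOM_FLAGS_V5SB        0x8000  /* version 5 superblock */

    /* in xfs_fs_geometry(), alongside the existing feature flag tests: */
    if (xfs_sb_version_hascrc(&mp->m_sb))
            geo->flags |= XFS_FSOP_GEOM_FLAGS_V5SB;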
2013-05-31xfs: fix incorrect remote symlink block countDave Chinner1-14/+6
When CRCs are enabled, the number of blocks needed to hold a remote symlink on a 1k block size filesystem may be 2 instead of 1. The transaction reservation for the allocated blocks was not taking this into account and only allocating one block. Hence when trying to read or invalidate such symlinks, we are mapping a hole where there should be a block and things go bad at that point. Fix the reservation to use the correct block count, clean up the block count calculation similar to the remote attribute calculation, and add a debug guard to detect when we don't write the entire symlink to disk. Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Ben Myers <bpm@sgi.com> Reviewed-by: Brian Foster <bfoster@redhat.com> Signed-off-by: Ben Myers <bpm@sgi.com> (cherry picked from commit 321a95839e65db3759a07a3655184b0283af90fe)
2013-05-31xfs: fix split buffer vector log recovery supportDave Chinner2-6/+12
A long time ago in a galaxy far away.... .. there was a commit made to fix some ilinux specific "fragmented buffer" log recovery problem: http://oss.sgi.com/cgi-bin/gitweb.cgi?p=archive/xfs-import.git;a=commitdiff;h=b29c0bece51da72fb3ff3b61391a391ea54e1603 That problem occurred when a contiguous dirty region of a buffer was split across two pages of an unmapped buffer. It's been a long time since that has been done in XFS, and the changes to log the entire inode buffers for CRC enabled filesystems have re-introduced that corner case. And, of course, it turns out that the above commit didn't actually fix anything - it just ensured that log recovery is guaranteed to fail when this situation occurs. And now for the gory details. xfstest xfs/085 is failing with this assert:

    XFS (vdb): bad number of regions (0) in inode log format
    XFS: Assertion failed: 0, file: fs/xfs/xfs_log_recover.c, line: 1583

Largely undocumented factoid #1: Log recovery depends on all log buffer format items starting with this format:

    struct foo_log_format {
            __uint16_t type;
            __uint16_t size;
            ....

As recovery uses the size field and assumptions about 32 bit alignment in decoding format items. So don't pay much attention to the fact log recovery thinks that it is decoding an inode log format item - it just uses them to determine what the size of the item is. But why would it see a log format item with a zero size? Well, luckily enough xfs_logprint uses the same code and gives the same error, so with a bit of gdb magic, it turns out that it isn't a log format that is being decoded. What logprint tells us is this:

    Oper (130): tid: a0375e1a len: 28 clientid: TRANS flags: none
    BUF: #regs: 2 start blkno: 144 (0x90) len: 16 bmap size: 2 flags: 0x4000
    Oper (131): tid: a0375e1a len: 4096 clientid: TRANS flags: none
    BUF DATA
    ----------------------------------------------------------------------------
    Oper (132): tid: a0375e1a len: 4096 clientid: TRANS flags: none
    xfs_logprint: unknown log operation type (4e49)
    **********************************************************************
    * ERROR: data block=2                                                *
    **********************************************************************

That is, we've got a buffer format item (oper 130) that has two regions: the format item itself and one dirty region. The subsequent region after the buffer format item and its data is then what we are tripping over, and the first bytes of it are an inode magic number. Not a log opheader like there is supposed to be. That means there's a problem with the buffer format item. Its dirty data region is 4096 bytes, and it contains - you guessed it - initialised inodes. But inode buffers are 8k, not 4k, and we log them in their entirety. So something is wrong here. The buffer format item contains:

    (gdb) p /x *(struct xfs_buf_log_format *)in_f
    $22 = {blf_type = 0x123c, blf_size = 0x2, blf_flags = 0x4000,
           blf_len = 0x10, blf_blkno = 0x90, blf_map_size = 0x2,
           blf_data_map = {0xffffffff, 0xffffffff, .... }}

Two regions, and a single dirty contiguous region of 64 bits. 64 * 128 = 8k, so this should be followed by a single 8k region of data. And the blf_flags tell us that the type of buffer is an XFS_BLFT_DINO_BUF. It contains inodes. And because it doesn't have the XFS_BLF_INODE_BUF flag set, that means it's an inode allocation buffer. So, it should be followed by 8k of inode data.
But we know that the next region has a header of:

    (gdb) p /x *ohead
    $25 = {oh_tid = 0x1a5e37a0, oh_len = 0x100000, oh_clientid = 0x69,
           oh_flags = 0x0, oh_res2 = 0x0}

and so be32_to_cpu(oh_len) = 0x1000 = 4096 bytes. It's simply not long enough to hold all the logged data. There must be another region. There is - there's a following opheader for another 4k of data that contains the other half of the inode cluster data - the one we assert fail on because it's not a log format header. So why is the second part of the data not being accounted to the correct buffer log format structure? It took a little more work with gdb to work out that the buffer log format structure was expecting it to be there but hadn't accounted for it. It was at that point I went to the kernel code, as clearly this wasn't a bug in xfs_logprint and the kernel was writing bad stuff to the log. First port of call was the buffer item formatting code, and the discontiguous memory/contiguous dirty region handling code immediately stood out. I've wondered for a long time why the code had this comment in it:

    vecp->i_addr = xfs_buf_offset(bp, buffer_offset);
    vecp->i_len = nbits * XFS_BLF_CHUNK;
    vecp->i_type = XLOG_REG_TYPE_BCHUNK;
    /*
     * You would think we need to bump the nvecs here too, but we do not
     * this number is used by recovery, and it gets confused by the boundary
     * split here
     *          nvecs++;
     */
    vecp++;

And it didn't account for the extra vector pointer. The case being handled here is that a contiguous dirty region lies across a boundary that cannot be memcpy()d across, and so has to be split into two separate operations for xlog_write() to perform. What this code assumes is that what is written to the log is two consecutive blocks of data that are accounted in the buf log format item as the same contiguous dirty region and so will get decoded as such by the log recovery code. The thing is, xlog_write() knows nothing about this, and so just does its normal thing of adding an opheader for each vector. That means the 8k region gets written to the log as two separate regions of 4k each, but because nvecs has not been incremented, the buf log format item accounts for only one of them. Hence when we come to log recovery, we process the first 4k region and then expect to come across a new item that starts with a log format structure of some kind that tells us what the next data is going to be. Instead, we hit raw buffer data and things go bad real quick. So, the commit from 2002 that commented out nvecs++ is just plain wrong. It breaks log recovery completely, and it would seem the only reason this hasn't been seen since then is that we don't log large contiguous regions of multi-page unmapped buffers very often. Never would be a closer estimate, at least until the CRC code came along.... So, let's fix that by restoring the nvecs accounting for the extra region when we hit this case..... .... and there's the problem in log recovery it is apparently working around:

    XFS: Assertion failed: i == item->ri_total, file: fs/xfs/xfs_log_recover.c, line: 2135

Yup, xlog_recover_do_reg_buffer() doesn't handle contiguous dirty regions being broken up into multiple regions by the log formatting code. That's an easy fix, though - if the number of contiguous dirty bits exceeds the length of the region being copied out of the log, only account for the number of dirty bits that region covers, and then loop again and copy more from the next region. It's a 2 line fix.
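Sketched out, the two halves of the fix look roughly like this (simplified, not the literal diff):

    /* xfs_buf_item_format(): the split really is two log regions, so the
     * buf log format item has to account for both of them. */
    vecp->i_addr = xfs_buf_offset(bp, buffer_offset);
    vecp->i_len = nbits * XFS_BLF_CHUNK;
    vecp->i_type = XLOG_REG_TYPE_BCHUNK;
    nvecs++;                /* restore the accounting commented out in 2002 */
    vecp++;

    /* xlog_recover_do_reg_buffer(): a contiguous dirty run may have been
     * split across regions, so cap the copy at the current region size and
     * pick up the remainder from the next region on the following pass. */
    if (nbits > (item->ri_buf[i].i_len >> XFS_BLF_SHIFT))
            nbits = item->ri_buf[i].i_len >> XFS_BLF_SHIFT;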
Now xfstests xfs/085 passes, we have one less piece of mystery code, and one more important piece of knowledge about how to structure new log format items.. Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Mark Tinguely <tinguely@sgi.com> Signed-off-by: Ben Myers <bpm@sgi.com> (cherry picked from commit 709da6a61aaf12181a8eea8443919ae5fc1b731d)
2013-05-31xfs: kill suid/sgid through the truncate path.Dave Chinner1-15/+32
XFS has failed to kill suid/sgid bits correctly when truncating files of non-zero size since commit c4ed4243 ("xfs: split xfs_setattr") introduced in the 3.1 kernel. Fix it. cc: stable kernel <stable@vger.kernel.org> Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Brian Foster <bfoster@redhat.com> Signed-off-by: Ben Myers <bpm@sgi.com> (cherry picked from commit 56c19e89b38618390addfc743d822f99519055c6)
2013-05-31xfs: avoid nesting transactions in xfs_qm_scall_setqlim()Dave Chinner1-17/+23
Lockdep reports:

    =============================================
    [ INFO: possible recursive locking detected ]
    3.9.0+ #3 Not tainted
    ---------------------------------------------
    setquota/28368 is trying to acquire lock:
     (sb_internal){++++.?}, at: [<c11e8846>] xfs_trans_alloc+0x26/0x50
    but task is already holding lock:
     (sb_internal){++++.?}, at: [<c11e8846>] xfs_trans_alloc+0x26/0x50

from xfs_qm_scall_setqlim()->xfs_dqread() when a dquot needs to be allocated. xfs_qm_scall_setqlim() is starting a transaction and then not passing it into xfs_qm_dqget(), and so it starts its own transaction when allocating the dquot. Splat! Fix this by not allocating the dquot in xfs_qm_scall_setqlim() inside the setqlim transaction. This requires getting the dquot first (and allocating it if necessary), then dropping and relocking the dquot before joining it to the setqlim transaction. Reported-by: Michael L. Semon <mlsemon35@gmail.com> Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Ben Myers <bpm@sgi.com> Signed-off-by: Ben Myers <bpm@sgi.com> (cherry picked from commit f648167f3ac79018c210112508c732ea9bf67c7b)
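The reordering described above looks roughly like this (a sketch; the reservation macro and error label are assumptions, not the literal patch):

    /* Get, and if necessary allocate, the dquot before starting the
     * setqlim transaction, so transactions never nest. */
    error = xfs_qm_dqget(mp, NULL, id, type, XFS_QMOPT_DQALLOC, &dqp);
    if (error)
            return error;
    xfs_dqunlock(dqp);                      /* dqget returns it locked */

    tp = xfs_trans_alloc(mp, XFS_TRANS_QM_SETQLIM);
    error = xfs_trans_reserve(tp, 0, XFS_QM_SETQLIM_LOG_RES(mp),
                              0, 0, XFS_DEFAULT_LOG_COUNT);
    if (error)
            goto out_rele;

    xfs_dqlock(dqp);
    xfs_trans_dqjoin(tp, dqp);              /* safe: no nested trans_alloc */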
2013-05-30MAINTAINERS: Framebuffer Layer maintainers updateJean-Christophe PLAGNIOL-VILLARD1-2/+3
Tomi and I will now take care of the Framebuffer Layer. The git tree is now on kernel.org. Signed-off-by: Jean-Christophe PLAGNIOL-VILLARD <plagnioj@jcrosoft.com> Cc: Tomi Valkeinen <tomi.valkeinen@ti.com> Cc: Olof Johansson <olof@lixom.net> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Florian Tobias Schandinat <FlorianSchandinat@gmx.de> Cc: linux-fbdev@vger.kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-05-30NFS: Fix security flavor negotiation with legacy binary mountsChuck Lever1-0/+2
Darrick J. Wong <darrick.wong@oracle.com> reports:
> I have a kvm-based testing setup that netboots VMs over NFS, the
> client end of which seems to have broken somehow in 3.10-rc1. The
> server's exports file looks like this:
>
> /storage/mtr/x64 192.168.122.0/24(ro,sync,no_root_squash,no_subtree_check)
>
> On the client end (inside the VM), the initrd runs the following
> command to try to mount the rootfs over NFS:
>
> # mount -o nolock -o ro -o retrans=10 192.168.122.1:/storage/mtr/x64/ /root
>
> (Note: This is the busybox mount command.)
>
> The mount fails with -EINVAL.

Commit 4580a92d44 "NFS: Use server-recommended security flavor by default (NFSv3)" introduced a behavior regression for NFS mounts done via a legacy binary mount(2) call. Ensure that a default security flavor is specified for legacy binary mount requests, since they do not invoke nfs_select_flavor() in the kernel. Busybox uses klibc's nfsmount command, which performs NFS mounts using the legacy binary mount data format. /sbin/mount.nfs is not affected by this regression. Reported-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Tested-by: Darrick J. Wong <darrick.wong@oracle.com> Acked-by: Weston Andros Adamson <dros@netapp.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
2013-05-30aerdrv: Move cper_print_aer() call out of interrupt contextLance Ortiz5-24/+12
The following warning was seen on 3.9 when a corrected PCIe error was being handled by the AER subsystem. WARNING: at .../drivers/pci/search.c:214 pci_get_dev_by_id+0x8a/0x90() This occurred because a call to pci_get_domain_bus_and_slot() was added to cper_print_pcie() to setup for the call to cper_print_aer(). The warning showed up because cper_print_pcie() is called in an interrupt context and pci_get* functions are not supposed to be called in that context. The solution is to move the cper_print_aer() call out of the interrupt context and into aer_recover_work_func() to avoid any warnings when calling pci_get* functions. Signed-off-by: Lance Ortiz <lance.ortiz@hp.com> Acked-by: Borislav Petkov <bp@suse.de> Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
2013-05-30MN10300: Need pci_iomap() and __pci_ioport_map() definingDavid Howells1-0/+2
Include the generic definitions of pci_iomap() and __pci_ioport_map() otherwise we can get errors like:

    lib/pci_iomap.c: In function 'pci_iomap':
    lib/pci_iomap.c:37: error: implicit declaration of function '__pci_ioport_map'
    lib/pci_iomap.c:37: warning: return makes pointer from integer without a cast

and:

    drivers/pci/quirks.c: In function 'disable_igfx_irq':
    drivers/pci/quirks.c:2893: error: implicit declaration of function 'pci_iomap'
    drivers/pci/quirks.c:2893: warning: initialization makes pointer from integer without a cast
    drivers/pci/quirks.c: In function 'reset_ivb_igd':
    drivers/pci/quirks.c:3133: warning: assignment makes pointer from integer without a cast

Signed-off-by: David Howells <dhowells@redhat.com> Acked-by: Ken Cox <jkc@redhat.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-05-30MN10300: ASB2305's PCI code needs the definition of XIRQ1David Howells1-0/+1
The code for PCI in the ASB2305 needs the definition of XIRQ1 from proc/irq.h otherwise the following error appears:

    arch/mn10300/unit-asb2305/pci.c: In function 'unit_pci_init':
    arch/mn10300/unit-asb2305/pci.c:481: error: 'XIRQ1' undeclared (first use in this function)
    arch/mn10300/unit-asb2305/pci.c:481: error: (Each undeclared identifier is reported only once
    arch/mn10300/unit-asb2305/pci.c:481: error: for each function it appears in.)

Signed-off-by: David Howells <dhowells@redhat.com> Acked-by: Ken Cox <jkc@redhat.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-05-30MN10300: Enable IRQs more in system call exit work pathDavid Howells1-13/+5
Enable IRQs when calling schedule() for TIF_NEED_RESCHED and do_notify_resume(). If interrupts are enabled during do_notify_resume(), a warning can be seen (see lower down). Whilst we're at it, resume_userspace can be made local to entry.S as it is not called outside of there and it can be merged with the part of work_resched that occurs after schedule() is called.

    WARNING: at kernel/softirq.c:160 local_bh_enable+0x42/0xa0()
    Call Trace:
      local_bh_enable+0x42/0xa0
      unix_release_sock+0x86/0x23c
      unix_release+0x20/0x28
      sock_release+0x17/0x88
      sock_close+0x20/0x28
      __fput+0xc9/0x1fc
      ____fput+0xb/0x10
      task_work_run+0x64/0x78
      do_notify_resume+0x53d/0x544
      work_notifysig+0xa/0xc

Signed-off-by: David Howells <dhowells@redhat.com> Acked-by: Ken Cox <jkc@redhat.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-05-30MN10300: Fix ret_from_kernel_threadDavid Howells1-0/+1
ret_from_kernel_thread needs to set A2 to the thread_info pointer before jumping to syscall_exit. Without this, we never correctly start userspace. This was caused by the rejuggling of the fork/exec paths in commit ddf23e87a804 ("mn10300: switch to saner kernel_execve() semantics") Reported-by: Ken Cox <jkc@redhat.com> Signed-off-by: David Howells <dhowells@redhat.com> Acked-by: Ken Cox <jkc@redhat.com> Acked-by: Al Viro <viro@ZenIV.linux.org.uk> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-05-29NFSv4: Fix a thinko in nfs4_try_open_cachedTrond Myklebust1-1/+1
We need to pass the full open mode flags to nfs_may_open() when doing a delegated open. Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com> Cc: stable@vger.kernel.org
2013-05-29xenbus_client.c: correct exit path for xenbus_map_ring_valloc_hvmWei Liu1-2/+3
Apparently we should not free a page that has not been allocated. This is because alloc_xenballooned_pages will take care of freeing the page on its own. Signed-off-by: Wei Liu <wei.liu2@citrix.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
2013-05-29radeon: use max_bus_speed to activate gen2 speedsKleber Sacilotto de Souza3-21/+7
radeon currently uses a drm function to get the speed capabilities for the bus, drm_pcie_get_speed_cap_mask. However, this is a non-standard method of performing this detection and this patch changes it to use the max_bus_speed attribute. From: Lucas Kannebley Tavares <lucaskt@linux.vnet.ibm.com> Signed-off-by: Kleber Sacilotto de Souza <klebers@linux.vnet.ibm.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2013-05-29drm/radeon: narrow scope of Apple re-POST hackAlex Deucher1-1/+3
This narrows the scope of the apple re-POST hack added in: drm/radeon: re-POST the asic on Apple hardware when booted via EFI That patch prevents UVD from working on macs when booted in EFI mode. The original patch fixed macbook2,1 systems which were r5xx and hence have no UVD. Limit the hack to those systems to prevent UVD breakage on newer systems. Fixes: https://bugs.freedesktop.org/show_bug.cgi?id=63935 Cc: Matthew Garrett <mjg59@srcf.ucam.org> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Acked-by: Matthew Garrett <matthew.garrett@nebula.com>
2013-05-29drm/radeon: don't check crtcs in card_posted() on cards without DCEAlex Deucher1-0/+4
Skip checking crtcs in hardware without them. Avoids checking non-existent hardware. Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2013-05-29drm/radeon: fix card_posted check for newer asicsAlex Deucher1-10/+9
Newer asics have variable numbers of crtcs. Use that rather than the asic family to determine which crtcs to check. This avoids checking non-existent crtcs or missing crtcs on certain asics. Reviewed-by: Michel Dänzer <michel.daenzer@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Cc: stable@vger.kernel.org
2013-05-29drm/radeon: fix typo in cu_per_sh on verdeAlex Deucher1-1/+1
Should be 5 rather than 2. Noticed by sroland and glisse on IRC. Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Cc: stable@vger.kernel.org
2013-05-29drm/radeon: UVD block on SUMO2 is the same as on SUMOChristian König1-3/+1
The chip id for SUMO2 isn't used. fixes: https://bugs.freedesktop.org/show_bug.cgi?id=63935 Tested-By: Dave Witbrodt <dawitbro@sbcglobal.net> Signed-off-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2013-05-29svcrpc: fix failures to handle -1 uid's and gid'sJ. Bruce Fields1-5/+7
As of f025adf191924e3a75ce80e130afcd2485b53bb8 "sunrpc: Properly decode kuids and kgids in RPC_AUTH_UNIX credentials" any rpc containing a -1 (0xffff) uid or gid would fail with a badcred error. Reported symptoms were xmbc clients failing on upgrade of the NFS server; examination of the network trace showed them sending -1 as the gid. Reported-by: Julian Sikorski <belegdol@gmail.com> Tested-by: Julian Sikorski <belegdol@gmail.com> Cc: "Eric W. Biederman" <ebiederm@xmission.com> Cc: stable@vger.kernel.org Signed-off-by: J. Bruce Fields <bfields@redhat.com>
2013-05-29xen-pciback: more uses of cached MSI-X capability offsetJan Beulich1-2/+2
Signed-off-by: Jan Beulich <jbeulich@suse.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
2013-05-29xen: Clean up apic ipi interfaceStefan Bader2-7/+4
Commit f447d56d36af18c5104ff29dcb1327c0c0ac3634 introduced the implementation of the PV apic ipi interface. But there were some odd things (it seems none of which cause really any issue but maybe they should be cleaned up anyway):
 - xen_send_IPI_mask_allbutself (and by that xen_send_IPI_allbutself) ignore the passed in vector and only use the CALL_FUNCTION_SINGLE vector. While xen_send_IPI_all and xen_send_IPI_mask use the vector.
 - physflat_send_IPI_allbutself is declared unnecessarily. It is never used.
This patch tries to clean up those things. Signed-off-by: Stefan Bader <stefan.bader@canonical.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
2013-05-29xenbus: save xenstore local status for later useAurelien Chartier3-15/+20
Save the xenstore local status computed in xenbus_init. It can then be used later to check if xenstored is running in this domain. Signed-off-by: Aurelien Chartier <aurelien.chartier@citrix.com> [Changes in v4: - Change variable name to xen_store_domain_type] Reviewed-by: David Vrabel <david.vrabel@citrix.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
2013-05-29xenbus: delay xenbus frontend resume if xenstored is not runningAurelien Chartier2-1/+37
If the xenbus frontend is located in a domain running xenstored, the device resume is hanging because it is happening before the process resume. This patch adds extra logic to the resume code to check if we are the domain running xenstored and delay the resume if needed. Signed-off-by: Aurelien Chartier <aurelien.chartier@citrix.com> [Changes in v2: - Instead of bypassing the resume, process it in a workqueue] [Changes in v3: - Add a struct work in xenbus_device to avoid dynamic allocation - Several small code fixes] [Changes in v4: - Use a dedicated workqueue] [Changes in v5: - Move create_workqueue error handling to xenbus_frontend_dev_resume] Acked-by: Jan Beulich <jbeulich@suse.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
2013-05-29ARM: exynos: defconfig updateOlof Johansson1-8/+46
This turns on a number of configs that are useful on the Chromebook, but also good to have on in general:
 * USB host and MMC drivers(!)
 * I2C GPIO arbitration driver
 * CYAPA trackpad driver
 * simplefb
 * CROS EC and keyboard drivers
 * S5M8767 driver
 * MAX77686 drivers
 * MAX8997 driver
 * DEVTMPFS + mount
 * DM_CRYPT (as module)
 * CRYPTOLOOP
 * HIGHMEM
 * PRINTK timestamps
This also turns off DEBUG_LL, and switches the hardcoded Samsung lowlevel uart to uart 3 (which is only used to show the "uncompressing kernel" message at boot, it seems). Signed-off-by: Olof Johansson <olof@lixom.net> Reviewed-by: Doug Anderson <dianders@chromium.org> Tested-by: Tushar Behera <tushar.behera@linaro.org> Acked-by: Kukjin Kim <kgene.kim@samsung.com>
2013-05-29x86-64, init: Fix a possible wraparound bug in switchover in head_64.SZhang Yanfei1-2/+4
In head_64.S, a switchover has been used to handle the kernel crossing 1G and 512G boundaries. And commit 8170e6bed465b4b0c7687f93e9948aca4358a33b ("x86, 64bit: Use a #PF handler to materialize early mappings on demand") said:

    During the switchover in head_64.S, before #PF handler is available,
    we use three pages to handle kernel crossing 1G, 512G boundaries with
    sharing page by playing games with page aliasing: the same page is
    mapped twice in the higher-level tables with appropriate wraparound.

But from the switchover code, when we set up the PUD table:

    114 addq $4096, %rdx
    115 movq %rdi, %rax
    116 shrq $PUD_SHIFT, %rax
    117 andl $(PTRS_PER_PUD-1), %eax
    118 movq %rdx, (4096+0)(%rbx,%rax,8)
    119 movq %rdx, (4096+8)(%rbx,%rax,8)

It seems line 119 has a potential bug there. For example, if the kernel is loaded at physical address 511G+1008M, that is

    000000000 111111111 111111000 000000000000000000000

and the kernel _end is 512G+2M, that is

    000000001 000000000 000000001 000000000000000000000

So in this example, when using the 2nd page to set up the PUD (lines 114-119), rax is 511. In line 118, we put rdx, which is the address of the PMD page (the 3rd page), into entry 511 of the PUD table. But in line 119, the entry we calculate from (4096+8)(%rbx,%rax,8) has exceeded the PUD page. IMO, the entry in line 119 should wrap around into entry 0 of the PUD table. The patch fixes the bug. Signed-off-by: Zhang Yanfei <zhangyanfei@cn.fujitsu.com> Link: http://lkml.kernel.org/r/5191DE5A.3020302@cn.fujitsu.com Signed-off-by: Yinghai Lu <yinghai@kernel.org> Cc: <stable@vger.kernel.org> v3.9 Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
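In C terms, the index arithmetic being fixed is about this much (purely illustrative; the real code is the assembly quoted above):

    /* index of the kernel's first PUD entry, e.g. 511 for 511G+1008M */
    unsigned long idx  = (phys_addr >> PUD_SHIFT) & (PTRS_PER_PUD - 1);
    /* the aliased second entry must wrap to 0, not spill past the page */
    unsigned long next = (idx + 1) & (PTRS_PER_PUD - 1);

    pud[idx]  = pmd_phys + _KERNPG_TABLE;
    pud[next] = pmd_phys + _KERNPG_TABLE;   /* not pud[idx + 1] */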
2013-05-28svcrpc: implement O_NONBLOCK behavior for use-gss-proxyJ. Bruce Fields1-2/+4
Somebody noticed LTP was complaining about O_NONBLOCK opens of /proc/net/rpc/use-gss-proxy succeeding and then a following read hanging. I'm not convinced LTP really has any business opening random proc files and expecting them to behave a certain way. Maybe this isn't really a bug. But in any case the O_NONBLOCK behavior could be useful for someone that wants to test whether gss-proxy is up without waiting. Reported-by: Jan Stancek <jstancek@redhat.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
2013-05-28ring-buffer: Do not poll non allocated cpu buffersSteven Rostedt (Red Hat)1-0/+3
The tracing infrastructure sets up for possible CPUs, but as it uses the ring buffer polling, it is possible to call the ring buffer polling code with a CPU that hasn't been allocated. This will cause a kernel oops when it accesses a ring buffer cpu buffer that is part of the possible cpus but hasn't been allocated yet, as the CPU has never been online. Reported-by: Mauro Carvalho Chehab <mchehab@redhat.com> Tested-by: Mauro Carvalho Chehab <mchehab@redhat.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
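The guard amounts to checking the buffer's cpumask before touching the per-cpu buffer; a sketch (structure approximate):

    if (cpu == RING_BUFFER_ALL_CPUS)
            work = &buffer->irq_work;
    else {
            /* possible-but-never-online CPUs have no cpu_buffer allocated */
            if (!cpumask_test_cpu(cpu, buffer->cpumask))
                    return -EINVAL;
            cpu_buffer = buffer->buffers[cpu];
            work = &cpu_buffer->irq_work;
    }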
2013-05-28ASoC: cs42l52: fix default value for MASTERA_VOL.Nicolas Schichan1-1/+1
The default register value for MASTERA_VOL is 0x00, the same as MASTERB_VOL. Signed-off-by: Nicolas Schichan <nschichan@freebox.fr> Acked-by: Brian Austin <brian.austin@cirrus.com> Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com> Cc: stable@vger.kernel.org
2013-05-28ASoC: wm8994: check for array index returnedVinod Koul1-0/+5
The array 'drc_cfg' of size 3 may use index value -22 (EINVAL) The array 'retune_mobile_cfg' of size 3 may use index value -22 (EINVAL) Signed-off-by: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
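The fix is the standard pattern of validating a lookup result before using it as an array index; a sketch (helper and field names are taken from the driver and treated as approximate):

    int value, block = wm8994_get_drc(kcontrol->id.name);

    if (block < 0)          /* -EINVAL: name did not match a DRC block */
            return block;   /* must never be used to index drc_cfg[]   */

    value = wm8994->drc_cfg[block];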
2013-05-28ASoC: wm8994: Fix reporting of accessory removal on WM8958Mark Brown1-0/+5
During recent refactoring the code to report removal when MICDET reports an absent microphone was removed, causing problems for systems which rely solely on the MICDET for this functionality. Restore it. Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
2013-05-28ASoC: wm8994: use the correct pointer to get the control valueVinod Koul1-1/+1
Signed-off-by: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
2013-05-28xen/tmem: fix 'undefined variable' build error.Frederico Cadete1-0/+2
In the (not so useful) kernel configuration where CONFIG_SWAP is undefined and CONFIG_XEN_SELFBALLOONING is defined, xen_tmem_init would use undefined variable 'static bool frontswap'. Added #else to have #define frontswap (0) in the case where CONFIG_FRONTSWAP is not defined. Signed-off-by: Frederico Cadete <frederico@cadete.eu> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
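The fix is the classic preprocessor fallback: when CONFIG_FRONTSWAP is not set, give 'frontswap' a compile-time value of 0 so the code that tests it still builds (and gets optimised away):

    #ifdef CONFIG_FRONTSWAP
    static bool frontswap __read_mostly = true;     /* default value illustrative */
    module_param(frontswap, bool, S_IRUGO);
    #else
    #define frontswap (0)
    #endif /* CONFIG_FRONTSWAP */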