path: root/drivers/connector
Date  Commit message  Author  Files/Lines
2017-07-07  libceph: resend on PG splits if OSD has RESEND_ON_SPLIT  Ilya Dryomov  3 files, -11/+19
Note that ceph_osd_request_target fields are updated regardless of RESEND_ON_SPLIT. Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
2017-07-07  libceph: drop need_resend from calc_target()  Ilya Dryomov  1 file, -7/+11
Replace it with more fine-grained bools to separate updating ceph_osd_request_target fields and the decision to resend. Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
2017-07-07  libceph: MOSDOp v8 encoding (actual spgid + full hash)  Ilya Dryomov  3 files, -20/+154
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
2017-07-07  libceph: ceph_connection_operations::reencode_message() method  Ilya Dryomov  2 files, -2/+7
Give upper layers a chance to reencode the message after the connection is negotiated and ->peer_features is set. OSD client will use this to support both luminous and pre-luminous OSDs (in a single cluster): the former need MOSDOp v8; the latter will continue to be sent MOSDOp v4. Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
2017-07-07  libceph: encode_{pgid,oloc}() helpers  Ilya Dryomov  1 file, -23/+27
Factor out encode_{pgid,oloc}() and use ceph_encode_string() for oid. Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
2017-07-07  libceph: introduce ceph_spg, ceph_pg_to_primary_shard()  Ilya Dryomov  5 files, -4/+60
Store both raw pgid and actual spgid in ceph_osd_request_target. Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
2017-07-07  libceph: new pi->last_force_request_resend  Ilya Dryomov  1 file, -0/+37
The old (v15) pi->last_force_request_resend has been repurposed to make pre-RESEND_ON_SPLIT clients that don't check for PG splits but do obey pi->last_force_request_resend resend on splits. See ceph.git commit 189ca7ec6420 ("mon/OSDMonitor: make pre-luminous clients resend ops on split"). Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
2017-07-07  libceph: fold [l]req->last_force_resend into ceph_osd_request_target  Ilya Dryomov  2 files, -13/+12
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
2017-07-07  libceph: support SERVER_JEWEL feature bits  Ilya Dryomov  2 files, -1/+9
Only MON_STATEFUL_SUB, really. MON_ROUTE_OSDMAP and OSDSUBOP_NO_SNAPCONTEXT are irrelevant. Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
2017-07-07  libceph: advertise support for OSD_POOLRESEND  Ilya Dryomov  1 file, -0/+1
The code has been in place since commit 63244fa123a7 ("libceph: introduce ceph_osd_request_target, calc_target()"), and, with the ceph_{oloc,oid}_copy() issue fixed in the previous commit, is now in working order. Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
2017-07-07  libceph: handle non-empty dest in ceph_{oloc,oid}_copy()  Ilya Dryomov  1 file, -4/+6
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
2017-07-07  libceph: new features macros  Ilya Dryomov  1 file, -75/+167
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
2017-07-07  libceph: remove ceph_sanitize_features() workaround  Ilya Dryomov  2 files, -23/+1
Reflects ceph.git commit ff1959282826ae6acd7134e1b1ede74ffd1cc04a. Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
2017-07-07  ceph: update ceph_dentry_info::lease_session when necessary  Yan, Zheng  1 file, -2/+7
Current code does not update ceph_dentry_info::lease_session once it is set. If the auth MDS of the corresponding dentry changes, the dentry lease stays in an invalid state. Signed-off-by: "Yan, Zheng" <zyan@redhat.com> Reviewed-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
2017-07-07  ceph: new mount option that specifies fscache uniquifier  Yan, Zheng  3 files, -21/+113
Currently ceph uses the FSID as the primary index key of fscache data. This allows ceph to retain cached data across remounts. But this causes problems (kernel oops, as fscache does not support sharing data) when a filesystem gets mounted several times (with fscache enabled, with different mount options). The fix is to add a new mount option which specifies a uniquifier for fscache. Signed-off-by: "Yan, Zheng" <zyan@redhat.com> Acked-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
2017-07-07  ceph: avoid accessing freeing inode in ceph_check_delayed_caps()  Yan, Zheng  1 file, -2/+9
Signed-off-by: "Yan, Zheng" <zyan@redhat.com> Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
2017-07-07  ceph: avoid invalid memory dereference in the middle of umount  Yan, Zheng  2 files, -4/+6
extra_mon_dispatch() and the debugfs foo_show functions dereference fsc->mdsc. We should clean up fsc->client->extra_mon_dispatch and debugfs before destroying fsc->mdsc. Signed-off-by: "Yan, Zheng" <zyan@redhat.com> Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
2017-07-07  ceph: getattr before read on ceph.* xattrs  Yan, Zheng  1 file, -0/+3
Previously we were returning values for quota, layout xattrs without any kind of update -- the user just got whatever happened to be in our cache. Clearly this extra round trip has a cost, but reads of these xattrs are fairly rare, happening on admin intervention rather than in normal operation. Link: http://tracker.ceph.com/issues/17939 Signed-off-by: "Yan, Zheng" <zyan@redhat.com> Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
2017-07-07  ceph: don't re-send interrupted flock request  Yan, Zheng  1 file, -1/+24
Don't re-send an interrupted flock request in cases of MDS failover or when receiving a request forward, because the corresponding 'lock intr' request may have already finished and won't get re-sent. Link: http://tracker.ceph.com/issues/20170 Signed-off-by: "Yan, Zheng" <zyan@redhat.com> Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
2017-07-07  ceph: cleanup writepage_nounlock()  Yan, Zheng  1 file, -6/+6
Signed-off-by: "Yan, Zheng" <zyan@redhat.com> Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
2017-07-07  ceph: redirty page when writepage_nounlock() skips unwritable page  Yan, Zheng  1 file, -1/+2
Ceph needs to flush dirty pages in the order of the snap contexts they belong to. Dirty pages belonging to an older snap context should be flushed earlier. If writepage_nounlock() cannot flush a page, it should redirty the page. Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: "Yan, Zheng" <zyan@redhat.com> Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
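A minimal sketch of the redirty path described above, assuming the page's snap context ('snapc') and the oldest flushable context ('oldest') are already looked up and using the generic mm helper redirty_page_for_writepage(); this is an approximation, not necessarily the exact hunk from the patch:

        /* page belongs to a newer snap context than the one currently
         * being flushed: it is not writeable yet, so keep it dirty and
         * let later writeback pick it up in snap-context order */
        if (snapc->seq > oldest->seq) {
                redirty_page_for_writepage(wbc, page);
                return 0;
        }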
2017-07-07  ceph: remove useless page->mapping check in writepage_nounlock()  Yan, Zheng  1 file, -4/+0
Callers of writepage_nounlock() have already ensured non-null page->mapping. Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: "Yan, Zheng" <zyan@redhat.com> Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
2017-07-07  ceph: update the 'approaching max_size' code  Yan, Zheng  5 files, -11/+23
The old 'approaching max_size' code expects the MDS to set max_size to '2 * reported_size'. This is no longer true. The new code reports the file size once half of the previous max_size increment has been used. Signed-off-by: "Yan, Zheng" <zyan@redhat.com> Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
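As a hypothetical illustration of the new threshold (helper and parameter names invented here, not taken from the patch): the increment is max_size - reported_size, so half of it is used once size reaches the midpoint between reported_size and max_size.

        /* report once half of the last max_size increment has been used */
        static bool approaching_max_size(u64 size, u64 max_size, u64 reported_size)
        {
                return size >= (max_size + reported_size) / 2;
        }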
2017-07-07  ceph: re-request max size after importing caps  Yan, Zheng  1 file, -3/+8
The 'wanted max size' could have been sent to the inode's old auth MDS; re-send it to the inode's new auth MDS if necessary. Otherwise the write syscall may hang. Signed-off-by: "Yan, Zheng" <zyan@redhat.com> Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
2017-07-03  Linux 4.12 (tag: v4.12)  Linus Torvalds  1 file, -1/+1
2017-07-03  moduleparam: fix doc: hwparam_irq configures an IRQ  Sylvain 'ythier' Hitier  1 file, -1/+1
Signed-off-by: Sylvain 'ythier' Hitier <sylvain.hitier@gmail.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2017-06-30  uapi/linux/a.out.h: don't use deprecated system-specific predefines.  Zack Weinberg  1 file, -25/+1
uapi/linux/a.out.h uses a number of predefined macros that are deprecated because they're in the application namespace (e.g. '#ifdef linux' instead of '#ifdef __linux__'). This patch either corrects or just removes them if they are not applicable to Linux. The primary reason this is worth bothering to fix, considering how obsolete a.out binary support is, is that the GCC build process considers this such a severe error that it will copy the header into a private directory and change the macro names, which causes future updates to the header to be masked. This header probably doesn't get updated very often anymore, but it is the _only_ uapi header that gets this treatment, so IMHO it is worth patching just to drive that number all the way to zero. Signed-off-by: Zack Weinberg <zackw@panix.com> [hch: removed dead conditionals] Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2017-06-30  hashtable: remove repeated phrase from a comment  Jakub Kicinski  1 file, -1/+0
"in a rcu enabled hashtable" is repeated twice in a comment. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2017-06-30  x86/intel_rdt: Fix memory leak on mount failure  Vikas Shivappa  1 file, -1/+3
If mount fails, the kn_info directory is not freed, causing a memory leak. Add the missing error handling path. Fixes: 4e978d06dedb ("x86/intel_rdt: Add "info" files to resctrl file system") Signed-off-by: Vikas Shivappa <vikas.shivappa@linux.intel.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: ravi.v.shankar@intel.com Cc: tony.luck@intel.com Cc: fenghua.yu@intel.com Cc: peterz@infradead.org Cc: vikas.shivappa@intel.com Cc: andi.kleen@intel.com Cc: stable@vger.kernel.org Link: http://lkml.kernel.org/r/1498503368-20173-3-git-send-email-vikas.shivappa@linux.intel.com
2017-06-30  x86/boot/KASLR: Fix kexec crash due to 'virt_addr' calculation bug  Baoquan He  3 files, -7/+2
Kernel text KASLR is separated into physical address and virtual address randomization. For virtual address randomization, we only randomize to get an offset between 16M and KERNEL_IMAGE_SIZE. So the initial value of 'virt_addr' should be LOAD_PHYSICAL_ADDR, not the original kernel loading address 'output'. The bug will cause a kernel boot failure if the kernel is loaded at a position other than the address, 16M, which is decided at compile time. Kexec/kdump is such a practical case. To fix it, just assign LOAD_PHYSICAL_ADDR to virt_addr as the initial value. Tested-by: Dave Young <dyoung@redhat.com> Signed-off-by: Baoquan He <bhe@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Fixes: 8391c73 ("x86/KASLR: Randomize virtual address separately") Link: http://lkml.kernel.org/r/1498567146-11990-3-git-send-email-bhe@redhat.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
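In other words, the fix boils down to initialising the virtual offset from the compile-time base rather than the load address. A sketch, using the variable names mentioned in the message:

        /* offset of the randomized virtual mapping, relative to the
         * compile-time base; must not start from the physical address
         * the kernel happened to be loaded at */
        unsigned long virt_addr = LOAD_PHYSICAL_ADDR;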
2017-06-30  x86/boot/KASLR: Add checking for the offset of kernel virtual address randomization  Baoquan He  1 file, -0/+2
For kernel text KASLR, the virtual address is confined to an area of 1G, [0xffffffff80000000, 0xffffffffc0000000). For the implementation of virtual address randomization, we only randomize to get an offset between 16M and 1G, then add this offset to the starting address, 0xffffffff80000000. Here 16M is the offset which is decided at linking stage. So the amount of the local variable 'virt_addr', which represents the offset, plus the kernel output size cannot exceed KERNEL_IMAGE_SIZE. Add a debug check for the offset. If out of bounds, print an error message and hang there. Suggested-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Baoquan He <bhe@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/1498567146-11990-2-git-send-email-bhe@redhat.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
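A sketch of the kind of bounds check described, assuming 'output_len' holds the decompressed kernel size and using the boot-stub error() helper which prints and hangs; not necessarily the exact expression added by the patch:

        /* the randomized offset plus the decompressed kernel must fit
         * in the 1G KERNEL_IMAGE_SIZE window; otherwise report and hang */
        if (virt_addr + output_len > KERNEL_IMAGE_SIZE)
                error("Randomized virtual address exceeds KERNEL_IMAGE_SIZE");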
2017-06-30  tracing/kprobes: Allow to create probe with a module name starting with a digit  Sabrina Dubroca  1 file, -9/+5
Always try to parse an address, since kstrtoul() will safely fail when given a symbol as input. If that fails (which will be the case for a symbol), try to parse a symbol instead. This allows creating a probe such as: p:probe/vlan_gro_receive 8021q:vlan_gro_receive+0 Which is necessary for this command to work: perf probe -m 8021q -a vlan_gro_receive Link: http://lkml.kernel.org/r/fd72d666f45b114e2c5b9cf7e27b91de1ec966f1.1498122881.git.sd@queasysnail.net Cc: stable@vger.kernel.org Fixes: 413d37d1e ("tracing: Add kprobe-based event tracer") Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Sabrina Dubroca <sd@queasysnail.net> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2017-06-30  MIPS: Avoid accidental raw backtrace  James Hogan  1 file, -0/+2
Since commit 81a76d7119f6 ("MIPS: Avoid using unwind_stack() with usermode") show_backtrace() invokes the raw backtracer when cp0_status & ST0_KSU indicates user mode, to fix issues on EVA kernels where user and kernel address spaces overlap. However, this is used by show_stack() which creates its own pt_regs on the stack and leaves cp0_status uninitialised in most of the code paths. This results in the non-deterministic use of the raw backtracer depending on the previous stack content. show_stack() deals exclusively with kernel mode stacks anyway, so explicitly initialise regs.cp0_status to KSU_KERNEL (i.e. 0) to ensure we get a useful backtrace. Fixes: 81a76d7119f6 ("MIPS: Avoid using unwind_stack() with usermode") Signed-off-by: James Hogan <james.hogan@imgtec.com> Cc: linux-mips@linux-mips.org Cc: <stable@vger.kernel.org> # 3.15+ Patchwork: https://patchwork.linux-mips.org/patch/16656/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
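The described fix amounts to one explicit initialisation in show_stack() (sketch, surrounding code omitted):

        struct pt_regs regs;

        /* show_stack() only walks kernel stacks, so say so explicitly
         * instead of leaving cp0_status with stack garbage */
        regs.cp0_status = KSU_KERNEL;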
2017-06-30  MIPS: Perform post-DMA cache flushes on systems with MAARs  Paul Burton  1 file, -5/+18
Recent CPUs from Imagination Technologies such as the I6400 or P6600 are able to speculatively fetch data from memory into caches. This means that if used in a system with non-coherent DMA they require that caches be invalidated after a device performs DMA, and before the CPU reads the DMA'd data, in order to ensure that stale values weren't speculatively prefetched. Such CPUs also introduced Memory Accessibility Attribute Registers (MAARs) in order to control the regions in which they are allowed to speculate. Thus we can use the presence of MAARs as a good indication that the CPU requires the above cache maintenance. Use the presence of MAARs to determine the result of cpu_needs_post_dma_flush() in the default case, in order to handle these recent CPUs correctly. Note that the return type of cpu_needs_post_dma_flush() is changed to bool, such that it's clearer what's happening when cpu_has_maar is cast to bool for the return value. If this patch were backported to a pre-v4.7 kernel then MIPS_CPU_MAAR was 1ull<<34, so when cast to an int we would incorrectly return 0. It so happens that MIPS_CPU_MAAR is currently 1ull<<30, so when truncated to an int gives a non-zero value anyway, but even so the implicit conversion from long long int to bool makes it clearer to understand what will happen than the implicit conversion from long long int to int would. The bool return type also fits this usage better semantically, so seems like an all-round win. Thanks to Ed for spotting the issue for pre-v4.7 kernels & suggesting the return type change. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Reviewed-by: Bryan O'Donoghue <pure.logic@nexus-software.ie> Tested-by: Bryan O'Donoghue <pure.logic@nexus-software.ie> Cc: Ed Blake <ed.blake@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/16363/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-06-30  MIPS: Fix IRQ tracing & lockdep when rescheduling  Paul Burton  1 file, -0/+3
When the scheduler sets TIF_NEED_RESCHED & we call into the scheduler from arch/mips/kernel/entry.S we disable interrupts. This is true regardless of whether we reach work_resched from syscall_exit_work, resume_userspace or by looping after calling schedule(). Although we disable interrupts in these paths we don't call trace_hardirqs_off() before calling into C code which may acquire locks, and we therefore leave lockdep with an inconsistent view of whether interrupts are disabled or not when CONFIG_PROVE_LOCKING & CONFIG_DEBUG_LOCKDEP are both enabled. Without tracing this interrupt state lockdep will print warnings such as the following once a task returns from a syscall via syscall_exit_partial with TIF_NEED_RESCHED set: [ 49.927678] ------------[ cut here ]------------ [ 49.934445] WARNING: CPU: 0 PID: 1 at kernel/locking/lockdep.c:3687 check_flags.part.41+0x1dc/0x1e8 [ 49.946031] DEBUG_LOCKS_WARN_ON(current->hardirqs_enabled) [ 49.946355] CPU: 0 PID: 1 Comm: init Not tainted 4.10.0-00439-gc9fd5d362289-dirty #197 [ 49.963505] Stack : 0000000000000000 ffffffff81bb5d6a 0000000000000006 ffffffff801ce9c4 [ 49.974431] 0000000000000000 0000000000000000 0000000000000000 000000000000004a [ 49.985300] ffffffff80b7e487 ffffffff80a24498 a8000000ff160000 ffffffff80ede8b8 [ 49.996194] 0000000000000001 0000000000000000 0000000000000000 0000000077c8030c [ 50.007063] 000000007fd8a510 ffffffff801cd45c 0000000000000000 a8000000ff127c88 [ 50.017945] 0000000000000000 ffffffff801cf928 0000000000000001 ffffffff80a24498 [ 50.028827] 0000000000000000 0000000000000001 0000000000000000 0000000000000000 [ 50.039688] 0000000000000000 a8000000ff127bd0 0000000000000000 ffffffff805509bc [ 50.050575] 00000000140084e0 0000000000000000 0000000000000000 0000000000040a00 [ 50.061448] 0000000000000000 ffffffff8010e1b0 0000000000000000 ffffffff805509bc [ 50.072327] ... [ 50.076087] Call Trace: [ 50.079869] [<ffffffff8010e1b0>] show_stack+0x80/0xa8 [ 50.086577] [<ffffffff805509bc>] dump_stack+0x10c/0x190 [ 50.093498] [<ffffffff8015dde0>] __warn+0xf0/0x108 [ 50.099889] [<ffffffff8015de34>] warn_slowpath_fmt+0x3c/0x48 [ 50.107241] [<ffffffff801c15b4>] check_flags.part.41+0x1dc/0x1e8 [ 50.114961] [<ffffffff801c239c>] lock_is_held_type+0x8c/0xb0 [ 50.122291] [<ffffffff809461b8>] __schedule+0x8c0/0x10f8 [ 50.129221] [<ffffffff80946a60>] schedule+0x30/0x98 [ 50.135659] [<ffffffff80106278>] work_resched+0x8/0x34 [ 50.142397] ---[ end trace 0cb4f6ef5b99fe21 ]--- [ 50.148405] possible reason: unannotated irqs-off. [ 50.154600] irq event stamp: 400463 [ 50.159566] hardirqs last enabled at (400463): [<ffffffff8094edc8>] _raw_spin_unlock_irqrestore+0x40/0xa8 [ 50.171981] hardirqs last disabled at (400462): [<ffffffff8094eb98>] _raw_spin_lock_irqsave+0x30/0xb0 [ 50.183897] softirqs last enabled at (400450): [<ffffffff8016580c>] __do_softirq+0x4ac/0x6a8 [ 50.195015] softirqs last disabled at (400425): [<ffffffff80165e78>] irq_exit+0x110/0x128 Fix this by using the TRACE_IRQS_OFF macro to call trace_hardirqs_off() when CONFIG_TRACE_IRQFLAGS is enabled. This is done before invoking schedule() following the work_resched label because: 1) Interrupts are disabled regardless of the path we take to reach work_resched() & schedule(). 2) Performing the tracing here avoids the need to do it in paths which disable interrupts but don't call out to C code before hitting a path which uses the RESTORE_SOME macro that will call trace_hardirqs_on() or trace_hardirqs_off() as appropriate. 
We call trace_hardirqs_on() using the TRACE_IRQS_ON macro before calling syscall_trace_leave() for similar reasons, ensuring that lockdep has a consistent view of state after we re-enable interrupts. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Cc: linux-mips@linux-mips.org Cc: stable <stable@vger.kernel.org> Patchwork: https://patchwork.linux-mips.org/patch/15385/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-06-30  MIPS: pm-cps: Drop manual cache-line alignment of ready_count  Paul Burton  1 file, -8/+1
We allocate memory for a ready_count variable per-CPU, which is accessed via a cached non-coherent TLB mapping to perform synchronisation between threads within the core using LL/SC instructions. In order to ensure that the variable is contained within its own data cache line we allocate 2 lines worth of memory & align the resulting pointer to a line boundary. This is however unnecessary, since kmalloc is guaranteed to return memory which is at least cache-line aligned (see ARCH_DMA_MINALIGN). Stop the redundant manual alignment. Besides cleaning up the code & avoiding needless work, this has the side effect of avoiding an arithmetic error found by Bryan on 64 bit systems due to the 32 bit size of the former dlinesz. This led the ready_count variable to have its upper 32b cleared erroneously for MIPS64 kernels, causing problems when ready_count was later used on MIPS64 via cpuidle. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Fixes: 3179d37ee1ed ("MIPS: pm-cps: add PM state entry code for CPS systems") Reported-by: Bryan O'Donoghue <bryan.odonoghue@imgtec.com> Reviewed-by: Bryan O'Donoghue <bryan.odonoghue@imgtec.com> Tested-by: Bryan O'Donoghue <bryan.odonoghue@imgtec.com> Cc: linux-mips@linux-mips.org Cc: stable <stable@vger.kernel.org> # v3.16+ Patchwork: https://patchwork.linux-mips.org/patch/15383/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-06-30  ARM: 8685/1: ensure memblock-limit is pmd-aligned  Doug Berger  1 file, -4/+4
The pmd containing memblock_limit is cleared by prepare_page_table() which creates the opportunity for early_alloc() to allocate unmapped memory if memblock_limit is not pmd aligned causing a boot-time hang. Commit 965278dcb8ab ("ARM: 8356/1: mm: handle non-pmd-aligned end of RAM") attempted to resolve this problem, but there is a path through the adjust_lowmem_bounds() routine where if all memory regions start and end on pmd-aligned addresses the memblock_limit will be set to arm_lowmem_limit. Since arm_lowmem_limit can be affected by the vmalloc early parameter, the value of arm_lowmem_limit may not be pmd-aligned. This commit corrects this oversight such that memblock_limit is always rounded down to pmd-alignment. Fixes: 965278dcb8ab ("ARM: 8356/1: mm: handle non-pmd-aligned end of RAM") Signed-off-by: Doug Berger <opendmb@gmail.com> Suggested-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
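The resulting rounding is straightforward; a sketch of the idea, not necessarily the exact hunk:

        /* early_alloc() must never hand out memory above the last fully
         * mapped pmd, so clamp the limit down to a pmd boundary */
        memblock_limit = round_down(memblock_limit, PMD_SIZE);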
2017-06-29  sfc: fix attempt to translate invalid filter ID  Edward Cree  1 file, -3/+4
When filter insertion fails with no rollback, we were trying to convert EFX_EF10_FILTER_ID_INVALID to an id to store in 'ids' (which is either vlan->uc or vlan->mc). This would WARN_ON_ONCE and then record a bogus filter ID of 0x1fff, neither of which is a good thing. Fixes: 0ccb998bf46d ("sfc: fix filter_id misinterpretation in edge case") Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-29  net: handle NAPI_GRO_FREE_STOLEN_HEAD case also in napi_frags_finish()  Michal Kubeček  1 file, -7/+17
Recently I started seeing warnings about pages with refcount -1. The problem was traced to packets being reused after their head was merged into a GRO packet by skb_gro_receive(). While bisecting the issue pointed to commit c21b48cc1bbf ("net: adjust skb->truesize in ___pskb_trim()") and I have never seen it on a kernel with it reverted, I believe the real problem appeared earlier when the option to merge head frag in GRO was implemented. Handling NAPI_GRO_FREE_STOLEN_HEAD state was only added to GRO_MERGED_FREE branch of napi_skb_finish() so that if the driver uses napi_gro_frags() and head is merged (which in my case happens after the skb_condense() call added by the commit mentioned above), the skb is reused including the head that has been merged. As a result, we release the page reference twice and eventually end up with negative page refcount. To fix the problem, handle NAPI_GRO_FREE_STOLEN_HEAD in napi_frags_finish() the same way it's done in napi_skb_finish(). Fixes: d7e8883cfcf4 ("net: make GRO aware of skb->head_frag") Signed-off-by: Michal Kubecek <mkubecek@suse.cz> Signed-off-by: David S. Miller <davem@davemloft.net>
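A sketch of the duplicated handling in napi_frags_finish(), mirroring what napi_skb_finish() already does; helper names are assumed from that era of net/core/dev.c and should be treated as an approximation rather than the verbatim hunk:

        case GRO_MERGED_FREE:
                /* head frag was merged into another GRO skb: drop the
                 * stolen head instead of recycling the whole skb */
                if (NAPI_GRO_CB(skb)->free == NAPI_GRO_FREE_STOLEN_HEAD)
                        napi_skb_free_stolen_head(skb);
                else
                        napi_reuse_skb(napi, skb);
                break;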
2017-06-29  bpf: prevent leaking pointer via xadd on unpriviledged  Daniel Borkmann  2 files, -0/+71
Leaking kernel addresses on unpriviledged is generally disallowed, for example, verifier rejects the following: 0: (b7) r0 = 0 1: (18) r2 = 0xffff897e82304400 3: (7b) *(u64 *)(r1 +48) = r2 R2 leaks addr into ctx Doing pointer arithmetic on them is also forbidden, so that they don't turn into unknown value and then get leaked out. However, there's xadd as a special case, where we don't check the src reg for being a pointer register, e.g. the following will pass: 0: (b7) r0 = 0 1: (7b) *(u64 *)(r1 +48) = r0 2: (18) r2 = 0xffff897e82304400 ; map 4: (db) lock *(u64 *)(r1 +48) += r2 5: (95) exit We could store the pointer into skb->cb, loose the type context, and then read it out from there again to leak it eventually out of a map value. Or more easily in a different variant, too: 0: (bf) r6 = r1 1: (7a) *(u64 *)(r10 -8) = 0 2: (bf) r2 = r10 3: (07) r2 += -8 4: (18) r1 = 0x0 6: (85) call bpf_map_lookup_elem#1 7: (15) if r0 == 0x0 goto pc+3 R0=map_value(ks=8,vs=8,id=0),min_value=0,max_value=0 R6=ctx R10=fp 8: (b7) r3 = 0 9: (7b) *(u64 *)(r0 +0) = r3 10: (db) lock *(u64 *)(r0 +0) += r6 11: (b7) r0 = 0 12: (95) exit from 7 to 11: R0=inv,min_value=0,max_value=0 R6=ctx R10=fp 11: (b7) r0 = 0 12: (95) exit Prevent this by checking xadd src reg for pointer types. Also add a couple of test cases related to this. Fixes: 1be7f75d1668 ("bpf: enable non-root eBPF programs") Fixes: 17a5267067f3 ("bpf: verifier (add verifier core)") Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Martin KaFai Lau <kafai@fb.com> Acked-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
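The essence of the fix is a pointer-type check on the XADD source register before the store is allowed; a hedged sketch of what such a check looks like in the verifier of that era, not the verbatim hunk:

        /* reject XADD when the source register holds a pointer, so a
         * kernel address can not be written into a map value or ctx */
        if (is_pointer_value(env, insn->src_reg)) {
                verbose("R%d leaks addr into mem\n", insn->src_reg);
                return -EACCES;
        }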
2017-06-29  perf/x86/intel/uncore: Fix wrong box pointer check  Kan Liang  1 file, -1/+1
We should not init a NULL box; it will cause a system crash. The issue looks like it was caused by a typo. This was not noticed because a NULL box does not normally occur. Also, most boxes are enabled by default, so the init code is not critical. Fixes: fff4b87e594a ("perf/x86/intel/uncore: Make package handling more robust") Signed-off-by: Kan Liang <kan.liang@intel.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: stable@vger.kernel.org Link: http://lkml.kernel.org/r/20170629190926.2456-1-kan.liang@intel.com
2017-06-29  arcnet: com20020-pci: add missing pdev setup in netdev structure  Michael Grzeschik  1 file, -0/+1
We add the pdev data to the PCI device's netdev structure. This way the interface gets consistent device names in userspace (udev). Signed-off-by: Michael Grzeschik <m.grzeschik@pengutronix.de> Signed-off-by: David S. Miller <davem@davemloft.net>
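Setting the parent device is conventionally a one-liner; sketch using the generic helper, the patch itself may use the equivalent direct assignment:

        /* let udev see which PCI device the interface belongs to */
        SET_NETDEV_DEV(dev, &pdev->dev);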
2017-06-29  arcnet: com20020-pci: fix dev_id calculation  Michael Grzeschik  1 file, -2/+3
The dev_id was miscalculated. Only the two bits 4-5 are relevant for the MA1 card. PCIARC1 and PCIFB2 use the four bits 4-7 for id selection. Signed-off-by: Michael Grzeschik <m.grzeschik@pengutronix.de> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-29  arcnet: com20020: remove needless base_addr assignment  Michael Grzeschik  1 file, -2/+0
The assignment is superfluous. Signed-off-by: Michael Grzeschik <m.grzeschik@pengutronix.de> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-29  Trivial fix to spelling mistake in arc_printk message  Colin Ian King  1 file, -1/+1
Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: Michael Grzeschik <m.grzeschik@pengutronix.de> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-29  arcnet: change irq handler to lock irqsave  Michael Grzeschik  1 file, -3/+4
This patch prevents the arcnet driver from the following deadlock. [ 41.273910] ====================================================== [ 41.280397] [ INFO: SOFTIRQ-safe -> SOFTIRQ-unsafe lock order detected ] [ 41.287433] 4.4.0-00034-gc0ae784 #536 Not tainted [ 41.292366] ------------------------------------------------------ [ 41.298863] arcecho/233 [HC0[0]:SC0[2]:HE0:SE0] is trying to acquire: [ 41.305628] (&(&lp->lock)->rlock){+.+...}, at: [<bf083bc8>] arcnet_send_packet+0x60/0x1c0 [arcnet] [ 41.315199] [ 41.315199] and this task is already holding: [ 41.321324] (_xmit_ARCNET#2){+.-...}, at: [<c06b934c>] packet_direct_xmit+0xfc/0x1c8 [ 41.329593] which would create a new lock dependency: [ 41.334893] (_xmit_ARCNET#2){+.-...} -> (&(&lp->lock)->rlock){+.+...} [ 41.341801] [ 41.341801] but this new dependency connects a SOFTIRQ-irq-safe lock: [ 41.350108] (_xmit_ARCNET#2){+.-...} ... which became SOFTIRQ-irq-safe at: [ 41.357539] [<c06f8fc8>] _raw_spin_lock+0x30/0x40 [ 41.362677] [<c063ab8c>] dev_watchdog+0x5c/0x264 [ 41.367723] [<c0094edc>] call_timer_fn+0x6c/0xf4 [ 41.372759] [<c00950b8>] run_timer_softirq+0x154/0x210 [ 41.378340] [<c0036b30>] __do_softirq+0x144/0x298 [ 41.383469] [<c0036fb4>] irq_exit+0xcc/0x130 [ 41.388138] [<c0085c50>] __handle_domain_irq+0x60/0xb4 [ 41.393728] [<c0014578>] __irq_svc+0x58/0x78 [ 41.398402] [<c0010274>] arch_cpu_idle+0x24/0x3c [ 41.403443] [<c007127c>] cpu_startup_entry+0x1f8/0x25c [ 41.409029] [<c09adc90>] start_kernel+0x3c0/0x3cc [ 41.414170] [ 41.414170] to a SOFTIRQ-irq-unsafe lock: [ 41.419931] (&(&lp->lock)->rlock){+.+...} ... which became SOFTIRQ-irq-unsafe at: [ 41.427996] ... [<c06f8fc8>] _raw_spin_lock+0x30/0x40 [ 41.433409] [<bf083d54>] arcnet_interrupt+0x2c/0x800 [arcnet] [ 41.439646] [<c0089120>] handle_nested_irq+0x8c/0xec [ 41.445063] [<c03c1170>] regmap_irq_thread+0x190/0x314 [ 41.450661] [<c0087244>] irq_thread_fn+0x1c/0x34 [ 41.455700] [<c0087548>] irq_thread+0x13c/0x1dc [ 41.460649] [<c0050f10>] kthread+0xe4/0xf8 [ 41.465158] [<c000f810>] ret_from_fork+0x14/0x24 [ 41.470207] [ 41.470207] other info that might help us debug this: [ 41.470207] [ 41.478627] Possible interrupt unsafe locking scenario: [ 41.478627] [ 41.485763] CPU0 CPU1 [ 41.490521] ---- ---- [ 41.495279] lock(&(&lp->lock)->rlock); [ 41.499414] local_irq_disable(); [ 41.505636] lock(_xmit_ARCNET#2); [ 41.511967] lock(&(&lp->lock)->rlock); [ 41.518741] <Interrupt> [ 41.521490] lock(_xmit_ARCNET#2); [ 41.525356] [ 41.525356] *** DEADLOCK *** [ 41.525356] [ 41.531587] 1 lock held by arcecho/233: [ 41.535617] #0: (_xmit_ARCNET#2){+.-...}, at: [<c06b934c>] packet_direct_xmit+0xfc/0x1c8 [ 41.544355] the dependencies between SOFTIRQ-irq-safe lock and the holding lock: [ 41.552362] -> (_xmit_ARCNET#2){+.-...} ops: 27 { [ 41.557357] HARDIRQ-ON-W at: [ 41.560664] [<c06f8fc8>] _raw_spin_lock+0x30/0x40 [ 41.567445] [<c063ba28>] dev_deactivate_many+0x114/0x304 [ 41.574866] [<c063bc3c>] dev_deactivate+0x24/0x38 [ 41.581646] [<c0630374>] linkwatch_do_dev+0x40/0x74 [ 41.588613] [<c06305d8>] __linkwatch_run_queue+0xec/0x140 [ 41.596120] [<c0630658>] linkwatch_event+0x2c/0x34 [ 41.602991] [<c004af30>] process_one_work+0x188/0x40c [ 41.610131] [<c004b200>] worker_thread+0x4c/0x480 [ 41.616912] [<c0050f10>] kthread+0xe4/0xf8 [ 41.623048] [<c000f810>] ret_from_fork+0x14/0x24 [ 41.629735] IN-SOFTIRQ-W at: [ 41.633039] [<c06f8fc8>] _raw_spin_lock+0x30/0x40 [ 41.639820] [<c063ab8c>] dev_watchdog+0x5c/0x264 [ 41.646508] [<c0094edc>] call_timer_fn+0x6c/0xf4 [ 41.653190] [<c00950b8>] 
run_timer_softirq+0x154/0x210 [ 41.660425] [<c0036b30>] __do_softirq+0x144/0x298 [ 41.667201] [<c0036fb4>] irq_exit+0xcc/0x130 [ 41.673518] [<c0085c50>] __handle_domain_irq+0x60/0xb4 [ 41.680754] [<c0014578>] __irq_svc+0x58/0x78 [ 41.687077] [<c0010274>] arch_cpu_idle+0x24/0x3c [ 41.693769] [<c007127c>] cpu_startup_entry+0x1f8/0x25c [ 41.701006] [<c09adc90>] start_kernel+0x3c0/0x3cc [ 41.707791] INITIAL USE at: [ 41.711003] [<c06f8fc8>] _raw_spin_lock+0x30/0x40 [ 41.717696] [<c063ba28>] dev_deactivate_many+0x114/0x304 [ 41.725026] [<c063bc3c>] dev_deactivate+0x24/0x38 [ 41.731718] [<c0630374>] linkwatch_do_dev+0x40/0x74 [ 41.738593] [<c06305d8>] __linkwatch_run_queue+0xec/0x140 [ 41.746011] [<c0630658>] linkwatch_event+0x2c/0x34 [ 41.752789] [<c004af30>] process_one_work+0x188/0x40c [ 41.759847] [<c004b200>] worker_thread+0x4c/0x480 [ 41.766541] [<c0050f10>] kthread+0xe4/0xf8 [ 41.772596] [<c000f810>] ret_from_fork+0x14/0x24 [ 41.779198] } [ 41.780945] ... key at: [<c124d620>] netdev_xmit_lock_key+0x38/0x1c8 [ 41.788192] ... acquired at: [ 41.791309] [<c007bed8>] lock_acquire+0x70/0x90 [ 41.796361] [<c06f9140>] _raw_spin_lock_irqsave+0x40/0x54 [ 41.802324] [<bf083bc8>] arcnet_send_packet+0x60/0x1c0 [arcnet] [ 41.808844] [<c06b9380>] packet_direct_xmit+0x130/0x1c8 [ 41.814622] [<c06bc7e4>] packet_sendmsg+0x3b8/0x680 [ 41.820034] [<c05fe8b0>] sock_sendmsg+0x14/0x24 [ 41.825091] [<c05ffd68>] SyS_sendto+0xb8/0xe0 [ 41.829956] [<c05ffda8>] SyS_send+0x18/0x20 [ 41.834638] [<c000f780>] ret_fast_syscall+0x0/0x1c [ 41.839954] [ 41.841514] the dependencies between the lock to be acquired and SOFTIRQ-irq-unsafe lock: [ 41.850302] -> (&(&lp->lock)->rlock){+.+...} ops: 5 { [ 41.855644] HARDIRQ-ON-W at: [ 41.858945] [<c06f8fc8>] _raw_spin_lock+0x30/0x40 [ 41.865726] [<bf083d54>] arcnet_interrupt+0x2c/0x800 [arcnet] [ 41.873607] [<c0089120>] handle_nested_irq+0x8c/0xec [ 41.880666] [<c03c1170>] regmap_irq_thread+0x190/0x314 [ 41.887901] [<c0087244>] irq_thread_fn+0x1c/0x34 [ 41.894593] [<c0087548>] irq_thread+0x13c/0x1dc [ 41.901195] [<c0050f10>] kthread+0xe4/0xf8 [ 41.907338] [<c000f810>] ret_from_fork+0x14/0x24 [ 41.914025] SOFTIRQ-ON-W at: [ 41.917328] [<c06f8fc8>] _raw_spin_lock+0x30/0x40 [ 41.924106] [<bf083d54>] arcnet_interrupt+0x2c/0x800 [arcnet] [ 41.931981] [<c0089120>] handle_nested_irq+0x8c/0xec [ 41.939028] [<c03c1170>] regmap_irq_thread+0x190/0x314 [ 41.946264] [<c0087244>] irq_thread_fn+0x1c/0x34 [ 41.952954] [<c0087548>] irq_thread+0x13c/0x1dc [ 41.959548] [<c0050f10>] kthread+0xe4/0xf8 [ 41.965689] [<c000f810>] ret_from_fork+0x14/0x24 [ 41.972379] INITIAL USE at: [ 41.975595] [<c06f8fc8>] _raw_spin_lock+0x30/0x40 [ 41.982283] [<bf083d54>] arcnet_interrupt+0x2c/0x800 [arcnet] [ 41.990063] [<c0089120>] handle_nested_irq+0x8c/0xec [ 41.997027] [<c03c1170>] regmap_irq_thread+0x190/0x314 [ 42.004172] [<c0087244>] irq_thread_fn+0x1c/0x34 [ 42.010766] [<c0087548>] irq_thread+0x13c/0x1dc [ 42.017267] [<c0050f10>] kthread+0xe4/0xf8 [ 42.023314] [<c000f810>] ret_from_fork+0x14/0x24 [ 42.029903] } [ 42.031648] ... key at: [<bf0854cc>] __key.42091+0x0/0xfffff0f8 [arcnet] [ 42.039255] ... 
acquired at: [ 42.042372] [<c007bed8>] lock_acquire+0x70/0x90 [ 42.047413] [<c06f9140>] _raw_spin_lock_irqsave+0x40/0x54 [ 42.053364] [<bf083bc8>] arcnet_send_packet+0x60/0x1c0 [arcnet] [ 42.059872] [<c06b9380>] packet_direct_xmit+0x130/0x1c8 [ 42.065634] [<c06bc7e4>] packet_sendmsg+0x3b8/0x680 [ 42.071030] [<c05fe8b0>] sock_sendmsg+0x14/0x24 [ 42.076069] [<c05ffd68>] SyS_sendto+0xb8/0xe0 [ 42.080926] [<c05ffda8>] SyS_send+0x18/0x20 [ 42.085601] [<c000f780>] ret_fast_syscall+0x0/0x1c [ 42.090918] [ 42.092481] [ 42.092481] stack backtrace: [ 42.097065] CPU: 0 PID: 233 Comm: arcecho Not tainted 4.4.0-00034-gc0ae784 #536 [ 42.104751] Hardware name: Generic AM33XX (Flattened Device Tree) [ 42.111183] [<c0017ec8>] (unwind_backtrace) from [<c00139d0>] (show_stack+0x10/0x14) [ 42.119337] [<c00139d0>] (show_stack) from [<c02a82c4>] (dump_stack+0x8c/0x9c) [ 42.126937] [<c02a82c4>] (dump_stack) from [<c0078260>] (check_usage+0x4bc/0x63c) [ 42.134815] [<c0078260>] (check_usage) from [<c0078438>] (check_irq_usage+0x58/0xb0) [ 42.142964] [<c0078438>] (check_irq_usage) from [<c007aaa0>] (__lock_acquire+0x1524/0x20b0) [ 42.151740] [<c007aaa0>] (__lock_acquire) from [<c007bed8>] (lock_acquire+0x70/0x90) [ 42.159886] [<c007bed8>] (lock_acquire) from [<c06f9140>] (_raw_spin_lock_irqsave+0x40/0x54) [ 42.168768] [<c06f9140>] (_raw_spin_lock_irqsave) from [<bf083bc8>] (arcnet_send_packet+0x60/0x1c0 [arcnet]) [ 42.179115] [<bf083bc8>] (arcnet_send_packet [arcnet]) from [<c06b9380>] (packet_direct_xmit+0x130/0x1c8) [ 42.189182] [<c06b9380>] (packet_direct_xmit) from [<c06bc7e4>] (packet_sendmsg+0x3b8/0x680) [ 42.198059] [<c06bc7e4>] (packet_sendmsg) from [<c05fe8b0>] (sock_sendmsg+0x14/0x24) [ 42.206199] [<c05fe8b0>] (sock_sendmsg) from [<c05ffd68>] (SyS_sendto+0xb8/0xe0) [ 42.213978] [<c05ffd68>] (SyS_sendto) from [<c05ffda8>] (SyS_send+0x18/0x20) [ 42.221388] [<c05ffda8>] (SyS_send) from [<c000f780>] (ret_fast_syscall+0x0/0x1c) Signed-off-by: Michael Grzeschik <m.grzeschik@pengutronix.de> --- v1 -> v2: removed unneeded zero assignment of flags Signed-off-by: David S. Miller <davem@davemloft.net>
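The change described above boils down to taking the driver lock with interrupts disabled in the interrupt handler, so lockdep no longer sees lp->lock as softirq-unsafe while it can also be reached from a softirq-safe context. A sketch of the locking pattern, not the exact hunk:

        unsigned long flags;

        /* take the driver lock with interrupts disabled */
        spin_lock_irqsave(&lp->lock, flags);
        /* ... service the chip ... */
        spin_unlock_irqrestore(&lp->lock, flags);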
2017-06-29  rocker: move dereference before free  Dan Carpenter  1 file, -1/+1
My static checker complains that ofdpa_neigh_del() can sometimes free "found". It just makes sense to use it first before deleting it. Fixes: ecf244f753e0 ("rocker: fix maybe-uninitialized warning") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-29  mlxsw: spectrum_router: Fix NULL pointer dereference  Ido Schimmel  1 file, -0/+3
In case a VLAN device is enslaved to a bridge we shouldn't create a router interface (RIF) for it when it's configured with an IP address. This is already handled by the driver for other types of netdevs, such as physical ports and LAG devices. If this IP address is then removed and the interface is subsequently unlinked from the bridge, a NULL pointer dereference can happen, as the original 802.1d FID was replaced with an rFID which was then deleted. To reproduce: $ ip link set dev enp3s0np9 up $ ip link add name enp3s0np9.111 link enp3s0np9 type vlan id 111 $ ip link set dev enp3s0np9.111 up $ ip link add name br0 type bridge $ ip link set dev br0 up $ ip link set enp3s0np9.111 master br0 $ ip address add dev enp3s0np9.111 192.168.0.1/24 $ ip address del dev enp3s0np9.111 192.168.0.1/24 $ ip link set dev enp3s0np9.111 nomaster Fixes: 99724c18fc66 ("mlxsw: spectrum: Introduce support for router interfaces") Signed-off-by: Ido Schimmel <idosch@mellanox.com> Reported-by: Petr Machata <petrm@mellanox.com> Tested-by: Petr Machata <petrm@mellanox.com> Reviewed-by: Petr Machata <petrm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-29  net: sched: Fix one possible panic when no destroy callback  Gao Feng  1 file, -1/+2
When a qdisc fails to init, qdisc_create invokes the destroy callback to clean up. But there is no check that the callback actually exists, so this would cause a panic for qdiscs that don't define a destroy callback, such as codel, fq, and so on. Take codel as an example: when a malicious user constructs an invalid netlink msg, codel_init->codel_change->nla_parse_nested fails. The kernel then invokes the destroy callback directly, but qdisc codel doesn't define one, causing a panic. Now add a check for destroy to avoid the possible panic. Fixes: 87b60cfacf9f ("net_sched: fix error recovery at qdisc creation") Signed-off-by: Gao Feng <gfree.wind@vip.163.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
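The guard amounts to checking the callback before invoking it; a sketch of the error path in qdisc_create(), not necessarily the exact hunk:

        /* not every qdisc implements ->destroy (e.g. codel, fq) */
        if (ops->destroy)
                ops->destroy(sch);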
2017-06-29  virtio-net: serialize tx routine during reset  Jason Wang  1 file, -0/+1
We don't hold any tx lock when trying to disable TX during reset; this would lead to a use-after-free since ndo_start_xmit() tries to access the virtqueue which has already been freed. Fix this by using netif_tx_disable() before freeing the vqs, which makes sure no tx happens after the vqs are freed. Reported-by: Jean-Philippe Menil <jpmenil@gmail.com> Tested-by: Jean-Philippe Menil <jpmenil@gmail.com> Fixes: f600b6905015 ("virtio_net: Add XDP support") Cc: John Fastabend <john.fastabend@gmail.com> Signed-off-by: Jason Wang <jasowang@redhat.com> Acked-by: Michael S. Tsirkin <mst@redhat.com> Acked-by: Robert McCabe <robert.mccabe@rockwellcollins.com> Signed-off-by: David S. Miller <davem@davemloft.net>
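The ordering fix can be pictured as follows; a hedged sketch where the teardown helper name is hypothetical, only the stop-before-free ordering is the point:

        /* stop transmit before the virtqueues it would touch disappear */
        netif_tx_disable(vi->dev);
        virtnet_del_vqs(vi);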