path: root/kernel/rcu
Commit message (Author, Date; files changed, lines -removed/+added)
*   Merge branches 'torture.2014.11.03a', 'cpu.2014.11.03a', 'doc.2014.11.13a', 'fixes.2014.11.13a', 'signal.2014.10.29a' and 'rt.2014.10.29a' into HEAD (Paul E. McKenney, 2014-11-13; 6 files, -107/+130)
  cpu.2014.11.03a: Changes for per-CPU variables.
  doc.2014.11.13a: Documentation updates.
  fixes.2014.11.13a: Miscellaneous fixes.
  signal.2014.10.29a: Signal changes.
  rt.2014.10.29a: Real-time changes.
  torture.2014.11.03a: Torture-test changes.
* rcu: Fix for rcuo online-time-creation reorganization bug (Paul E. McKenney, 2014-10-29; 1 file, -3/+7)
  Commit 35ce7f29a44a (rcu: Create rcuo kthreads only for onlined CPUs) contains checks for the case where CPUs are brought online out of order, re-wiring the rcuo leader-follower relationships as needed. Unfortunately, this rewiring was broken. This apparently went undetected due to the tendency of systems to bring CPUs online in order. This commit nevertheless fixes the rewiring.
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
* rcu: Kick rcuo kthreads after their CPU goes offline (Paul E. McKenney, 2014-10-29; 1 file, -1/+3)
  If a no-CBs CPU were to post an RCU callback with interrupts disabled after it entered the idle loop for the last time, there might be no deferred wakeup for the corresponding rcuo kthreads. This commit therefore adds a set of calls to do_nocb_deferred_wakeup() after the CPU has gone completely offline.
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
* rcu: Remove redundant TREE_PREEMPT_RCU config option (Pranith Kumar, 2014-10-29; 4 files, -10/+10)
  PREEMPT_RCU and TREE_PREEMPT_RCU serve the same function now that TINY_PREEMPT_RCU has been removed. This patch removes TREE_PREEMPT_RCU and uses the PREEMPT_RCU config option in its place.
  Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
* rcu: Unify boost and kthread priorities (Clark Williams, 2014-10-29; 1 file, -10/+10)
  Rename CONFIG_RCU_BOOST_PRIO to CONFIG_RCU_KTHREAD_PRIO and use this value for both the per-CPU kthreads (rcuc/N) and the RCU boosting threads (rcub/n). Also, create the module parameter rcutree.kthread_prio, which may be used on the kernel command line at boot to set a new value (rcutree.kthread_prio=N).
  Signed-off-by: Clark Williams <clark.williams@gmail.com>
  [ paulmck: Ported to rcu/dev, applied Paul Bolle and Peter Zijlstra feedback. ]
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
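As a rough sketch of the mechanism (the variable name and default wiring are assumptions based on the commit message, not the verbatim patch; the rcutree. prefix comes from the file being built into the rcutree object):

```c
#include <linux/moduleparam.h>

/*
 * Sketch: one priority shared by the rcuc/N and rcub/N kthreads,
 * defaulting to the Kconfig value, overridable at boot via
 * rcutree.kthread_prio=N on the kernel command line.
 */
static int kthread_prio = CONFIG_RCU_KTHREAD_PRIO;
module_param(kthread_prio, int, 0644);
```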
* rcu: Avoid IPIing idle CPUs from synchronize_sched_expedited() (Paul E. McKenney, 2014-10-28; 1 file, -1/+26)
  Currently, synchronize_sched_expedited() sends IPIs to all online CPUs, even those that are idle or executing in nohz_full= userspace. Because idle CPUs and nohz_full= userspace CPUs are in extended quiescent states, there is no need to IPI them in the first place. This commit therefore avoids IPIing CPUs that are already in extended quiescent states.
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
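A hedged sketch of the idea; rcu_cpu_in_eqs() is a hypothetical helper standing in for whatever dynticks-counter check the patch actually performs:

```c
int cpu;

for_each_online_cpu(cpu) {
	if (rcu_cpu_in_eqs(cpu))	/* Hypothetical helper. */
		continue;		/* Idle or nohz_full userspace: already quiescent, skip the IPI. */
	smp_send_reschedule(cpu);	/* Nudge the CPU through a quiescent state. */
}
```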
* rcu: Move RCU_BOOST variable declarations, eliminating #ifdef (Paul E. McKenney, 2014-10-28; 2 files, -15/+15)
  There are some RCU_BOOST-specific per-CPU variable declarations that are needlessly defined under #ifdef in kernel/rcu/tree.c. This commit therefore moves these declarations into a pre-existing #ifdef in kernel/rcu/tree_plugin.h.
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
* rcu: Fix FIXME in rcu_tasks_kthread() (Paul E. McKenney, 2014-11-13; 1 file, -1/+2)
  This commit affines rcu_tasks_kthread() to the housekeeping CPUs in CONFIG_NO_HZ_FULL builds. This is just a default, so system administrators are free to put this kthread somewhere else if they wish.
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
* rcu: Remove CONFIG_RCU_CPU_STALL_VERBOSE (Paul E. McKenney, 2014-10-28; 1 file, -13/+0)
  The CONFIG_RCU_CPU_STALL_VERBOSE Kconfig parameter causes preemptible RCU's CPU stall warnings to dump out any preempted tasks that are blocking the current RCU grace period. This information is useful, and the default has been CONFIG_RCU_CPU_STALL_VERBOSE=y for some years. It is therefore time for this commit to remove this Kconfig parameter, so that future kernel builds will always act as if CONFIG_RCU_CPU_STALL_VERBOSE=y.
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * | rcu: Remove "cpu" argument to rcu_cleanup_after_idle()Paul E. McKenney2014-11-043-5/+5
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The "cpu" argument to rcu_cleanup_after_idle() is always the current CPU, so drop it. This moves the smp_processor_id() from the caller to rcu_cleanup_after_idle(), saving argument-passing overhead. Again, the anticipated cross-CPU uses of these functions has been replaced by NO_HZ_FULL. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Reviewed-by: Pranith Kumar <bobby.prani@gmail.com>
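The next several commits repeat this same pattern, so it is sketched only once here. The per-CPU variable name and the elided bodies are illustrative assumptions:

```c
/* Before: the caller must pass the CPU, which is always the current one. */
static void rcu_cleanup_after_idle(int cpu)
{
	struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);	/* Name assumed. */
	/* ... */
}

/* After: the function derives its own per-CPU pointer. */
static void rcu_cleanup_after_idle(void)
{
	struct rcu_data *rdp = this_cpu_ptr(&rcu_data);
	/* ... */
}
```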
| * | rcu: Remove "cpu" argument to rcu_prepare_for_idle()Paul E. McKenney2014-11-043-7/+7
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The "cpu" argument to rcu_prepare_for_idle() is always the current CPU, so drop it. This in turn allows two of the uses of "cpu" in this function to be replaced with a this_cpu_ptr() and the third by smp_processor_id(), replacing that of the call to rcu_prepare_for_idle(). Again, the anticipated cross-CPU uses of these functions has been replaced by NO_HZ_FULL. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Reviewed-by: Pranith Kumar <bobby.prani@gmail.com>
| * | rcu: Remove "cpu" argument to rcu_needs_cpu()Paul E. McKenney2014-11-042-8/+8
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The "cpu" argument to rcu_needs_cpu() is always the current CPU, so drop it. This in turn allows the "cpu" argument to rcu_cpu_has_callbacks() to be removed, which allows the uses of "cpu" in both functions to be replaced with a this_cpu_ptr(). Again, the anticipated cross-CPU uses of these functions has been replaced by NO_HZ_FULL. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Reviewed-by: Pranith Kumar <bobby.prani@gmail.com>
| * | rcu: Remove "cpu" argument to rcu_note_context_switch()Paul E. McKenney2014-11-043-6/+6
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The "cpu" argument to rcu_note_context_switch() is always the current CPU, so drop it. This in turn allows the "cpu" argument to rcu_preempt_note_context_switch() to be removed, which allows the sole use of "cpu" in both functions to be replaced with a this_cpu_ptr(). Again, the anticipated cross-CPU uses of these functions has been replaced by NO_HZ_FULL. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Reviewed-by: Pranith Kumar <bobby.prani@gmail.com>
| * | rcu: Remove "cpu" argument to rcu_preempt_check_callbacks()Paul E. McKenney2014-11-043-6/+6
| | | | | | | | | | | | | | | | | | | | | | | | | | | Because rcu_preempt_check_callbacks()'s argument is guaranteed to always be the current CPU, drop the argument and replace per_cpu() with __this_cpu_read(). Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Reviewed-by: Pranith Kumar <bobby.prani@gmail.com>
| * | rcu: Remove "cpu" argument to rcu_pending()Paul E. McKenney2014-11-041-4/+4
| | | | | | | | | | | | | | | | | | | | | | | | Because rcu_pending()'s argument is guaranteed to always be the current CPU, drop the argument and replace per_cpu_ptr() with this_cpu_ptr(). Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Reviewed-by: Pranith Kumar <bobby.prani@gmail.com>
| * | rcu: Remove "cpu" argument to rcu_check_callbacks()Paul E. McKenney2014-11-042-4/+4
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The "cpu" argument was kept around on the off-chance that RCU might offload scheduler-clock interrupts. However, this offload approach has been replaced by NO_HZ_FULL, which offloads -all- RCU processing from qualifying CPUs. It is therefore time to remove the "cpu" argument to rcu_check_callbacks(), which this commit does. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Reviewed-by: Pranith Kumar <bobby.prani@gmail.com>
* rcu: Use DEFINE_PER_CPU_SHARED_ALIGNED for rcu_data (Paul E. McKenney, 2014-11-04; 1 file, -1/+1)
  The rcu_data per-CPU variable has a number of fields that are atomically manipulated, potentially by any CPU. This situation can result in false sharing with per-CPU variables that have the misfortune of being allocated adjacent to rcu_data in memory. This commit therefore changes the DEFINE_PER_CPU() to DEFINE_PER_CPU_SHARED_ALIGNED() in order to avoid this false sharing.
  Reported-by: Christoph Lameter <cl@linux.com>
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Reviewed-by: Christoph Lameter <cl@linux.com>
  Reviewed-by: Pranith Kumar <bobby.prani@gmail.com>
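The change itself is essentially a one-line macro swap; a sketch, with the variable name assumed for illustration:

```c
/* Before: rcu_data may share a cache line with adjacent per-CPU variables. */
static DEFINE_PER_CPU(struct rcu_data, rcu_sched_data);

/* After: padded and aligned to a cache-line boundary, avoiding false sharing. */
static DEFINE_PER_CPU_SHARED_ALIGNED(struct rcu_data, rcu_sched_data);
```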
* rcu: Remove rcu_dynticks * parameters when they are always this_cpu_ptr(&rcu_dynticks) (Christoph Lameter, 2014-11-04; 3 files, -18/+22)
  For some functions in kernel/rcu/tree* the rdtp parameter is always this_cpu_ptr(&rcu_dynticks). Remove the parameter where it is constant and calculate the pointer within the function. This has the advantage of making it obvious that the addresses are all per-CPU offsets, which will enable the use of this_cpu_ops in the future.
  Signed-off-by: Christoph Lameter <cl@linux.com>
  [ paulmck: Forward-ported to rcu/dev, whitespace adjustment. ]
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Reviewed-by: Pranith Kumar <bobby.prani@gmail.com>
* rcutorture: Fix rcu_torture_cbflood() memory leak (Paul E. McKenney, 2014-11-04; 1 file, -0/+1)
  Commit 38706bc5a29a (rcutorture: Add callback-flood test) vmalloc()ed a bunch of RCU callbacks, but failed to free them. This commit fixes that oversight.
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Reviewed-by: Pranith Kumar <bobby.prani@gmail.com>
* rcutorture: Add early boot self tests (Pranith Kumar, 2014-11-04; 4 files, -1/+91)
  Add early boot self tests for RCU under CONFIG_PROVE_RCU. Currently the only test registers a dummy callback that increments a counter, which is then verified after calling rcu_barrier*().
  Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
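A minimal sketch of such a self-test; all function and variable names below are assumptions, not taken from the patch:

```c
static int rcu_self_test_counter;

static void test_callback(struct rcu_head *rhp)
{
	rcu_self_test_counter++;	/* Record that the callback actually ran. */
}

void rcu_early_boot_tests(void)		/* Invoked early in boot. */
{
	static struct rcu_head head;

	call_rcu(&head, test_callback);
}

void rcu_verify_early_boot_tests(void)	/* Invoked once the scheduler is up. */
{
	rcu_barrier();				/* Wait for the callback... */
	WARN_ON(rcu_self_test_counter != 1);	/* ...then verify it ran exactly once. */
}
```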
* rcu: Make rcu_barrier() understand about missing rcuo kthreads (Paul E. McKenney, 2014-10-28; 3 files, -5/+44)
  Commit 35ce7f29a44a (rcu: Create rcuo kthreads only for onlined CPUs) avoids creating rcuo kthreads for CPUs that never come online. This fixes a bug in many instances of firmware: Instead of lying about their age, these systems instead lie about the number of CPUs that they have. Before commit 35ce7f29a44a, this could result in huge numbers of useless rcuo kthreads being created.
  It appears that experience indicates that I should have told the people suffering from this problem to fix their broken firmware, but I instead produced what turned out to be a partial fix. The missing piece supplied by this commit makes sure that rcu_barrier() knows not to post callbacks for no-CBs CPUs that have not yet come online, because otherwise rcu_barrier() will hang on systems having firmware that lies about the number of CPUs.
  It is tempting to simply have rcu_barrier() refuse to post a callback on any no-CBs CPU that does not have an rcuo kthread. This unfortunately does not work because rcu_barrier() is required to wait for all pending callbacks. It is therefore required to wait even for those callbacks that cannot possibly be invoked, even if doing so hangs the system.
  Given that posting a callback to a no-CBs CPU that does not yet have an rcuo kthread can hang rcu_barrier(), it is tempting to report an error in this case. Unfortunately, this will result in false positives at boot time, when it is perfectly legal to post callbacks to the boot CPU before the scheduler has started, in other words, before it is legal to invoke rcu_barrier().
  So this commit instead has rcu_barrier() avoid posting callbacks to CPUs having neither rcuo kthread nor pending callbacks, and has it complain bitterly if it finds CPUs having no rcuo kthread but some pending callbacks. And when rcu_barrier() does find CPUs having no rcuo kthread but pending callbacks, as noted earlier, it has no choice but to hang indefinitely.
  Reported-by: Yanko Kaneti <yaneti@declera.com>
  Reported-by: Jay Vosburgh <jay.vosburgh@canonical.com>
  Reported-by: Meelis Roos <mroos@linux.ee>
  Reported-by: Eric B Munson <emunson@akamai.com>
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Tested-by: Eric B Munson <emunson@akamai.com>
  Tested-by: Jay Vosburgh <jay.vosburgh@canonical.com>
  Tested-by: Yanko Kaneti <yaneti@declera.com>
  Tested-by: Kevin Fenzi <kevin@scrye.com>
  Tested-by: Meelis Roos <mroos@linux.ee>
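A sketch of the per-CPU decision the message describes; the two predicate helpers are hypothetical names, not the kernel's actual API:

```c
for_each_possible_cpu(cpu) {
	if (rcu_is_nocb_cpu(cpu)) {
		if (!nocb_cpu_has_kthread(cpu)) {		/* Hypothetical. */
			if (!nocb_cpu_has_cbs(cpu))		/* Hypothetical. */
				continue;	/* No kthread, no callbacks: skip. */
			WARN_ON_ONCE(1);	/* Callbacks but no kthread: complain bitterly... */
		}
		/* ...and post the barrier callback anyway, hanging if need be. */
	}
	/* ...non-no-CBs CPUs are handled as before... */
}
```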
* rcu: Eliminate deadlock between CPU hotplug and expedited grace periods (Paul E. McKenney, 2014-09-19; 2 files, -13/+17)
  Currently, the expedited grace-period primitives do get_online_cpus(). This greatly simplifies their implementation, but means that calls to them holding locks that are acquired by CPU-hotplug notifiers (to say nothing of calls to these primitives from CPU-hotplug notifiers) can deadlock. But this is starting to become inconvenient, as can be seen here: https://lkml.org/lkml/2014/8/5/754. The problem in this case is that some developers need to acquire a mutex from a CPU-hotplug notifier, but also need to hold it across a synchronize_rcu_expedited(). As noted above, this currently results in deadlock.
  This commit avoids the deadlock and retains the simplicity by creating a try_get_online_cpus(), which returns false if the get_online_cpus() reference count could not immediately be incremented. If a call to try_get_online_cpus() returns true, the expedited primitives operate as before. If a call returns false, the expedited primitives fall back to normal grace-period operations. This falling back of course results in increased grace-period latency, but only during times when CPU hotplug operations are actually in flight. The effect should therefore be negligible during normal operation.
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Cc: Josh Triplett <josh@joshtriplett.org>
  Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
  Tested-by: Lan Tianyu <tianyu.lan@intel.com>
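A hedged sketch of the fallback path in synchronize_sched_expedited(); try_get_online_cpus() is the helper this commit introduces, and the exact fallback call is an assumption:

```c
if (!try_get_online_cpus()) {
	/* A CPU-hotplug operation is in flight: fall back to a normal grace period. */
	wait_rcu_gp(call_rcu_sched);
	return;
}
/* ...the expedited fast path runs as before... */
put_online_cpus();
```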
* rcutorture: Rename rcutorture_runnable parameter (Paul E. McKenney, 2014-09-16; 1 file, -4/+4)
  This commit changes rcutorture_runnable to torture_runnable, which is consistent with the names of the other parameters and is a bit shorter as well.
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
* torture: Address race in module cleanup (Davidlohr Bueso, 2014-09-16; 1 file, -1/+2)
  When performing module cleanups by calling torture_cleanup(), the 'torture_type' string is nullified. However, callers are not necessarily done, and might still need to reference the variable. This impacts both rcutorture and locktorture, causing printing of things like:
  [ 94.226618] (null)-torture: Stopping lock_torture_writer task
  [ 94.226624] (null)-torture: Stopping lock_torture_stats task
  Thus delay this operation until the very end of the cleanup process. The consequence (which shouldn't matter for this kind of program) is, of course, that we widen the window between rmmod and modprobing, for instance in module_torture_begin().
  Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
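A sketch of the reordering; torture_stop_kthreads() is a hypothetical stand-in for the existing teardown steps:

```c
static void torture_cleanup(void)
{
	/* Stop kthreads first; their stop messages still print torture_type. */
	torture_stop_kthreads();		/* Hypothetical helper. */

	/* Only now is it safe to nullify the type string. */
	mutex_lock(&fullstop_mutex);
	torture_type = NULL;
	mutex_unlock(&fullstop_mutex);
}
```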
*   Merge branch 'rcu-tasks.2014.09.10a' into HEAD (Paul E. McKenney, 2014-09-16; 6 files, -61/+447)
  rcu-tasks.2014.09.10a: Add RCU-tasks flavor of RCU.
* rcu: Per-CPU operation cleanups to rcu_*_qs() functions (Paul E. McKenney, 2014-09-08; 3 files, -33/+38)
  The rcu_bh_qs(), rcu_preempt_qs(), and rcu_sched_qs() functions use old-style per-CPU variable access and write to ->passed_quiesce even if it is already set. This commit therefore updates to use the new-style per-CPU variable access functions and avoids the spurious writes. This commit also eliminates the "cpu" argument to these functions because they are always invoked on the indicated CPU.
  Reported-by: Peter Zijlstra <peterz@infradead.org>
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
* rcu: Remove local_irq_disable() in rcu_preempt_note_context_switch() (Paul E. McKenney, 2014-09-08; 2 files, -18/+16)
  The rcu_preempt_note_context_switch() function is on a scheduling fast path, so it would be good to avoid disabling irqs. The reason that irqs are disabled is to synchronize process-level and irq-handler access to the task_struct ->rcu_read_unlock_special bitmask. This commit therefore makes ->rcu_read_unlock_special instead be a union of bools with a short, allowing single-access checks in RCU's __rcu_read_unlock(). This results in the process-level and irq-handler accesses being simple loads and stores, so that irqs need no longer be disabled. This commit therefore removes the irq disabling from rcu_preempt_note_context_switch().
  Reported-by: Peter Zijlstra <peterz@infradead.org>
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
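The union described above might look roughly like this (a sketch; the exact field set in the patch may differ):

```c
/*
 * Overlaying the bools with a short lets __rcu_read_unlock() test
 * "is any special handling needed?" with a single load of ->s,
 * without disabling irqs around multi-field accesses.
 */
union rcu_special {
	struct {
		bool blocked;	/* Reader was preempted in its RCU read-side section. */
		bool need_qs;	/* A quiescent state is urgently needed. */
	} b;
	short s;		/* Single-access check of both flags at once. */
};
```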
* rcu: Additional information on RCU-tasks stall-warning messages (Paul E. McKenney, 2014-09-08; 1 file, -0/+9)
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
* rcu: Make rcu_tasks_kthread()'s GP-wait loop allow preemption (Paul E. McKenney, 2014-09-08; 1 file, -5/+6)
  The grace-period-wait loop in rcu_tasks_kthread() is under (unnecessary) RCU protection, and therefore has no preemption points in a PREEMPT=n kernel. This commit therefore removes the RCU protection and inserts cond_resched().
  Reported-by: Frederic Weisbecker <fweisbec@gmail.com>
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
* rcu: Make TASKS_RCU handle nohz_full= CPUs (Paul E. McKenney, 2014-09-08; 4 files, -1/+23)
  Currently TASKS_RCU would ignore a CPU running a task in nohz_full= usermode execution. There would be neither a context switch nor a scheduling-clock interrupt to tell TASKS_RCU that the task in question had passed through a quiescent state. The grace period would therefore extend indefinitely. This commit therefore makes RCU's dyntick-idle subsystem record the task_struct structure of the task that is running in dyntick-idle mode on each CPU. The TASKS_RCU grace period can then access this information and record a quiescent state on behalf of any CPU running in dyntick-idle usermode.
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
* rcu: Defer rcu_tasks_kthread() creation till first call_rcu_tasks() (Paul E. McKenney, 2014-09-08; 1 file, -7/+26)
  It is expected that many sites will have CONFIG_TASKS_RCU=y, but will never actually invoke call_rcu_tasks(). For such sites, creating rcu_tasks_kthread() at boot is wasteful. This commit therefore defers creation of this kthread until the time of the first call_rcu_tasks(). This of course means that the first call_rcu_tasks() must be invoked from process context after the scheduler is fully operational.
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
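A sketch of first-call spawning; the lock and flag names are assumptions:

```c
static DEFINE_MUTEX(rcu_tasks_kthread_mutex);	/* Assumed name. */
static bool rcu_tasks_kthread_spawned;

void call_rcu_tasks(struct rcu_head *rhp, void (*func)(struct rcu_head *rhp))
{
	/* ...enqueue rhp on the callback list as before... */

	if (!READ_ONCE(rcu_tasks_kthread_spawned)) {
		mutex_lock(&rcu_tasks_kthread_mutex);
		if (!rcu_tasks_kthread_spawned) {
			/* First caller spawns the kthread, exactly once. */
			kthread_run(rcu_tasks_kthread, NULL, "rcu_tasks_kthread");
			WRITE_ONCE(rcu_tasks_kthread_spawned, true);
		}
		mutex_unlock(&rcu_tasks_kthread_mutex);
	}
}
```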
* rcu: Improve RCU-tasks energy efficiency (Paul E. McKenney, 2014-09-08; 1 file, -2/+12)
  The current RCU-tasks implementation uses strict polling to detect callback arrivals. This works quite well, but is not so good for energy efficiency. This commit therefore replaces the strict polling with a wait queue.
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
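A sketch of the polling-to-wait-queue swap; the wait-queue and list-head names follow the style of the call_rcu_tasks() commit below but are assumptions here:

```c
static DECLARE_WAIT_QUEUE_HEAD(rcu_tasks_cbs_wq);

/* Enqueue side, in call_rcu_tasks(): wake the kthread when work arrives. */
wake_up(&rcu_tasks_cbs_wq);

/* Kthread side: instead of sleeping a fixed interval and re-polling,
 * sleep until a callback is actually queued. */
wait_event_interruptible(rcu_tasks_cbs_wq,
			 READ_ONCE(rcu_tasks_cbs_head) != NULL);
```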
* rcu: Add stall-warning checks for RCU-tasks (Paul E. McKenney, 2014-09-08; 1 file, -4/+25)
  This commit adds a ten-minute RCU-tasks stall warning. The actual time is controlled by the boot/sysfs parameter rcu_task_stall_timeout, with values less than or equal to zero disabling the stall warnings. The default value is ten minutes, which means that the tasks that have not yet responded will get their stacks dumped every ten minutes, until they pass through a voluntary context switch.
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
* rcutorture: Add torture tests for RCU-tasks (Paul E. McKenney, 2014-09-08; 1 file, -1/+49)
  This commit adds torture tests for RCU-tasks. It also fixes a bug that would segfault for an RCU flavor lacking a callback-barrier function.
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Reviewed-by: Josh Triplett <josh@joshtriplett.org>
* rcu: Export RCU-tasks APIs to GPL modules (Steven Rostedt, 2014-09-08; 1 file, -0/+2)
  This commit exports the RCU-tasks synchronous APIs, synchronize_rcu_tasks() and rcu_barrier_tasks(), to GPL-licensed kernel modules.
  Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Reviewed-by: Josh Triplett <josh@joshtriplett.org>
* rcu: Make TASKS_RCU handle tasks that are almost done exiting (Paul E. McKenney, 2014-09-08; 1 file, -0/+21)
  Once a task has passed exit_notify() in the do_exit() code path, it is no longer on the task lists, and is therefore no longer visible to rcu_tasks_kthread(). This means that an almost-exited task might be preempted while within a trampoline, and this task won't be waited on by rcu_tasks_kthread(). This commit fixes this bug by adding an srcu_struct. An exiting task does srcu_read_lock() just before calling exit_notify(), and does the corresponding srcu_read_unlock() after doing the final preempt_disable(). This means that rcu_tasks_kthread() can do synchronize_srcu() to wait for all mostly-exited tasks to reach their final preempt_disable() region, and then use synchronize_sched() to wait for those tasks to finish exiting.
  Reported-by: Oleg Nesterov <oleg@redhat.com>
  Suggested-by: Lai Jiangshan <laijs@cn.fujitsu.com>
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
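A sketch of the do_exit() bracketing described above; the exact placement and any wrapper macros in the real patch are elided, and tasks_rcu_exit_srcu follows the commit's description:

```c
int idx;

idx = srcu_read_lock(&tasks_rcu_exit_srcu);	/* Just before exit_notify(). */
exit_notify(tsk, group_dead);
/* ...remaining teardown... */
preempt_disable();				/* The task's final preempt_disable(). */
srcu_read_unlock(&tasks_rcu_exit_srcu, idx);	/* Now invisible to TASKS_RCU. */
```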
* rcu: Add synchronous grace-period waiting for RCU-tasks (Paul E. McKenney, 2014-09-08; 1 file, -0/+55)
  It turns out to be easier to add the synchronous grace-period waiting functions to RCU-tasks than to work around their absence in rcutorture, so this commit adds them. The key point is that the existence of call_rcu_tasks() means that rcutorture needs an rcu_barrier_tasks().
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
* rcu: Provide cond_resched_rcu_qs() to force quiescent states in long loops (Paul E. McKenney, 2014-09-08; 3 files, -9/+9)
  RCU-tasks requires the occasional voluntary context switch from CPU-bound in-kernel tasks. In some cases, this requires instrumenting cond_resched(). However, there is some reluctance to countenance unconditionally instrumenting cond_resched() (see http://lwn.net/Articles/603252/), so this commit creates a separate cond_resched_rcu_qs() that may be used in place of cond_resched() in locations prone to long-duration in-kernel looping. This commit currently instruments only RCU-tasks. Future possibilities include also instrumenting RCU, RCU-bh, and RCU-sched in order to reduce IPI usage.
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
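A sketch of the helper, matching the described behavior of reporting a voluntary-context-switch quiescent state around a cond_resched() (the exact macro body in the patch may differ):

```c
#define cond_resched_rcu_qs() \
do { \
	/* Tell RCU-tasks this task reached a voluntary context switch. */ \
	rcu_note_voluntary_context_switch(current); \
	cond_resched(); \
} while (0)
```

Callers in long-running in-kernel loops then use cond_resched_rcu_qs() exactly where they would have used cond_resched().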
* rcu: Add call_rcu_tasks() (Paul E. McKenney, 2014-09-08; 3 files, -0/+175)
  This commit adds a new RCU-tasks flavor of RCU, which provides call_rcu_tasks(). This RCU flavor's quiescent states are voluntary context switch (not preemption!) and userspace execution (not the idle loop -- use some sort of schedule_on_each_cpu() if you need to handle the idle tasks). Note that unlike other RCU flavors, these quiescent states occur in tasks, not necessarily CPUs. Includes fixes from Steven Rostedt.
  This RCU flavor is assumed to have very infrequent latency-tolerant updaters. This assumption permits significant simplifications, including a single global callback list protected by a single global lock, along with a single task-private linked list containing all tasks that have not yet passed through a quiescent state. If experience shows this assumption to be incorrect, the required additional complexity will be added.
  Suggested-by: Steven Rostedt <rostedt@goodmis.org>
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
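A sketch of the single-global-list enqueue the message describes; the list, tail, and lock names are assumptions:

```c
static struct rcu_head *rcu_tasks_cbs_head;
static struct rcu_head **rcu_tasks_cbs_tail = &rcu_tasks_cbs_head;
static DEFINE_RAW_SPINLOCK(rcu_tasks_cbs_lock);

void call_rcu_tasks(struct rcu_head *rhp, void (*func)(struct rcu_head *rhp))
{
	unsigned long flags;

	rhp->next = NULL;
	rhp->func = func;
	raw_spin_lock_irqsave(&rcu_tasks_cbs_lock, flags);
	*rcu_tasks_cbs_tail = rhp;		/* Append to the one global list... */
	rcu_tasks_cbs_tail = &rhp->next;	/* ...and advance the tail pointer. */
	raw_spin_unlock_irqrestore(&rcu_tasks_cbs_lock, flags);
}
```

The single lock is defensible precisely because updaters are assumed rare and latency-tolerant, as the message says.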
*   Merge branches 'doc.2014.09.07a', 'fixes.2014.09.10a', 'nocb-nohz.2014.09.16b' and 'torture.2014.09.07a' into HEAD (Paul E. McKenney, 2014-09-16; 6 files, -191/+440)
  doc.2014.09.07a: Documentation updates.
  fixes.2014.09.10a: Miscellaneous fixes.
  nocb-nohz.2014.09.16b: No-CBs CPUs and NO_HZ_FULL updates.
  torture.2014.09.07a: Torture-test updates.
* rcutorture: Add callback-flood test (Paul E. McKenney, 2014-09-08; 1 file, -1/+85)
  Although RCU is designed to handle arbitrary floods of callbacks, this capability is not routinely tested. This commit therefore adds a cbflood capability in which kthreads repeatedly register large numbers of callbacks. One such kthread is created for each four CPUs (rounding up), and the test may be controlled by several cbflood_* kernel boot parameters, which control the number of bursts per flood, the number of callbacks per burst, the time between bursts, and the time between floods. The default values are large enough to exercise RCU's emergency responses to callback flooding.
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Cc: David Miller <davem@davemloft.net>
  Reviewed-by: Pranith Kumar <bobby.prani@gmail.com>
* rcu: Use pr_alert/pr_cont for printing logs (Joe Perches, 2014-09-08; 1 file, -71/+56)
  Use pr_alert/pr_cont for printing the logs from the rcutorture module directly instead of writing them to a buffer and then printing it. This avoids having to allocate such buffers. Also remove a resulting empty function. I tested this using the parse-torture.sh script as follows:
  $ dmesg | grep torture > log.txt
  $ bash parse-torture.sh log.txt test
  $
  There were no warnings, which means that parsing went fine.
  Signed-off-by: Joe Perches <joe@perches.com>
  Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
* rcutorture: Fix a sparse warning by marking boost_mutex static (Pranith Kumar, 2014-09-08; 1 file, -1/+1)
  This commit fixes the following sparse warning by marking boost_mutex static:
  kernel/rcu/rcutorture.c:185:1: warning: symbol 'boost_mutex' was not declared. Should it be static?
  Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Reviewed-by: Josh Triplett <josh@joshtriplett.org>
* rcu: Avoid misordering in nocb_leader_wait() (Paul E. McKenney, 2014-09-16; 1 file, -0/+1)
  The NOCB follower wakeup ordering depends on the store to the tail pointer happening before the wakeup. However, because atomic_long_add() does not return a value, it does not provide ordering guarantees, and the locking in wake_up() only guarantees that the store will happen before the unlock, which might be too late. Even though this is only a theoretical issue, this commit adds a smp_mb__after_atomic() after the final atomic_long_add() to provide the needed ordering guarantee.
  Reported-by: Amit Shah <amit.shah@redhat.com>
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Tested-by: Paul Gortmaker <paul.gortmaker@windriver.com>
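A sketch of the fix here, and of the matching one in the __call_rcu_nocb_enqueue() commit below; the field names are assumptions:

```c
/*
 * atomic_long_add() returns no value, so it implies no memory
 * ordering; an explicit barrier is needed before the wakeup path
 * reads or publishes the updated state.
 */
atomic_long_add(len, &rdp->nocb_q_count);	/* Field name assumed. */
smp_mb__after_atomic();		/* Order the count/tail update before the wakeup. */
wake_up(&rdp->nocb_wq);		/* Field name assumed. */
```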
* rcu: Handle NOCB callbacks from irq-disabled idle code (Paul E. McKenney, 2014-09-16; 1 file, -0/+11)
  If an RCU callback is queued on a no-CBs CPU from idle code with irqs disabled, and if that CPU stays idle forever after, the callback will never be invoked. This commit therefore adds a check for this situation in ____call_rcu_nocb(), invoking the RCU core solely for the purpose of the ensuing return-to-idle transition. (If the CPU doesn't return to idle, the next scheduling-clock interrupt will fix things up.)
  Reported-by: Amit Shah <amit.shah@redhat.com>
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Tested-by: Paul Gortmaker <paul.gortmaker@windriver.com>
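A sketch of the added check; this approximates the condition rather than reproducing the verbatim patch:

```c
if (irqs_disabled_flags(flags) && !rcu_is_watching()) {
	/*
	 * Queued from an extended quiescent state with irqs off:
	 * no deferred wakeup will happen on its own, so poke the
	 * RCU core to run the return-to-idle deferred-wakeup check.
	 */
	invoke_rcu_core();
}
```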
* rcu: Avoid misordering in __call_rcu_nocb_enqueue() (Paul E. McKenney, 2014-09-16; 1 file, -1/+2)
  The NOCB leader wakeup ordering depends on the store to the header happening before the check for the leader already being awake. However, because atomic_long_add() does not return a value, it does not provide ordering guarantees, the incorrect comment in wake_nocb_leader() notwithstanding. This commit therefore adds a smp_mb__after_atomic() after the final atomic_long_add() to provide the needed ordering guarantee.
  Reported-by: Amit Shah <amit.shah@redhat.com>
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Tested-by: Paul Gortmaker <paul.gortmaker@windriver.com>
* rcu: Don't track sysidle state if no nohz_full= CPUs (Paul E. McKenney, 2014-09-16; 1 file, -1/+18)
  If there are no nohz_full= CPUs, then there is currently no reason to track sysidle state. This commit therefore short-circuits this state tracking if !tick_nohz_full_enabled(). Note that these checks will need to be revisited if nohz_full= state can ever be changed at runtime.
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
  Tested-by: Paul Gortmaker <paul.gortmaker@windriver.com>
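A sketch of the short-circuit; the exact function and signature are assumptions:

```c
static void rcu_sysidle_enter(struct rcu_dynticks *rdtp, int irq)
{
	if (!tick_nohz_full_enabled())
		return;		/* No nohz_full= CPUs: nothing to track. */
	/* ...existing sysidle state tracking... */
}
```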
* rcu: Eliminate redundant rcu_sysidle_state variable (Paul E. McKenney, 2014-09-16; 1 file, -17/+6)
  Now that we have rcu_state_p, which references rcu_preempt_state for TREE_PREEMPT_RCU and rcu_sched_state for TREE_RCU, we don't need a separate rcu_sysidle_state variable. This commit therefore eliminates rcu_sysidle_state in favor of rcu_state_p.
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Reviewed-by: Pranith Kumar <bobby.prani@gmail.com>
  Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
  Tested-by: Paul Gortmaker <paul.gortmaker@windriver.com>
* rcu: Check for have_rcu_nocb_mask instead of rcu_nocb_mask (Pranith Kumar, 2014-09-16; 1 file, -3/+3)
  If we configure a kernel with CONFIG_RCU_NOCB_CPU=y, CONFIG_RCU_NOCB_CPU_NONE=y and CONFIG_CPUMASK_OFFSTACK=n, and do not pass in an rcu_nocb= boot parameter, the cpumask rcu_nocb_mask can be garbage instead of NULL. Hence this commit replaces checks for rcu_nocb_mask == NULL with a check for have_rcu_nocb_mask.
  Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Tested-by: Paul Gortmaker <paul.gortmaker@windriver.com>
* rcu: Create rcuo kthreads only for onlined CPUs (Paul E. McKenney, 2014-09-16; 3 files, -13/+86)
  RCU currently uses for_each_possible_cpu() to spawn rcuo kthreads, which can result in more rcuo kthreads than one would expect, for example, derRichard reported 64 CPUs worth of rcuo kthreads on an 8-CPU image. This commit therefore creates rcuo kthreads only for those CPUs that actually come online. This was reported by derRichard on the OFTC IRC network.
  Reported-by: Richard Weinberger <richard@nod.at>
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Reviewed-by: Josh Triplett <josh@joshtriplett.org>
  Tested-by: Paul Gortmaker <paul.gortmaker@windriver.com>