author	Marcelo Tosatti <mtosatti@redhat.com>	2022-05-30 17:51:56 +0200
committer	akpm <akpm@linux-foundation.org>	2022-06-17 04:11:30 +0200
commit	31733463372e8d88ea54bfa1e35178aad9b2ffd2 (patch)
tree	68c4d69085bcd22c77badcafd1e3cb50e8a99411
parent	mm/page_isolation.c: fix one kernel-doc comment (diff)
mm: lru_cache_disable: use synchronize_rcu_expedited
commit ff042f4a9b050 ("mm: lru_cache_disable: replace work queue synchronization with synchronize_rcu") replaced lru_cache_disable's usage of work queues with synchronize_rcu.

Some users reported large performance regressions due to this commit, for example:
https://lore.kernel.org/all/20220521234616.GO1790663@paulmck-ThinkPad-P17-Gen-1/T/

Switching to synchronize_rcu_expedited fixes the problem.

Link: https://lkml.kernel.org/r/YpToHCmnx/HEcVyR@fuller.cnet
Fixes: ff042f4a9b050 ("mm: lru_cache_disable: replace work queue synchronization with synchronize_rcu")
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Tested-by: Stefan Wahren <stefan.wahren@i2se.com>
Tested-by: Michael Larabel <Michael@MichaelLarabel.com>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Nicolas Saenz Julienne <nsaenzju@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Phil Elwell <phil@raspberrypi.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-rw-r--r--	mm/swap.c	2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/swap.c b/mm/swap.c
index f3922a96b2e9..034bb24879a3 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -881,7 +881,7 @@ void lru_cache_disable(void)
* lru_disable_count = 0 will have exited the critical
* section when synchronize_rcu() returns.
*/
- synchronize_rcu();
+ synchronize_rcu_expedited();
#ifdef CONFIG_SMP
__lru_add_drain_all(true);
#else
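The one-line change above swaps the ordinary grace-period wait for the expedited variant, which shortens the time lru_cache_disable() spends waiting for readers to drain. As a rough illustration of the pattern this relies on, here is a minimal userspace sketch built on liburcu: a "disable" path bumps a counter and then waits for every reader that might still have observed the old value. The counter and function names are illustrative only (not the kernel's), and where the kernel calls synchronize_rcu_expedited() the sketch uses liburcu's synchronize_rcu(), since liburcu has no expedited variant.

```c
/* Build (assuming liburcu is installed): gcc sketch.c -lurcu -lpthread */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <urcu.h>

static atomic_int disable_count;	/* illustrative stand-in for lru_disable_count */

/* Reader: only does batched work while the feature is not disabled. */
static void *reader(void *arg)
{
	rcu_register_thread();
	for (int i = 0; i < 1000; i++) {
		rcu_read_lock();
		if (atomic_load(&disable_count) == 0) {
			/* ... add an item to a per-thread batch ... */
		}
		rcu_read_unlock();
	}
	rcu_unregister_thread();
	return NULL;
}

/* "Disable" path: flip the counter, then wait out in-flight readers. */
static void cache_disable(void)
{
	atomic_fetch_add(&disable_count, 1);
	/*
	 * Readers that saw disable_count == 0 are still inside their RCU
	 * read-side critical section; wait until they have all exited.
	 * The kernel uses the expedited variant here to keep this wait
	 * short for latency-sensitive callers.
	 */
	synchronize_rcu();
}

int main(void)
{
	pthread_t t;

	rcu_register_thread();
	pthread_create(&t, NULL, reader, NULL);
	cache_disable();
	printf("all pre-disable readers have drained\n");
	pthread_join(t, NULL);
	rcu_unregister_thread();
	return 0;
}
```

The design point the commit exploits is that the correctness argument is unchanged: only the latency of the grace-period wait differs between synchronize_rcu() and synchronize_rcu_expedited().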