path: root/security/smack
author     Ahmed S. Darwish <a.darwish@linutronix.de>    2020-08-27 13:40:38 +0200
committer  Peter Zijlstra <peterz@infradead.org>         2020-09-10 11:19:28 +0200
commit     6446a5131e24a834606c15a965fa920041581c2c (patch)
tree       25bef9e805252eb209ab620ff89a77fec916c66d /security/smack
parent     time/sched_clock: Use raw_read_seqcount_latch() during suspend (diff)
download   linux-6446a5131e24a834606c15a965fa920041581c2c.tar.xz
           linux-6446a5131e24a834606c15a965fa920041581c2c.zip
mm/swap: Do not abuse the seqcount_t latching API
Commit eef1a429f234 ("mm/swap.c: piggyback lru_add_drain_all() calls")
implemented an optimization mechanism to exit the to-be-started LRU drain
operation (name it A) if another drain operation *started and finished*
while (A) was blocked on the LRU draining mutex.

This was done through a seqcount_t latch, which is an abuse of its
semantics:

  1. seqcount_t latching should be used for the purpose of switching
     between two storage places with sequence protection to allow
     interruptible, preemptible, writer sections. The referenced
     optimization mechanism has absolutely nothing to do with that.

  2. The used raw_write_seqcount_latch() has two SMP write memory
     barriers to ensure one consistent storage place out of the two
     storage places available. A full memory barrier is required
     instead: to guarantee that the pagevec counter stores visible to
     the local CPU are visible to other CPUs -- before loading the
     current drain generation.

Beside the seqcount_t API abuse, the semantics of a latch sequence
counter were force-fitted into the referenced optimization. What was
meant is to track "generations" of LRU draining operations, where
"global lru draining generation = x" implies that all generations
0 < n <= x are already *scheduled* for draining -- thus nothing needs
to be done if the current generation number n <= x.

Remove the conceptually inappropriate seqcount_t latch usage. Manually
implement the referenced optimization using a counter and SMP memory
barriers.

Note, while at it, use the non-atomic variant of cpumask_set_cpu(),
__cpumask_set_cpu(), due to the already existing mutex protection.

Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/87y2pg9erj.fsf@vostro.fn.ogness.net
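For illustration, the generation-tracking scheme described above can be
sketched as a small, self-contained userspace analogue. The names
(drain_gen, drain_all(), do_expensive_drain()) are hypothetical, and C11
atomics with acquire/release ordering stand in for the kernel's explicit
SMP memory barriers, so this is a sketch of the idea rather than the
patch itself:

  /*
   * Userspace analogue of the generation-based "skip redundant drain"
   * optimization. "drain_gen = x" means every generation n <= x has
   * already been scheduled for draining.
   */
  #include <pthread.h>
  #include <stdatomic.h>
  #include <stdio.h>

  static atomic_uint drain_gen;                       /* global drain generation */
  static pthread_mutex_t drain_lock = PTHREAD_MUTEX_INITIALIZER;

  static void do_expensive_drain(void)
  {
          /* stand-in for queueing and flushing per-CPU drain work */
          printf("draining (generation %u)\n",
                 atomic_load_explicit(&drain_gen, memory_order_relaxed));
  }

  void drain_all(void)
  {
          /*
           * Snapshot the generation before blocking on the mutex; the
           * acquire ordering pairs with the release in the increment
           * below (the kernel patch uses explicit SMP barriers here).
           */
          unsigned int this_gen =
                  atomic_load_explicit(&drain_gen, memory_order_acquire);

          pthread_mutex_lock(&drain_lock);

          /*
           * The generation changed while we were blocked: another caller
           * already scheduled a drain after our snapshot, which covers
           * all work visible to us. Nothing left to do.
           */
          if (this_gen != atomic_load_explicit(&drain_gen,
                                               memory_order_relaxed))
                  goto unlock;

          /* Open a new generation, then do the drain ourselves. */
          atomic_store_explicit(&drain_gen, this_gen + 1,
                                memory_order_release);

          do_expensive_drain();

  unlock:
          pthread_mutex_unlock(&drain_lock);
  }

The design point matches the commit message: a caller that observes a
newer generation after acquiring the mutex knows a later drain was
already scheduled on its behalf and returns without doing any work. In
the kernel, the ordering between the local pagevec counter updates and
the generation load is enforced with full SMP memory barriers, as the
commit message notes, rather than the acquire/release pairing used in
this sketch.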
Diffstat (limited to 'security/smack')
0 files changed, 0 insertions, 0 deletions