author		Thomas Gleixner <tglx@linutronix.de>	2019-11-18 14:28:24 +0100
committer	Thomas Gleixner <tglx@linutronix.de>	2020-03-28 13:21:08 +0100
commit		f1e67e355c2aafeddf1eac31335709236996d2fe
tree		6405dfe9656d09114a8aec1c87cffe170fc6dec9	/fs/ntfs/aops.c
parent		thermal/x86_pkg_temp: Make pkg_temp_lock a raw_spinlock_t
fs/buffer: Make BH_Uptodate_Lock bit_spin_lock a regular spinlock_t
Bit spinlocks are problematic when PREEMPT_RT is enabled: they disable
preemption, which is undesirable for latency reasons, and they break when
regular spinlocks are taken within the bit_spinlock-locked region, because
regular spinlocks are converted to 'sleeping spinlocks' on RT.
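To make the failure mode concrete, here is a minimal sketch (illustrative
only; end_io_example and other_lock are hypothetical names, not from this
patch):

	static DEFINE_SPINLOCK(other_lock);	/* hypothetical spinlock_t */

	/* Illustrative sketch: why bit_spin_lock + spinlock_t breaks on RT. */
	static void end_io_example(struct buffer_head *bh)
	{
		/* bit_spin_lock() implicitly disables preemption. */
		bit_spin_lock(BH_Uptodate_Lock, &bh->b_state);

		/*
		 * On PREEMPT_RT, spinlock_t is a sleeping lock; acquiring
		 * other_lock here would sleep with preemption disabled.
		 */
		spin_lock(&other_lock);
		spin_unlock(&other_lock);

		bit_spin_unlock(BH_Uptodate_Lock, &bh->b_state);
	}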
PREEMPT_RT replaced the bit spinlocks with regular spinlocks to avoid this
problem. The replacement was done conditionally at compile time, but
Christoph requested an unconditional conversion.
Jan suggested moving the spinlock into an existing padding hole, which
avoids a size increase of struct buffer_head on production kernels.
As a benefit, the lock gains lockdep coverage.
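A rough sketch of the resulting layout (only the relevant tail of the
struct is shown; placement after b_count is an assumption based on this
series):

	struct buffer_head {
		/* ... existing fields unchanged ... */
		atomic_t b_count;		/* 4 bytes, followed by a
						 * 4-byte alignment hole on
						 * 64-bit */
		spinlock_t b_uptodate_lock;	/* fits into that hole, so
						 * sizeof(struct buffer_head)
						 * does not grow on production
						 * (non-debug) kernels */
	};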
[ bigeasy: Remove the wrapper, always use spinlock_t, and move it into
the padding hole ]
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Jan Kara <jack@suse.cz>
Cc: Christoph Hellwig <hch@infradead.org>
Link: https://lkml.kernel.org/r/20191118132824.rclhrbujqh4b4g4d@linutronix.de
Diffstat (limited to 'fs/ntfs/aops.c')
-rw-r--r--	fs/ntfs/aops.c	9
1 file changed, 3 insertions(+), 6 deletions(-)
diff --git a/fs/ntfs/aops.c b/fs/ntfs/aops.c
index 7202a1e39d70..554b744f41bf 100644
--- a/fs/ntfs/aops.c
+++ b/fs/ntfs/aops.c
@@ -92,8 +92,7 @@ static void ntfs_end_buffer_async_read(struct buffer_head *bh, int uptodate)
 				"0x%llx.", (unsigned long long)bh->b_blocknr);
 	}
 	first = page_buffers(page);
-	local_irq_save(flags);
-	bit_spin_lock(BH_Uptodate_Lock, &first->b_state);
+	spin_lock_irqsave(&first->b_uptodate_lock, flags);
 	clear_buffer_async_read(bh);
 	unlock_buffer(bh);
 	tmp = bh;
@@ -108,8 +107,7 @@ static void ntfs_end_buffer_async_read(struct buffer_head *bh, int uptodate)
 		}
 		tmp = tmp->b_this_page;
 	} while (tmp != bh);
-	bit_spin_unlock(BH_Uptodate_Lock, &first->b_state);
-	local_irq_restore(flags);
+	spin_unlock_irqrestore(&first->b_uptodate_lock, flags);
 	/*
 	 * If none of the buffers had errors then we can set the page uptodate,
 	 * but we first have to perform the post read mst fixups, if the
@@ -142,8 +140,7 @@ static void ntfs_end_buffer_async_read(struct buffer_head *bh, int uptodate)
 	unlock_page(page);
 	return;
 still_busy:
-	bit_spin_unlock(BH_Uptodate_Lock, &first->b_state);
-	local_irq_restore(flags);
+	spin_unlock_irqrestore(&first->b_uptodate_lock, flags);
 	return;
 }
 
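Outside the diffstat shown here (which is limited to fs/ntfs/aops.c), the
new lock also has to be initialised when a buffer_head is allocated. A
minimal sketch, assuming the alloc_buffer_head() path in fs/buffer.c and
omitting its accounting details:

	struct buffer_head *alloc_buffer_head(gfp_t gfp_flags)
	{
		struct buffer_head *ret = kmem_cache_zalloc(bh_cachep, gfp_flags);

		if (ret) {
			INIT_LIST_HEAD(&ret->b_assoc_buffers);
			/* Initialise the lock that replaces BH_Uptodate_Lock. */
			spin_lock_init(&ret->b_uptodate_lock);
		}
		return ret;
	}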