author     Brian Foster <bfoster@redhat.com>        2015-07-29 03:51:01 +0200
committer  Dave Chinner <david@fromorbit.com>       2015-07-29 03:51:01 +0200
commit     4703da7b78776140477a023c99683d3be84b7fca (patch)
tree       f24d13dde915a288b5c31a1aa7518d90d6429b5e /fs/xfs/xfs_log_cil.c
parent     xfs: xfs_bunmapi() does not need XFS_BMAPI_METADATA flag (diff)
xfs: close xc_cil list_empty() races with cil commit sequence
We have seen rare reports of the following assert in
xlog_cil_push_background() failing during LTP tests or fairly
innocuous desktop root fs workloads (e.g., virt operations, initramfs
construction):
ASSERT(!list_empty(&cil->xc_cil));
The reasoning behind the assert is that the transaction has inserted
items into the CIL and hit the background push codepath, all with
cil->xc_ctx_lock held for reading. This locks out the background
commit, which acquires the lock for writing, from emptying the CIL.
Therefore, the items previously inserted into the CIL should still be
present.
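As a rough illustration of that locking model, here is a minimal user-space
sketch using a pthread rwlock in place of cil->xc_ctx_lock (function names
and comments are placeholders for the two paths, not the XFS code itself):

/*
 * Sketch only: cil->xc_ctx_lock modelled as a reader/writer lock, with
 * transaction commit as a reader and the background CIL push as the writer.
 */
#include <pthread.h>

static pthread_rwlock_t ctx_lock = PTHREAD_RWLOCK_INITIALIZER;

/* Transaction commit side: insert items, then the background push check. */
static void transaction_commit(void)
{
	pthread_rwlock_rdlock(&ctx_lock);	/* shared with other committers */
	/* xlog_cil_insert_items(): items are added to cil->xc_cil here */
	/* xlog_cil_push_background(): ASSERT(!list_empty(&cil->xc_cil)) */
	pthread_rwlock_unlock(&ctx_lock);
}

/* Background CIL push: the only path that empties cil->xc_cil. */
static void cil_push(void)
{
	pthread_rwlock_wrlock(&ctx_lock);	/* excluded while committers hold it */
	/* swap out the CIL context; cil->xc_cil becomes empty here */
	pthread_rwlock_unlock(&ctx_lock);
}

int main(void)
{
	transaction_commit();	/* push cannot empty the CIL in between */
	cil_push();
	return 0;
}

Under this model the push cannot empty the list out from under a committer,
which is why the failing assert has to be explained by something else.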
The cil->xc_ctx_lock read lock is not sufficient to protect the xc_cil
list, however, due to how CIL insertion is handled.
xlog_cil_insert_items() inserts and reorders the dirty transaction items
to the tail of the CIL under xc_cil_lock. It uses list_move_tail() to
achieve insertion and reordering in the same block of code. This
function removes an item and reinserts it at the tail of the list. If a
transaction commits an item that was already logged and thus already
resides in the CIL, and said item is the sole item on the list, the
removal and reinsertion create a temporary state where the list is
actually empty.
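The transient window is easiest to see with a stripped-down, user-space
re-implementation of the list helpers (simplified stand-ins for
<linux/list.h>, not the kernel code): list_move_tail() is list_del()
followed by list_add_tail(), and with a single entry the list head points
to itself in between.

#include <stdio.h>

/* Simplified stand-ins for the <linux/list.h> helpers (illustration only). */
struct list_head { struct list_head *next, *prev; };

static void list_init(struct list_head *h)        { h->next = h->prev = h; }
static int  list_empty(const struct list_head *h) { return h->next == h; }

static void list_del(struct list_head *e)
{
	e->prev->next = e->next;
	e->next->prev = e->prev;
}

static void list_add_tail(struct list_head *e, struct list_head *h)
{
	e->prev = h->prev;
	e->next = h;
	h->prev->next = e;
	h->prev = e;
}

int main(void)
{
	struct list_head cil, item;

	list_init(&cil);
	list_add_tail(&item, &cil);	/* item is the only CIL entry */

	/* list_move_tail() is list_del() followed by list_add_tail(): */
	list_del(&item);
	printf("transient: list_empty() = %d\n", list_empty(&cil));	/* 1 */
	list_add_tail(&item, &cil);
	printf("after:     list_empty() = %d\n", list_empty(&cil));	/* 0 */

	return 0;
}

A concurrent list_empty(&cil->xc_cil) check that lands in that window sees
an empty CIL even though the item was never logically removed.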
This state is not valid and thus should never be observed by concurrent
transaction commit-side checks in the circumstances outlined above. We
do not want to acquire the xc_cil_lock in all of these checks, as its
use there was previously removed and replaced with a separate push lock
for performance reasons. Therefore, close any races with list_empty() on the
insertion side by ensuring that the list is never in a transient empty
state.
Signed-off-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Dave Chinner <david@fromorbit.com>
Diffstat (limited to 'fs/xfs/xfs_log_cil.c')
-rw-r--r--  fs/xfs/xfs_log_cil.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/fs/xfs/xfs_log_cil.c b/fs/xfs/xfs_log_cil.c
index abc2ccbff739..4e7649351f5a 100644
--- a/fs/xfs/xfs_log_cil.c
+++ b/fs/xfs/xfs_log_cil.c
@@ -307,7 +307,13 @@ xlog_cil_insert_items(
 		if (!(lidp->lid_flags & XFS_LID_DIRTY))
 			continue;
 
-		list_move_tail(&lip->li_cil, &cil->xc_cil);
+		/*
+		 * Only move the item if it isn't already at the tail. This is
+		 * to prevent a transient list_empty() state when reinserting
+		 * an item that is already the only item in the CIL.
+		 */
+		if (!list_is_last(&lip->li_cil, &cil->xc_cil))
+			list_move_tail(&lip->li_cil, &cil->xc_cil);
 	}
 
 	/* account for space used by new iovec headers */