author:    Trond Myklebust <trond.myklebust@hammerspace.com>  2019-08-03 16:11:27 +0200
committer: Trond Myklebust <trond.myklebust@hammerspace.com>  2019-08-05 04:35:40 +0200
commit:    c77e22834ae9a11891cb613bd9a551be1b94f2bc
parent:    NFSv4: Check the return value of update_open_stateid()
NFSv4: Fix a potential sleep while atomic in nfs4_do_reclaim()
John Hubbard reports seeing the following stack trace:
  nfs4_do_reclaim
    rcu_read_lock          /* we are now in_atomic() and must not sleep */
      nfs4_purge_state_owners
        nfs4_free_state_owner
          nfs4_destroy_seqid_counter
            rpc_destroy_wait_queue
              cancel_delayed_work_sync
                __cancel_work_timer
                  __flush_work
                    start_flush_work
                      might_sleep:
                        (kernel/workqueue.c:2975: BUG)
The solution is to separate the freeing of the state owners out of
nfs4_purge_state_owners(), so that the freeing (which may sleep) is
performed after the RCU read lock has been dropped, outside the atomic
context.
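The fix follows a common two-phase kernel pattern: while holding the lock, unlink the stale entries onto a private "freeme" list; then, after the lock is dropped, walk that list and do the sleepable teardown. The sketch below illustrates that pattern in standalone userspace C; the struct and function names are illustrative stand-ins, not the actual NFS data structures from the patch.

```c
#include <assert.h>
#include <stdlib.h>

/* Illustrative stand-in for struct nfs4_state_owner. */
struct owner {
    struct owner *next;
    int stale; /* set once the owner is no longer referenced */
};

/* Phase 1: runs under the (atomic) lock. Only unlinks stale owners
 * from the cache onto the caller's private list; nothing here sleeps. */
static void purge_state_owners(struct owner **cache, struct owner **freeme)
{
    struct owner **pp = cache;

    while (*pp) {
        struct owner *sp = *pp;

        if (sp->stale) {
            *pp = sp->next;      /* unlink from the shared cache */
            sp->next = *freeme;  /* collect on the private list */
            *freeme = sp;
        } else {
            pp = &sp->next;
        }
    }
}

/* Phase 2: runs after the lock is dropped. In the kernel this is where
 * the sleepable work (e.g. cancel_delayed_work_sync()) would happen.
 * Returns the number of owners freed. */
static int free_state_owners(struct owner *freeme)
{
    int n = 0;

    while (freeme) {
        struct owner *sp = freeme;

        freeme = sp->next;
        free(sp);
        n++;
    }
    return n;
}
```

The design point is that phase 1 touches the shared structure but performs no blocking calls, while phase 2 may sleep freely because it operates only on a list no other CPU can see.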
Reported-by: John Hubbard <jhubbard@nvidia.com>
Fixes: 0aaaf5c424c7f ("NFS: Cache state owners after files are closed")
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>