author     yangerkun <yangerkun@huawei.com>          2020-03-04 08:25:56 +0100
committer  Jeff Layton <jlayton@kernel.org>          2020-03-06 17:54:13 +0100
commit     6d390e4b5d48ec03bb87e63cf0a2bff5f4e116da (patch)
tree       ffacf4a699335f1caa8b2a2b6e322ad5542b1572 /fs/locks.c
parent     fcntl: Distribute switch variables for initialization (diff)
locks: fix a potential use-after-free problem when waking up a waiter
Commit 16306a61d3b7 ("fs/locks: always delete_block after waiting.") added logic to
check waiter->fl_blocker without taking blocked_lock_lock. This can trigger a
use-after-free (UAF) when we wake up a waiter:
Thread 1 has created a write flock a on a file; now thread 2 tries to unlock and
delete flock a while thread 3 tries to add flock b on the same file.
Thread 2                                    Thread 3
                                            flock syscall (create flock b)
                                            ...flock_lock_inode_wait
                                                flock_lock_inode (inserts our
                                                fl_blocked_member entry into
                                                flock a's fl_blocked_requests)
                                                sleep
flock syscall (unlock)
...flock_lock_inode_wait
    locks_delete_lock_ctx
    ...__locks_wake_up_blocks
        __locks_delete_block(
            b->fl_blocker = NULL)
        ...
                                            interrupted by a signal
                                            locks_delete_block
                                                b->fl_blocker == NULL &&
                                                list_empty(&b->fl_blocked_requests)
                                                -> succeeds, returns directly
                                            locks_free_lock b
        wake_up(&b->fl_wait)
            => use-after-free
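
To make the window concrete, here is a simplified paraphrase of the two code paths
from fs/locks.c as they looked before this patch (abridged: the lm_notify branch and
the global-list bookkeeping are omitted, so this is not the verbatim kernel source):

/* Thread 2: unlock path; runs with blocked_lock_lock held. */
static void __locks_wake_up_blocks(struct file_lock *blocker)
{
	while (!list_empty(&blocker->fl_blocked_requests)) {
		struct file_lock *waiter;

		waiter = list_first_entry(&blocker->fl_blocked_requests,
					  struct file_lock, fl_blocked_member);
		__locks_delete_block(waiter);	/* clears waiter->fl_blocker */
		/*
		 * <-- window: Thread 3 can now observe fl_blocker == NULL,
		 *     return from locks_delete_block() and free the waiter.
		 */
		wake_up(&waiter->fl_wait);	/* may touch freed memory */
	}
}

/* Thread 3: signalled waiter tearing down its blocked request. */
int locks_delete_block(struct file_lock *waiter)
{
	int status = -ENOENT;

	/* the lockless fast path removed by this patch */
	if (waiter->fl_blocker == NULL &&
	    list_empty(&waiter->fl_blocked_requests))
		return status;	/* caller may then call locks_free_lock() */

	spin_lock(&blocked_lock_lock);
	/* ... */
}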
Fix it by removing this logic; this patch may also fix CVE-2019-19769.
Cc: stable@vger.kernel.org
Fixes: 16306a61d3b7 ("fs/locks: always delete_block after waiting.")
Signed-off-by: yangerkun <yangerkun@huawei.com>
Signed-off-by: Jeff Layton <jlayton@kernel.org>
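
For reference, a minimal userspace sketch of the scenario above (a hypothetical
exerciser, not a verified reproducer: the file name, iteration count and sleeps are
arbitrary, and actually observing the use-after-free needs an unpatched kernel with
something like KASAN). Build with, for example, gcc -O2 -pthread:

/*
 * main() plays thread 1/2 (takes and releases "flock a"); the helper thread
 * plays thread 3 (blocks on "flock b" and is interrupted by a signal).
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <pthread.h>
#include <signal.h>
#include <stdio.h>
#include <sys/file.h>
#include <unistd.h>

static const char *path = "/tmp/flock-race-test";	/* arbitrary test file */

static void on_sigusr1(int sig)
{
	(void)sig;	/* only purpose: make the blocked flock() return EINTR */
}

static void *waiter(void *arg)
{
	(void)arg;
	int fd = open(path, O_RDWR);	/* separate open file description */

	if (fd < 0)
		return NULL;
	if (flock(fd, LOCK_EX) == 0)	/* blocks behind main()'s lock */
		flock(fd, LOCK_UN);
	close(fd);
	return NULL;
}

int main(void)
{
	struct sigaction sa = { .sa_handler = on_sigusr1 };	/* no SA_RESTART */
	int fd;

	sigemptyset(&sa.sa_mask);
	sigaction(SIGUSR1, &sa, NULL);

	fd = open(path, O_RDWR | O_CREAT, 0600);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	for (int i = 0; i < 100000; i++) {
		pthread_t t;

		flock(fd, LOCK_EX);		/* "flock a" */
		pthread_create(&t, NULL, waiter, NULL);
		usleep(1000);			/* let the waiter block */

		pthread_kill(t, SIGUSR1);	/* interrupt the waiter ... */
		flock(fd, LOCK_UN);		/* ... while waking it up */

		pthread_join(t, NULL);
	}
	close(fd);
	return 0;
}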
Diffstat (limited to 'fs/locks.c')
-rw-r--r--  fs/locks.c | 14 --------------
1 file changed, 0 insertions(+), 14 deletions(-)
diff --git a/fs/locks.c b/fs/locks.c
index 44b6da032842..426b55d333d5 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -753,20 +753,6 @@ int locks_delete_block(struct file_lock *waiter)
 {
 	int status = -ENOENT;
 
-	/*
-	 * If fl_blocker is NULL, it won't be set again as this thread
-	 * "owns" the lock and is the only one that might try to claim
-	 * the lock. So it is safe to test fl_blocker locklessly.
-	 * Also if fl_blocker is NULL, this waiter is not listed on
-	 * fl_blocked_requests for some lock, so no other request can
-	 * be added to the list of fl_blocked_requests for this
-	 * request. So if fl_blocker is NULL, it is safe to
-	 * locklessly check if fl_blocked_requests is empty. If both
-	 * of these checks succeed, there is no need to take the lock.
-	 */
-	if (waiter->fl_blocker == NULL &&
-	    list_empty(&waiter->fl_blocked_requests))
-		return status;
 	spin_lock(&blocked_lock_lock);
 	if (waiter->fl_blocker)
 		status = 0;
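
With the fast path gone, locks_delete_block() unconditionally takes blocked_lock_lock,
so it cannot return (and let its caller free the lock) while the unlock side is still
between clearing fl_blocker and calling wake_up(), since that sequence runs under the
same lock. A rough sketch of the function after this patch (the tail of the function
is reconstructed from memory of fs/locks.c and is not shown in the hunk above):

int locks_delete_block(struct file_lock *waiter)
{
	int status = -ENOENT;

	/* Serializes against __locks_wake_up_blocks() on the unlock side. */
	spin_lock(&blocked_lock_lock);
	if (waiter->fl_blocker)
		status = 0;
	__locks_wake_up_blocks(waiter);
	__locks_delete_block(waiter);
	spin_unlock(&blocked_lock_lock);
	return status;
}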