commit     b1cc94ab2f2ba31fcb2c59df0b9cf03f6d720553 (patch)
author     Mike Rapoport <rppt@linux.vnet.ibm.com>  2017-09-07 01:22:56 +0200
committer  Linus Torvalds <torvalds@linux-foundation.org>  2017-09-07 02:27:28 +0200
tree       3894b1c311654938ea6525af4eb975c695e1966b /mm/shmem.c
parent     mm, THP, swap: add THP swapping out fallback counting (diff)
shmem: shmem_charge: verify max_blocks is not exceeded before inode update
Patch series "userfaultfd: enable zeropage support for shmem".
These patches enable support for UFFDIO_ZEROPAGE for shared memory.
The first two patches are not strictly related to userfaultfd; they are
just minor refactoring to reduce the amount of code duplication.
This patch (of 7):
Currently we update the inode and shmem_inode_info before verifying that
used_blocks will not exceed max_blocks, and undo the update if it would.
Let's switch the order and verify the block count before updating the
inode and shmem_inode_info.
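As an illustration, here is a minimal userspace C sketch of the two
orderings. The types and names (sb_info, inode_info, charge_old,
charge_new) are hypothetical stand-ins, not the kernel code: the real
implementation uses a percpu counter for used_blocks and takes the
info->lock spinlock around the inode update.

/*
 * Minimal userspace sketch of the reordering; hypothetical stand-ins
 * (sb_info, inode_info, charge_old, charge_new) for the kernel types.
 */
#include <stdbool.h>
#include <stdio.h>

struct sb_info    { long max_blocks, used_blocks; };
struct inode_info { long alloced, nrpages; };

/* Old ordering: mutate first, verify, then undo every field on failure. */
static bool charge_old(struct sb_info *sb, struct inode_info *info, long pages)
{
	info->alloced += pages;
	info->nrpages += pages;
	if (sb->max_blocks && sb->used_blocks + pages > sb->max_blocks) {
		info->nrpages -= pages;		/* undo, field by field */
		info->alloced -= pages;
		return false;
	}
	sb->used_blocks += pages;
	return true;
}

/* New ordering: verify the limit up front; on failure nothing to undo. */
static bool charge_new(struct sb_info *sb, struct inode_info *info, long pages)
{
	if (sb->max_blocks && sb->used_blocks + pages > sb->max_blocks)
		return false;
	sb->used_blocks += pages;
	info->alloced += pages;
	info->nrpages += pages;
	return true;
}

int main(void)
{
	struct sb_info sb = { .max_blocks = 100, .used_blocks = 99 };
	struct inode_info info = { 0 };

	/* Both orderings reject the charge; only the old one had to unwind. */
	printf("old: %d new: %d\n", charge_old(&sb, &info, 4),
	       charge_new(&sb, &info, 4));
	return 0;
}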
Link: http://lkml.kernel.org/r/1497939652-16528-2-git-send-email-rppt@linux.vnet.ibm.com
Signed-off-by: Mike Rapoport <rppt@linux.vnet.ibm.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Hillf Danton <hillf.zj@alibaba-inc.com>
Cc: Pavel Emelyanov <xemul@virtuozzo.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/shmem.c')
-rw-r--r--  mm/shmem.c | 25 ++++++++++++-------------
1 file changed, 12 insertions(+), 13 deletions(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index fbcb3c96a186..35b524085c44 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -266,6 +266,14 @@ bool shmem_charge(struct inode *inode, long pages)
 
 	if (shmem_acct_block(info->flags, pages))
 		return false;
+
+	if (sbinfo->max_blocks) {
+		if (percpu_counter_compare(&sbinfo->used_blocks,
+					   sbinfo->max_blocks - pages) > 0)
+			goto unacct;
+		percpu_counter_add(&sbinfo->used_blocks, pages);
+	}
+
 	spin_lock_irqsave(&info->lock, flags);
 	info->alloced += pages;
 	inode->i_blocks += pages * BLOCKS_PER_PAGE;
@@ -273,20 +281,11 @@ bool shmem_charge(struct inode *inode, long pages)
 	spin_unlock_irqrestore(&info->lock, flags);
 	inode->i_mapping->nrpages += pages;
 
-	if (!sbinfo->max_blocks)
-		return true;
-	if (percpu_counter_compare(&sbinfo->used_blocks,
-				   sbinfo->max_blocks - pages) > 0) {
-		inode->i_mapping->nrpages -= pages;
-		spin_lock_irqsave(&info->lock, flags);
-		info->alloced -= pages;
-		shmem_recalc_inode(inode);
-		spin_unlock_irqrestore(&info->lock, flags);
-		shmem_unacct_blocks(info->flags, pages);
-		return false;
-	}
-	percpu_counter_add(&sbinfo->used_blocks, pages);
 	return true;
+
+unacct:
+	shmem_unacct_blocks(info->flags, pages);
+	return false;
 }
 
 void shmem_uncharge(struct inode *inode, long pages)
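Note the effect on the failure path: with the limit checked before any
state is touched, the old undo sequence (nrpages, alloced and
shmem_recalc_inode() under info->lock) disappears, and error handling
collapses to a single shmem_unacct_blocks() call behind the unacct:
label. That label follows the usual kernel goto-unwind idiom; below is a
self-contained sketch of the pattern with hypothetical names, not code
from this patch.

/*
 * Sketch of the goto-based unwind idiom behind labels like "unacct":
 * acquire resources in order, release in reverse order on failure.
 * Hypothetical example, not kernel code.
 */
#include <stdbool.h>
#include <stdlib.h>

static bool setup(void **a, void **b)
{
	*a = malloc(64);
	if (!*a)
		goto fail;
	*b = malloc(64);
	if (!*b)
		goto free_a;	/* undo only what succeeded so far */
	return true;

free_a:
	free(*a);
fail:
	return false;
}

int main(void)
{
	void *a, *b;

	if (setup(&a, &b)) {
		free(b);
		free(a);
	}
	return 0;
}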