author     Benjamin LaHaise <bcrl@kvack.org>   2013-09-09 17:57:59 +0200
committer  Benjamin LaHaise <bcrl@kvack.org>   2013-09-09 17:57:59 +0200
commit     d6c355c7dabcd753a75bc77d150d36328a355267 (patch)
tree       97b30abf03e5758fca4eef8572de38b77af54ae8 /fs/aio.c
parent     aio: fix rcu sparse warnings introduced by ioctx table lookup patch (diff)
aio: fix race in ring buffer page lookup introduced by page migration support
Prior to the introduction of page migration support in "fs/aio: Add support
to aio ring pages migration" / 36bc08cc01709b4a9bb563b35aa530241ddc63e3,
mapping of the ring buffer pages was done via get_user_pages() while
retaining mmap_sem held for write. This avoided possible races with userland
performing an munmap() or mremap(). The page migration patch, however, switched
to using mm_populate() to prime the page mapping. mm_populate() cannot be
called with mmap_sem held.
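To make the window concrete, here is an abridged sketch of the ordering the
migration patch introduced in aio_setup_ring() (3.11-era kernel APIs; error
handling and surrounding setup omitted, so treat it as illustrative rather
than the verbatim source):

```c
/* Abridged sketch of the racy ordering in aio_setup_ring(); 3.11-era
 * APIs, error handling omitted. */
down_write(&mm->mmap_sem);
ctx->mmap_base = do_mmap_pgoff(ctx->aio_ring_file, 0, ctx->mmap_size,
			       PROT_READ | PROT_WRITE,
			       MAP_SHARED | MAP_POPULATE, 0, &populate);
up_write(&mm->mmap_sem);	/* lock dropped here... */

mm_populate(ctx->mmap_base, populate);	/* ...so from this point until the
					 * get_user_pages() call, userland
					 * can munmap() or mremap() the ring
					 * out from under us. */
ctx->nr_pages = get_user_pages(current, mm, ctx->mmap_base, nr_pages,
			       1, 0, ctx->ring_pages, NULL);
```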
Instead of dropping the mmap_sem, revert to the old behaviour and simply drop
the use of mm_populate(), since get_user_pages() will cause the pages to be
mapped anyway. Thanks to Al Viro for spotting this issue.
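Condensed, the corrected ordering looks like this (the full diff appears
below); the only functional changes are keeping mmap_sem held for write
across the page lookup and dropping the now-redundant mm_populate() call:

```c
/* Condensed view of the fixed ordering; see the full diff below. */
down_write(&mm->mmap_sem);
ctx->mmap_base = do_mmap_pgoff(ctx->aio_ring_file, 0, ctx->mmap_size,
			       PROT_READ | PROT_WRITE,
			       MAP_SHARED | MAP_POPULATE, 0, &populate);

/* Still under mmap_sem: userland cannot munmap()/mremap() the ring,
 * and get_user_pages() faults the pages in itself, so no separate
 * mm_populate() is needed. */
ctx->nr_pages = get_user_pages(current, mm, ctx->mmap_base, nr_pages,
			       1, 0, ctx->ring_pages, NULL);

/* Safe to drop our reference: the page cache keeps one, and page
 * migration needs the count to be right. */
for (i = 0; i < ctx->nr_pages; i++)
	put_page(ctx->ring_pages[i]);

up_write(&mm->mmap_sem);
```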
Signed-off-by: Benjamin LaHaise <bcrl@kvack.org>
Diffstat (limited to 'fs/aio.c')
-rw-r--r--  fs/aio.c  |  15
1 file changed, 12 insertions(+), 3 deletions(-)
```diff
@@ -307,16 +307,25 @@ static int aio_setup_ring(struct kioctx *ctx)
 		aio_free_ring(ctx);
 		return -EAGAIN;
 	}
-	up_write(&mm->mmap_sem);
-
-	mm_populate(ctx->mmap_base, populate);
 
 	pr_debug("mmap address: 0x%08lx\n", ctx->mmap_base);
+
+	/* We must do this while still holding mmap_sem for write, as we
+	 * need to be protected against userspace attempting to mremap()
+	 * or munmap() the ring buffer.
+	 */
 	ctx->nr_pages = get_user_pages(current, mm, ctx->mmap_base, nr_pages,
 				       1, 0, ctx->ring_pages, NULL);
+
+	/* Dropping the reference here is safe as the page cache will hold
+	 * onto the pages for us.  It is also required so that page migration
+	 * can unmap the pages and get the right reference count.
+	 */
 	for (i = 0; i < ctx->nr_pages; i++)
 		put_page(ctx->ring_pages[i]);
 
+	up_write(&mm->mmap_sem);
+
 	if (unlikely(ctx->nr_pages != nr_pages)) {
 		aio_free_ring(ctx);
 		return -EAGAIN;
```