author:    Matthew Wilcox (Oracle) <willy@infradead.org>  2022-01-07 20:04:55 +0100
committer: Matthew Wilcox (Oracle) <willy@infradead.org>  2022-03-21 17:56:35 +0100
commit:    59409373f60a0a493fe2a1b85dc8c6299c4fef37
tree:      359beb8af1ae79fce424d7a2cb6be59280d700dd (/mm/gup.c)
parent:    mm/gup: Remove an assumption of a contiguous memmap
mm/gup: Handle page split race more efficiently
If we hit the page split race, the current code returns NULL, which will
presumably trigger a retry under the mmap_lock. This isn't necessary;
we can just retry the compound_head() lookup. This is a very minor
optimisation of an unlikely path, but conceptually it matches (e.g.)
the page cache RCU-protected lookup.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: William Kucharski <william.kucharski@oracle.com>
Diffstat (limited to 'mm/gup.c')
 mm/gup.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/mm/gup.c b/mm/gup.c
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -68,7 +68,10 @@ static void put_page_refs(struct page *page, int refs)
  */
 static inline struct page *try_get_compound_head(struct page *page, int refs)
 {
-	struct page *head = compound_head(page);
+	struct page *head;
+
+retry:
+	head = compound_head(page);
 
 	if (WARN_ON_ONCE(page_ref_count(head) < 0))
 		return NULL;
@@ -86,7 +89,7 @@ static inline struct page *try_get_compound_head(struct page *page, int refs)
 	 */
 	if (unlikely(compound_head(page) != head)) {
 		put_page_refs(head, refs);
-		return NULL;
+		goto retry;
 	}
 
 	return head;
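
The pattern in the hunks above generalises beyond mm/: look up a derived
pointer, speculatively take references, then recheck the lookup and loop on
a mismatch instead of failing outright. Below is a minimal self-contained
C11 sketch of that pattern; struct obj, try_get_head() and the plain-atomics
refcounting are hypothetical stand-ins for illustration, not the kernel's
page APIs.

#include <stdatomic.h>
#include <stddef.h>

/* Hypothetical object whose head pointer can be re-pointed by a
 * concurrent writer, analogous to a compound page being split. */
struct obj {
	_Atomic(struct obj *) head;
	atomic_int refcount;
};

/* Try to take @refs references on the current head of @o. */
static struct obj *try_get_head(struct obj *o, int refs)
{
	struct obj *head;

retry:
	head = atomic_load(&o->head);

	/* A genuinely dead object still fails outright, mirroring the
	 * WARN_ON_ONCE(page_ref_count(head) < 0) path in the patch.
	 * A real implementation would take the references with a
	 * get-unless-zero primitive rather than a check-then-add. */
	if (atomic_load(&head->refcount) <= 0)
		return NULL;

	/* Speculatively take the references... */
	atomic_fetch_add(&head->refcount, refs);

	/* ...then recheck. If the head changed underneath us, drop the
	 * references taken on the stale head and redo the cheap lookup,
	 * rather than failing back to a slower locked path. */
	if (atomic_load(&o->head) != head) {
		atomic_fetch_sub(&head->refcount, refs);
		goto retry;
	}

	return head;
}

The key property is that a retry costs only another compound_head()-style
load, whereas returning NULL forces the caller all the way into its slow
path, here presumably a retry under the mmap_lock.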