author | Matthew Wilcox (Oracle) <willy@infradead.org> | 2020-12-08 06:07:31 +0100
committer | Matthew Wilcox (Oracle) <willy@infradead.org> | 2021-09-27 15:27:30 +0200
commit | af7f29d9e1a7bda1429923327421367b69aa2e70
tree | 1414a3f8d7cd7a58a1f677fa7f2d760d48f2bc47 /include
parent | mm/filemap: Add folio_lock()
download | linux-af7f29d9e1a7bda1429923327421367b69aa2e70.tar.xz, linux-af7f29d9e1a7bda1429923327421367b69aa2e70.zip
mm/filemap: Add folio_lock_killable()
This is like lock_page_killable() but for use by callers who
know they have a folio. Convert __lock_page_killable() to be
__folio_lock_killable(). This saves one call to compound_head() per
contended call to lock_page_killable().
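For illustration, a minimal caller sketch (example_wait_for_folio() is hypothetical and not part of this patch; folio_unlock() is assumed from earlier in the series): a caller that already has a struct folio can take the killable lock directly, avoiding the compound_head() lookup that lock_page_killable() performs.

static int example_wait_for_folio(struct folio *folio)
{
	int err;

	/* Sleep until the folio lock is acquired or a fatal signal arrives. */
	err = folio_lock_killable(folio);
	if (err)
		return err;	/* -EINTR: the task is being killed */

	/* ... operate on the locked folio ... */

	folio_unlock(folio);
	return 0;
}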
__folio_lock_killable() is 19 bytes smaller than __lock_page_killable()
was. filemap_fault() shrinks by 74 bytes and __lock_page_or_retry()
shrinks by 71 bytes. That's a total of 164 bytes of text saved.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Acked-by: Jeff Layton <jlayton@kernel.org>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: William Kucharski <william.kucharski@oracle.com>
Acked-by: Mike Rapoport <rppt@linux.ibm.com>
Reviewed-by: David Howells <dhowells@redhat.com>
Diffstat (limited to 'include')
-rw-r--r-- | include/linux/pagemap.h | 15
1 file changed, 10 insertions, 5 deletions
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 6481a431ea40..cf0ebd8c9e86 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -653,7 +653,7 @@ static inline bool wake_page_match(struct wait_page_queue *wait_page,
 }
 
 void __folio_lock(struct folio *folio);
-extern int __lock_page_killable(struct page *page);
+int __folio_lock_killable(struct folio *folio);
 extern int __lock_page_async(struct page *page, struct wait_page_queue *wait);
 extern int __lock_page_or_retry(struct page *page, struct mm_struct *mm,
 				unsigned int flags);
@@ -693,6 +693,14 @@ static inline void lock_page(struct page *page)
 	__folio_lock(folio);
 }
 
+static inline int folio_lock_killable(struct folio *folio)
+{
+	might_sleep();
+	if (!folio_trylock(folio))
+		return __folio_lock_killable(folio);
+	return 0;
+}
+
 /*
  * lock_page_killable is like lock_page but can be interrupted by fatal
  * signals.  It returns 0 if it locked the page and -EINTR if it was
@@ -700,10 +708,7 @@ static inline void lock_page(struct page *page)
  */
 static inline int lock_page_killable(struct page *page)
 {
-	might_sleep();
-	if (!trylock_page(page))
-		return __lock_page_killable(page);
-	return 0;
+	return folio_lock_killable(page_folio(page));
 }
 
 /*