author     Matthew Wilcox (Oracle) <willy@infradead.org>	2021-03-13 05:57:44 +0100
committer  Matthew Wilcox (Oracle) <willy@infradead.org>	2022-01-04 19:15:34 +0100
commit     960ea971fa6cdac8d4825a6aaf99b92882e79fbb (patch)
tree       9afc2b3ce4e4be564890374793d2b4489d217979 /mm/filemap.c
parent     filemap: Use a folio in filemap_map_pages (diff)
filemap: Use a folio in filemap_page_mkwrite
This fixes a bug for tail pages. They always have a NULL mapping, so
the check would fail and we would never mark the folio as dirty.
Ends up growing the kernel by 19 bytes although there will be fewer
calls to compound_head() dynamically.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: William Kucharski <william.kucharski@oracle.com>
Diffstat (limited to 'mm/filemap.c')
-rw-r--r--	mm/filemap.c	16
1 file changed, 8 insertions, 8 deletions
diff --git a/mm/filemap.c b/mm/filemap.c
index f595563057c3..bbe982e64e62 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3349,24 +3349,24 @@ EXPORT_SYMBOL(filemap_map_pages);
 vm_fault_t filemap_page_mkwrite(struct vm_fault *vmf)
 {
 	struct address_space *mapping = vmf->vma->vm_file->f_mapping;
-	struct page *page = vmf->page;
+	struct folio *folio = page_folio(vmf->page);
 	vm_fault_t ret = VM_FAULT_LOCKED;
 
 	sb_start_pagefault(mapping->host->i_sb);
 	file_update_time(vmf->vma->vm_file);
-	lock_page(page);
-	if (page->mapping != mapping) {
-		unlock_page(page);
+	folio_lock(folio);
+	if (folio->mapping != mapping) {
+		folio_unlock(folio);
 		ret = VM_FAULT_NOPAGE;
 		goto out;
 	}
 	/*
-	 * We mark the page dirty already here so that when freeze is in
+	 * We mark the folio dirty already here so that when freeze is in
 	 * progress, we are guaranteed that writeback during freezing will
-	 * see the dirty page and writeprotect it again.
+	 * see the dirty folio and writeprotect it again.
 	 */
-	set_page_dirty(page);
-	wait_for_stable_page(page);
+	folio_mark_dirty(folio);
+	folio_wait_stable(folio);
 out:
 	sb_end_pagefault(mapping->host->i_sb);
 	return ret;
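
For readers unfamiliar with why the old check misbehaved on tail pages, the sketch below is an ordinary user-space C program, not kernel code, that models the situation described in the commit message. The type and function names (toy_mapping, toy_page, toy_page_folio) are invented for illustration; only the idea comes from the commit: a tail page's mapping field is NULL, so "page->mapping != mapping" always takes the VM_FAULT_NOPAGE path, whereas resolving to the head page (the folio) yields the valid mapping.

/*
 * User-space sketch only; the structures below are stand-ins, not the
 * kernel's struct page/folio.
 */
#include <stddef.h>
#include <stdio.h>

struct toy_mapping { const char *name; };

struct toy_page {
	struct toy_mapping *mapping;	/* NULL on tail pages */
	struct toy_page *head;		/* points to self on the head page */
};

/* Stand-in for page_folio(): always resolve to the head page. */
static struct toy_page *toy_page_folio(struct toy_page *page)
{
	return page->head;
}

int main(void)
{
	struct toy_mapping file_mapping = { "file" };
	struct toy_page head = { .mapping = &file_mapping };
	struct toy_page tail = { .mapping = NULL };

	head.head = &head;
	tail.head = &head;	/* tail pages refer back to their head */

	/* Old check, applied to a tail page: mapping is NULL, so it never matches. */
	printf("tail check passes:  %s\n",
	       tail.mapping == &file_mapping ? "yes" : "no");

	/* New check, via the folio (head page): mapping matches as intended. */
	printf("folio check passes: %s\n",
	       toy_page_folio(&tail)->mapping == &file_mapping ? "yes" : "no");

	return 0;
}

Running the sketch prints "no" for the tail-page check and "yes" for the folio check, which is the behavioral difference the patch relies on.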