author	Dan Williams <dan.j.williams@intel.com>	2015-11-13 03:33:54 +0100
committer	Dan Williams <dan.j.williams@intel.com>	2015-11-13 03:33:54 +0100
commit	152d7bd80dca5ce77ec2d7313149a2ab990e808e
tree	0278dcde82a608216233147c2adf58fa0911b7b0 /fs
parent	libnvdimm: documentation clarifications
dax: fix __dax_pmd_fault crash
Since 4.3 introduced devm_memremap_pages(), the pfns handled by DAX may
optionally have a struct page backing. When such a page-backed pfn reaches
vmf_insert_pfn_pmd(), the call fails with a crash signature like the following:
kernel BUG at mm/huge_memory.c:905!
[..]
Call Trace:
[<ffffffff812a73ba>] __dax_pmd_fault+0x2ea/0x5b0
[<ffffffffa01a4182>] xfs_filemap_pmd_fault+0x92/0x150 [xfs]
[<ffffffff811fbe02>] handle_mm_fault+0x312/0x1b50
Fix this by falling back to 4K mappings in the pfn_valid() case. Longer
term, vmf_insert_pfn_pmd() needs to grow support for architectures that
can provide a 'pmd_special' capability.
Cc: <stable@vger.kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Reported-by: Ross Zwisler <ross.zwisler@linux.intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Diffstat (limited to 'fs')
-rw-r--r--	fs/dax.c	7
1 file changed, 7 insertions(+), 0 deletions(-)
@@ -627,6 +627,13 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address,
 	if ((length < PMD_SIZE) || (pfn & PG_PMD_COLOUR))
 		goto fallback;
 
+	/*
+	 * TODO: teach vmf_insert_pfn_pmd() to support
+	 * 'pte_special' for pmds
+	 */
+	if (pfn_valid(pfn))
+		goto fallback;
+
 	if (buffer_unwritten(&bh) || buffer_new(&bh)) {
 		int i;
 		for (i = 0; i < PTRS_PER_PMD; i++)
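
For illustration, here is a minimal, hypothetical sketch of the fallback pattern the hunk above adds. The helper dax_pmd_insert_or_fallback() and its exact signature are invented for this note; pfn_valid(), vmf_insert_pfn_pmd(), and VM_FAULT_FALLBACK are real kernel symbols, but this is a sketch of the idea, not the upstream implementation.

/*
 * Illustrative sketch only (hypothetical helper, not upstream code):
 * a pfn with a struct page behind it (pfn_valid()) cannot be handed to
 * vmf_insert_pfn_pmd() until a pmd equivalent of 'pte_special' exists,
 * so report VM_FAULT_FALLBACK and let the caller retry with 4K PTEs.
 */
static int dax_pmd_insert_or_fallback(struct vm_area_struct *vma,
				      unsigned long address, pmd_t *pmd,
				      unsigned long pfn, bool write)
{
	if (pfn_valid(pfn))
		return VM_FAULT_FALLBACK;	/* page-backed pfn: take the 4K path */

	return vmf_insert_pfn_pmd(vma, address, pmd, pfn, write);
}

In the actual patch the check simply jumps to the existing fallback: label in __dax_pmd_fault(), which cleans up and returns VM_FAULT_FALLBACK so the fault is retried with 4K mappings.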