Compute the NUMA information for a virtio_pmem device from the memory
range of the device. Previously, the target_node was always 0 because
the ndr_desc.target_node field was never explicitly set. The code for
computing the NUMA node is taken from cxl_pmem_region_probe in
drivers/cxl/pmem.c.
Signed-off-by: Michael Sammler <sammler@google.com>
Reviewed-by: Pasha Tatashin <pasha.tatashin@soleen.com>
Reviewed-by: Pankaj Gupta <pankaj.gupta@amd.com>
Tested-by: Mina Almasry <almasrymina@google.com>
Message-Id: <20221115214036.1571015-1-sammler@google.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
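
For reference, the lookup pattern borrowed from cxl_pmem_region_probe
looks roughly like this (a sketch, not the verbatim diff; res is the
device's memory resource, and the code falls back to
memory_add_physaddr_to_nid() when no target node is known):

    ndr_desc.target_node = phys_to_target_node(res.start);
    if (ndr_desc.target_node == NUMA_NO_NODE) {
        /* no firmware-described target node; derive one from the
         * physical address as the CXL pmem driver does */
        ndr_desc.target_node = memory_add_physaddr_to_nid(res.start);
        dev_info(&vdev->dev, "changing target node from %d to %d",
                 NUMA_NO_NODE, ndr_desc.target_node);
    }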
The NVDIMM region could become available before the virtio_device_ready()
call made by virtio_dev_probe(). This means the driver could use the
device before DRIVER_OK, which violates the spec. Fix this by setting
the device ready before nvdimm_pmem_region_create().
Note that virtio_pmem_host_ack() could then be triggered before the
creation of the nd region. This is safe, since the pmem_lock has been
initialized by that point and virtio_pmem_host_ack() validates whether
any available buffer was added beforehand.
Fixes: 6e84200c0a29 ("virtio-pmem: Add virtio pmem driver")
Acked-by: Pankaj Gupta <pankaj.gupta@amd.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20220628083430.61856-2-jasowang@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
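
The ordering change described above is small; the probe path ends up
looking roughly like this (a sketch with error handling trimmed, not
the exact diff; the out_nd label is illustrative):

    /* enter DRIVER_OK before anything that may touch the virtqueue */
    virtio_device_ready(vdev);

    nd_region = nvdimm_pmem_region_create(nvdimm_bus, &ndr_desc);
    if (!nd_region)
        goto out_nd;   /* unwind resets the device, undoing DRIVER_OK */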
We used to initialize provider_data manually after
nvdimm_pmem_region_create(). This is racy if a flush is issued before
provider_data has been initialized [1]. Fix this by initializing
provider_data through nd_region_desc, so that it is ready as soon as
the pmem region is created.
[1]:
[ 80.152281] nd_pmem namespace0.0: unable to guarantee persistence of writes
[ 92.393956] BUG: kernel NULL pointer dereference, address: 0000000000000318
[ 92.394551] #PF: supervisor read access in kernel mode
[ 92.394955] #PF: error_code(0x0000) - not-present page
[ 92.395365] PGD 0 P4D 0
[ 92.395566] Oops: 0000 [#1] PREEMPT SMP PTI
[ 92.395867] CPU: 2 PID: 506 Comm: mkfs.ext4 Not tainted 5.19.0-rc1+ #453
[ 92.396365] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009),
BIOS rel-1.16.0-0-gd239552ce722-prebuilt.qemu.org 04/01/2014
[ 92.397178] RIP: 0010:virtio_pmem_flush+0x2f/0x1f0
[ 92.397521] Code: 55 41 54 55 53 48 81 ec a0 00 00 00 65 48 8b 04
25 28 00 00 00 48 89 84 24 98 00 00 00 31 c0 48 8b 87 78 03 00 00 48
89 04 24 <48> 8b 98 18 03 00 00 e8 85 bf 6b 00 ba 58 00 00 00 be c0 0c
00 00
[ 92.398982] RSP: 0018:ffff9a7380aefc88 EFLAGS: 00010246
[ 92.399349] RAX: 0000000000000000 RBX: ffff8e77c3f86f00 RCX: 0000000000000000
[ 92.399833] RDX: ffffffffad4ea720 RSI: ffff8e77c41e39c0 RDI: ffff8e77c41c5c00
[ 92.400388] RBP: ffff8e77c41e39c0 R08: ffff8e77c19f0600 R09: 0000000000000000
[ 92.400874] R10: 0000000000000000 R11: 0000000000000000 R12: ffff8e77c0814e28
[ 92.401364] R13: 0000000000000000 R14: 0000000000000000 R15: ffff8e77c41e39c0
[ 92.401849] FS: 00007f3cd75b2780(0000) GS:ffff8e7937d00000(0000)
knlGS:0000000000000000
[ 92.402423] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 92.402821] CR2: 0000000000000318 CR3: 0000000103c80002 CR4: 0000000000370ee0
[ 92.403307] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 92.403793] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 92.404278] Call Trace:
[ 92.404481] <TASK>
[ 92.404654] ? mempool_alloc+0x5d/0x160
[ 92.404939] ? terminate_walk+0x5f/0xf0
[ 92.405226] ? bio_alloc_bioset+0xbb/0x3f0
[ 92.405525] async_pmem_flush+0x17/0x80
[ 92.405806] nvdimm_flush+0x11/0x30
[ 92.406067] pmem_submit_bio+0x1e9/0x200
[ 92.406354] __submit_bio+0x80/0x120
[ 92.406621] submit_bio_noacct_nocheck+0xdc/0x2a0
[ 92.406958] submit_bio_wait+0x4e/0x80
[ 92.407234] blkdev_issue_flush+0x31/0x50
[ 92.407526] ? punt_bios_to_rescuer+0x230/0x230
[ 92.407852] blkdev_fsync+0x1e/0x30
[ 92.408112] do_fsync+0x33/0x70
[ 92.408354] __x64_sys_fsync+0xb/0x10
[ 92.408625] do_syscall_64+0x43/0x90
[ 92.408895] entry_SYSCALL_64_after_hwframe+0x46/0xb0
[ 92.409257] RIP: 0033:0x7f3cd76c6c44
Fixes: 6e84200c0a29 ("virtio-pmem: Add virtio pmem driver")
Acked-by: Pankaj Gupta <pankaj.gupta@amd.com>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20220628083430.61856-1-jasowang@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
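
The fix is essentially moving one assignment so the nvdimm core fills
in provider_data during region creation (a sketch, not the verbatim
diff):

    /* before (racy): the field is only set after the region exists,
     * so a flush arriving in between dereferences a NULL pointer */
    nd_region = nvdimm_pmem_region_create(nvdimm_bus, &ndr_desc);
    nd_region->provider_data = vdev;

    /* after: pass it via the descriptor so it is valid from the start */
    ndr_desc.provider_data = vdev;
    nd_region = nvdimm_pmem_region_create(nvdimm_bus, &ndr_desc);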
This will enable cleanups down the road.
The idea is to disable cbs, then add a "flush_queued_cbs" callback
as a parameter, so that drivers can flush any work queued after
callbacks have been disabled.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Link: https://lore.kernel.org/r/20211013105226.20225-1-mst@redhat.com
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
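
The flush_queued_cbs idea is described as a direction rather than
implemented here; in C it could look something like this (hypothetical
name and signature, for illustration only):

    static void virtio_reset_with_flush(struct virtio_device *vdev,
            void (*flush_queued_cbs)(struct virtio_device *vdev))
    {
        struct virtqueue *vq;

        /* stop further callbacks from being delivered */
        list_for_each_entry(vq, &vdev->vqs, list)
            virtqueue_disable_cb(vq);

        /* let the driver drain work queued before cbs were disabled */
        if (flush_queued_cbs)
            flush_queued_cbs(vdev);

        vdev->config->reset(vdev);
    }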
Virtio pmem is modern-only. Use LE accessors for config space.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
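
Concretely, for the two fields virtio-pmem reads from config space,
that means using the little-endian helpers (a sketch assuming the
virtio_pmem_config layout from include/uapi/linux/virtio_pmem.h):

    /* modern-only device: config fields are guaranteed little-endian */
    virtio_cread_le(vpmem->vdev, struct virtio_pmem_config,
                    start, &vpmem->start);
    virtio_cread_le(vpmem->vdev, struct virtio_pmem_config,
                    size, &vpmem->size);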
This patch adds a virtio-pmem driver for KVM guests.
The guest reads the persistent memory range information from
QEMU over VIRTIO and registers it on the nvdimm_bus. It also
creates an nd_region object with the persistent memory range
information so that the existing 'nvdimm/pmem' driver can
reserve this region in the system memory map. This way the
'virtio-pmem' driver reuses the existing functionality of the
pmem driver to register persistent memory compatible with
DAX-capable filesystems.
This also provides a function for the 'pmem' driver to perform
a guest flush over VIRTIO when userspace flushes a DAX memory
range.
Signed-off-by: Pankaj Gupta <pagupta@redhat.com>
Reviewed-by: Yuval Shaia <yuval.shaia@oracle.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Jakub Staron <jstaron@google.com>
Tested-by: Jakub Staron <jstaron@google.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
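
The flush described above is a request/response exchange on the
device's request virtqueue; a simplified sketch of the mechanism
follows (the real code, drivers/nvdimm/nd_virtio.c in mainline,
allocates the buffers and sleeps on a waitqueue instead of polling):

    static int virtio_pmem_flush_sketch(struct virtio_pmem *vpmem)
    {
        struct scatterlist sg_out, sg_in, *sgs[2] = { &sg_out, &sg_in };
        struct virtio_pmem_req req = {
            .type = cpu_to_le32(VIRTIO_PMEM_REQ_TYPE_FLUSH),
        };
        struct virtio_pmem_resp resp;
        unsigned int len;

        sg_init_one(&sg_out, &req, sizeof(req));
        sg_init_one(&sg_in, &resp, sizeof(resp));

        /* queue the flush request and notify the host */
        virtqueue_add_sgs(vpmem->req_vq, sgs, 1, 1, &req, GFP_KERNEL);
        virtqueue_kick(vpmem->req_vq);

        /* sketch only: busy-wait for the host ack; the driver's
         * virtio_pmem_host_ack() callback wakes a waitqueue instead */
        while (!virtqueue_get_buf(vpmem->req_vq, &len))
            cpu_relax();

        return le32_to_cpu(resp.ret) ? -EIO : 0;
    }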