author      Davidlohr Bueso <dave@stgolabs.net>        2023-06-12 20:10:34 +0200
committer   Dan Williams <dan.j.williams@intel.com>    2023-06-25 23:54:51 +0200
commit      0c36b6ad436a38b167af16e6c690c890b8b2df62 (patch)
tree        bd72d99f338a3a05b50949e769be2ced8aa328f1 /drivers/cxl/core
parent      cxl/mem: Introduce security state sysfs file (diff)
cxl/mbox: Add sanitization handling machinery
Sanitization is by definition a device-monopolizing operation, and thus
the timeslicing rules for other background commands do not apply.
As such, handle this special case asynchronously and return immediately.
Subsequent changes will make completion pollable from userspace via a
sysfs file interface.
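In other words, when the incoming command is a sanitize, the send path arms
the completion machinery and returns to the caller right away instead of
timeslicing. A minimal sketch of that idea follows; it is not the patch body
(which lands in the mailbox send path), and everything beyond the
security.poll/poll_dwork fields referenced in the hunk below is illustrative:

static int cxl_send_sanitize_sketch(struct cxl_dev_state *cxlds)
{
	/* Sanitize monopolizes the device: skip background timeslicing. */
	if (cxlds->security.poll)
		/* No completion irq available: arm the self-poller instead. */
		schedule_delayed_work(&cxlds->security.poll_dwork, HZ);

	/* Return immediately; completion is reported asynchronously. */
	return 0;
}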
For devices that don't support interrupts for notifying background
command completion, self-poll instead. The caveat is that the poller
can lag the hardware becoming ready, so care must be taken not to let
any new commands through until the poller has seen the hw completion.
The poller takes the mbox_mutex to stabilize the flagging, which keeps
the send path's 'sanitize_tmo' check cheap for the uncommon polling
case.
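For illustration only, a sketch of what such a poller could look like;
cxl_mbox_background_complete() and the 'sanitize_tmo' field are assumed
names derived from the description above, not code from this patch:

static void cxl_sanitize_poll_sketch(struct work_struct *work)
{
	struct cxl_dev_state *cxlds =
		container_of(work, struct cxl_dev_state,
			     security.poll_dwork.work);

	/* mbox_mutex stabilizes the flagging against the send path. */
	mutex_lock(&cxlds->mbox_mutex);
	if (cxl_mbox_background_complete(cxlds)) {	/* assumed helper */
		/* Hardware is done: clear the flag so new commands may pass. */
		cxlds->security.sanitize_tmo = 0;	/* assumed field */
	} else {
		/* Not done yet: check again later. */
		queue_delayed_work(system_wq, &cxlds->security.poll_dwork,
				   10 * HZ);
	}
	mutex_unlock(&cxlds->mbox_mutex);
}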
The irq case is much simpler as hardware will serialize/error
appropriately.
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Davidlohr Bueso <dave@stgolabs.net>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Link: https://lore.kernel.org/r/20230612181038.14421-4-dave@stgolabs.net
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Diffstat (limited to 'drivers/cxl/core')
-rw-r--r--  drivers/cxl/core/memdev.c | 10
1 file changed, 10 insertions, 0 deletions
diff --git a/drivers/cxl/core/memdev.c b/drivers/cxl/core/memdev.c
index 1bbb7e39fc93..834f418b6bcb 100644
--- a/drivers/cxl/core/memdev.c
+++ b/drivers/cxl/core/memdev.c
@@ -460,11 +460,21 @@ void clear_exclusive_cxl_commands(struct cxl_dev_state *cxlds, unsigned long *cm
 }
 EXPORT_SYMBOL_NS_GPL(clear_exclusive_cxl_commands, CXL);
 
+static void cxl_memdev_security_shutdown(struct device *dev)
+{
+	struct cxl_memdev *cxlmd = to_cxl_memdev(dev);
+	struct cxl_dev_state *cxlds = cxlmd->cxlds;
+
+	if (cxlds->security.poll)
+		cancel_delayed_work_sync(&cxlds->security.poll_dwork);
+}
+
 static void cxl_memdev_shutdown(struct device *dev)
 {
 	struct cxl_memdev *cxlmd = to_cxl_memdev(dev);
 
 	down_write(&cxl_memdev_rwsem);
+	cxl_memdev_security_shutdown(dev);
 	cxlmd->cxlds = NULL;
 	up_write(&cxl_memdev_rwsem);
 }
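The new cxl_memdev_security_shutdown() above only cancels a pending poller
on memdev teardown. The corresponding gating on the submission side, i.e.
not letting new commands through until the poller has seen completion, would
roughly amount to a check like the following sketch, again using the assumed
'sanitize_tmo' name rather than code from this patch:

static bool cxl_sanitize_pending_sketch(struct cxl_dev_state *cxlds)
{
	bool pending;

	/* Same mbox_mutex the poller takes, so the flag is stable here. */
	mutex_lock(&cxlds->mbox_mutex);
	pending = cxlds->security.sanitize_tmo != 0;	/* assumed field */
	mutex_unlock(&cxlds->mbox_mutex);

	/* Callers would fail the submission (e.g. -EBUSY) while pending. */
	return pending;
}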