* lightnvm: do not assume sequential lun alloc. (Javier González, 2016-05-06, 1 file, -3/+2)
  When doing GC, rrpc calculates the physical LUN to which the rrpc block belongs. This calculation is based on the assumption that LUNs are assigned sequentially to the LUN list. Use the reference to the LUN instead. This saves us the calculation and allows us to align LUNs in a different manner to, for example, take advantage of device parallelism.
  Signed-off-by: Javier González <javier@cnexlabs.com>
  Signed-off-by: Matias Bjørling <m@bjorling.me>
  Signed-off-by: Jens Axboe <axboe@fb.com>
* nvme/lightnvm: Log using the ctrl named device (Sagi Grimberg, 2016-05-06, 1 file, -7/+9)
  Align with the rest of the nvme subsystem.
  Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
  Signed-off-by: Matias Bjørling <m@bjorling.me>
  Signed-off-by: Jens Axboe <axboe@fb.com>
* lightnvm: rename dma helper functions (Javier González, 2016-05-06, 3 files, -11/+11)
  Until now, the dma pool has been used exclusively to allocate the ppa list being sent to the device. In pblk (upcoming), we use these pools to allocate metadata too. Thus, we generalize the names of some variables on the dma helper functions to make the code more readable.
  Signed-off-by: Javier González <javier@cnexlabs.com>
  Signed-off-by: Matias Bjørling <m@bjorling.me>
  Signed-off-by: Jens Axboe <axboe@fb.com>
* lightnvm: enable metadata to be sent to device (Javier González, 2016-05-06, 2 files, -2/+3)
  Enable the metadata buffer to be sent to the device through the metadata field on the physical rw nvme command. The size of the metadata buffer must follow dev->oob_size * # of PPAs.
  Signed-off-by: Javier González <javier@cnexlabs.com>
  Updated description.
  Signed-off-by: Matias Bjørling <m@bjorling.me>
  Signed-off-by: Jens Axboe <axboe@fb.com>
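  A minimal sketch of the sizing rule above, assuming the lightnvm field names of the time (oob_size on struct nvm_dev, nr_pages on struct nvm_rq); the helper name is made up for illustration:
      /* metadata buffer must cover oob_size bytes for every PPA in the request */
      static inline int nvm_rq_meta_len(struct nvm_dev *dev, struct nvm_rq *rqd)
      {
              return dev->oob_size * rqd->nr_pages;
      }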
* lightnvm: do not free unused metadata on rrpc (Javier González, 2016-05-06, 1 file, -2/+0)
  rrpc does not save any metadata on a given request. Thus, do not attempt to free the metadata dma region.
  Signed-off-by: Javier González <javier@cnexlabs.com>
  Signed-off-by: Matias Bjørling <m@bjorling.me>
  Signed-off-by: Jens Axboe <axboe@fb.com>
* lightnvm: fix out of bound ppa lun id on bb tbl (Matias Bjørling, 2016-05-06, 1 file, -6/+1)
  The ppa configured for retrieving the bad block table uses the internal lun id to set up the get bad block ppa. This increases monotonically with the number of luns available. When configuring a ppa, the channel and lun must be specified separately, leading to an out of bound memory access in gennvm_block_bb when the lun id goes beyond the luns available within a channel. Additionally, remove the out of bound check in gennvm_block_bb(), as it was buggy to begin with.
  Signed-off-by: Matias Bjørling <m@bjorling.me>
  Signed-off-by: Jens Axboe <axboe@fb.com>
* lightnvm: refactor set_bb_tbl for accepting ppa list (Matias Bjørling, 2016-05-06, 3 files, -6/+6)
  The set_bb_tbl takes struct nvm_rq and only uses its ppa_list and nr_pages internally. Instead, make these two variables explicit. This allows a user to call it without initializing a struct nvm_rq first.
  Signed-off-by: Matias Bjørling <m@bjorling.me>
  Signed-off-by: Jens Axboe <axboe@fb.com>
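  A rough before/after sketch of the operation's prototype as described; parameter names are assumptions, not copied from the patch:
      /* as members of struct nvm_dev_ops */
      /* before: callers had to populate a struct nvm_rq just to pass two fields */
      int (*set_bb_tbl)(struct nvm_dev *dev, struct nvm_rq *rqd, int type);

      /* after: the ppa list and its length are explicit arguments */
      int (*set_bb_tbl)(struct nvm_dev *dev, struct ppa_addr *ppas,
                        int nr_ppas, int type);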
* lightnvm: move responsibility for bad blk mgmt to target (Matias Bjørling, 2016-05-06, 1 file, -19/+16)
  We move the responsibility of managing the persistent bad block table to the target. The target may choose to mark a block bad or retry writing to it. Nevertheless, it should be the target that makes the decision and not the media manager.
  Signed-off-by: Matias Bjørling <m@bjorling.me>
  Signed-off-by: Jens Axboe <axboe@fb.com>
* lightnvm: make nvm_set_rqd_ppalist() aware of vblks (Matias Bjørling, 2016-05-06, 3 files, -16/+19)
  A virtual block enables a block to identify multiple physical blocks. This is useful for metadata where the device media supports multiple planes. In that case, a block with multiple planes can be managed as a single vblk, reducing the metadata required to one fourth. nvm_set_rqd_ppalist() takes care of expanding a ppa_list with vblks automatically. However, for some use-cases, where only a single physical block is required, the ppa_list should not be expanded. Therefore, add a vblk parameter to nvm_set_rqd_ppalist(), and only expand the ppa_list if vblk is set.
  Signed-off-by: Matias Bjørling <m@bjorling.me>
  Signed-off-by: Jens Axboe <axboe@fb.com>
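  Roughly, the call site now chooses whether plane expansion happens; the exact signature is an approximation based on the description:
      /* vblk set: each ppa is expanded to one entry per plane (virtual block) */
      ret = nvm_set_rqd_ppalist(dev, &rqd, ppas, nr_ppas, 1);

      /* vblk clear: the ppa list is used as-is, one physical block per entry */
      ret = nvm_set_rqd_ppalist(dev, &rqd, ppas, nr_ppas, 0);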
* lightnvm: remove struct factory_blks (Matias Bjørling, 2016-05-06, 1 file, -34/+28)
  Now that device ops->get_bb_tbl() no longer uses a callback, the struct factory_blks can be removed.
  Signed-off-by: Matias Bjørling <m@bjorling.me>
  Signed-off-by: Jens Axboe <axboe@fb.com>
* lightnvm: refactor device ops->get_bb_tbl() (Matias Bjørling, 2016-05-06, 5 files, -84/+111)
  The device ops->get_bb_tbl() takes a callback that allows the caller to use its own callback function to update its data structures in the returning function. This makes it difficult to send parameters to the callback, and is usually circumvented by small private structures that carry both the caller's state and any flags needed to fulfill the update. Refactor ops->get_bb_tbl() to fill a data buffer with the status of the blocks returned, and let the user call the callback function manually. That will provide the necessary flags and data structures and simplify the logic around ops->get_bb_tbl().
  Signed-off-by: Matias Bjørling <m@bjorling.me>
  Signed-off-by: Jens Axboe <axboe@fb.com>
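  A sketch of the resulting calling pattern; the buffer sizing, the op signature, and the caller's update function (my_update_bb_tbl) are assumptions for illustration:
      int nr_blks = dev->blks_per_lun * dev->plane_mode;  /* one status byte per plane block */
      u8 *blks = kmalloc(nr_blks, GFP_KERNEL);
      int ret;

      if (!blks)
              return -ENOMEM;

      ret = dev->ops->get_bb_tbl(dev, ppa, blks);          /* the op only fills the buffer */
      if (!ret)
              ret = my_update_bb_tbl(dev, ppa, blks, nr_blks, priv);  /* caller's own callback */
      kfree(blks);
      return ret;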
* lightnvm: introduce nvm_for_each_lun_ppa() macro (Matias Bjørling, 2016-05-06, 2 files, -38/+33)
  Users that wish to iterate all luns on a device must create a struct ppa_addr and separate iterators for channels and luns. To set the iterators, two loops are required, one to iterate channels and another to iterate luns. This leads to a decrease in readability. Introduce nvm_for_each_lun_ppa(), which implements the nested loop and sets the ppa, channel, and lun variables for each loop body, eliminating the boilerplate code.
  Signed-off-by: Matias Bjørling <m@bjorling.me>
  Signed-off-by: Jens Axboe <axboe@fb.com>
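  For reference, the boilerplate the macro hides looks roughly like this (field names follow the era's struct nvm_dev and struct ppa_addr and should be treated as assumptions):
      struct ppa_addr ppa;
      int ch, lun;

      for (ch = 0; ch < dev->nr_chnls; ch++) {
              for (lun = 0; lun < dev->luns_per_chnl; lun++) {
                      ppa.ppa = 0;
                      ppa.g.ch = ch;
                      ppa.g.lun = lun;
                      /* ... act on this (channel, lun) pair via ppa ... */
              }
      }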
* lightnvm: refactor dev->online_target to global nvm_targets (Simon A. F. Lund, 2016-05-06, 2 files, -23/+25)
  A target name must be unique. However, a per-device registration of targets is maintained on a dev->online_targets list, with a per-device search for targets upon registration. This results in a name collision when two targets with the same name are created on two different devices, since the per-device list is not shared.
  Signed-off-by: Simon A. F. Lund <slund@cnexlabs.com>
  Signed-off-by: Matias Bjørling <m@bjorling.me>
  Signed-off-by: Jens Axboe <axboe@fb.com>
* lightnvm: rename nvm_targets to nvm_tgt_type (Simon A. F. Lund, 2016-05-06, 3 files, -12/+12)
  The functions nvm_register_target(), nvm_unregister_target() and the associated list refer to a target type that is being registered by a target type module. Rename nvm_*_targets() to nvm_*_tgt_type(), so that the intention is clear. This enables target instances to use the _nvm_*_targets() naming.
  Signed-off-by: Simon A. F. Lund <slund@cnexlabs.com>
  Signed-off-by: Matias Bjørling <m@bjorling.me>
  Signed-off-by: Jens Axboe <axboe@fb.com>
* lightnvm: store rrpc->soffset in device sector size (Wenwei Tao, 2016-05-06, 1 file, -7/+10)
  Since we mainly use soffset in the device sector size, store this value in rrpc->soffset instead of the offset in 512-byte sectors. This eliminates the "(ilog2(dev->sec_size) - 9)" calculation on each I/O.
  Signed-off-by: Wenwei Tao <ww.tao0320@gmail.com>
  Updated patch description.
  Signed-off-by: Matias Bjørling <m@bjorling.me>
  Signed-off-by: Jens Axboe <axboe@fb.com>
* lightnvm: calculate rrpc total blocks and sectors up front (Wenwei Tao, 2016-05-06, 1 file, -4/+2)
  Calculate the rrpc total blocks and sectors up front, so that it makes sense to use them. For example, we use rrpc->nr_sects to calculate the rrpc area size, but that makes no sense if we don't initialize it up front, since it would be zero until we finish the rrpc luns init.
  Signed-off-by: Wenwei Tao <ww.tao0320@gmail.com>
  Signed-off-by: Matias Bjørling <m@bjorling.me>
  Signed-off-by: Jens Axboe <axboe@fb.com>
* lightnvm: avoid memory leak when lun_map kcalloc fails (Matias Bjørling, 2016-05-06, 1 file, -23/+30)
  A memory leak occurs if the lower page table is initialized and the following dev->lun_map fails on allocation. Rearrange the initialization of the lower page table to allow dev->lun_map to fail gracefully without a memory leak.
  Reviewed by: Johannes Thumshirn <jthumshirn@suse.de>
  Move kfree of dev->lun_map to nvm_free()
  Signed-off-by: Matias Bjørling <m@bjorling.me>
  Signed-off-by: Jens Axboe <axboe@fb.com>
* lightnvm: move block fold outside of get_bb_tbl() (Matias Bjørling, 2016-05-06, 5 files, -58/+73)
  The get block table command returns a list of blocks and planes with their associated state. Users, such as gennvm and sysblk, manage all planes as a single virtual block. It was therefore natural to fold the bad block list before it is returned. However, to allow users which manage on a per-plane block level to also use the interface, the get_bb_tbl interface is changed to not fold by default and instead let the caller fold if necessary.
  Reviewed by: Johannes Thumshirn <jthumshirn@suse.de>
  Signed-off-by: Matias Bjørling <m@bjorling.me>
  Signed-off-by: Jens Axboe <axboe@fb.com>
* lightnvm: add fpg_size and pfpg_size to struct nvm_dev (Matias Bjørling, 2016-05-06, 3 files, -10/+11)
  The flash page size (fpg) and the size across planes (pfpg) are convenient to know when allocating buffer sizes. This has previously been calculated in various places. Replace these calculations with the pre-calculated values.
  Reviewed by: Johannes Thumshirn <jthumshirn@suse.de>
  Signed-off-by: Matias Bjørling <m@bjorling.me>
  Signed-off-by: Jens Axboe <axboe@fb.com>
* lightnvm: implement nvm_submit_ppa_list (Matias Bjørling, 2016-05-06, 2 files, -19/+71)
  The nvm_submit_ppa function assumes that users manage all plane blocks as a single block. Extend the API with nvm_submit_ppa_list to allow the user to send its own ppa list. If the user submits more than a single PPA, the user must take care to allocate and free the corresponding ppa list.
  Reviewed by: Johannes Thumshirn <jthumshirn@suse.de>
  Signed-off-by: Matias Bjørling <m@bjorling.me>
  Signed-off-by: Jens Axboe <axboe@fb.com>
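  A hedged sketch of the two entry points side by side; the argument order is approximated from the description rather than quoted from the header:
      /* single virtual block: the core expands it across planes internally */
      ret = nvm_submit_ppa(dev, &ppa, 1, opcode, flags, buf, len);

      /* caller-built list of physical ppas: no folding, and the caller is
       * responsible for allocating and freeing ppa_list */
      ret = nvm_submit_ppa_list(dev, ppa_list, nr_ppas, opcode, flags, buf, len);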
* lightnvm: handle submit_io failure (Matias Bjørling, 2016-05-06, 1 file, -0/+5)
  The device ->submit_io() callback might fail to submit I/O to the device. In that case, the nvm_submit_ppa function should not wait for completion. Instead, return the ->submit_io() error.
  Reviewed by: Johannes Thumshirn <jthumshirn@suse.de>
  Signed-off-by: Matias Bjørling <m@bjorling.me>
  Signed-off-by: Jens Axboe <axboe@fb.com>
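  In essence, the change amounts to checking the submission return value before blocking; a minimal sketch:
      /* a completion (e.g. DECLARE_COMPLETION_ONSTACK(wait)) is set up earlier */
      ret = dev->ops->submit_io(dev, rqd);
      if (ret)
              return ret;     /* don't wait for a completion that will never arrive */

      wait_for_completion_io(&wait);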
* lightnvm: fix "warning: ‘ret’ may be used uninitialized"Jeff Mahoney2016-05-061-2/+2
| | | | | | | | | | | | | | | | This fixes the following warnings: drivers/lightnvm/sysblk.c:125:9: warning: ‘ret’ may be used uninitialized in this function drivers/lightnvm/sysblk.c:275:15: warning: ‘ret’ may be used uninitialized in this function In both cases, ret is only set from within a loop that may not be entered. Signed-off-by: Jeff Mahoney <jeffm@suse.com> Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de> Signed-off-by: Matias Bjørling <m@bjorling.me> Signed-off-by: Jens Axboe <axboe@fb.com>
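  The underlying pattern and the conventional fix, in generic terms (do_step() and run_steps() are stand-ins, not code from sysblk.c):
      static int run_steps(int nr)
      {
              int i, ret = 0;         /* initialize: the loop may not run at all */

              for (i = 0; i < nr; i++) {
                      ret = do_step(i);       /* stand-in for the real work */
                      if (ret)
                              break;
              }
              return ret;             /* without "= 0", gcc warns when nr == 0 */
      }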
* NVMe: Fix reset/remove race (Keith Busch, 2016-05-03, 1 file, -2/+1)
  This fixes a scenario where a device is present and being reset, but a request to unbind the driver occurs. A previous patch series addressing a device failure removal scenario flushed reset_work after controller disable to unblock reset_work waiting on a completion that wouldn't occur. This isn't safe as-is. The broken scenario can potentially be induced with:
    modprobe nvme && modprobe -r nvme
  To fix, the reset work is flushed immediately after setting the controller removing flag, and any subsequent reset will not proceed with controller initialization if the flag is set. The controller status must be polled while active, so the watchdog timer is also left active until the controller is disabled, to clean up requests that may be stuck during namespace removal.
  [Fixes: ff23a2a15a2117245b4599c1352343c8b8fb4c43]
  Signed-off-by: Keith Busch <keith.busch@intel.com>
  Reviewed-by: Christoph Hellwig <hch@lst.de>
  Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
  Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
  Signed-off-by: Jens Axboe <axboe@fb.com>
* nvme: fix nvme_ns_remove() deadlock (Ming Lin, 2016-05-02, 1 file, -2/+4)
  On receipt of a namespace attribute changed AER, we acquire the namespace mutex lock before proceeding to scan and validate the namespace list. In the case of a namespace detach/delete command, the nvme_ns_remove function deadlocks trying to acquire the already held lock. All callers of nvme_ns_remove(), except nvme_remove_namespaces(), already hold namespaces_mutex. So we can simply fix the deadlock by not acquiring the mutex in nvme_ns_remove() and acquiring it in nvme_remove_namespaces().
  Reported-by: Sunad Bhandary S <sunad.s@samsung.com>
  Signed-off-by: Ming Lin <ming.l@ssi.samsung.com>
  Reviewed-by: Christoph Hellwig <hch@lst.de>
  Reviewed-by: Keith Busch <keith.busch@intel.com>
  Reviewed-by: Sagi Grimerg <sagi@grimberg.me>
  Signed-off-by: Jens Axboe <axboe@fb.com>
* nvme: switch to RCU freeing the namespace (Ming Lin, 2016-05-02, 1 file, -10/+11)
  Switch to RCU freeing of the namespace structure so that nvme_start_queues, nvme_stop_queues and nvme_kill_queues can get away with only an RCU read-side critical section.
  Suggested-by: Christoph Hellwig <hch@lst.de>
  Signed-off-by: Ming Lin <ming.l@ssi.samsung.com>
  Reviewed-by: Christoph Hellwig <hch@lst.de>
  Reviewed-by: Keith Busch <keith.busch@intel.com>
  Reviewed-by: Sagi Grimerg <sagi@grimberg.me>
  Signed-off-by: Jens Axboe <axboe@fb.com>
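  A sketch of the read-side pattern this enables in the queue start/stop/kill helpers; names follow the nvme core of the time and should be treated as approximate:
      struct nvme_ns *ns;

      rcu_read_lock();
      list_for_each_entry_rcu(ns, &ctrl->namespaces, list)
              blk_mq_start_stopped_hw_queues(ns->queue, true);
      rcu_read_unlock();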
* NVMe: correct comment for offset enum of controller registers in nvme.h (Wang Sheng-Hui, 2016-05-02, 1 file, -2/+2)
  Section 3.1 of specification 1.2a gives the comments for the offsets of the controller registers. Some are mis-copied in the header file nvme.h. Correct them.
  Signed-off-by: Wang Sheng-Hui <shhuiw@foxmail.com>
  Reviewed-by: Christoph Hellwig <hch@lst.de>
  Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
  Signed-off-by: Jens Axboe <axboe@fb.com>
* nvme: add helper nvme_cleanup_cmd() (Ming Lin, 2016-05-02, 2 files, -2/+7)
  This hides command cleanup in nvme.h, and fabrics drivers will also use it.
  Signed-off-by: Ming Lin <ming.l@ssi.samsung.com>
  Reviewed-by: Christoph Hellwig <hch@lst.de>
  Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
  Signed-off-by: Jens Axboe <axboe@fb.com>
* nvme: move AER handling to common code (Christoph Hellwig, 2016-05-02, 3 files, -40/+66)
  The transport driver still needs to do the actual submission, but all the higher level code can be shared.
  Signed-off-by: Christoph Hellwig <hch@lst.de>
  Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
  Signed-off-by: Jens Axboe <axboe@fb.com>
* nvme: move namespace scanning to core (Christoph Hellwig, 2016-05-02, 3 files, -32/+34)
  Move the scan work item and surrounding code to the common code. For now we need a new finish_scan method to allow the PCI driver to set the irq affinity hints, but I have plans in the works to obsolete this as well. Note that this moves the namespace scanning from nvme_wq to the system workqueue, but as we don't rely on namespace scanning to finish from reset or I/O, this should be fine.
  Signed-off-by: Christoph Hellwig <hch@lst.de>
  Acked-by: Jon Derrick <jonathan.derrick@intel.com>
  Signed-off-by: Jens Axboe <axboe@fb.com>
* nvme: tighten up state check for namespace scanning (Christoph Hellwig, 2016-05-02, 1 file, -2/+4)
  We should only be scanning namespaces if the controller is live. Currently we call the function just before setting it live, so fix the code up to move the call to nvme_queue_scan to just below the state change.
  Signed-off-by: Christoph Hellwig <hch@lst.de>
  Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
  Acked-by: Jon Derrick <jonathan.derrick@intel.com>
  Signed-off-by: Jens Axboe <axboe@fb.com>
* nvme: introduce a controller state machine (Christoph Hellwig, 2016-05-02, 3 files, -13/+74)
  Replace the ad-hoc flags in the PCI driver with a state machine in the core code. Based on code from Sagi Grimberg for the Fabrics driver.
  Signed-off-by: Christoph Hellwig <hch@lst.de>
  Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
  Acked-by: Jon Derrick <jonathan.derrick@intel.com>
  Signed-off-by: Jens Axboe <axboe@fb.com>
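  A hedged sketch of the idea; the exact state names and the helper follow my reading of the core code of this period and are not quoted from the patch:
      enum nvme_ctrl_state {
              NVME_CTRL_NEW,
              NVME_CTRL_LIVE,
              NVME_CTRL_RESETTING,
              NVME_CTRL_DELETING,
              NVME_CTRL_DEAD,
      };

      /* transitions are validated in one place instead of via ad-hoc flags */
      if (!nvme_change_ctrl_state(&dev->ctrl, NVME_CTRL_RESETTING))
              return;         /* invalid transition, e.g. controller already being deleted */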
* nvme: remove the io_incapable method (Christoph Hellwig, 2016-05-02, 2 files, -20/+0)
  It's unused since "NVMe: Move error handling to failed reset handler".
  Signed-off-by: Christoph Hellwig <hch@lst.de>
  Acked-by: Jon Derrick <jonathan.derrick@intel.com>
  Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
  Signed-off-by: Jens Axboe <axboe@fb.com>
* NVMe: nvme_core_exit() should do cleanup in the reverse order as nvme_core_init does (Wang Sheng-Hui, 2016-05-02, 1 file, -1/+1)
  nvme_core_init does:
    1) register_blkdev
    2) __register_chrdev
    3) class_create
  nvme_core_exit should do cleanup in the reverse order.
  Signed-off-by: Wang Sheng-Hui <shhuiw@foxmail.com>
  Reviewed-by: Christoph Hellwig <hch@lst.de>
  Signed-off-by: Jens Axboe <axboe@fb.com>
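  The ordering rule, sketched with approximated identifiers (not copied from the patch):
      void nvme_core_exit(void)
      {
              class_destroy(nvme_class);                                      /* undo 3) */
              __unregister_chrdev(nvme_char_major, 0, NVME_MINORS, "nvme");   /* undo 2) */
              unregister_blkdev(nvme_major, "nvme");                          /* undo 1) */
      }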
* NVMe: Fix check_flush_dependency warning (Keith Busch, 2016-05-02, 1 file, -0/+1)
  If the controller fails and is degraded after a reset, we need to kill off all request queues before removing the inaccessible namespaces. This will prevent del_gendisk from syncing dirty data, which we can't do from a WQ_MEM_RECLAIM work queue.
  Signed-off-by: Keith Busch <keith.busch@intel.com>
  Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
  Reviewed-by: Christoph Hellwig <hch@lst.de>
  Signed-off-by: Jens Axboe <axboe@fb.com>
* NVMe: small typo in section BLK_DEV_NVME_SCSI of host/Kconfig (Wang Sheng-Hui, 2016-04-26, 1 file, -1/+1)
  "as well as " is mistyped as "as well a " in section "config BLK_DEV_NVME_SCSI".
  Signed-off-by: Wang Sheng-Hui <shhuiw@foxmail.com>
  Reviewed-by: Christoph Hellwig <hch@lst.de>
  Signed-off-by: Jens Axboe <axboe@fb.com>
* nvme: fix cntlid type (Christoph Hellwig, 2016-04-26, 1 file, -1/+1)
  Controller IDs in NVMe are unsigned 16-bit types. In the Fabrics driver we actually pass ctrl->id by reference, so we need it to have the correct type.
  Signed-off-by: Christoph Hellwig <hch@lst.de>
  Signed-off-by: Jens Axboe <axboe@fb.com>
* skd: remove broken discard support (Jeff Moyer, 2016-04-26, 1 file, -58/+1)
  Simply creating a file system on an skd device, followed by mount and fstrim, will result in errors in the logs and then a BUG(). Let's remove discard support from that driver. As far as I can tell, it hasn't worked right since it was merged. This patch also has the side-effect of cleaning up an unintentional shadowed declaration inside of skd_end_request. I tested to ensure that I can still do I/O to the device using xfstests ./check -g quick. I didn't do anything more extensive than that, though.
  Signed-off-by: Jeff Moyer <jmoyer@redhat.com>
  Signed-off-by: Jens Axboe <axboe@fb.com>
* block: kill off q->flush_flags (Jens Axboe, 2016-04-13, 8 files, -28/+32)
  Now that we have converted everything to the newer block write cache interface, kill off the queue flush_flags and queueable flush entries.
  Signed-off-by: Jens Axboe <axboe@fb.com>
* nvme: Avoid reset work on watchdog timer function during error recovery (Guilherme G. Piccoli, 2016-04-13, 1 file, -8/+30)
  This patch adds a check to the nvme_watchdog_timer() function to avoid the call to reset_work() when an error recovery process is ongoing on the controller. The check is made by looking at the pci_channel_offline() result. If we don't check for this in nvme_watchdog_timer(), the error recovery mechanism can't recover well, because reset_work() won't be able to do its job (since we're in the middle of an error) and so the controller is removed from the system before the error recovery mechanism can perform slot reset (which would allow the adapter to recover). In this patch we have also split the huge condition expression in nvme_watchdog_timer() by introducing an auxiliary function to help make the code more readable.
  Reviewed-by: Keith Busch <keith.busch@intel.com>
  Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
  Signed-off-by: Guilherme G. Piccoli <gpiccoli@linux.vnet.ibm.com>
  Signed-off-by: Jens Axboe <axboe@fb.com>
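  The guard described above might look roughly like this inside the new auxiliary function; the function name and the surrounding checks are assumptions for illustration:
      static bool nvme_should_reset(struct nvme_dev *dev, u32 csts)
      {
              /* skip the reset while PCI error recovery (AER/EEH) owns the device */
              if (pci_channel_offline(to_pci_dev(dev->dev)))
                      return false;

              /* ... remaining failure checks, e.g. the controller fatal status bit ... */
              return csts & NVME_CSTS_CFS;
      }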
* NVMe: silence warning about unused 'dev' (Jens Axboe, 2016-04-13, 1 file, -2/+2)
  Depending on options, we might not be using dev in nvme_cancel_io():
    drivers/nvme/host/pci.c: In function ‘nvme_cancel_io’:
    drivers/nvme/host/pci.c:970:19: warning: unused variable ‘dev’ [-Wunused-variable]
      struct nvme_dev *dev = data;
                       ^
  So get rid of it, and just cast for the dev_dbg_ratelimited() call.
  Fixes: 82b4552b91c4 ("nvme: Use blk-mq helper for IO termination")
  Signed-off-by: Jens Axboe <axboe@fb.com>
* block: kill blk_queue_flush() (Jens Axboe, 2016-04-13, 3 files, -23/+2)
  We don't have any drivers left using it, so kill it off. Update documentation to use the newer blk_queue_write_cache().
  Signed-off-by: Jens Axboe <axboe@fb.com>
  Reviewed-by: Christoph Hellwig <hch@lst.de>
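  The replacement interface takes the write-cache and FUA capabilities directly; a minimal usage sketch, assuming a struct request_queue *q:
      /* device has a volatile write cache and supports FUA writes */
      blk_queue_write_cache(q, true, true);

      /* write-through device: no flush machinery needed */
      blk_queue_write_cache(q, false, false);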
* um: switch to using blk_queue_write_cache() (Jens Axboe, 2016-04-13, 1 file, -1/+1)
  Signed-off-by: Jens Axboe <axboe@fb.com>
  Reviewed-by: Christoph Hellwig <hch@lst.de>
* mtd: switch to using blk_queue_write_cache() (Jens Axboe, 2016-04-13, 1 file, -1/+1)
  Signed-off-by: Jens Axboe <axboe@fb.com>
  Reviewed-by: Christoph Hellwig <hch@lst.de>
* mmc/block: switch to using blk_queue_write_cache() (Jens Axboe, 2016-04-13, 1 file, -1/+1)
  Signed-off-by: Jens Axboe <axboe@fb.com>
  Reviewed-by: Christoph Hellwig <hch@lst.de>
* md: update to using blk_queue_write_cache() (Jens Axboe, 2016-04-13, 1 file, -1/+1)
  Signed-off-by: Jens Axboe <axboe@fb.com>
  Reviewed-by: Christoph Hellwig <hch@lst.de>
* ide-disk: update to using blk_queue_write_cache() (Jens Axboe, 2016-04-13, 1 file, -3/+3)
  Signed-off-by: Jens Axboe <axboe@fb.com>
  Reviewed-by: Christoph Hellwig <hch@lst.de>
* xen-blkfront: switch to using blk_queue_write_cache() (Jens Axboe, 2016-04-13, 1 file, -1/+2)
  Signed-off-by: Jens Axboe <axboe@fb.com>
  Reviewed-by: Christoph Hellwig <hch@lst.de>
* dm: switch to using blk_queue_write_cache() (Jens Axboe, 2016-04-13, 1 file, -4/+4)
  Signed-off-by: Jens Axboe <axboe@fb.com>
  Reviewed-by: Christoph Hellwig <hch@lst.de>
* bcache: switch to using blk_queue_write_cache() (Jens Axboe, 2016-04-13, 1 file, -1/+1)
  Signed-off-by: Jens Axboe <axboe@fb.com>
  Reviewed-by: Christoph Hellwig <hch@lst.de>
* virtio_blk: switch to using blk_queue_write_cache() (Jens Axboe, 2016-04-13, 1 file, -5/+1)
  Signed-off-by: Jens Axboe <axboe@fb.com>
  Reviewed-by: Christoph Hellwig <hch@lst.de>