path: root/drivers/lightnvm
Each entry below: commit message (author, date, files changed, lines removed/added), most recent first.

* lightnvm: fix uninitialized pointer in nvm_remove_tgt() (Geert Uytterhoeven, 2019-06-21, 1 file, -1/+1)

With gcc 4.1:

    drivers/lightnvm/core.c: In function ‘nvm_remove_tgt’:
    drivers/lightnvm/core.c:510: warning: ‘t’ is used uninitialized in this function

Indeed, if no NVM devices have been registered, t will be an uninitialized pointer and may be dereferenced later. A call to nvm_remove_tgt() can be triggered from userspace by issuing the NVM_DEV_REMOVE ioctl on the lightnvm control device. Fix this by preinitializing t to NULL.

Fixes: 843f2edbdde085b4 ("lightnvm: do not remove instance under global lock")
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>

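As a rough illustration of the failure mode and the one-line fix described in this commit (a sketch only; the list, lookup helper and error handling below are stand-ins, not the actual drivers/lightnvm/core.c code):

```c
/* Illustrative sketch only, not the actual drivers/lightnvm/core.c code:
 * nvm_devices and nvm_find_target() are stand-ins.
 */
static int remove_tgt_sketch(const char *tgtname)
{
	struct nvm_target *t = NULL;	/* the fix: preinitialize, the loop may not run */
	struct nvm_dev *dev;

	list_for_each_entry(dev, &nvm_devices, devices) {
		t = nvm_find_target(dev, tgtname);
		if (t)
			break;
	}

	if (!t)		/* with no devices registered, t would otherwise be stack garbage */
		return -EINVAL;

	/* ... tear down the target ... */
	return 0;
}
```
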
* lightnvm: pblk: fix freeing of merged pages (Heiner Litz, 2019-06-21, 1 file, -8/+10)

bio_add_pc_page() may merge pages when a bio is padded due to a flush. Fix iteration over the bio to free the correct pages in case of a merge.

Signed-off-by: Heiner Litz <hlitz@ucsc.edu>
Reviewed-by: Javier González <javier@javigon.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>

* treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 410 (Thomas Gleixner, 2019-06-05, 1 file, -15/+1)

Based on 1 normalized pattern(s):

    this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details you should have received a copy of the gnu general public license along with this program see the file copying if not write to the free software foundation 675 mass ave cambridge ma 02139 usa

extracted by the scancode license scanner the SPDX license identifier

    GPL-2.0-only

has been chosen to replace the boilerplate/reference in 3 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190531190112.675111872@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

* treewide: Add SPDX license identifier - Makefile/Kconfig (Thomas Gleixner, 2019-05-21, 1 file, -0/+1)

Add SPDX license identifiers to all Make/Kconfig files which:

 - Have no license information of any form

These files fall under the project license, GPL v2 only. The resulting SPDX license identifier is:

    GPL-2.0-only

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

* lightnvm: pblk: use nvm_rq_to_ppa_list() (Igor Konopko, 2019-05-06, 2 files, -17/+22)

This patch replaces a few remaining usages of rqd->ppa_list[] with the existing nvm_rq_to_ppa_list() helper. This is needed for theoretical devices with ws_min/ws_opt equal to 1.

Signed-off-by: Igor Konopko <igor.j.konopko@intel.com>
Reviewed-by: Javier González <javier@javigon.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>

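The helper referred to above resolves whether a request carries a single PPA inline or a full list; roughly like the sketch below (the authoritative definition lives in include/linux/lightnvm.h, the name here is a stand-in):

```c
/* Sketch of the idea behind nvm_rq_to_ppa_list(); see include/linux/lightnvm.h
 * for the real definition. A single-sector request stores its address inline
 * in ppa_addr, so callers must not index ppa_list unconditionally.
 */
static inline struct ppa_addr *rq_to_ppa_list_sketch(struct nvm_rq *rqd)
{
	return (rqd->nr_ppas > 1) ? rqd->ppa_list : &rqd->ppa_addr;
}
```
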
* lightnvm: pblk: simplify partial read path (Igor Konopko, 2019-05-06, 4 files, -281/+100)

This patch changes the approach to handling the partial read path. In the old approach, merging of data from the round buffer and the drive was fully done by the driver. This had some disadvantages - the code was complex and relied on bio internals, so it was hard to maintain and strongly dependent on bio changes.

In the new approach most of the handling is done by block layer functions such as bio_split(), bio_chain() and generic_make_request(), and it is generally less complex and easier to maintain. Below are some more details of the new approach.

When a read bio arrives, it is cloned for pblk internal purposes. All the L2P mapping, which includes copying data from the round buffer to the bio and thus the bio_advance() calls, is done on the cloned bio, so the original bio is untouched. If we find that we have a partial read case, we still have the original bio untouched, so we can split it and continue to process only the first part of it in the current context, while the rest is issued as a separate bio request which is passed to generic_make_request() for further processing.

Signed-off-by: Igor Konopko <igor.j.konopko@intel.com>
Reviewed-by: Heiner Litz <hlitz@ucsc.edu>
Reviewed-by: Javier González <javier@javigon.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>

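The block-layer pattern the new path leans on looks roughly like this (a sketch under the then-current kernel API, not the actual pblk code; the function name and split_sectors parameter are illustrative):

```c
/* Sketch of the split-and-requeue pattern (illustrative, not the pblk code).
 * 'split_sectors' is the portion that can be handled in the current context.
 */
static void read_with_split_sketch(struct bio *bio, unsigned int split_sectors,
				   struct bio_set *bs)
{
	if (split_sectors < bio_sectors(bio)) {
		struct bio *split;

		split = bio_split(bio, split_sectors, GFP_NOIO, bs);
		bio_chain(split, bio);		/* 'bio' completes only after 'split' does */
		generic_make_request(bio);	/* remainder re-enters the block layer */
		bio = split;			/* keep only the first part here */
	}

	/* ... process 'bio' (the first part) in the current context ... */
}
```
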
* lightnvm: do not remove instance under global lock (Igor Konopko, 2019-05-06, 1 file, -18/+16)

Currently all the target instances are removed under the global nvm_lock. This was needed to ensure that the nvm_dev struct would not be freed by a hot unplug event during target removal. However, the current implementation has some drawbacks: since the same lock is used when a new nvme subsystem is registered, we can have a situation where, due to a long target removal process on drive A, registration (and listing in the OS) of drive B takes a lot of time, since it has to wait for that lock.

Now that we have a kref which ensures that nvm_dev will not be freed in the meantime, we can easily get rid of this lock for the time when we are removing nvm targets.

Signed-off-by: Igor Konopko <igor.j.konopko@intel.com>
Reviewed-by: Javier González <javier@javigon.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>

* lightnvm: track inflight target creations (Igor Konopko, 2019-05-06, 1 file, -10/+31)

When the creation process is still in progress, the target is not yet on the targets list. This leaves a window in which the whole lightnvm subsystem can be removed by a call to nvm_unregister() in the meantime, ultimately causing a kernel panic inside the target init function.

This patch changes the behaviour by adding a kref variable which tracks all the users of the nvm_dev structure. When nvm_dev is allocated, the kref value is set to 1. The value is then increased before every target creation and decreased after target removal. The extra reference is decreased when the nvm subsystem is unregistered.

Signed-off-by: Igor Konopko <igor.j.konopko@intel.com>
Reviewed-by: Javier González <javier@javigon.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>

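The reference-counting scheme described here follows the usual kref pattern; a minimal sketch with illustrative struct and function names (not the actual core.c code):

```c
#include <linux/kref.h>
#include <linux/slab.h>

/* Minimal sketch of the lifetime scheme; names are illustrative. */
struct dev_sketch {
	struct kref ref;
	/* ... geometry, target list, ... */
};

static void dev_release(struct kref *ref)
{
	struct dev_sketch *dev = container_of(ref, struct dev_sketch, ref);

	kfree(dev);
}

static void dev_register(struct dev_sketch *dev)
{
	kref_init(&dev->ref);			/* the subsystem holds one reference */
}

static void target_create_start(struct dev_sketch *dev)
{
	kref_get(&dev->ref);			/* pin dev while creation is in flight */
}

static void target_removed(struct dev_sketch *dev)
{
	kref_put(&dev->ref, dev_release);	/* drop the per-target reference */
}

static void dev_unregister(struct dev_sketch *dev)
{
	kref_put(&dev->ref, dev_release);	/* drop the subsystem's own reference */
}
```
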
* lightnvm: pblk: recover only written metadata (Igor Konopko, 2019-05-06, 1 file, -3/+5)

This patch ensures that smeta was fully written before even trying to read it, based on the chunk table state and the write pointer.

Signed-off-by: Igor Konopko <igor.j.konopko@intel.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>

* lightnvm: pblk: IO path reorganization (Igor Konopko, 2019-05-06, 4 files, -61/+48)

This patch prepares the read path for the new approach to partial read handling, which is simpler compared with the previous one.

The most important change is to move the handling of completed and failed bios from pblk_make_rq() into the particular read and write functions. This is needed since, after the partial read path changes, the completed/failed bio will sometimes be different from the original one, so we can no longer do this in pblk_make_rq().

The other changes are small read path refactorings meant to reduce the size of the following patch with the partial read changes. Generally the goal of this patch is not to change the functionality, but just to prepare the code for the following changes.

Signed-off-by: Igor Konopko <igor.j.konopko@intel.com>
Reviewed-by: Javier González <javier@javigon.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>

* lightnvm: pblk: GC error handling (Igor Konopko, 2019-05-06, 4 files, -4/+12)

Currently, when there is an IO error (or similar) on the GC read path, pblk still moves the line that was under GC to the free state. Such behaviour can lead to a silent data mismatch issue.

With this patch, a line on which IO errors occurred during GC is put back to the closed state (instead of the free state as before), and the L2P mapping for the failed sectors is not updated. Then, in case of any user IOs to such failed sectors, pblk is able to return at least a real IO error instead of stale data as it does right now.

Signed-off-by: Igor Konopko <igor.j.konopko@intel.com>
Reviewed-by: Javier González <javier@javigon.com>
Reviewed-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>

* lightnvm: pblk: remove internal IO timeout (Igor Konopko, 2019-05-06, 2 files, -8/+1)

Currently, during pblk padding, an internal IO timeout is used which is smaller than the default NVMe timeout. This can lead to various use-after-free issues. Since in case of any IO timeouts the NVMe and block layers will handle the timeout by themselves and report it back to us, there is no need to keep this internal timeout in pblk.

Signed-off-by: Igor Konopko <igor.j.konopko@intel.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>

* lightnvm: pblk: wait for inflight IOs in recovery (Igor Konopko, 2019-05-06, 1 file, -13/+12)

This patch changes the behaviour of recovery padding in order to support the case where some IOs were already submitted to the drive and the subsequent ones are not submitted due to a returned error. Currently, in case of errors we simply exit the pad function without waiting for inflight IOs, which leads to a panic on inflight IO completion.

After this change we always wait for all the inflight IOs before exiting the function.

Signed-off-by: Igor Konopko <igor.j.konopko@intel.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>

* lightnvm: pblk: propagate errors when reading meta (Igor Konopko, 2019-05-06, 2 files, -3/+8)

Read errors are not correctly propagated; they are cleared before returning control to the IO submitter. Change the behaviour so that all read errors, except the high ECC read warning status, are returned appropriately.

Signed-off-by: Igor Konopko <igor.j.konopko@intel.com>
Reviewed-by: Javier González <javier@javigon.com>
Reviewed-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>

* lightnvm: pblk: fix update line wp in OOB recovery (Igor Konopko, 2019-05-06, 1 file, -1/+15)

In case of OOB recovery, we can hit a scenario where all the data in a line was written and some part of emeta was written too. In such a case the pblk_update_line_wp() function will call pblk_alloc_page(), which will cause left_msecs to be set to a value below zero (since this field does not track the emeta region) and thus will lead to multiple kernel warnings. This patch fixes that issue.

Signed-off-by: Igor Konopko <igor.j.konopko@intel.com>
Reviewed-by: Javier González <javier@javigon.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>

* lightnvm: pblk: kick writer on write recovery path (Igor Konopko, 2019-05-06, 1 file, -0/+1)

On the write recovery path there is a chance that the writer thread is not active; kick it immediately instead of waiting for the timer.

Signed-off-by: Igor Konopko <igor.j.konopko@intel.com>
Reviewed-by: Javier González <javier@javigon.com>
Reviewed-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>

* lightnvm: pblk: fix lock order in pblk_rb_tear_down_check (Igor Konopko, 2019-05-06, 1 file, -1/+1)

In pblk_rb_tear_down_check() the spinlock functions are not called in the proper order.

Fixes: a4bd217 ("lightnvm: physical block device (pblk) target")
Signed-off-by: Igor Konopko <igor.j.konopko@intel.com>
Reviewed-by: Javier González <javier@javigon.com>
Reviewed-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>

* lightnvm: prevent race condition on pblk remove (Marcin Dziegielewski, 2019-05-06, 1 file, -0/+2)

When we trigger an nvm target remove during device hot unplug, there is a probability of hitting a general protection fault. This is caused by use of an nvm_dev that may be freed from another (hot unplug) thread (in the nvm_unregister function).

Introduce a lock in the nvme_ioctl_dev_remove function to prevent this situation.

Signed-off-by: Marcin Dziegielewski <marcin.dziegielewski@intel.com>
Reviewed-by: Javier González <javier@javigon.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>

* lightnvm: pblk: set proper line as data_line after gc (Marcin Dziegielewski, 2019-05-06, 1 file, -0/+1)

In the current implementation of L2P recovery, when we come back after GC with an open line, we do not set the current data line properly (we set the last line from the device instead of the last line ordered by seq_nr), which in consequence leads to kernel panic and data corruption.

Signed-off-by: Marcin Dziegielewski <marcin.dziegielewski@intel.com>
Reviewed-by: Javier González <javier@javigon.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>

* lightnvm: pblk: fix bio leak when bio is split (Chansol Kim, 2019-05-06, 1 file, -28/+19)

For large IOs, where blk_queue_split() needs to be called inside pblk_rw_io(), a bio leak results, as bio_endio() is not called on the newly allocated bio. One way to observe this is to mount an ext4 filesystem on the target and issue 1MB IOs with dd, e.g., dd bs=1MB if=/dev/null of=/mount/myvolume. kmemleak reports:

    unreferenced object 0xffff88803d7d0100 (size 256):
      comm "kworker/u16:1", pid 68, jiffies 4294899333 (age 284.120s)
      hex dump (first 32 bytes):
        00 00 00 00 00 00 00 00 00 60 e8 31 81 88 ff ff  .........`.1....
        01 40 00 00 06 06 00 00 00 00 00 00 05 00 00 00  .@..............
      backtrace:
        [<000000001f5aa04f>] kmem_cache_alloc+0x204/0x3c0
        [<0000000040945aab>] mempool_alloc_slab+0x1d/0x30
        [<00000000b4959ab4>] mempool_alloc+0x83/0x220
        [<00000000646bad9b>] bio_alloc_bioset+0x229/0x320
        [<000000009264b251>] bio_clone_fast+0x26/0xc0
        [<0000000008250252>] bio_split+0x41/0x110
        [<00000000e365cad0>] blk_queue_split+0x349/0x930
        [<00000000eb5426bc>] pblk_make_rq+0x1b5/0x1f0
        [<00000000eea09cec>] generic_make_request+0x2f9/0x690
        [<00000000ae6acede>] submit_bio+0x12e/0x1f0
        [<00000000f9b8b82a>] ext4_io_submit+0x64/0x80
        [<000000009e4f817d>] ext4_bio_write_page+0x32e/0x890
        [<00000000cbd0d106>] mpage_submit_page+0x65/0xc0
        [<000000000eec7359>] mpage_map_and_submit_buffers+0x171/0x330
        [<000000009a7afcb6>] ext4_writepages+0xd5e/0x1650
        [<000000004476b096>] do_writepages+0x39/0xc0

In case there is a need for a split, blk_queue_split() returns the newly allocated bio to the caller by changing the value of the pointer passed as a reference, while the original is passed to generic_make_request(). Although pblk_rw_io()'s local bio* has changed and is passed to pblk_submit_read() and pblk_write_to_cache(), work is done on this new bio*, and pblk_rw_io() returns NVM_IO_DONE, pblk_make_rq() calls bio_endio() on the old bio*, because the bio pointer was passed by value to pblk_rw_io().

pblk_rw_io() is unfolded into pblk_make_rq() so that there is no copying of the bio* and bio_endio() is called on the correct bio*.

Signed-off-by: Chansol Kim <chansol.kim@samsung.com>
Reviewed-by: Javier González <javier@javigon.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>

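The core of the issue is that blk_queue_split() hands back a possibly different bio through its pointer argument; a sketch of the broken and fixed shapes (illustrative function bodies, not the exact pblk code):

```c
/* Illustrative sketch of the pitfall, not the exact pblk code. */

/* Broken shape: the helper receives the bio pointer by value, so the caller's
 * copy still points at the pre-split bio when it later calls bio_endio().
 */
static int rw_io_sketch(struct request_queue *q, struct bio *bio)
{
	blk_queue_split(q, &bio);	/* may repoint 'bio' at a newly allocated split */
	/* ... submit 'bio' ... */
	return NVM_IO_DONE;		/* caller then ends the *old* bio: leak */
}

/* Fixed shape: split and complete in the same function, so bio_endio() sees
 * the same pointer that blk_queue_split() updated.
 */
static blk_qc_t make_rq_sketch(struct request_queue *q, struct bio *bio)
{
	blk_queue_split(q, &bio);
	/* ... read/write paths consume 'bio' and complete it themselves ... */
	return BLK_QC_T_NONE;
}
```
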
* lightnvm: Inherit mdts from the parent nvme device (Igor Konopko, 2019-05-06, 1 file, -2/+7)

The current lightnvm and pblk implementation does not care about the NVMe max data transfer size, which can be smaller than 64 * 4K = 256K. There are existing NVMe controllers whose max data transfer size is lower than 256K (for example 128K, which happens for existing NVMe controllers that are NVMe spec compliant). Such controllers are not able to handle a command which contains 64 PPAs, since the size of the DMAed buffer will be above the capabilities of such a controller.

Signed-off-by: Igor Konopko <igor.j.konopko@intel.com>
Reviewed-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Reviewed-by: Javier González <javier@javigon.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>

* lightnvm: pblk: set proper read status in bio (Igor Konopko, 2019-05-06, 1 file, -6/+5)

Currently, in case of read errors, bi_status is not set properly, which leads to returning improper data to the layers above. This patch fixes that by setting the proper status in case of read errors.

Also remove an unnecessary warn_once(), which does not make sense in that place, since the user bio is not used for interaction with the drive and thus bi_status will not be set here.

Signed-off-by: Igor Konopko <igor.j.konopko@intel.com>
Reviewed-by: Javier González <javier@javigon.com>
Reviewed-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>

* lightnvm: pblk: cleanly fail when there is not enough memory (Igor Konopko, 2019-05-06, 1 file, -2/+7)

The L2P table can be huge in many cases, since it typically requires 1GB of DRAM per 1TB of drive. When there is not enough memory available, the OOM killer turns on and kills random processes, which can be very annoying for users.

This patch changes the flags for the L2P table allocation in order to handle this situation in a more user-friendly way. GFP_KERNEL and __GFP_HIGHMEM are the default flags used by parameterless vmalloc() calls, so they are kept in this patch. Additionally, the __GFP_NOWARN flag is added in order to hide the very long dmesg warning in case of allocation failure. The most important flag introduced in this patch is __GFP_RETRY_MAYFAIL, which causes the allocator to try to use free memory and, if none is available, to drop caches, but never to run the OOM killer.

Signed-off-by: Igor Konopko <igor.j.konopko@intel.com>
Reviewed-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Reviewed-by: Javier González <javier@javigon.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>

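In code, the allocation described above reduces to something like the sketch below (function and field names are only loosely aligned with pblk; the three-argument __vmalloc() form matches kernels of that era):

```c
/* Sketch of the L2P table allocation with the adjusted flags; the exact call
 * site in pblk differs. __GFP_RETRY_MAYFAIL retries hard and drops caches but
 * never triggers the OOM killer; __GFP_NOWARN suppresses the failure splat.
 */
static int l2p_alloc_sketch(struct pblk *pblk, size_t map_size)
{
	pblk->trans_map = __vmalloc(map_size,
				    GFP_KERNEL | __GFP_HIGHMEM | __GFP_NOWARN
				    | __GFP_RETRY_MAYFAIL,
				    PAGE_KERNEL);
	if (!pblk->trans_map)
		return -ENOMEM;		/* fail target creation cleanly instead */

	return 0;
}
```
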
* lightnvm: pblk: ensure that erase is chunk aligned (Igor Konopko, 2019-05-06, 1 file, -0/+1)

The sector bits in the erase command may be uninitialized, causing the erase LBA to be unaligned to the chunk size. This is an unexpected situation, since an erase shall always be chunk aligned based on the OCSSD 2.0 specification.

Signed-off-by: Igor Konopko <igor.j.konopko@intel.com>
Reviewed-by: Javier González <javier@javigon.com>
Reviewed-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>

* lightnvm: pblk: fix race during put line (Igor Konopko, 2019-05-06, 1 file, -6/+10)

In the pblk_put_line_back function, a race condition with __pblk_map_invalidate can make a line not part of any list. Resetting gc_list to null fixes the issue.

Fixes: a4bd217 ("lightnvm: physical block device (pblk) target")
Signed-off-by: Igor Konopko <igor.j.konopko@intel.com>
Reviewed-by: Javier González <javier@javigon.com>
Reviewed-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>

* lightnvm: pblk: gracefully handle GC vmalloc fail (Igor Konopko, 2019-05-06, 1 file, -10/+9)

Currently, when we fail on rq data allocation in GC, we skip moving the active data and move the line straight to its free state, losing user data in the process.

Move the data allocation to an earlier phase of GC, where we can still fail gracefully by moving the line back to the closed state.

Signed-off-by: Igor Konopko <igor.j.konopko@intel.com>
Reviewed-by: Javier González <javier@javigon.com>
Reviewed-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>

* lightnvm: pblk: remove unused smeta_ssec field (Igor Konopko, 2019-05-06, 2 files, -2/+0)

The smeta_ssec field in pblk_line is never used after it was replaced by the pblk_line_smeta_start() function.

Signed-off-by: Igor Konopko <igor.j.konopko@intel.com>
Reviewed-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Reviewed-by: Javier González <javier@javigon.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>

* lightnvm: pblk: reduce L2P memory footprint (Igor Konopko, 2019-05-06, 5 files, -11/+9)

Currently the L2P map size is calculated based on the total number of available sectors, which is redundant, since it contains mappings for overprovisioning as well (11% by default).

Change this size to the real capacity and thus reduce the memory footprint significantly - with the default op value it is approx. 110MB of DRAM less for every 1TB of media.

Signed-off-by: Igor Konopko <igor.j.konopko@intel.com>
Reviewed-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Reviewed-by: Javier González <javier@javigon.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>

* lightnvm: pblk: rollback on error during gc read (Igor Konopko, 2019-05-06, 1 file, -1/+6)

A line is left unassigned to the block lists in case pblk_gc_line() returns an error. Move the line back to the appropriate list, so it can then be picked up by the garbage collector.

Signed-off-by: Igor Konopko <igor.j.konopko@intel.com>
Reviewed-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Reviewed-by: Javier González <javier@javigon.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>

* lightnvm: pblk: line reference fix in GC (Igor Konopko, 2019-05-06, 1 file, -1/+4)

Fixes the GC error case when moving a line back to the closed state while releasing additional references.

Signed-off-by: Igor Konopko <igor.j.konopko@intel.com>
Reviewed-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Reviewed-by: Javier González <javier@javigon.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>

* lightnvm: pblk: fix crash in pblk_end_partial_read due to multipage bvecs (Hans Holmberg, 2019-04-10, 1 file, -22/+28)

The introduction of multipage bio vectors broke pblk's partial read logic due to it not being prepared for multipage bio vectors. Use bio vector iterators instead of direct bio vector indexing.

Fixes: 07173c3ec276 ("block: enable multipage bvecs")
Reported-by: Klaus Jensen <klaus.jensen@cnexlabs.com>
Signed-off-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Updated description.
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>

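For context, the difference between the two ways of walking a bio, in a self-contained sketch (illustrative, not the pblk code):

```c
#include <linux/bio.h>

/* Illustrative sketch, not the pblk code: once a single bio_vec may span
 * several pages, indexing bio->bi_io_vec[i] and treating each entry as one
 * page is no longer valid. The iterator hands out single-page segments.
 */
static void walk_bio_segments(struct bio *bio)
{
	struct bio_vec bv;
	struct bvec_iter iter;

	bio_for_each_segment(bv, bio, iter) {
		/* bv.bv_page, bv.bv_offset and bv.bv_len describe one
		 * page-sized segment, regardless of how pages were merged
		 * into the underlying bio_vec.
		 */
	}
}
```
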
* pblk: fix max_io calculation (Javier González, 2019-03-07, 1 file, -1/+6)

When calculating the maximum I/O size allowed into the buffer, consider the write size (ws_opt) used by the write thread in order to cover the case in which, due to flushes, the mem and subm pointers are misaligned by (ws_opt - 1). This case currently translates into a stall when an I/O of the largest possible size is submitted.

Fixes: f9f9d1ae2c66 ("lightnvm: pblk: prevent stall due to wb threshold")
Signed-off-by: Javier González <javier@javigon.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>

* lightnvm: pblk: fix race condition on GC (Heiner Litz, 2019-02-11, 6 files, -7/+18)

This patch fixes a race condition where a write is mapped to the last sectors of a line. The write is synced to the device but the L2P is not updated yet. When the line is garbage collected before the L2P update is performed, the sectors are ignored by the GC logic and the line is freed before all sectors are moved. When the L2P is finally updated, it contains a mapping to a freed line, and subsequent reads of the corresponding LBAs fail.

This patch introduces a per-line counter specifying the number of sectors that are synced to the device but have not yet been updated in the L2P. Lines with a counter greater than zero will not be selected for GC.

Signed-off-by: Heiner Litz <hlitz@ucsc.edu>
Reviewed-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Reviewed-by: Javier González <javier@javigon.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>

* lightnvm: pblk: prevent stall due to wb threshold (Javier González, 2019-02-11, 3 files, -10/+22)

In order to respect mw_cuinits, pblk's write buffer maintains a backpointer to protect data not yet persisted; when writing to the write buffer, this backpointer defines a threshold that pblk's rate-limiter enforces.

On small PU configurations, the following scenarios might take place: (i) the threshold is larger than the write buffer and (ii) the threshold is smaller than the write buffer, but larger than the maximum allowed split bio - 256KB at this moment (note that writes are not always split - we only do this when the size of the buffer is smaller than the buffer). In both cases, pblk's rate-limiter prevents the I/O from being written to the buffer, thus stalling.

This patch fixes the original backpointer implementation by considering the threshold both on buffer creation and on the rate-limiter path, when bio_split is triggered (case (ii) above).

Fixes: 766c8ceb16fc ("lightnvm: pblk: guarantee that backpointer is respected on writer stall")
Signed-off-by: Javier González <javier@javigon.com>
Reviewed-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>

* lightnvm: pblk: extend line wp balance check (Hans Holmberg, 2019-02-11, 1 file, -18/+38)

pblk stripes writes of the minimal write size across all non-offline chunks in a line, which means that the maximum write pointer delta should not exceed the minimal write size.

Extend the line write pointer balance check to cover this case, and ignore the offline chunk wps. This will render us a warning during recovery if something unexpected has happened to the chunk write pointers (i.e. powerloss, a spurious chunk reset, ..).

Reported-by: Zhoujie Wu <zjwu@marvell.com>
Tested-by: Zhoujie Wu <zjwu@marvell.com>
Reviewed-by: Javier González <javier@javigon.com>
Signed-off-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>

* lightnvm: pblk: fix TRACE_INCLUDE_PATH (Masahiro Yamada, 2019-02-11, 1 file, -1/+1)

As the comment block in include/trace/define_trace.h says, TRACE_INCLUDE_PATH should be a relative path to define_trace.h.

../../drivers/lightnvm is the correct relative path.

../../../drivers/lightnvm is working by coincidence because the top Makefile adds -I$(srctree)/arch/$(SRCARCH)/include as a header search path, but we should not rely on it.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Reviewed-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>

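For reference, the tail of a trace-events header following this convention looks roughly like the sketch below (a generic sketch of the kernel convention; the pblk header's actual content may differ slightly):

```c
/* Tail of a trace-events header (generic sketch of the kernel convention).
 * The path is interpreted relative to include/trace/define_trace.h, so no
 * extra -I search path is required.
 */
#undef TRACE_INCLUDE_PATH
#define TRACE_INCLUDE_PATH ../../drivers/lightnvm
#undef TRACE_INCLUDE_FILE
#define TRACE_INCLUDE_FILE pblk-trace

/* This must be outside protection */
#include <trace/define_trace.h>
```
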
* lightnvm: pblk: Switch to use new generic UUID API (Andy Shevchenko, 2019-02-11, 4 files, -15/+10)

There are new types and helpers that are supposed to be used in new code. As a preparation to get rid of legacy types and API functions, do the conversion here.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Reviewed-by: Javier González <javier@javigon.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>

* lightnvm: Use u64 instead of __le64 for CPU visible side (Andy Shevchenko, 2019-02-11, 1 file, -2/+2)

Sparse complains about using strict data types:

    drivers/lightnvm/pblk-read.c:254:43: warning: incorrect type in assignment (different base types)
    drivers/lightnvm/pblk-read.c:254:43:    expected restricted __le64 <noident>
    drivers/lightnvm/pblk-read.c:254:43:    got unsigned long long [unsigned] [usertype] <noident>
    drivers/lightnvm/pblk-read.c:255:29: warning: cast from restricted __le64
    drivers/lightnvm/pblk-read.c:268:29: warning: cast from restricted __le64
    drivers/lightnvm/pblk-read.c:328:41: warning: incorrect type in assignment (different base types)
    drivers/lightnvm/pblk-read.c:328:41:    expected restricted __le64 <noident>
    drivers/lightnvm/pblk-read.c:328:41:    got unsigned long long [unsigned] [usertype] <noident>

In the code it seems explicit that the lba_list_mem and lba_list_media members of struct pblk_pr_ctx are used on the CPU side, which means they should not be of strict types. Change the types of the lba_list_mem and lba_list_media members to be u64.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Reviewed-by: Javier González <javier@javigon.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>

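The rule at play, in a self-contained sketch: __le64 marks data in the on-media (little-endian) layout and must pass through the conversion helpers, while CPU-side scratch values stay plain u64 (illustrative helpers, not the pblk code):

```c
#include <linux/types.h>
#include <asm/byteorder.h>

/* Sketch: keep CPU-side values as u64 and convert only at the boundary
 * where data crosses into or out of the on-media little-endian format.
 */
static void store_lba(__le64 *on_media, u64 lba_cpu)
{
	*on_media = cpu_to_le64(lba_cpu);	/* convert on the way out */
}

static u64 load_lba(const __le64 *on_media)
{
	return le64_to_cpu(*on_media);		/* convert on the way in */
}
```
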
* lightnvm: pblk: use vfree to free metadata on error path (Hans Holmberg, 2019-02-11, 1 file, -1/+1)

As chunk metadata is allocated using vmalloc, we need to free it using vfree.

Fixes: 090ee26fd512 ("lightnvm: use internal allocation for chunk log page")
Signed-off-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>

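The rule in miniature (an illustrative, self-contained sketch rather than the patched pblk code):

```c
#include <linux/vmalloc.h>

/* Illustrative sketch: memory obtained with vmalloc() must be released with
 * vfree(); kfree() only pairs with kmalloc()-family allocations.
 */
static int chunk_meta_roundtrip(size_t len)
{
	void *meta = vmalloc(len);

	if (!meta)
		return -ENOMEM;

	/* ... fill and consume the chunk metadata ... */

	vfree(meta);		/* not kfree(meta) */
	return 0;
}
```
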
* lightnvm: pblk: stop taking the free lock in pblk_lines_free (Hans Holmberg, 2019-02-11, 1 file, -2/+0)

pblk_line_meta_free might sleep (it can end up calling vfree, depending on how we allocate lba lists), and this can lead to a BUG() if we wake up on a different cpu and release the lock.

As there is no point in grabbing the free lock when pblk has shut down, remove the lock.

Signed-off-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>

* lightnvm: pblk: fix use-after-free bug (Gustavo A. R. Silva, 2018-12-22, 1 file, -1/+0)

Remove one of the calls to function bio_put(), so *bio* is only freed once.

Notice that bio is being dereferenced in bio_put(), hence leading to a use-after-free bug once *bio* has already been freed.

Addresses-Coverity-ID: 1475952 ("Use after free")
Fixes: 55d8ec35398e ("lightnvm: pblk: support packed metadata")
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>

* lightnvm: pblk: do not overwrite ppa list with meta list (Igor Konopko, 2018-12-11, 1 file, -2/+5)

When using pblk with 0-sized metadata, both the ppa list and the meta list point to the same memory, since pblk_dma_meta_size() returns 0 in that case.

This patch fixes that issue by ensuring that pblk_dma_meta_size() always returns space equal to sizeof(struct pblk_sec_meta), so that the ppa list and meta list point to different memory addresses. Even though in that case the drive does not really care about the meta_list pointer, this is the easiest way to fix the issue without introducing changes in many places in the code just for the 0-sized metadata case.

The same approach also needs to be taken for pblk_get_sec_meta(), since we likewise cannot point to the same memory address in the meta buffer when we are using it for the pblk recovery process.

Reported-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Tested-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Signed-off-by: Igor Konopko <igor.j.konopko@intel.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>

* lightnvm: pblk: support packed metadata (Igor Konopko, 2018-12-11, 9 files, -20/+122)

pblk performs recovery of open lines by storing the LBA in the per-LBA metadata field. Recovery therefore only works for drives that have this field.

This patch adds support for packed metadata, which stores the L2P mapping for open lines in the last sector of every write unit and enables drives without per-IO metadata to recover open lines.

After this patch, drives with an OOB size of <16B will use packed metadata, while a metadata size larger than 16B will continue to use the per-IO metadata of the device.

Reviewed-by: Javier González <javier@cnexlabs.com>
Signed-off-by: Igor Konopko <igor.j.konopko@intel.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>

* lightnvm: disable interleaved metadata (Igor Konopko, 2018-12-11, 1 file, -0/+6)

Currently pblk only checks the size of the I/O metadata and does not take into account whether this metadata is in a separate buffer or interleaved within a single metadata buffer.

In reality only the first scenario is supported; the second mode breaks pblk functionality during any IO operation. This patch prevents pblk from being instantiated in case the device only supports interleaved metadata.

Reviewed-by: Javier González <javier@cnexlabs.com>
Signed-off-by: Igor Konopko <igor.j.konopko@intel.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>

* lightnvm: dynamic DMA pool entry size (Igor Konopko, 2018-12-11, 5 files, -10/+19)

Currently lightnvm and pblk use a single DMA pool, for which the entry size is always equal to PAGE_SIZE. The contents of each entry allocated from the DMA pool consist of a PPA list (8 bytes * 64), leaving 56 bytes * 64 of space for metadata. Since the metadata field can be bigger, such as 128 bytes, the static size does not cover this use-case.

This patch adds support for I/O metadata above 56 bytes by changing the DMA pool size based on the device meta size, and allows pblk to use OOB metadata >= 16B.

Reviewed-by: Javier González <javier@cnexlabs.com>
Signed-off-by: Igor Konopko <igor.j.konopko@intel.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>

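Sizing the pool from the device geometry boils down to something like this (a sketch only: the constant, names and layout are illustrative, while dma_pool_create() follows the standard kernel API):

```c
#include <linux/types.h>
#include <linux/dmapool.h>

#define MAX_PPAS_PER_RQ 64	/* illustrative: max sectors per request */

/* Sketch only: size each DMA pool entry from the device's per-sector OOB
 * metadata size instead of hard-coding PAGE_SIZE.
 */
static struct dma_pool *create_ppa_pool(struct device *dev,
					unsigned int sec_meta_sz)
{
	size_t entry_sz = MAX_PPAS_PER_RQ * (sizeof(u64) + sec_meta_sz);

	/* entry size, PAGE_SIZE alignment, no boundary-crossing restriction */
	return dma_pool_create("ppa-meta list", dev, entry_sz, PAGE_SIZE, 0);
}
```
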
* lightnvm: pblk: add helpers for OOB metadata (Igor Konopko, 2018-12-11, 6 files, -32/+69)

pblk currently assumes that the size of the OOB metadata on the drive is always equal to the size of the pblk_sec_meta struct. This commit adds helpers which will allow handling different sizes of OOB metadata on the drive in the future. After this patch, only OOB metadata equal to 16 bytes is supported.

Reviewed-by: Javier González <javier@cnexlabs.com>
Signed-off-by: Igor Konopko <igor.j.konopko@intel.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>

* lightnvm: pblk: move lba list to partial read context (Igor Konopko, 2018-12-11, 2 files, -15/+7)

Currently, DMA-allocated memory is reused on partial read for the lba_list_mem and lba_list_media arrays. In preparation for dynamic DMA pool sizes we need to move these arrays into the pblk_pr_ctx structure.

Reviewed-by: Javier González <javier@cnexlabs.com>
Signed-off-by: Igor Konopko <igor.j.konopko@intel.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>

* lightnvm: pblk: avoid ref warning on cache creation (Javier González, 2018-12-11, 1 file, -9/+5)

The current kref implementation around the pblk global caches triggers a false positive on refcount_inc_checked() (when called), as the kref is initialized to 0. Instead of using kref_inc() on a 0 reference, which is in principle correct, use kref_init() to avoid the check. This is also more explicit about what actually happens on cache creation.

In the process, do a small refactoring to use kref helpers.

Fixes: 1864de94ec9d6 "lightnvm: pblk: stop recreating global caches"
Signed-off-by: Javier González <javier@cnexlabs.com>
Reviewed-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>

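The pattern being corrected, as a small self-contained sketch (illustrative names, not the pblk code; the caller is assumed to hold the creation lock):

```c
#include <linux/kref.h>

/* Illustrative sketch, not the pblk code: a refcount_t-backed kref that
 * starts at 0 would trip the checks when incremented, so the creator calls
 * kref_init() (count becomes 1) and only later users call kref_get().
 * The caller is assumed to serialize access with a creation lock.
 */
static struct kref cache_kref;
static bool caches_created;

static void caches_acquire(void)
{
	if (!caches_created) {
		kref_init(&cache_kref);		/* first user owns the initial reference */
		caches_created = true;
	} else {
		kref_get(&cache_kref);		/* never incremented from zero */
	}
}
```
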
* lightnvm: simplify geometry enumeration (Matias Bjørling, 2018-12-11, 1 file, -7/+5)

Currently the geometry of an OCSSD is enumerated using a two-step approach: first, nvm_register is called and the OCSSD identify command is issued; second, the geometry sos and csecs values are read either from the OCSSD identify data if it is a 1.2 drive, or from the NVMe namespace data structure if it is a 2.0 device.

This patch recombines it into a single step, such that nvm_register can use the csecs and sos fields independent of which version is used. This enables one to dynamically size the lightnvm subsystem dma pool.

Reviewed-by: Igor Konopko <igor.j.konopko@intel.com>
Reviewed-by: Javier González <javier@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>

* lightnvm: pblk: add comments wrt locking in recovery path (Javier González, 2018-12-11, 2 files, -0/+4)

pblk's recovery path is single threaded and therefore a number of assumptions regarding concurrency can be made. To avoid confusion, make this explicit with a couple of comments in the code.

Signed-off-by: Javier González <javier@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>