path: root/block/blk-mq-cpumap.c
author    Hans Holmberg <hans.holmberg@cnexlabs.com>  2017-10-13 14:46:38 +0200
committer Jens Axboe <axboe@kernel.dk>                2017-10-13 16:34:57 +0200
commit    75610cd974aba4fadc9a8500d5470e8f28a3626f (patch)
tree      9c96ca149e7f46ecbbcab50063fb986724b12dba /block/blk-mq-cpumap.c
parent    lightnvm: pblk: start gc if needed during init (diff)
download  linux-75610cd974aba4fadc9a8500d5470e8f28a3626f.tar.xz
          linux-75610cd974aba4fadc9a8500d5470e8f28a3626f.zip
lightnvm: pblk: consider bad sectors in emeta during recovery
When recovering lines we need to consider that bad blocks in a line affect the emeta area size.

Previously it was assumed that the emeta area would grow by the number of sectors per page * number of bad blocks in the line. This assumption is not correct - the number of "extra" pages that are consumed could be both smaller (depending on emeta size) and bigger (depending on the placement of the bad blocks).

Fix this by calculating the emeta start by iterating backwards through the line, skipping ppas that map to bad blocks.

Also fix the data types used for ppa indices/counts in pblk_recov_l2p_from_emeta - we should use u64.

Signed-off-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Signed-off-by: Matias Bjørling <m@bjorling.me>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Diffstat (limited to 'block/blk-mq-cpumap.c')
0 files changed, 0 insertions, 0 deletions