* blk-mq: rework flush sequencing logic (Christoph Hellwig, 2014-02-10; 7 files, -117/+76)

  Switch to using a preallocated flush_rq for blk-mq, similar to what's done
  with the old request path. This allows us to set up the request properly
  with a tag from the actually allowed range and ->rq_disk as needed by some
  drivers. To make life easier we also switch to dynamic allocation of
  ->flush_rq for the old path.

  This effectively reverts most of "blk-mq: fix for flush deadlock" and
  "blk-mq: Don't reserve a tag for flush request".

  Signed-off-by: Christoph Hellwig <hch@lst.de>
  Signed-off-by: Jens Axboe <axboe@fb.com>
* null_blk: use blk_complete_request and blk_mq_complete_request (Christoph Hellwig, 2014-02-10; 1 file, -65/+32)

  Use the block layer helpers for CPU-local completions instead of
  reimplementing them locally.

  Signed-off-by: Christoph Hellwig <hch@lst.de>
  Signed-off-by: Jens Axboe <axboe@fb.com>
* virtio_blk: use blk_mq_complete_request (Christoph Hellwig, 2014-02-10; 1 file, -3/+4)

  Make sure to complete requests on the submitting CPU. Previously this was
  done in blk_mq_end_io, but the responsibility shifted to the drivers.

  Signed-off-by: Christoph Hellwig <hch@lst.de>
  Signed-off-by: Jens Axboe <axboe@fb.com>
* blk-mq: rework I/O completions (Christoph Hellwig, 2014-02-10; 4 files, -24/+37)

  Rework I/O completions to work more like the old code path. blk_mq_end_io
  now stays out of the business of deferring completions to other CPUs and
  calling blk_mark_rq_complete. The latter is very important to allow
  completing requests that have timed out and thus are already marked
  complete; the former allows using the IPI callout even for driver-specific
  completions instead of having to reimplement them.

  Signed-off-by: Christoph Hellwig <hch@lst.de>
  Signed-off-by: Jens Axboe <axboe@fb.com>
* fs: Add prototype declaration to appropriate header file include/linux/bio.h (Rashika Kheria, 2014-02-09; 1 file, -0/+1)

  Add a prototype declaration to the header file include/linux/bio.h because
  the function is used by more than one file.

  This eliminates the following warning in bio-integrity.c:
    fs/bio-integrity.c:214:14: warning: no previous prototype for ‘bio_integrity_tag_size’ [-Wmissing-prototypes]

  Signed-off-by: Rashika Kheria <rashika.kheria@gmail.com>
  Reviewed-by: Josh Triplett <josh@joshtriplett.org>
  Signed-off-by: Jens Axboe <axboe@fb.com>
* fs: Mark function as static in fs/bio-integrity.c (Rashika Kheria, 2014-02-09; 1 file, -1/+2)

  Mark the function as static in bio-integrity.c because it is not used
  outside this file.

  This eliminates the following warning in bio-integrity.c:
    fs/bio-integrity.c:224:5: warning: no previous prototype for ‘bio_integrity_tag’ [-Wmissing-prototypes]

  Signed-off-by: Rashika Kheria <rashika.kheria@gmail.com>
  Reviewed-by: Josh Triplett <josh@joshtriplett.org>
  Signed-off-by: Jens Axboe <axboe@fb.com>
* block/null_blk: Fix completion processing from LIFO to FIFO (Shlomo Pongratz, 2014-02-07; 1 file, -0/+2)

  The completion queue is implemented using a lockless list. llist_add adds
  events at the list head, which is a push operation. The completion elements
  are processed by disconnecting all the pushed elements and iterating over
  the disconnected list. The problem is that processing then happens in
  reverse order w.r.t. insertion, i.e. LIFO processing. By reversing the
  disconnected list, which takes linear time, the desired FIFO processing is
  achieved.

  Signed-off-by: Shlomo Pongratz <shlomop@mellanox.com>
  Signed-off-by: Jens Axboe <axboe@fb.com>
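  A standalone C model of the fix (illustrative names, not null_blk's actual
  code): completions are pushed onto a lock-free stack, so the detached chain
  comes out newest-first; one linear-time reversal, as the kernel's
  llist_reverse_order() performs, restores FIFO order.

      #include <stdio.h>
      #include <stddef.h>

      struct node { struct node *next; int seq; };

      /* Reverse a detached singly linked chain in O(n). */
      static struct node *reverse(struct node *head)
      {
              struct node *prev = NULL;
              while (head) {
                      struct node *next = head->next;
                      head->next = prev;
                      prev = head;
                      head = next;
              }
              return prev;
      }

      int main(void)
      {
              struct node n[3];
              struct node *head = NULL;
              for (int i = 0; i < 3; i++) {   /* push: newest ends up at head */
                      n[i].seq = i;
                      n[i].next = head;
                      head = &n[i];
              }
              /* Without the reversal this loop would print 2, 1, 0 (LIFO). */
              for (struct node *p = reverse(head); p; p = p->next)
                      printf("complete #%d\n", p->seq);   /* 0, 1, 2: FIFO */
              return 0;
      }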
* block: Explicitly handle discard/write same segments (Kent Overstreet, 2014-02-07; 1 file, -29/+62)

  Immutable biovecs changed the way biovecs are interpreted: drivers no
  longer use bi_vcnt, they have to go by bi_iter.bi_size (to allow for using
  part of an existing segment without modifying it).

  This breaks with discard and write_same bios, since for those bi_size has
  nothing to do with segments in the biovec. So for now we need a fairly
  gross hack: we fortunately know that there will never be more than one
  segment for the entire request, so we can special-case discard/write_same.

  Signed-off-by: Kent Overstreet <kmo@daterainc.com>
  Tested-by: Hugh Dickins <hughd@google.com>
  Signed-off-by: Jens Axboe <axboe@fb.com>
* block: Fix nr_vecs for inline integrity vectors (Martin K. Petersen, 2014-02-07; 1 file, -1/+9)

  Commit 9f060e2231ca changed the way we handle allocations for the
  integrity vectors. When the vectors are inline there is no associated slab
  and consequently bvec_nr_vecs() returns 0. Ensure that we check against
  BIP_INLINE_VECS in that case.

  Reported-by: David Milburn <dmilburn@redhat.com>
  Tested-by: David Milburn <dmilburn@redhat.com>
  Cc: stable@vger.kernel.org # v3.10+
  Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
  Signed-off-by: Jens Axboe <axboe@fb.com>
* blk-mq: Add bio_integrity setup to blk_mq_make_request (Nicholas Bellinger, 2014-02-07; 1 file, -0/+5)

  This patch adds the missing bio_integrity_enabled() + bio_integrity_prep()
  setup into blk_mq_make_request() in order to use DIF protection with
  scsi-mq.

  Cc: Martin K. Petersen <martin.petersen@oracle.com>
  Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
  Signed-off-by: Jens Axboe <axboe@fb.com>
* blk-mq: initialize sg_reserved_size (Christoph Hellwig, 2014-02-07; 1 file, -0/+2)

  To behave the same way as the old request path.

  Signed-off-by: Christoph Hellwig <hch@lst.de>
  Signed-off-by: Jens Axboe <axboe@fb.com>
* blk-mq: handle dma_drain_size (Christoph Hellwig, 2014-02-07; 1 file, -0/+10)

  Make blk-mq handle the dma_drain_size field the same way as the old
  request path.

  Signed-off-by: Christoph Hellwig <hch@lst.de>
  Signed-off-by: Jens Axboe <axboe@fb.com>
* blk-mq: divert __blk_put_request for MQ ops (Christoph Hellwig, 2014-02-07; 1 file, -0/+5)

  __blk_put_request needs to call into the blk-mq code just like
  blk_put_request. As we don't have the queue lock in this case, both end up
  calling the same function.

  Signed-off-by: Christoph Hellwig <hch@lst.de>
  Signed-off-by: Jens Axboe <axboe@fb.com>
* blk-mq: support at_head insertions for blk_execute_rq (Christoph Hellwig, 2014-02-07; 3 files, -9/+13)

  This is needed for proper SG_IO operation as well as various uses of
  blk_execute_rq from the SCSI midlayer.

  Signed-off-by: Christoph Hellwig <hch@lst.de>
  Signed-off-by: Jens Axboe <axboe@fb.com>
* Merge branch 'bcache-for-3.14' of git://evilpiepirate.org/~kent/linux-bcache into for-linus (Jens Axboe, 2014-01-30; 6 files, -10/+15)
| * bcache: bugfix - gc thread now gets woken when cache is full (Nicholas Swenson, 2014-01-29; 1 file, -3/+3)

  Signed-off-by: Nicholas Swenson <nks@daterainc.com>
| * bcache: Minor fixes from kbuild robot (Kent Overstreet, 2014-01-29; 4 files, -5/+8)

  Signed-off-by: Kent Overstreet <kmo@daterainc.com>
| * bcache: fix BUG_ON due to integer overflow with GC_SECTORS_USED (Darrick J. Wong, 2014-01-29; 2 files, -2/+4)

  The BUG_ON at the end of __bch_btree_mark_key can be triggered due to an
  integer overflow error:

      BITMASK(GC_SECTORS_USED, struct bucket, gc_mark, 2, 13);
      ...
      SET_GC_SECTORS_USED(g, min_t(unsigned,
                          GC_SECTORS_USED(g) + KEY_SIZE(k),
                          (1 << 14) - 1));
      BUG_ON(!GC_SECTORS_USED(g));

  In bcache.h, the SECTORS_USED bitfield is defined to be 13 bits wide.
  While the SET_ code tries to ensure that the field doesn't overflow by
  clamping it to (1 << 14) - 1 == 16383, this is incorrect because 16383
  requires 14 bits. Therefore, if GC_SECTORS_USED() + KEY_SIZE() = 8192,
  the SET_ statement tries to store 8192 into a 13-bit field. In a 13-bit
  field, 8192 becomes zero, thus triggering the BUG_ON.

  Therefore, create a field-width constant and a max-value constant, and use
  those to create the bitfield and to check the inputs to
  SET_GC_SECTORS_USED. Arguably the BITMASK() template ought to have BUG_ON
  checks for too-large values, but that's a separate patch.

  Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
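  A minimal userspace demonstration of the arithmetic (not bcache's actual
  macros): clamping against (1 << 14) - 1 lets 8192 reach the 13-bit store,
  where it truncates to zero; clamping against the field's true maximum,
  (1 << 13) - 1, cannot.

      #include <stdio.h>

      #define FIELD_BITS 13
      #define FIELD_MAX  ((1U << FIELD_BITS) - 1)   /* 8191 */

      int main(void)
      {
              unsigned v = 8192;   /* GC_SECTORS_USED(g) + KEY_SIZE(k) */
              unsigned wrong = (v > 16383 ? 16383 : v) & FIELD_MAX;
              unsigned right = (v > FIELD_MAX ? FIELD_MAX : v) & FIELD_MAX;
              printf("clamp to (1<<14)-1: stored %u\n", wrong);  /* 0: BUG_ON */
              printf("clamp to (1<<13)-1: stored %u\n", right);  /* 8191 */
              return 0;
      }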
* | block: __elv_next_request() shouldn't call into the elevator if bypassing (Tejun Heo, 2014-01-30; 1 file, -1/+1)

  request_queue bypassing is used to suppress higher-level functions of a
  request_queue so that they can be switched, reconfigured and shut down. A
  request_queue does the following while bypassing.

  * bypasses elevator and io_cq association and queues requests directly to
    the FIFO dispatch queue.

  * bypasses block cgroup request_list lookup and always uses the root
    request_list.

  Once confirmed to be bypassing, specific elevator and block cgroup policy
  implementations can assume that nothing is in flight for them and perform
  various operations which would be dangerous otherwise. Such confirmation
  is achieved by short-circuiting all new requests directly to the dispatch
  queue and waiting for all the requests which were issued before to finish.

  Unfortunately, while the request allocating and draining sides were
  properly handled, we forgot to actually plug the request dispatch path.
  Even after bypass mode is confirmed, if the attached driver tries to fetch
  a request and the dispatch queue is empty, __elv_next_request() would
  invoke the current elevator's elevator_dispatch_fn() callback. As all
  in-flight requests were drained, the elevator wouldn't contain any
  requests, but once bypass is confirmed we don't even know whether the
  elevator is still there. It might be in the process of being switched and
  half torn down.

  Frank Mayhar reports that this actually happened while switching
  elevators, leading to an oops.

  Let's fix it by making __elv_next_request() avoid invoking the
  elevator_dispatch_fn() callback if the queue is bypassing. It already
  avoids invoking the callback if the queue is dying. As a dying queue is
  guaranteed to be bypassing, we can simply replace the blk_queue_dying()
  check with blk_queue_bypass().

  Reported-by: Frank Mayhar <fmayhar@google.com>
  References: http://lkml.kernel.org/g/1390319905.20232.38.camel@bobble.lax.corp.google.com
  Cc: stable@vger.kernel.org
  Tested-by: Frank Mayhar <fmayhar@google.com>
  Signed-off-by: Jens Axboe <axboe@kernel.dk>
* | blk-mq: Don't reserve a tag for flush request (Shaohua Li, 2014-01-30; 3 files, -19/+38)

  Reserving a tag (request) for flushes to avoid deadlock is overkill. A tag
  is a valuable resource. We can track the number of flush requests and
  disallow having too many pending flush requests allocated.

  With this patch, blk_mq_alloc_request_pinned() could do a busy nop (but
  not a dead loop) if too many pending requests are allocated and a new
  flush request is allocated. But this should not be a problem; too many
  pending flush requests is a very rare case.

  I verified this can fix the deadlock caused by too many pending flush
  requests.

  Signed-off-by: Shaohua Li <shli@fusionio.com>
  Signed-off-by: Jens Axboe <axboe@kernel.dk>
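  A userspace sketch of the idea (names and the limit are illustrative): an
  atomic counter bounds in-flight flushes instead of a reserved tag, and an
  allocation at the limit fails so the caller retries - the "busy nop"
  mentioned above, not a deadlock.

      #include <stdatomic.h>
      #include <stdbool.h>
      #include <stdio.h>

      #define MAX_PENDING_FLUSH 4           /* illustrative bound */

      static atomic_int pending_flush;

      static bool flush_alloc_try(void)
      {
              int cur = atomic_load(&pending_flush);
              while (cur < MAX_PENDING_FLUSH) {
                      /* On CAS failure, cur is reloaded and the bound rechecked. */
                      if (atomic_compare_exchange_weak(&pending_flush, &cur, cur + 1))
                              return true;
              }
              return false;                 /* at the limit: caller retries */
      }

      static void flush_complete(void)
      {
              atomic_fetch_sub(&pending_flush, 1);
      }

      int main(void)
      {
              for (int i = 0; i < 6; i++)   /* 4 succeed, 2 must retry */
                      printf("flush alloc %d: %s\n", i,
                             flush_alloc_try() ? "ok" : "retry");
              flush_complete();             /* frees one slot */
              printf("after completion: %s\n",
                     flush_alloc_try() ? "ok" : "retry");
              return 0;
      }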
* | percpu_ida: fix a live lock (Shaohua Li, 2014-01-30; 1 file, -5/+2)

  steal_tags only happens when the number of free tags is more than half of
  the total tags. This is too strict and can cause a live lock. I found that
  if one CPU has free tags but other CPUs can't steal them (the thread is
  bound to specific CPUs), threads which want to allocate tags end up
  sleeping forever. I found this while running the next patch, but I think
  it could happen without it.

  I ran performance tests too with null_blk. In both cases (each CPU has
  enough percpu tags, or total tags are limited), no performance changes
  were observed.

  Signed-off-by: Shaohua Li <shli@fusionio.com>
  Signed-off-by: Jens Axboe <axboe@kernel.dk>
* | Merge branch 'for-3.14/drivers' of git://git.kernel.dk/linux-block (Linus Torvalds, 2014-01-30; 39 files, -1895/+2435)

  Pull block IO driver changes from Jens Axboe:

  - bcache update from Kent Overstreet.
  - two bcache fixes from Nicholas Swenson.
  - cciss pci init error fix from Andrew.
  - underflow fix in the parallel IDE pg_write code from Dan Carpenter.
    I'm sure the 1 (or 0) users of that are now happy.
  - two PCI-related fixes for sx8 from Jingoo Han.
  - floppy init fix for first block read from Jiri Kosina.
  - pktcdvd error return miss fix from Julia Lawall.
  - removal of IRQF_SHARED from the SEGA Dreamcast CD-ROM code from Michael
    Opdenacker.
  - comment typo fix for the loop driver from Olaf Hering.
  - potential oops fix for null_blk from Raghavendra K T.
  - two fixes from Sam Bradshaw (Micron) for the mtip32xx driver, fixing an
    OOM problem and a problem with handling security locked conditions.

  * 'for-3.14/drivers' of git://git.kernel.dk/linux-block: (47 commits)
      mg_disk: Spelling s/finised/finished/
      null_blk: Null pointer dereference problem in alloc_page_buffers
      mtip32xx: Correctly handle security locked condition
      mtip32xx: Make SGL container per-command to eliminate high order dma allocation
      drivers/block/loop.c: fix comment typo in loop_config_discard
      drivers/block/cciss.c:cciss_init_one(): use proper errnos
      drivers/block/paride/pg.c: underflow bug in pg_write()
      drivers/block/sx8.c: remove unnecessary pci_set_drvdata()
      drivers/block/sx8.c: use module_pci_driver()
      floppy: bail out in open() if drive is not responding to block0 read
      bcache: Fix auxiliary search trees for key size > cacheline size
      bcache: Don't return -EINTR when insert finished
      bcache: Improve bucket_prio() calculation
      bcache: Add bch_bkey_equal_header()
      bcache: update bch_bkey_try_merge
      bcache: Move insert_fixup() to btree_keys_ops
      bcache: Convert sorting to btree_keys
      bcache: Convert debug code to btree_keys
      bcache: Convert btree_iter to struct btree_keys
      bcache: Refactor bset_tree sysfs stats
      ...
| * | mg_disk: Spelling s/finised/finished/ (Geert Uytterhoeven, 2014-01-22; 1 file, -1/+1)

  Signed-off-by: Geert Uytterhoeven <geert+renesas@linux-m68k.org>
  Signed-off-by: Jens Axboe <axboe@kernel.dk>
| * | null_blk: Null pointer dereference problem in alloc_page_buffers (Raghavendra K T, 2014-01-22; 1 file, -0/+5)

  If we load the null_blk module with bs=8k we get the following oops:

      [ 3819.812190] BUG: unable to handle kernel NULL pointer dereference at 0000000000000008
      [ 3819.812387] IP: [<ffffffff81170aa5>] create_empty_buffers+0x28/0xaf
      [ 3819.812527] PGD 219244067 PUD 215a06067 PMD 0
      [ 3819.812640] Oops: 0000 [#1] SMP
      [ 3819.812772] Modules linked in: null_blk(+)

  Fix that by resetting the block size to PAGE_SIZE if it is greater than
  PAGE_SIZE.

  Reported-by: Sumanth <sumantk2@linux.vnet.ibm.com>
  Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
  Reviewed-by: Matias Bjorling <m@bjorling.me>
  Signed-off-by: Jens Axboe <axboe@kernel.dk>
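  The shape of the fix, as a hedged sketch (the module's actual parameter
  handling and warning text may differ in detail):

      /* In the module init path: a block size larger than PAGE_SIZE would
       * make the backing pages too small for the buffers hung off them. */
      if (bs > PAGE_SIZE) {
              pr_warn("null_blk: invalid block size, using PAGE_SIZE\n");
              bs = PAGE_SIZE;
      }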
| * | mtip32xx: Correctly handle security locked condition (Sam Bradshaw, 2014-01-22; 2 files, -3/+15)

  If power is removed during a secure erase, the drive will end up in a
  security locked condition. This patch causes the driver to identify, log,
  and flag the security lock state. IOs are prevented from submission to the
  drive until the locked state is addressed with a secure erase.

  Bumped version number to reflect this capability.

  Signed-off-by: Sam Bradshaw <sbradshaw@micron.com>
  Signed-off-by: Asai Thambi S P <asamymuthupa@micron.com>
  Signed-off-by: Jens Axboe <axboe@kernel.dk>
| * | mtip32xx: Make SGL container per-command to eliminate high order dma allocation (Sam Bradshaw, 2014-01-22; 2 files, -97/+149)

  The mtip32xx driver makes a high order dma memory allocation to store a
  command index table, some dedicated buffers, and a command header & SGL
  blob. This allocation can fail with a surprise insert under low and
  fragmented memory conditions.

  This patch breaks these regions up into separate low order allocations
  and increases the maximum number of segments a single command SGL can
  have. We wanted to allow at least 256 segments for 1 MB direct IO. Since
  the command header occupies the first 0x80 bytes of the SGL blob, that
  meant we needed two 4k pages to contain the header and SGL. The two pages
  allow up to 504 SGL segments.

  Signed-off-by: Sam Bradshaw <sbradshaw@micron.com>
  Signed-off-by: Asai Thambi S P <asamymuthupa@micron.com>
  Signed-off-by: Jens Axboe <axboe@kernel.dk>
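  The 504-segment figure follows from the layout described, assuming the
  16-byte SGL entries of the driver's AHCI-style command table:

      #include <stdio.h>

      int main(void)
      {
              unsigned bytes  = 2 * 4096;   /* two 4k pages for header + SGL */
              unsigned header = 0x80;       /* command header, 128 bytes */
              unsigned entry  = 16;         /* assumed bytes per SGL entry */
              printf("max segments = %u\n", (bytes - header) / entry);  /* 504 */
              return 0;
      }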
| * | Merge branch 'for-jens' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/linux-block into for-3.14/drivers (Jens Axboe, 2014-01-22; 2 files, -10/+29)
| | * floppy: bail out in open() if drive is not responding to block0 read (Jiri Kosina, 2014-01-17; 2 files, -10/+29)

  In case reading of block 0 during open() fails, it is not the right thing
  to let open() succeed.

  Fix this by introducing an FD_OPEN_SHOULD_FAIL_BIT flag, and setting it in
  case the bio callback encounters an error while trying to read block 0.

  As a bonus, this works around certain broken userspace (blkid), which is
  not able to properly handle read()s returning IO errors. Hence be nice to
  those, and bail out during open() already; if block 0 is not readable,
  read()s are not going to provide any meaningful data anyway.

  Signed-off-by: Jiri Kosina <jkosina@suse.cz>
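  A minimal standalone model of the mechanism (only FD_OPEN_SHOULD_FAIL_BIT
  comes from the patch; everything else here is illustrative):

      #include <stdio.h>

      #define FD_OPEN_SHOULD_FAIL_BIT 0     /* illustrative bit position */

      static unsigned long drive_flags;

      /* bio completion callback for the probe read of block 0 */
      static void block0_read_done(int error)
      {
              if (error)
                      drive_flags |= 1UL << FD_OPEN_SHOULD_FAIL_BIT;
      }

      static int floppy_open(void)
      {
              if (drive_flags & (1UL << FD_OPEN_SHOULD_FAIL_BIT))
                      return -5;            /* -EIO: bail out of open() */
              return 0;
      }

      int main(void)
      {
              block0_read_done(1);          /* simulate a failed block-0 read */
              printf("open() -> %d\n", floppy_open());
              return 0;
      }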
| * | | drivers/block/loop.c: fix comment typo in loop_config_discard (Olaf Hering, 2014-01-22; 1 file, -1/+1)

  Discard requests are ignored if encryption is enabled for the given loop
  device. Update the comment to match the code, and similar comments
  elsewhere in the file.

  Signed-off-by: Olaf Hering <olaf@aepfle.de>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Jens Axboe <axboe@kernel.dk>
| * | | drivers/block/cciss.c:cciss_init_one(): use proper errnos (Andrew Morton, 2014-01-22; 1 file, -2/+2)

  pci_driver.probe should return a meaningful errno, not -1.

  Cc: Jens Axboe <axboe@kernel.dk>
  Cc: Stephen M. Cameron <scameron@beardog.cce.hp.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Jens Axboe <axboe@kernel.dk>
| * | | drivers/block/paride/pg.c: underflow bug in pg_write() (Dan Carpenter, 2014-01-22; 1 file, -1/+1)

  The test here can underflow, so we pass bogus lengths to the hardware.
  It's a static checker fix and I don't know the impact.

  Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
  Cc: Jens Axboe <axboe@kernel.dk>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Jens Axboe <axboe@kernel.dk>
| * | | drivers/block/sx8.c: remove unnecessary pci_set_drvdata() (Jingoo Han, 2014-01-22; 1 file, -1/+0)

  The driver core clears the driver data to NULL after device_release or on
  probe failure. Thus, it is not needed to manually clear the device driver
  data to NULL.

  Signed-off-by: Jingoo Han <jg1.han@samsung.com>
  Cc: Jens Axboe <axboe@kernel.dk>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Jens Axboe <axboe@kernel.dk>
| * | | drivers/block/sx8.c: use module_pci_driver() (Jingoo Han, 2014-01-22; 1 file, -14/+1)

  Use the module_pci_driver() macro, which makes the code smaller and
  simpler.

  Signed-off-by: Jingoo Han <jg1.han@samsung.com>
  Cc: Jens Axboe <axboe@kernel.dk>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Jens Axboe <axboe@kernel.dk>
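  For reference, the conversion collapses hand-written module_init() and
  module_exit() stubs that only call pci_register_driver() and
  pci_unregister_driver() into one macro; a sketch with illustrative field
  values, not sx8.c's actual definitions:

      static struct pci_driver example_driver = {
              .name     = "sx8",
              .id_table = example_pci_tbl,     /* illustrative identifiers */
              .probe    = example_init_one,
              .remove   = example_remove_one,
      };

      /* Generates the module_init()/module_exit() boilerplate. */
      module_pci_driver(example_driver);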
| * | bcache: Fix auxiliary search trees for key size > cacheline size (Kent Overstreet, 2014-01-08; 1 file, -14/+14)

  Signed-off-by: Kent Overstreet <kmo@daterainc.com>
| * | bcache: Don't return -EINTR when insert finished (Kent Overstreet, 2014-01-08; 1 file, -2/+4)

  We need to return -EINTR after a split because we invalidated iterators
  (and freed the btree node) - but if we were finished inserting, we don't
  want to redo the traversal.

  Signed-off-by: Kent Overstreet <kmo@daterainc.com>
| * | bcache: Improve bucket_prio() calculation (Kent Overstreet, 2014-01-08; 2 files, -3/+16)

  When deciding what order to reuse buckets in, we take into account both
  the bucket's priority (which indicates LRU order) and also the amount of
  live data in that bucket. The way they were scaled together wasn't as
  correct as it could be; this patch improves and documents it.

  Signed-off-by: Kent Overstreet <kmo@daterainc.com>
| * | bcache: Add bch_bkey_equal_header() (Nicholas Swenson, 2014-01-08; 3 files, -8/+11)

  Checks if two keys have equivalent header fields (good enough for
  replacement or merging). Used in bch_bkey_try_merge, and when replacing a
  key in the btree.

  Signed-off-by: Nicholas Swenson <nks@daterainc.com>
  Signed-off-by: Kent Overstreet <kmo@daterainc.com>
| * | bcache: update bch_bkey_try_merge (Nicholas Swenson, 2014-01-08; 3 files, -16/+28)

  Added generic header checks to bch_bkey_try_merge, which then calls the
  bkey-specific function. Removed extraneous checks from bch_extent_merge.

  Signed-off-by: Nicholas Swenson <nks@daterainc.com>
| * | bcache: Move insert_fixup() to btree_keys_ops (Kent Overstreet, 2014-01-08; 4 files, -229/+257)

  Now handling overlapping extents/keys is a method that's specific to what
  the btree node contains.

  Signed-off-by: Kent Overstreet <kmo@daterainc.com>
| * | bcache: Convert sorting to btree_keys (Kent Overstreet, 2014-01-08; 3 files, -36/+33)

  More work to disentangle various code from struct btree.

  Signed-off-by: Kent Overstreet <kmo@daterainc.com>
| * | bcache: Convert debug code to btree_keys (Kent Overstreet, 2014-01-08; 9 files, -217/+264)

  More work to disentangle various code from struct btree.

  Signed-off-by: Kent Overstreet <kmo@daterainc.com>
| * | bcache: Convert btree_iter to struct btree_keys (Kent Overstreet, 2014-01-08; 6 files, -38/+41)

  More work to disentangle bset.c from struct btree.

  Signed-off-by: Kent Overstreet <kmo@daterainc.com>
| * | bcache: Refactor bset_tree sysfs stats (Kent Overstreet, 2014-01-08; 3 files, -47/+54)

  We're in the process of turning bset.c into library code, so none of the
  code in that file should know about struct cache_set or struct btree; so,
  move the btree traversal part of the stats code to sysfs.c.

  Signed-off-by: Kent Overstreet <kmo@daterainc.com>
| * | bcache: Add bch_btree_keys_u64s_remaining() (Kent Overstreet, 2014-01-08; 3 files, -13/+31)

  Helper function to explicitly check how much space is free in a btree
  node.

  Signed-off-by: Kent Overstreet <kmo@daterainc.com>
| * | bcache: Add struct btree_keys (Kent Overstreet, 2014-01-08; 9 files, -264/+323)

  Soon, bset.c won't need to depend on struct btree.

  Signed-off-by: Kent Overstreet <kmo@daterainc.com>
| * | bcache: Abstract out stuff needed for sorting (Kent Overstreet, 2014-01-08; 9 files, -289/+423)

  Signed-off-by: Kent Overstreet <kmo@daterainc.com>
| * | bcache: Rename/shuffle various code around (Kent Overstreet, 2014-01-08; 8 files, -276/+341)

  More work to disentangle bset.c from the rest of the code.

  Signed-off-by: Kent Overstreet <kmo@daterainc.com>
| * | bcache: Add struct bset_sort_state (Kent Overstreet, 2014-01-08; 6 files, -49/+87)

  More disentangling of bset.c from the rest of the bcache code - soon, the
  sorting routines won't have any dependencies on any outside structs.

  Signed-off-by: Kent Overstreet <kmo@daterainc.com>
| * | bcache: Split out sort_extent_cmp() (Kent Overstreet, 2014-01-08; 4 files, -32/+73)

  Only use extent comparison for comparing extents, so we're not using
  START_KEY() on other key types (i.e. btree pointers).

  Signed-off-by: Kent Overstreet <kmo@daterainc.com>
| * | bcache: Bkey indexing renaming (Kent Overstreet, 2014-01-08; 7 files, -53/+63)

  More refactoring:

      node() -> bset_bkey_idx()
      end()  -> bset_bkey_last()

  Signed-off-by: Kent Overstreet <kmo@daterainc.com>