author | Omar Sandoval <osandov@fb.com> | 2016-09-17 10:28:25 +0200 |
---|---|---|
committer | Jens Axboe <axboe@fb.com> | 2016-09-17 16:39:14 +0200 |
commit | 98d95416dbfaf4910caadfb4ddc75e4aacbdff8c (patch) | |
tree | ed6a08e6d4358da522265ac1e6a595fe8db35572 /lib | |
parent | sbitmap: push alloc policy into sbitmap_queue (diff) | |
download | linux-98d95416dbfaf4910caadfb4ddc75e4aacbdff8c.tar.xz linux-98d95416dbfaf4910caadfb4ddc75e4aacbdff8c.zip | |
sbitmap: randomize initial alloc_hint values
In order to get good cache behavior from a sbitmap, we want each CPU to
stick to its own cacheline(s) as much as possible. This might happen
naturally as the bitmap gets filled up and the alloc_hint values spread
out, but we really want this behavior from the start. blk-mq apparently
intended to do this, but the code to do this was never wired up. Get rid
of the dead code and make it part of the sbitmap library.
Signed-off-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
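To see why randomized hints help, here is a minimal user-space model of the initialization step this patch adds (a sketch only: `DEPTH`, `NR_CPUS`, and `rand()` are stand-ins for the real `depth`, `for_each_possible_cpu()`, and `prandom_u32()`). Each simulated CPU gets a random starting bit, so concurrent allocators begin scanning different words, and therefore different cachelines, of the bitmap from the first allocation onward:

```c
/*
 * Hypothetical user-space model of the patch's initialization loop.
 * DEPTH, NR_CPUS, and rand() stand in for the real depth,
 * for_each_possible_cpu(), and prandom_u32().
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define DEPTH   256 /* total bits in the map */
#define NR_CPUS 8   /* simulated CPU count */

int main(void)
{
	unsigned int alloc_hint[NR_CPUS];
	int cpu;

	srand((unsigned int)time(NULL));

	/* Mirrors the patch: one random hint in [0, depth) per CPU. */
	for (cpu = 0; cpu < NR_CPUS; cpu++)
		alloc_hint[cpu] = (unsigned int)rand() % DEPTH;

	/*
	 * With 64-bit bitmap words, hints this far apart usually land
	 * in different words, hence different cachelines.
	 */
	for (cpu = 0; cpu < NR_CPUS; cpu++)
		printf("cpu %d starts scanning at bit %u (word %u)\n",
		       cpu, alloc_hint[cpu], alloc_hint[cpu] / 64);
	return 0;
}
```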
Diffstat (limited to 'lib')
-rw-r--r-- | lib/sbitmap.c | 6 |
1 file changed, 6 insertions(+), 0 deletions(-)
```diff
diff --git a/lib/sbitmap.c b/lib/sbitmap.c
index be55f744b771..928b82a733f2 100644
--- a/lib/sbitmap.c
+++ b/lib/sbitmap.c
@@ -15,6 +15,7 @@
  * along with this program. If not, see <https://www.gnu.org/licenses/>.
  */

+#include <linux/random.h>
 #include <linux/sbitmap.h>

 int sbitmap_init_node(struct sbitmap *sb, unsigned int depth, int shift,
@@ -211,6 +212,11 @@ int sbitmap_queue_init_node(struct sbitmap_queue *sbq, unsigned int depth,
 		return -ENOMEM;
 	}

+	if (depth && !round_robin) {
+		for_each_possible_cpu(i)
+			*per_cpu_ptr(sbq->alloc_hint, i) = prandom_u32() % depth;
+	}
+
 	sbq->wake_batch = sbq_calc_wake_batch(depth);
 	atomic_set(&sbq->wake_index, 0);
```
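For context, a hedged sketch of how a caller might exercise the changed initializer. The `demo_tags`/`demo_init` names and module boilerplate are made up for illustration; the calls shown (`sbitmap_queue_init_node()` with the `round_robin` flag introduced by the parent patch, `sbitmap_queue_get()`, `sbitmap_queue_clear()`, `sbitmap_queue_free()`) are the library API as of this series. Passing `round_robin = false` leaves the randomized per-CPU hints in effect; `round_robin = true` keeps strictly sequential allocation and skips the randomization:

```c
/*
 * Illustrative module-style sketch (demo_tags/demo_init are
 * hypothetical names); shows the init path this patch changes.
 */
#include <linux/module.h>
#include <linux/sbitmap.h>

static struct sbitmap_queue demo_tags;

static int __init demo_init(void)
{
	unsigned int cpu;
	int ret, tag;

	/*
	 * depth = 128, shift = -1 (let the library pick the word size),
	 * round_robin = false: each possible CPU's alloc_hint starts at
	 * a random bit in [0, 128).
	 */
	ret = sbitmap_queue_init_node(&demo_tags, 128, -1, false,
				      GFP_KERNEL, NUMA_NO_NODE);
	if (ret)
		return ret;

	/* Allocation starts scanning at this CPU's (random) hint. */
	tag = sbitmap_queue_get(&demo_tags, &cpu);
	if (tag >= 0)
		sbitmap_queue_clear(&demo_tags, tag, cpu);

	sbitmap_queue_free(&demo_tags);
	return 0;
}

static void __exit demo_exit(void)
{
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");
```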