author     Omar Sandoval <osandov@fb.com>    2016-09-17 10:28:22 +0200
committer  Jens Axboe <axboe@fb.com>         2016-09-17 16:39:08 +0200
commit     48e28166a7b608e19a6aea3acadd81cdfe660f6b (patch)
tree       ece76660b963252e9371c8aea0a4d04f8b69c42f
parent     blk-mq: abstract tag allocation out into sbitmap library (diff)
sbitmap: allocate wait queues on a specific node
The original bt_alloc() we converted from was using kzalloc(), not
kzalloc_node(), to allocate the wait queues. This was probably an
oversight, so fix it for sbitmap_queue_init_node().
Signed-off-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
 lib/sbitmap.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lib/sbitmap.c b/lib/sbitmap.c
index dfc084ac6937..4d8e97e470ee 100644
--- a/lib/sbitmap.c
+++ b/lib/sbitmap.c
@@ -208,7 +208,7 @@ int sbitmap_queue_init_node(struct sbitmap_queue *sbq, unsigned int depth,
 	sbq->wake_batch = sbq_calc_wake_batch(depth);
 	atomic_set(&sbq->wake_index, 0);
 
-	sbq->ws = kzalloc(SBQ_WAIT_QUEUES * sizeof(*sbq->ws), flags);
+	sbq->ws = kzalloc_node(SBQ_WAIT_QUEUES * sizeof(*sbq->ws), flags, node);
 	if (!sbq->ws) {
 		sbitmap_free(&sbq->sb);
 		return -ENOMEM;
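
For readers unfamiliar with the API difference, below is a minimal sketch of the kzalloc() vs. kzalloc_node() pattern the patch applies. The names example_queue, example_queue_init_node, and EXAMPLE_NR_WAITQUEUES are hypothetical stand-ins, not from the kernel tree; kzalloc_node(), NUMA_NO_NODE, wait_queue_head_t, and init_waitqueue_head() are real kernel interfaces.

/*
 * Sketch of node-aware allocation of a wait-queue array.
 * All "example_*" names are made up for illustration only.
 */
#include <linux/slab.h>
#include <linux/numa.h>
#include <linux/wait.h>

#define EXAMPLE_NR_WAITQUEUES	8	/* hypothetical, stands in for SBQ_WAIT_QUEUES */

struct example_queue {
	wait_queue_head_t *ws;
};

static int example_queue_init_node(struct example_queue *q, gfp_t flags, int node)
{
	int i;

	/*
	 * kzalloc() would place the array on the local node of whichever
	 * CPU runs this init path; kzalloc_node() places the zeroed
	 * allocation on the requested NUMA node instead.
	 */
	q->ws = kzalloc_node(EXAMPLE_NR_WAITQUEUES * sizeof(*q->ws), flags, node);
	if (!q->ws)
		return -ENOMEM;

	for (i = 0; i < EXAMPLE_NR_WAITQUEUES; i++)
		init_waitqueue_head(&q->ws[i]);
	return 0;
}

The point of threading the node argument through is that the wait queues end up on the same NUMA node as the hardware queue that touches them on the wait/wake path, rather than on whichever node the initializing CPU happened to run on. Callers with no placement preference can pass NUMA_NO_NODE, in which case kzalloc_node() behaves like kzalloc().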