author		Andi Kleen <ak@linux.intel.com>		2018-02-28 22:43:28 +0100
committer	Theodore Ts'o <tytso@mit.edu>		2018-03-01 00:01:16 +0100
commit		e8e8a2e47db6bb85bb0cb21e77b5c6aaedf864b4
tree		f41b73f9f7cde6bc468e7bc0c46c812e4d8205f9 /drivers/char/random.c
parent		random: always fill buffer in get_random_bytes_wait
random: optimize add_interrupt_randomness
add_interrupt_randomness() always wakes up code blocking on
/dev/random. This wake up is done unconditionally. Unfortunately this
means all interrupts take the wait queue spinlock, which can be rather
expensive on large systems processing lots of interrupts.
We saw 1% of CPU time spent spinning on this lock in a large macro
workload running on a large system.
I believe it's a recent regression (?)
Always check if there is a waiter on the wait queue
before waking up. This check can be done without
taking a spinlock.
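The check is safe without the lock because of ordering: a sleeper
always queues itself on the wait queue before re-checking the
condition, so the waker only needs a full memory barrier before
looking at the queue. Below is a minimal, self-contained userspace
sketch of the same pattern; it is an analogy only, not the kernel
code, and the names credit_entropy, wait_for_entropy, waiter_count
and entropy_bits are made up for illustration.

/*
 * Userspace analogy of "check for sleepers before taking the wakeup
 * lock".  All names here are hypothetical; this is not the kernel code.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static atomic_int waiter_count;   /* plays the role of waitqueue_active() */
static atomic_int entropy_bits;   /* the condition being waited on */

/* Waker side: called very often, like an interrupt handler. */
static void credit_entropy(int bits)
{
	atomic_fetch_add(&entropy_bits, bits);

	/*
	 * The seq_cst atomics above give the ordering that smp_mb()
	 * provides in the kernel: either this load sees the waiter,
	 * or the waiter's later re-check sees the new bits.  Only
	 * take the lock and signal when someone may be sleeping.
	 */
	if (atomic_load(&waiter_count) > 0) {
		pthread_mutex_lock(&lock);
		pthread_cond_broadcast(&cond);
		pthread_mutex_unlock(&lock);
	}
}

/* Sleeper side: registers itself as a waiter *before* re-checking. */
static void wait_for_entropy(int need)
{
	pthread_mutex_lock(&lock);
	atomic_fetch_add(&waiter_count, 1);
	while (atomic_load(&entropy_bits) < need)
		pthread_cond_wait(&cond, &lock);
	atomic_fetch_sub(&waiter_count, 1);
	pthread_mutex_unlock(&lock);
}

static void *reader(void *arg)
{
	(void)arg;
	wait_for_entropy(64);
	printf("reader woke up with %d bits\n",
	       atomic_load(&entropy_bits));
	return NULL;
}

int main(void)
{
	pthread_t t;
	int i;

	pthread_create(&t, NULL, reader, NULL);
	for (i = 0; i < 8; i++)
		credit_entropy(16);   /* cheap whenever nobody waits */
	pthread_join(&t, NULL);
	return 0;
}

Built with cc -pthread, the waker path skips the mutex entirely
whenever no reader is blocked, which is the common case this patch
optimizes.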
1.06%  10460  [kernel.vmlinux]  [k] native_queued_spin_lock_slowpath
        |
        ---native_queued_spin_lock_slowpath
           |
            --0.57%--_raw_spin_lock_irqsave
                      |
                       --0.56%--__wake_up_common_lock
                                 credit_entropy_bits
                                 add_interrupt_randomness
                                 handle_irq_event_percpu
                                 handle_irq_event
                                 handle_edge_irq
                                 handle_irq
                                 do_IRQ
                                 common_interrupt
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Diffstat (limited to 'drivers/char/random.c')
-rw-r--r--	drivers/char/random.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/char/random.c b/drivers/char/random.c
index 11c23ca57430..ee0c0d18f1eb 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -709,7 +709,8 @@ retry:
 		}
 
 		/* should we wake readers? */
-		if (entropy_bits >= random_read_wakeup_bits) {
+		if (entropy_bits >= random_read_wakeup_bits &&
+		    wq_has_sleeper(&random_read_wait)) {
 			wake_up_interruptible(&random_read_wait);
 			kill_fasync(&fasync, SIGIO, POLL_IN);
 		}
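For reference, wq_has_sleeper() is a small helper in
include/linux/wait.h; paraphrased from memory (check the tree for the
exact definition in this kernel version), it amounts to:

static inline bool wq_has_sleeper(struct wait_queue_head *wq_head)
{
	/*
	 * Make the entropy credit visible before inspecting the wait
	 * queue; this pairs with the sleeper side, which queues itself
	 * before re-checking the condition it sleeps on.
	 */
	smp_mb();
	return waitqueue_active(wq_head);
}

Because of that barrier pairing, a wakeup cannot be lost: either the
interrupt path sees the sleeper and takes the wait-queue lock to wake
it, or the sleeper sees the freshly credited entropy and does not
block. The lock is only skipped when nobody is waiting, which is the
common case on a busy system.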