author     Quentin Young <qlyoung@cumulusnetworks.com>   2017-04-25 00:33:25 +0200
committer  Quentin Young <qlyoung@cumulusnetworks.com>   2017-05-09 22:44:19 +0200
commit     ffa2c8986d204f4a3e7204258fd6906af4a57c93 (patch)
tree       6242b8634bc2a264339a05dcfb20b94f63c252f4 /lib/spf_backoff.c
parent     Merge pull request #478 from opensourcerouting/test-extension (diff)
download   frr-ffa2c8986d204f4a3e7204258fd6906af4a57c93.tar.xz
           frr-ffa2c8986d204f4a3e7204258fd6906af4a57c93.zip
*: remove THREAD_ON macros, add nullity check
The way thread.c is written, a caller who wishes to be able to cancel a
thread or avoid scheduling it twice must keep a reference to the thread.
Typically this is done with a long-lived pointer whose value is checked
against NULL to determine whether the thread is currently scheduled. The
check-and-schedule idiom is so common that several wrapper macros in
thread.h existed solely to provide it.
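For reference, the removed wrappers looked roughly like this (an approximate
sketch for illustration; the exact macro text removed from thread.h is not
reproduced here):

    /* Approximate sketch, not the exact removed macro: schedule the timer
     * only if the caller's reference is currently NULL. */
    #define THREAD_TIMER_MSEC_ON(master, thread, func, arg, time)        \
      do {                                                                \
        if (!(thread))                                                    \
          (thread) = thread_add_timer_msec(master, func, arg, time);     \
      } while (0)

The argument order here matches how the macro is invoked in the diff below
(master, thread reference, callback, callback argument, interval in
milliseconds).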
This patch removes those macros and adds a new parameter to all
thread_add_* functions: a pointer to the struct thread * in which to
store the result of the scheduling call. If the pointer passed is
non-null, the thread will only be scheduled if the struct thread * it
points to is null. This makes the check-and-schedule behavior consistent
across callers.
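As a rough, self-contained illustration of that nullity check (the struct
and toy_thread_add below are hypothetical stand-ins, not the thread.c
implementation):

    #include <stdio.h>
    #include <stdlib.h>

    struct thread { int id; };

    /* Hypothetical stand-in for a thread_add_* call: schedule only if the
     * caller-held reference (*t_ptr) is still NULL, and store the new
     * reference back through t_ptr. */
    static struct thread *toy_thread_add(struct thread **t_ptr)
    {
      static int next_id;
      struct thread *t;

      if (t_ptr && *t_ptr)
        return *t_ptr;          /* already scheduled: do nothing */

      t = calloc(1, sizeof(*t));
      t->id = ++next_id;
      if (t_ptr)
        *t_ptr = t;             /* caller keeps the reference */
      return t;
    }

    int main(void)
    {
      struct thread *t = NULL;

      toy_thread_add(&t);       /* schedules: t was NULL */
      toy_thread_add(&t);       /* no-op: t already holds a reference */
      printf("scheduled id = %d\n", t->id);   /* prints 1, not 2 */
      free(t);
      return 0;
    }

The second call is a no-op because t already holds the reference stored by
the first call, which is exactly the double-scheduling the old idiom guarded
against.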
A Coccinelle spatch has been used to transform code of the form:

    if (t == NULL)
      t = thread_add_* (...)

to the form:

    thread_add_* (..., &t)

The THREAD_ON macros have also been transformed to the underlying
thread.c calls.
Signed-off-by: Quentin Young <qlyoung@cumulusnetworks.com>
Diffstat (limited to 'lib/spf_backoff.c')
-rw-r--r--   lib/spf_backoff.c   16
1 file changed, 7 insertions, 9 deletions
diff --git a/lib/spf_backoff.c b/lib/spf_backoff.c
index e923f232b..9a9af8db2 100644
--- a/lib/spf_backoff.c
+++ b/lib/spf_backoff.c
@@ -169,21 +169,19 @@ long spf_backoff_schedule(struct spf_backoff *backoff)
     {
     case SPF_BACKOFF_QUIET:
       backoff->state = SPF_BACKOFF_SHORT_WAIT;
-      THREAD_TIMER_MSEC_ON(backoff->m, backoff->t_timetolearn,
-                           spf_backoff_timetolearn_elapsed, backoff,
-                           backoff->timetolearn);
-      THREAD_TIMER_MSEC_ON(backoff->m, backoff->t_holddown,
-                           spf_backoff_holddown_elapsed, backoff,
-                           backoff->holddown);
+      thread_add_timer_msec(backoff->m, spf_backoff_timetolearn_elapsed,
+                            backoff, backoff->timetolearn,
+                            &backoff->t_timetolearn);
+      thread_add_timer_msec(backoff->m, spf_backoff_holddown_elapsed, backoff,
+                            backoff->holddown, &backoff->t_holddown);
       backoff->first_event_time = now;
       rv = backoff->init_delay;
       break;
     case SPF_BACKOFF_SHORT_WAIT:
     case SPF_BACKOFF_LONG_WAIT:
       THREAD_TIMER_OFF(backoff->t_holddown);
-      THREAD_TIMER_MSEC_ON(backoff->m, backoff->t_holddown,
-                           spf_backoff_holddown_elapsed, backoff,
-                           backoff->holddown);
+      thread_add_timer_msec(backoff->m, spf_backoff_holddown_elapsed, backoff,
+                            backoff->holddown, &backoff->t_holddown);
       if (backoff->state == SPF_BACKOFF_SHORT_WAIT)
         rv = backoff->short_delay;
       else