author      Jarek Poplawski <jarkao2@o2.pl>             2007-06-29 07:11:47 +0200
committer   David S. Miller <davem@davemloft.net>       2007-06-29 07:11:47 +0200
commit      17200811cf539b9107a99a39bf71ba3567966285 (patch)
tree        11763c9163f8d521acc74c9b89faa0210860b2f1 /net/core
parent      Merge master.kernel.org:/pub/scm/linux/kernel/git/vxy/lksctp-dev (diff)
download    linux-17200811cf539b9107a99a39bf71ba3567966285.tar.xz
            linux-17200811cf539b9107a99a39bf71ba3567966285.zip
[NETPOLL] netconsole: fix soft lockup when removing module
#1
Up to and including kernel 2.6.21, cancel_rearming_delayed_work()
required that the work function always (unconditionally) rearm itself
with a delay > 0, otherwise it would loop endlessly. This patch
replaces that call with cancel_delayed_work(). Later kernel versions
don't have this requirement, so there the change is only for
uniformity.
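As an illustration of the old requirement, here is a minimal sketch
(not the netpoll code itself; tx_pending() is a hypothetical predicate)
of a work function that rearms only conditionally, which is exactly the
pattern the <= 2.6.21 cancel_rearming_delayed_work() could not handle:

	#include <linux/workqueue.h>

	static void tx_work_fn(struct work_struct *work);
	static DECLARE_DELAYED_WORK(tx_work, tx_work_fn);

	static void tx_work_fn(struct work_struct *work)
	{
		/* rearm only while there is something left to send */
		if (tx_pending())	/* hypothetical predicate */
			schedule_delayed_work(&tx_work, HZ / 10);
		/*
		 * If tx_pending() is false no timer is rearmed, and on
		 * <= 2.6.21 cancel_rearming_delayed_work() would spin
		 * forever waiting for a pending timer to cancel.
		 */
	}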
#2
After the timer is deleted in cancel_[rearming_]delayed_work(), a last
skb could remain queued in npinfo->txq, causing a memory leak after
kfree(npinfo).
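The window can be pictured roughly as follows (a simplified sketch of
the assumed ordering, not verbatim kernel code); the drain at the end
is essentially what the second hunk of the diff below adds:

	/*
	 * Assumed ordering, for illustration only:
	 *
	 *   netpoll_cleanup()                 queue_process() (last run)
	 *     skb_queue_purge(&npinfo->txq);
	 *                                       xmit of the skb dequeued
	 *                                       earlier fails, so it calls
	 *                                       skb_queue_head(&npinfo->txq, skb);
	 *     cancel_delayed_work(...);         returns without rearming
	 *     flush_scheduled_work();
	 *
	 * => one skb may still sit in npinfo->txq here; free it before
	 *    freeing npinfo:
	 */
	if (!skb_queue_empty(&npinfo->txq))
		kfree_skb(__skb_dequeue(&npinfo->txq));
	kfree(npinfo);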
Initial patch & testing by: Jason Wessel <jason.wessel@windriver.com>
Signed-off-by: Jarek Poplawski <jarkao2@o2.pl>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'net/core')
-rw-r--r--   net/core/netpoll.c   11
1 file changed, 9 insertions, 2 deletions
diff --git a/net/core/netpoll.c b/net/core/netpoll.c
index f8e74e511ce6..cf40ff91ac01 100644
--- a/net/core/netpoll.c
+++ b/net/core/netpoll.c
@@ -72,7 +72,8 @@ static void queue_process(struct work_struct *work)
 			netif_tx_unlock(dev);
 			local_irq_restore(flags);
 
-			schedule_delayed_work(&npinfo->tx_work, HZ/10);
+			if (atomic_read(&npinfo->refcnt))
+				schedule_delayed_work(&npinfo->tx_work, HZ/10);
 			return;
 		}
 		netif_tx_unlock(dev);
@@ -785,9 +786,15 @@ void netpoll_cleanup(struct netpoll *np)
 		if (atomic_dec_and_test(&npinfo->refcnt)) {
 			skb_queue_purge(&npinfo->arp_tx);
 			skb_queue_purge(&npinfo->txq);
-			cancel_rearming_delayed_work(&npinfo->tx_work);
+			cancel_delayed_work(&npinfo->tx_work);
 			flush_scheduled_work();
 
+			/* clean after last, unfinished work */
+			if (!skb_queue_empty(&npinfo->txq)) {
+				struct sk_buff *skb;
+				skb = __skb_dequeue(&npinfo->txq);
+				kfree_skb(skb);
+			}
 			kfree(npinfo);
 		}
 	}