author     Olaf Kirch <olaf.kirch@oracle.com>       2007-06-24 08:11:52 +0200
committer  David S. Miller <davem@davemloft.net>    2007-06-24 08:11:52 +0200
commit     5b5a60da281c767196427ce8144deae6ec46b389 (patch)
tree       02ac728c14eb8fa0bd49ac8ede6f15e760ddc3f3 /net/core
parent     [NET]: Re-enable irqs before pushing pending DMA requests (diff)
[NET]: Make skb_seq_read unmap the last fragment
Having walked through the entire skbuff, skb_seq_read would leave the
last fragment mapped. As a consequence, an unwary caller would leak
kmaps and proceed with preempt_count off by one. The only (rather
non-intuitive) workaround was to call skb_abort_seq_read even after a
complete walk.
This patch makes sure skb_seq_read always unmaps frag_data after
having cycled through the skb's paged part.
Signed-off-by: Olaf Kirch <olaf.kirch@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'net/core')
-rw-r--r--  net/core/skbuff.c  5 +++++
1 file changed, 5 insertions(+), 0 deletions(-)
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 8d43ae6979e5..27cfe5fe4bb9 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -1706,6 +1706,11 @@ next_skb:
 		st->stepped_offset += frag->size;
 	}
 
+	if (st->frag_data) {
+		kunmap_skb_frag(st->frag_data);
+		st->frag_data = NULL;
+	}
+
 	if (st->cur_skb->next) {
 		st->cur_skb = st->cur_skb->next;
 		st->frag_idx = 0;