path: root/net/ipv4/tcp_ipv4.c
Commit message (Author, Date, Files, Lines)
* [IPV4] TCPMD5: Use memmove() instead of memcpy() because we have overlaps. (YOSHIFUJI Hideaki, 2007-11-21, 1 file, -4/+4)
  Signed-off-by: YOSHIFUJI Hideaki <yoshfuji@linux-ipv6.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>
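  For context: memcpy() has undefined behaviour when source and destination
  overlap, which is exactly what happens when entries are shifted down
  within the same key array after a deletion. A minimal user-space
  illustration of the pattern (not the kernel hunk itself):

      #include <stdio.h>
      #include <string.h>

      int main(void)
      {
              int keys[5] = { 10, 20, 30, 40, 50 };

              /* Delete keys[1]: slots 2..4 slide into 1..3. Source and
               * destination overlap, so memmove() is required. */
              memmove(&keys[1], &keys[2], 3 * sizeof(keys[0]));

              for (int i = 0; i < 4; i++)
                      printf("%d ", keys[i]);   /* prints: 10 30 40 50 */
              printf("\n");
              return 0;
      }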
* [IPV4] TCPMD5: Omit redundant NULL check for kfree() argument. (YOSHIFUJI Hideaki, 2007-11-21, 1 file, -2/+1)
  Signed-off-by: YOSHIFUJI Hideaki <yoshfuji@linux-ipv6.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>
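  Background: kfree(NULL) is defined to be a no-op, so guarding the call is
  redundant. A sketch of the before/after pattern, reusing the keys4 field
  named elsewhere in this log (illustrative, not the exact hunk):

      /* before: redundant NULL check */
      if (tp->md5sig_info->keys4)
              kfree(tp->md5sig_info->keys4);

      /* after: kfree() already tolerates a NULL argument */
      kfree(tp->md5sig_info->keys4);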
* [INET]: Remove per bucket rwlock in tcp/dccp ehash table. (Eric Dumazet, 2007-11-07, 1 file, -5/+6)
  As done two years ago on the IP route cache table (commit
  22c047ccbc68fa8f3fa57f0e8f906479a062c426), we can avoid using one lock
  per hash bucket for the huge TCP/DCCP hash tables.

  On a typical x86_64 platform, this saves about 2MB or 4MB of RAM, for
  little performance difference. (We hit a different cache line for the
  rwlock, but then the bucket cache line has a better sharing factor
  among CPUs, since we dirty it less often.) For netstat or ss commands
  that want a full scan of the hash table, we perform fewer memory
  accesses.

  Using a 'small' table of hashed rwlocks should be more than enough to
  provide correct SMP concurrency between different buckets, without
  using too much memory. Sizing of this table depends on
  num_possible_cpus() and various CONFIG settings.

  This patch provides some locking abstraction that may ease future work
  using a different model for the TCP/DCCP table.

  Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
  Acked-by: Arnaldo Carvalho de Melo <acme@redhat.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
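  A sketch of the hashed-lock idea the commit describes, with illustrative
  names and a fixed size (the real table is sized from num_possible_cpus()
  and CONFIG settings, and the locks are initialised at boot):

      #define EHASH_LOCK_COUNT 256   /* power of two; real sizing is dynamic */

      static rwlock_t ehash_locks[EHASH_LOCK_COUNT];

      static inline rwlock_t *ehash_lockp(unsigned int hash)
      {
              /* Many buckets share one lock; correctness only requires
               * that a given bucket always maps to the same lock. */
              return &ehash_locks[hash & (EHASH_LOCK_COUNT - 1)];
      }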
* [IPV4]: Use the {DEFINE|REF}_PROTO_INUSE infrastructure (Eric Dumazet, 2007-11-07, 1 file, -0/+3)
  Trivial patch to make the "tcp, udp, udplite, raw" protocols use the
  fast "inuse sockets" infrastructure. Each protocol then uses a static
  percpu variable instead of a dynamic one, saving some RAM and some CPU
  cycles.

  Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [SG] Get rid of __sg_mark_end() (Jens Axboe, 2007-11-02, 1 file, -1/+1)
  sg_mark_end() overwrites the page_link information, but all users want
  __sg_mark_end() behaviour where we just set the end bit. That is the
  most natural way to use the sg list, since you'll fill it in and then
  mark the end point.

  So change sg_mark_end() to only set the termination bit. Add an
  sg_magic debug check as well, and clear a chain pointer if it is set.

  Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
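  After this change the helper reads roughly as follows (paraphrased from
  the commit description; the bit values are those of the scatterlist
  page_link encoding of this era):

      static inline void sg_mark_end(struct scatterlist *sg)
      {
      #ifdef CONFIG_DEBUG_SG
              BUG_ON(sg->sg_magic != SG_MAGIC);
      #endif
              sg->page_link |= 0x02;    /* set the termination bit */
              sg->page_link &= ~0x01;   /* clear any chain bit     */
      }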
* [NET]: Fix incorrect sg_mark_end() calls. (David S. Miller, 2007-10-31, 1 file, -1/+1)
  This fixes scatterlist corruptions added by commit
  68e3f5dd4db62619fdbe520d36c9ebf62e672256 ("[CRYPTO] users: Fix up
  scatterlist conversion errors").

  The issue is that the code calls sg_mark_end() which clobbers the
  sg_page() pointer of the final scatterlist entry.

  The first part of the fix makes skb_to_sgvec() do __sg_mark_end().
  After considering all skb_to_sgvec() call sites, the most correct
  solution is to call __sg_mark_end() in skb_to_sgvec() since that is
  what all of the callers would end up doing anyway.

  I suspect this might have fixed some problems in virtio_net, which is
  the sole non-crypto user of skb_to_sgvec(). Other similar
  sg_mark_end() cases were converted over to __sg_mark_end() as well.

  Arguably sg_mark_end() is a poorly named function, because it doesn't
  just "mark": it clears out the page pointer as a side effect, which is
  what led to these bugs in the first place. The one remaining plain
  sg_mark_end() call is in scsi_alloc_sgtable(), and arguably it could
  be converted to __sg_mark_end() if only so that we can delete this
  confusing interface from linux/scatterlist.h.

  Signed-off-by: David S. Miller <davem@davemloft.net>
* [TCP] MD5: Remove some more unnecessary casting. (Matthias M. Dellweg, 2007-10-30, 1 file, -5/+5)
  While reviewing the tcp_md5-related code further, I came across another
  two of these casts which you probably missed. I don't actually think
  that they pose a problem by now, but as you said, we should remove
  them.

  Signed-off-by: Matthias M. Dellweg <2500@gmx.de>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [TCP]: Fix scatterlist handling in MD5 signature support. (David S. Miller, 2007-10-26, 1 file, -0/+5)
  Use sg_init_table() and sg_mark_end() as needed.

  Signed-off-by: David S. Miller <davem@davemloft.net>
* [INET]: local port range robustness (Stephen Hemminger, 2007-10-11, 1 file, -1/+0)
  Expansion of an original idea from Denis V. Lunev <den@openvz.org>:
  add robustness and locking to the local_port_range sysctl.

  1. Enforce that low < high when setting.
  2. Use a seqlock to ensure atomic update.

  The locking might seem like overkill, but there are cases where a
  sysadmin might want to change the value in the middle of a DoS attack.

  Signed-off-by: Stephen Hemminger <shemminger@linux-foundation.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>
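  A sketch of the seqlock pattern described above (names illustrative):
  the writer updates both bounds atomically, and readers retry if a write
  raced with them:

      static DEFINE_SEQLOCK(port_range_lock);
      static int port_range[2] = { 32768, 61000 };

      static void set_local_port_range(int low, int high)
      {
              write_seqlock(&port_range_lock);
              port_range[0] = low;
              port_range[1] = high;
              write_sequnlock(&port_range_lock);
      }

      static void get_local_port_range(int *low, int *high)
      {
              unsigned int seq;

              do {
                      seq = read_seqbegin(&port_range_lock);
                      *low  = port_range[0];
                      *high = port_range[1];
              } while (read_seqretry(&port_range_lock, seq));
      }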
* [NET]: Make /proc/net per network namespace (Eric W. Biederman, 2007-10-11, 1 file, -2/+3)
  This patch makes /proc/net per network namespace. It modifies the
  global variables proc_net and proc_net_stat to be per network
  namespace. The proc_net file helpers are modified to take a network
  namespace argument, and all of their callers are fixed to pass
  &init_net for that argument. This ensures that all of the /proc/net
  files are only visible and usable in the initial network namespace
  until the code behind them has been updated to handle multiple network
  namespaces.

  Making /proc/net per namespace is necessary, as at least some files in
  /proc/net depend upon the set of network devices, which is per network
  namespace, and even more files in /proc/net have contents that are
  relevant to a single network namespace.

  Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [TCP]: Fix MD5 signature handling on big-endian. (David S. Miller, 2007-09-29, 1 file, -10/+9)
  Based upon a report and initial patch by Peter Lieven.

  tcp4_md5sig_key and tcp6_md5sig_key need to start with the exact same
  members as tcp_md5sig_key, because they are both cast to that type by
  tcp_v{4,6}_md5_do_lookup(). Unfortunately tcp{4,6}_md5sig_key use a
  u16 for the key length instead of a u8, which is what tcp_md5sig_key
  uses. This just so happens to work by accident on little-endian, but
  on big-endian it doesn't.

  Instead of casting, just place tcp_md5sig_key as the first member of
  the address-family-specific structures, adjust the access sites, and
  kill off the ugly casts.

  Signed-off-by: David S. Miller <davem@davemloft.net>
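  The layout fix, sketched from the commit description (member names
  abbreviated): embedding the generic key as the first member makes the
  conversion a well-defined pointer to the first field rather than a cast
  between incompatible layouts:

      struct tcp_md5sig_key {
              u8 *key;
              u8  keylen;     /* u8 here; the old u16 copy broke big-endian */
      };

      struct tcp4_md5sig_key {
              struct tcp_md5sig_key base;   /* must be first */
              __be32                addr;
      };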
* [TCP]: Invoke tcp_sendmsg() directly, do not use inet_sendmsg(). (David S. Miller, 2007-08-03, 1 file, -1/+0)
  As discovered by Evgeniy Polyakov, if we try to sendmsg after a
  connection reset, we can do incredibly stupid things. The core issue
  is that inet_sendmsg() tries to autobind the socket, but we should
  never do that for TCP. Instead we should just go straight into TCP's
  sendmsg() code, which will do all of the necessary state and pending
  socket error checks.

  TCP's sendpage already vectors directly to tcp_sendpage(), so this
  merely brings sendmsg() in line with that.

  Signed-off-by: David S. Miller <davem@davemloft.net>
* [TCPv4]: Improve BH latency in /proc/net/tcp (Herbert Xu, 2007-07-11, 1 file, -14/+5)
  Currently the code for /proc/net/tcp disables BH while iterating over
  the entire established hash table. Even though we call
  cond_resched_softirq for each entry, we still won't process softirqs
  as regularly as we otherwise would, which results in poor performance
  when the system is loaded near capacity.

  This anomaly comes from the 2.4 code, where this was all in a single
  function and the local_bh_disable might have made sense as a small
  optimisation.

  The cost of each local_bh_disable is so small when compared against
  the increased latency of keeping it disabled over a large but mostly
  empty TCP established hash table that we should just move it to the
  individual read_lock/read_unlock calls, as we do in inet_diag.

  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: David S. Miller <davem@davemloft.net>
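  The shape of the change, sketched (illustrative, not the exact hunk):
  softirqs stay enabled between buckets instead of being blocked across
  the whole table walk:

      /* before: BH disabled for the entire walk */
      local_bh_disable();
      for (bucket = 0; bucket < size; bucket++) {
              read_lock(&head[bucket].lock);
              /* ... walk chain ... */
              read_unlock(&head[bucket].lock);
      }
      local_bh_enable();

      /* after: BH disabled only while a bucket lock is held */
      for (bucket = 0; bucket < size; bucket++) {
              read_lock_bh(&head[bucket].lock);
              /* ... walk chain ... */
              read_unlock_bh(&head[bucket].lock);
      }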
* [TCP]: Disable TSO if MD5SIG is enabled. (David S. Miller, 2007-06-12, 1 file, -1/+2)
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [TCP]: Honour sk_bound_dev_if in tcp_v4_send_ack (Patrick McHardy, 2007-06-07, 1 file, -0/+2)
  A time_wait socket inherits sk_bound_dev_if from the original socket,
  but it is not used when sending ACK packets using ip_send_reply. Fix
  by passing the oif to ip_send_reply in struct ip_reply_arg and using
  it for output routing.

  Signed-off-by: Patrick McHardy <kaber@trash.net>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [IPV4]: Fix "ipOutNoRoutes" counter error for TCP and UDPWei Dong2007-06-041-1/+4
| | | | | Signed-off-by: Wei Dong <weidong@cn.fujitsu.com> Signed-off-by: David S. Miller <davem@davemloft.net>
* [NET]: Treat CHECKSUM_PARTIAL as CHECKSUM_UNNECESSARY (Herbert Xu, 2007-04-26, 1 file, -2/+1)
  When a transmitted packet is looped back directly, CHECKSUM_PARTIAL
  maps to the semantics of CHECKSUM_UNNECESSARY. Therefore we should
  treat it as such in the stack.

  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [NET]: Use csum_start offset instead of skb_transport_header (Herbert Xu, 2007-04-26, 1 file, -0/+2)
  The skb transport pointer is currently used to specify the start of
  the checksum region for transmit checksum offload. Unfortunately, the
  same pointer is also used during receive-side processing. This creates
  a problem when we want to retransmit a received packet with partial
  checksums, since the skb transport pointer would be overwritten.

  This patch solves the problem by creating a new 16-bit csum_start
  offset value to replace the skb transport header for the purpose of
  checksums. This offset is calculated from skb->head so that it does
  not have to change when skb->data changes.

  No extra space is required, since csum_offset itself fits within a
  16-bit word, so we can use the other 16 bits for csum_start.

  For backwards compatibility, just before we push a packet with partial
  checksums off into the device driver, we set the skb transport header
  to what it would have been under the old scheme.

  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: David S. Miller <davem@davemloft.net>
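  The offset arithmetic this introduces, sketched from the commit text:

      /* The checksum region starts at a fixed offset from skb->head, so
       * it survives skb->data moving around (push/pull). */
      unsigned char *start = skb->head + skb->csum_start;

      /* The computed checksum is stored csum_offset bytes past the start
       * of the region. */
      __sum16 *store = (__sum16 *)(start + skb->csum_offset);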
* [TCP]: tcp_memory_pressure and tcp_socket are __read_mostly candidates (Eric Dumazet, 2007-04-26, 1 file, -1/+1)
  tcp_memory_pressure and tcp_socket currently share a cache line with
  tcp_memory_allocated and tcp_sockets_allocated (a very hot cache
  line). It makes sense to declare these variables as __read_mostly, to
  avoid false sharing on SMP.

  ffffffff8081d9c0 B tcp_orphan_count
  ffffffff8081d9c4 B tcp_memory_allocated
  ffffffff8081d9c8 B tcp_sockets_allocated
  ffffffff8081d9cc B tcp_memory_pressure
  ffffffff8081d9d0 b tcp_md5sig_users
  ffffffff8081d9d8 b tcp_md5sig_pool
  ffffffff8081d9e0 b warntime.31570
  ffffffff8081d9e8 b tcp_socket

  Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [SK_BUFF]: Introduce tcp_hdr(), remove skb->h.th (Arnaldo Carvalho de Melo, 2007-04-26, 1 file, -16/+16)
  Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [TCP]: Introduce tcp_hdrlen() and tcp_optlen() (Arnaldo Carvalho de Melo, 2007-04-26, 1 file, -1/+1)
  The ip_hdrlen() buddy, created to reduce the number of skb->h.th->
  uses and to avoid the longer, open-coded equivalent. Ditched a no-op
  in bnx2 in the process.

  I wonder if we should have a BUG_ON(skb->h.th->doff < 5) in
  tcp_optlen()...

  Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
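  The helpers are essentially the following (doff counts the header length
  in 32-bit words, of which 5 are the fixed header):

      static inline unsigned int tcp_hdrlen(const struct sk_buff *skb)
      {
              return tcp_hdr(skb)->doff * 4;
      }

      static inline unsigned int tcp_optlen(const struct sk_buff *skb)
      {
              return (tcp_hdr(skb)->doff - 5) * 4;
      }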
* [SK_BUFF]: Introduce icmp_hdr(), remove skb->h.icmph (Arnaldo Carvalho de Melo, 2007-04-26, 1 file, -2/+2)
  Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [SK_BUFF]: Introduce ip_hdr(), remove skb->nh.iph (Arnaldo Carvalho de Melo, 2007-04-26, 1 file, -32/+32)
  Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
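  The accessor is essentially a typed wrapper over the network-header
  pointer; call sites change from skb->nh.iph->saddr to ip_hdr(skb)->saddr:

      static inline struct iphdr *ip_hdr(const struct sk_buff *skb)
      {
              return (struct iphdr *)skb_network_header(skb);
      }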
* [TCP]: Abstract out all write queue operations. (David S. Miller, 2007-04-26, 1 file, -1/+1)
  This allows the write queue implementation to be changed, for example,
  to one which allows fast interval searching.

  Signed-off-by: David S. Miller <davem@davemloft.net>
* [NET]: Convert xtime.tv_sec to get_seconds() (James Morris, 2007-04-26, 1 file, -5/+5)
  Where appropriate, convert references to xtime.tv_sec to the
  get_seconds() helper function.

  Signed-off-by: James Morris <jmorris@namei.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [TCP]: struct *sock argument renamed: sp -> sk (Ilpo Järvinen, 2007-04-26, 1 file, -11/+11)
  In general, TCP code uses "sk" for struct sock pointers.

  Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [NET] IPV4: Fix whitespace errors. (YOSHIFUJI Hideaki, 2007-02-11, 1 file, -10/+10)
  Signed-off-by: YOSHIFUJI Hideaki <yoshfuji@linux-ipv6.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [NET]: change layout of ehash table (Eric Dumazet, 2007-02-08, 1 file, -2/+2)
  The ehash table layout is currently this one: the first half of the
  table is used by sockets not in TIME_WAIT state, and the second half
  is used by sockets in TIME_WAIT state. This is non-optimal because,
  for a given hash or socket, the two chain heads are located in
  separate cache lines. Moreover, the locks of the second half are
  never used.

  If, instead of this halving, we use two list heads in
  inet_ehash_bucket instead of only one, we can probably avoid one cache
  miss and reduce RAM usage, particularly if sizeof(rwlock_t) is big
  (various CONFIG_DEBUG_SPINLOCK, CONFIG_DEBUG_LOCK_ALLOC settings). So
  we still halve the table, but we keep together related chains to
  speed up lookups and socket state changes.

  In this patch I did not try to align struct inet_ehash_bucket, but a
  future patch could try to make this structure have a convenient size
  (a power of two or a multiple of L1_CACHE_SIZE). I guess the rwlock
  will just vanish as soon as RCU is plugged into ehash :), so maybe we
  don't need to scratch our heads to align the bucket...

  Note: in case struct inet_ehash_bucket is not a power of two, we could
  probably change alloc_large_system_hash() (in case it uses
  __get_free_pages()) to free the unused space. It currently allocates a
  big zone, but the last quarter of it could be freed. Again, this
  should be a temporary 'problem'.

  Patch tested on IPv4 TCP only, but should be OK for IPv6 and DCCP.

  Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
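  The resulting bucket, sketched from the description (twchain holding the
  TIME_WAIT sockets; exact field names may differ from the final commit):

      struct inet_ehash_bucket {
              rwlock_t          lock;
              struct hlist_head chain;     /* established sockets */
              struct hlist_head twchain;   /* TIME_WAIT sockets, same hash */
      };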
* [IPV4/IPV6]: Always wait for IPSEC SA resolution in socket contexts. (David S. Miller, 2007-02-08, 1 file, -1/+1)
  Do this even for non-blocking sockets. This avoids the silly -EAGAIN
  that applications can see now, even for non-blocking sockets, in some
  cases (e.g. connect()).

  With help from Venkat Tekkirala.

  Signed-off-by: David S. Miller <davem@davemloft.net>
* [TCP]: remove tcp header from tcp_v4_check (take #2) (Frederik Deweerdt, 2007-02-08, 1 file, -6/+6)
  The tcphdr struct passed to tcp_v4_check is not used; the following
  patch removes it from the parameter list. This adds the netfilter
  modifications missing from the patch I sent for rc3-mm1.

  Signed-off-by: Frederik Deweerdt <frederik.deweerdt@gmail.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
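  A sketch of the resulting signature: the helper needs only the length,
  the addresses and the partial checksum:

      static inline __sum16 tcp_v4_check(int len, __be32 saddr, __be32 daddr,
                                         __wsum base)
      {
              return csum_tcpudp_magic(saddr, daddr, len, IPPROTO_TCP, base);
      }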
* [TCP]: Fix iov_len calculation in tcp_v4_send_ack(). (Craig Schlenter, 2007-01-09, 1 file, -1/+1)
  This fixes the FTP stalls present in current kernels. All credit goes
  to Komuro <komurojun-mbn@nifty.com> for tracking this down. The patch
  is untested, but it looks *cough* obviously correct.

  Signed-off-by: Craig Schlenter <craig@codefountain.com>
  Signed-off-by: YOSHIFUJI Hideaki <yoshfuji@linux-ipv6.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [TCP]: Trivial fix to message in tcp_v4_inbound_md5_hash (Leigh Brown, 2006-12-18, 1 file, -1/+1)
  The message logged in tcp_v4_inbound_md5_hash when the hash was
  expected but not found was reversed.

  Signed-off-by: Leigh Brown <leigh@solinno.co.uk>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [TCP]: Fix oops caused by tcp_v4_md5_do_del (Leigh Brown, 2006-12-18, 1 file, -0/+1)
  md5sig_info.alloced4 must be set to zero when freeing keys4; otherwise
  it will not be alloc'd again when another key is added to the same
  socket by tcp_v4_md5_do_add.

  Signed-off-by: Leigh Brown <leigh@solinno.co.uk>
  Signed-off-by: David S. Miller <davem@davemloft.net>
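  The fix pattern, sketched from the message (field names as given above):

      kfree(tp->md5sig_info->keys4);
      tp->md5sig_info->keys4 = NULL;
      tp->md5sig_info->alloced4 = 0;  /* the missing reset: without it,
                                       * tcp_v4_md5_do_add assumes the
                                       * storage is still allocated */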
* [TCP]: Fix warnings with TCP_MD5SIG disabled. (Andrew Morton, 2006-12-03, 1 file, -4/+4)
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [NET]: Possible cleanups. (Adrian Bunk, 2006-12-03, 1 file, -4/+4)
  This patch contains the following possible cleanups:

  - make the following needlessly global functions static:
    - ipv4/tcp.c: __tcp_alloc_md5sig_pool()
    - ipv4/tcp_ipv4.c: tcp_v4_reqsk_md5_lookup()
    - ipv4/udplite.c: udplite_rcv()
    - ipv4/udplite.c: udplite_err()
  - make the following needlessly global structs static:
    - ipv4/tcp_ipv4.c: tcp_request_sock_ipv4_ops
    - ipv4/tcp_ipv4.c: tcp_sock_ipv4_specific
    - ipv6/tcp_ipv6.c: tcp_request_sock_ipv6_ops
  - net/ipv{4,6}/udplite.c: remove inlines from static functions (gcc
    should know best when to inline them)

  Signed-off-by: Adrian Bunk <bunk@stusta.de>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [TCP] MD5SIG: Kill CONFIG_TCP_MD5SIG_DEBUG. (David S. Miller, 2006-12-03, 1 file, -40/+1)
  It just obfuscates the code and adds limited value. And as Adrian Bunk
  noticed, it lacked Kconfig help text too, so just kill it.

  Signed-off-by: David S. Miller <davem@davemloft.net>
* [NET]: Split skb->csum (Al Viro, 2006-12-03, 1 file, -2/+2)
  ... into an anonymous union of __wsum and __u32 (csum and csum_offset
  respectively).

  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  Signed-off-by: David S. Miller <davem@davemloft.net>
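  Per the summary, the field becomes (inside struct sk_buff):

      union {
              __wsum  csum;
              __u32   csum_offset;
      };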
* [NET]: Fix assorted misannotations (from md5 and udplite merges). (Al Viro, 2006-12-03, 1 file, -1/+1)
  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [TCP_IPV4]: Use kmemdup where appropriate (Arnaldo Carvalho de Melo, 2006-12-03, 1 file, -24/+25)
  Also use a variable to avoid the longish tp->md5sig_info-> use in
  tcp_v4_md5_do_add.

  Code diff stats:

  [acme@newtoy net-2.6.20]$ codiff /tmp/tcp_ipv4.o.before /tmp/tcp_ipv4.o.after
  /pub/scm/linux/kernel/git/acme/net-2.6.20/net/ipv4/tcp_ipv4.c:
    tcp_v4_md5_do_add     | -62
    tcp_v4_syn_recv_sock  | -32
    tcp_v4_parse_md5_keys | -86
  3 functions changed, 180 bytes removed
  [acme@newtoy net-2.6.20]$

  Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>
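  The kmemdup() conversion pattern (a sketch, not the exact hunk):

      /* before: open-coded allocate-and-copy */
      newkey = kmalloc(newkeylen, GFP_ATOMIC);
      if (!newkey)
              return -ENOMEM;
      memcpy(newkey, key, newkeylen);

      /* after: one call, same semantics */
      newkey = kmemdup(key, newkeylen, GFP_ATOMIC);
      if (!newkey)
              return -ENOMEM;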
* [TCP_IPV4]: CodingStyle cleanups, no code change (Arnaldo Carvalho de Melo, 2006-12-03, 1 file, -68/+75)
  Mostly related to the recent CONFIG_TCP_MD5SIG merge.

  Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>
* [NET]: Annotate __skb_checksum_complete() and friends. (Al Viro, 2006-12-03, 1 file, -1/+1)
  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [IPV6]: Assorted trivial endianness annotations. (Al Viro, 2006-12-03, 1 file, -5/+5)
  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [TCP]: MD5 Signature Option (RFC2385) support. (YOSHIFUJI Hideaki, 2006-12-03, 1 file, -30/+643)
  Based on an implementation by Rick Payne.

  Signed-off-by: YOSHIFUJI Hideaki <yoshfuji@linux-ipv6.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>
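  From user space the feature is driven through the TCP_MD5SIG socket
  option. A minimal sketch (names illustrative; depending on the libc,
  struct tcp_md5sig may come from linux/tcp.h instead; both peers must
  install the same key for the connection to work):

      #include <string.h>
      #include <sys/socket.h>
      #include <netinet/in.h>
      #include <netinet/tcp.h>

      static int set_tcp_md5(int fd, const struct sockaddr_in *peer,
                             const char *secret)
      {
              struct tcp_md5sig md5;

              memset(&md5, 0, sizeof(md5));
              memcpy(&md5.tcpm_addr, peer, sizeof(*peer));
              md5.tcpm_keylen = strlen(secret);
              memcpy(md5.tcpm_key, secret, md5.tcpm_keylen);

              /* Segments failing the MD5 check are silently dropped. */
              return setsockopt(fd, IPPROTO_TCP, TCP_MD5SIG,
                                &md5, sizeof(md5));
      }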
* [TCP/DCCP]: Introduce net_xmit_eval (Gerrit Renker, 2006-12-03, 1 file, -2/+1)
  Throughout the TCP/DCCP (and tunnelling) code, it often happens that
  the return code of a transmit function needs to be tested against
  NET_XMIT_CN, which is a value that does not indicate a strict error
  condition. This patch uses a macro for these recurring situations,
  which is consistent with the already existing macro net_xmit_errno,
  saving on duplicated code.

  Signed-off-by: Gerrit Renker <gerrit@erg.abdn.ac.uk>
  Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>
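  The macro amounts to treating congestion notification as success:

      /* NET_XMIT_CN means "queued, but congestion experienced"; it is
       * not a hard failure, so map it to 0 and pass other codes through. */
      #define net_xmit_eval(e) ((e) == NET_XMIT_CN ? 0 : (e))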
* [TCP]: Remove dead code in init_sequence (Gerrit Renker, 2006-12-03, 1 file, -2/+2)
  This removes two redundancies:

  1) The test (skb->protocol == htons(ETH_P_IPV6)) in
     tcp_v6_init_sequence() is always true, because:
     * tcp_v6_conn_request() is the only function calling this one, and
     * tcp_v6_conn_request() redirects all skbs with the ETH_P_IP
       protocol to tcp_v4_conn_request() (cf. the top of
       tcp_v6_conn_request()).

  2) The first argument, `struct sock *sk', of
     tcp_v{4,6}_init_sequence() is never used.

  Signed-off-by: Gerrit Renker <gerrit@erg.abdn.ac.uk>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [NET]: Size listen hash tables using backlog hint (Eric Dumazet, 2006-12-03, 1 file, -3/+3)
  We currently allocate a fixed-size (TCP_SYNQ_HSIZE = 512 slots) hash
  table for each LISTEN socket, regardless of various parameters (the
  listen backlog, for example). On x86_64, this means order-1
  allocations (which might fail), even for 'small' sockets expecting few
  connections. On the contrary, a huge server wanting a backlog of
  50000 is slowed down a bit because of this fixed limit.

  This patch makes the sizing of the listen hash table a dynamic
  parameter, depending on:

  - the net.core.somaxconn tunable (default is 128)
  - the net.ipv4.tcp_max_syn_backlog tunable (default: 256, 1024 or 128)
  - the backlog value given by the user application (2nd parameter of
    listen())

  For large allocations (bigger than PAGE_SIZE), we use vmalloc()
  instead of kmalloc(). We still limit memory allocation with the two
  existing tunables (somaxconn & tcp_max_syn_backlog), so for standard
  setups this patch actually reduces RAM usage.

  Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
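  A sketch of the sizing logic described (illustrative; the real code
  lives in the request-sock queue allocation path):

      unsigned int nr = backlog;                 /* listen() hint */

      nr = min_t(u32, nr, sysctl_max_syn_backlog);
      nr = max_t(u32, nr, 8);                    /* sane floor */
      nr = roundup_pow_of_two(nr + 1);

      size = nr * sizeof(struct request_sock *);
      table = size > PAGE_SIZE ? vmalloc(size)
                               : kmalloc(size, GFP_KERNEL);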
* [TCP]: One NET_INC_STATS() could be NET_INC_STATS_BH in tcp_v4_err() (Eric Dumazet, 2006-10-20, 1 file, -1/+1)
  I believe this NET_INC_STATS() call can be replaced by
  NET_INC_STATS_BH(), which is a little bit cheaper.

  Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [NET]: Use typesafe inet_twsk() inline function instead of cast. (YOSHIFUJI Hideaki, 2006-10-12, 1 file, -9/+7)
  Signed-off-by: YOSHIFUJI Hideaki <yoshfuji@linux-ipv6.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [TCP]: Use TCPOLEN_TSTAMP_ALIGNED macro instead of magic number. (YOSHIFUJI Hideaki, 2006-10-12, 1 file, -1/+1)
  Signed-off-by: YOSHIFUJI Hideaki <yoshfuji@linux-ipv6.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [IPV4]: struct inet_timewait_sock annotations (Al Viro, 2006-09-29, 1 file, -1/+1)
  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  Signed-off-by: David S. Miller <davem@davemloft.net>