author      Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>    2007-10-02 00:28:17 +0200
committer   David S. Miller <davem@sunset.davemloft.net> 2007-10-11 01:53:59 +0200
commit      0e835331e3111e5a92eb3a852405ea71ca8fff97 (patch)
tree        e7c1445866cf4ed306ffd39e1fd520f2b761566a /net/ipv4/tcp_input.c
parent      [TCP]: fix comments that got messed up during code move (diff)
download    linux-0e835331e3111e5a92eb3a852405ea71ca8fff97.tar.xz
            linux-0e835331e3111e5a92eb3a852405ea71ca8fff97.zip
[TCP]: Update comment of SACK block validator
Just came across what RFC2018 states about generation of valid SACK blocks in case of reneging. Alter the comment a bit to point this out clearly. IMHO, there isn't any reason to change the code, because the validation is there for a purpose (counters will inform the user about the decision TCP made if this case ever surfaces).

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
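As a rough illustration of the decision the message describes, the standalone C sketch below shows the "count it and ignore it" pattern: a SACK block whose left edge does not lie beyond SND.UNA is dropped from SACK processing and only accounted for. This is not the kernel's actual tcp_input.c code or its SNMP counters; the names sack_discard_count, seq_before and sack_block_usable are hypothetical.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical counter; the real kernel exposes its own MIB/SNMP counters. */
static unsigned long sack_discard_count;

/* 32-bit sequence-space "before": true if a precedes b modulo 2^32. */
static int seq_before(uint32_t a, uint32_t b)
{
	return (int32_t)(a - b) < 0;
}

/*
 * A SACK block whose left edge does not lie beyond SND.UNA is inconsistent
 * with the cumulative ACK (the receiver should have advanced SND.UNA
 * instead).  Rather than acting on such a report, just count it so the
 * operator can see whether this case ever surfaces.
 */
static int sack_block_usable(uint32_t start_seq, uint32_t snd_una)
{
	if (!seq_before(snd_una, start_seq)) {
		sack_discard_count++;	/* record the decision, change no state */
		return 0;
	}
	return 1;
}

int main(void)
{
	/* Block starting at SND.UNA: valid per RFC2018 wording, but ignored. */
	int usable = sack_block_usable(1000, 1000);

	printf("usable=%d discarded=%lu\n", usable, sack_discard_count);
	return 0;
}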
Diffstat (limited to 'net/ipv4/tcp_input.c')
-rw-r--r--  net/ipv4/tcp_input.c  11
1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 904289d2b6bb..c1339d88bbf3 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -1027,8 +1027,15 @@ static void tcp_update_reordering(struct sock *sk, const int metric,
* SACK block range validation checks that the received SACK block fits to
* the expected sequence limits, i.e., it is between SND.UNA and SND.NXT.
* Note that SND.UNA is not included in the range, though being valid, because
- * it means that the receiver is rather inconsistent with itself (reports
- * SACK reneging when it should advance SND.UNA).
+ * it means that the receiver is rather inconsistent with itself, reporting
+ * SACK reneging when it should advance SND.UNA. Such a SACK block is,
+ * however, perfectly valid in light of RFC2018, which explicitly states
+ * that a "SACK block MUST reflect the newest segment. Even if the newest
+ * segment is going to be discarded ...", though it does not look very
+ * clever in the case of the head skb. Due to potential receiver-driven
+ * attacks, we choose to avoid an immediate write queue walk on reneging
+ * and defer the head skb's loss recovery to the standard loss recovery
+ * procedure that will eventually trigger (nothing forbids us doing this).
*
* Also implements blockage of start_seq wrap-around. The problem lies in the
* fact that though start_seq (s) is before end_seq (i.e., not reversed),
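To make the range check described in the comment above concrete, here is a minimal standalone sketch (not the kernel's actual validator; the helper names seq_before, seq_after and sack_block_in_window are made up, and the D-SACK special cases are ignored): a SACK block is usable only if it starts strictly after SND.UNA and does not extend past SND.NXT.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Modulo-2^32 sequence-space comparisons, robust to wrap-around. */
static bool seq_before(uint32_t a, uint32_t b)
{
	return (int32_t)(a - b) < 0;
}

static bool seq_after(uint32_t a, uint32_t b)
{
	return seq_before(b, a);
}

/*
 * Accept a SACK block only if it lies inside the outstanding window:
 * strictly after SND.UNA (a block covering SND.UNA should have advanced
 * the cumulative ACK instead) and no further than SND.NXT (nothing
 * beyond SND.NXT has ever been sent).
 */
static bool sack_block_in_window(uint32_t start_seq, uint32_t end_seq,
				 uint32_t snd_una, uint32_t snd_nxt)
{
	if (!seq_before(start_seq, end_seq))
		return false;		/* empty or reversed block */
	if (seq_after(end_seq, snd_nxt))
		return false;		/* claims data never sent */
	if (!seq_after(start_seq, snd_una))
		return false;		/* covers SND.UNA: inconsistent report */
	return true;
}

int main(void)
{
	/* Large sequence numbers; the signed-difference trick handles wrap. */
	uint32_t snd_una = 4000000000u, snd_nxt = 4000003000u;

	printf("%d\n", sack_block_in_window(4000001000u, 4000002000u,
					    snd_una, snd_nxt));	/* prints 1 */
	printf("%d\n", sack_block_in_window(snd_una, 4000001000u,
					    snd_una, snd_nxt));	/* prints 0 */
	return 0;
}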