
tcp: reduce skb overhead in selected places

tcp_add_backlog() can use the skb_condense() helper to get better
gains and less SKB_TRUESIZE() magic. This only happens when the socket
backlog has to be used.

Some attacks involve specially crafted out-of-order tiny TCP packets,
clogging the ofo queue of (many) sockets.
Later, an expensive collapse pass tries to copy all these skbs
into fewer, larger ones.
This unfortunately does not work when each skb has no neighbor in TCP
sequence order.

By using skb_condense() when the skb could not be coalesced with a prior
one, we defeat this kind of threat, potentially saving 4K per skb
(or more, since this is one page fragment).

A typical NAPI driver allocates GRO packets with GRO_MAX_HEAD bytes
in skb->head, meaning the copy done by skb_condense() is limited to
about 200 bytes.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet, 8 years ago
Commit 60b1af3300
2 files changed, 2 insertions and 2 deletions
  1. net/ipv4/tcp_input.c (+1 −0)
  2. net/ipv4/tcp_ipv4.c (+1 −2)

net/ipv4/tcp_input.c (+1 −0)

@@ -4507,6 +4507,7 @@ add_sack:
 end:
 	if (skb) {
 		tcp_grow_window(sk, skb);
+		skb_condense(skb);
 		skb_set_owner_r(skb, sk);
 	}
 }

net/ipv4/tcp_ipv4.c (+1 −2)

@@ -1556,8 +1556,7 @@ bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb)
	 * It has been noticed pure SACK packets were sometimes dropped
	 * (if cooked by drivers without copybreak feature).
	 */
-	if (!skb->data_len)
-		skb->truesize = SKB_TRUESIZE(skb_end_offset(skb));
+	skb_condense(skb);

	if (unlikely(sk_add_backlog(sk, skb, limit))) {
		bh_unlock_sock(sk);