DragonFly BSD
DragonFly kernel List (threaded) for 2005-07

Re: net.inet.tcp.inflight_enable

From: Noritoshi Demizu <demizu@xxxxxxxxxxxxxx>
Date: Tue, 12 Jul 2005 19:52:24 +0900 (JST)

>     It's still not operating quite as I would expect.  Could you compare
>     the results for a very long transfer, like 100 MBytes ?  If those numbers
>     are closer then it may well be that all I really need to do is give it
>     a better initial bandwidth estimate.

Well, I'm not interested in comparing long transfers.  The reason is
that, with a long transfer, the congestion window of a TCP connection
without BDP limiting would grow quickly, and the connection would soon
suffer packet losses.  After packet losses, TCP throughput is
dominated by the retransmission and recovery algorithms.  So I do not
think we can evaluate the effect of BDP limiting with long transfers.

In my understanding, BDP limiting tries to estimate the
bandwidth-delay product and to avoid injecting too many data segments
into the network.  If so, the right points for evaluating an
implementation of BDP limiting would be the following:

  (1) Does snd_bwnd hold a good estimate of the bandwidth-delay product?
  (2) Does TCP avoid injecting too much data?
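A minimal sketch of these two checks (the function and its tolerance are hypothetical; in practice snd_bwnd and the in-flight byte count would be read from the kernel's TCP control block):

```python
# Hedged sketch: evaluate a BDP-limiting implementation against the two
# criteria above.  The names mirror the BSD TCP control block fields
# (snd_bwnd), but the check itself is purely illustrative.

def check_bdp_limiting(snd_bwnd, inflight, bandwidth_bps, rtt_s,
                       mss=1460, tolerance=0.5):
    """Return (good_estimate, avoids_overshoot) for one snapshot."""
    bdp_bytes = bandwidth_bps / 8 * rtt_s          # true bandwidth-delay product
    # (1) Is snd_bwnd within `tolerance` of the true BDP?
    good_estimate = abs(snd_bwnd - bdp_bytes) <= tolerance * bdp_bytes
    # (2) Is the amount of unacknowledged data kept near or below the BDP?
    avoids_overshoot = inflight <= bdp_bytes + 2 * mss
    return good_estimate, avoids_overshoot

# 100 Mbit/s path with 20 ms RTT: BDP = 250,000 bytes.
print(check_bdp_limiting(snd_bwnd=250_000, inflight=248_000,
                         bandwidth_bps=100_000_000, rtt_s=0.020))
```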

By the way, in my experiments, the bandwidth-delay product is
100 Mbit/s * 20 ms = 250 KB (= about 170 MSS).
If the bandwidth-delay product were estimated correctly (i.e., if
snd_bwnd were correct), snd_cwnd should grow exponentially until it
reaches about 170 MSS.  Then snd_bwnd would cap the amount of data to
be sent, in order to avoid overflowing the bottleneck queue.  If BDP
limiting worked like this, throughput would improve.
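The arithmetic above, spelled out (assuming a typical 1460-byte MSS):

```python
# Bandwidth-delay product for the path described above.
bandwidth_bps = 100_000_000   # 100 Mbit/s
rtt = 0.020                   # 20 ms

bdp_bytes = bandwidth_bps / 8 * rtt   # bits/s -> bytes/s, times RTT
print(bdp_bytes)                      # 250000.0 bytes = 250 KB

mss = 1460                            # typical Ethernet MSS
print(round(bdp_bytes / mss))         # 171 segments, i.e. about 170 MSS
```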

However, in my experiments, snd_cwnd was capped at 3, 4, or 5 MSS.
These values are far too small compared to the actual BDP (~170 MSS).
That is why I think snd_bwnd did not hold a good estimate
in my experiments.
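A simplified model (not the actual kernel code) shows why an underestimated bandwidth produces exactly this kind of cap: if the effective send window is min(snd_cwnd, snd_bwnd) with snd_bwnd = estimated bandwidth * RTT, then a bandwidth sample far below the true path rate pins the window at a few MSS.  The 3 Mbit/s figure below is an illustrative assumption, not a measured value:

```python
# Illustrative model of BDP limiting: the effective send window is
# min(snd_cwnd, snd_bwnd), where snd_bwnd = estimated_bandwidth * RTT.
# A bandwidth estimate far below the true 100 Mbit/s caps the window early.

def effective_window(snd_cwnd, est_bandwidth_bps, rtt_s):
    snd_bwnd = est_bandwidth_bps / 8 * rtt_s   # estimated BDP in bytes
    return min(snd_cwnd, snd_bwnd)

mss = 1460
# True path: 100 Mbit/s * 20 ms -> BDP of about 170 MSS.  But if the
# estimator only saw ~3 Mbit/s (hypothetical low sample), snd_bwnd works
# out to roughly 5 MSS -- the same order as the observed 3-5 MSS cap:
win = effective_window(snd_cwnd=170 * mss,
                       est_bandwidth_bps=3_000_000, rtt_s=0.020)
print(win / mss)   # about 5 segments
```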

Noritoshi Demizu
