DragonFly BSD
DragonFly users List (threaded) for 2005-12

Re: DP performance


From: Matthew Dillon <dillon@xxxxxxxxxxxxxxxxxxxx>
Date: Thu, 1 Dec 2005 14:13:31 -0800 (PST)

:...
:> of latency occurring every once in a while would not have any adverse
:> effect.
:
:A few milliseconds of latency / jitter can sometimes completely kill TCP 
:throughput at gigabit speeds.  A few microseconds won't matter, though.
:
:Cheers,
:
:Marko

    Not any more, not with scaled TCP windows and SACK.  A few
    milliseconds doesn't matter.  The only effect is that you need a
    larger transmit buffer to hold the data until the round-trip ack
    arrives.  So, e.g., a 1 megabyte buffer would allow you to have 10 ms
    of round-trip latency.  That's an edge case, of course, so to be safe
    one would want to cut it in half and say 5 ms with a 1 megabyte buffer.
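
    For illustration, a back-of-the-envelope bandwidth-delay-product
    sketch (the Python below uses assumed numbers, a 1 Gbit/s link and a
    1 megabyte buffer, purely as an example):

        # Bandwidth-delay product: the transmit buffer has to hold
        # everything sent but not yet acked, i.e. roughly
        # bandwidth * round-trip time.  Assumes a 1 Gbit/s link.
        link_bytes_per_sec = 1_000_000_000 / 8      # GigE: ~125 MB/s
        buffer_bytes = 1_000_000                    # 1 megabyte transmit buffer
        rtt_covered_ms = buffer_bytes / link_bytes_per_sec * 1000
        print(rtt_covered_ms)                       # ~8 ms of round-trip latency

    So a 1 megabyte buffer covers roughly 8-10 ms of round-trip time at
    full GigE rate, in line with the ballpark above; halving it for
    safety gives the ~5 ms figure.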

    TCP isn't really the problem, anyway, because it can tolerate any 
    amount of latency without 'losing' packets.  So if you have a TCP
    link and you suffer, say, 15 ms of delay once every few seconds, the
    aggregate bandwidth is still pretty much maintained.
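
    For a rough sense of scale (illustrative numbers only, assuming the
    window is large enough that nothing is actually lost during the
    stall):

        # One 15 ms stall every couple of seconds costs well under 1%
        # of the link time, even in the worst case where the link sits
        # completely idle during the stall.
        stall_s    = 0.015            # a single 15 ms delay
        interval_s = 2.0              # assumed: one stall every ~2 seconds
        print(stall_s / interval_s)   # 0.0075, i.e. less than 1%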

    The real problem with TCP is packet backlogs appearing at choke points.
    For example, if you have a GigE LAN and a 45 MBit WAN, an incoming
    TCP stream from a host with an awful TCP stack (such as a Windows
    server) might build up a megabyte worth of packets on your network
    provider's border router, all trying to squeeze down into 45 MBits.
    NewReno, RED, and other algorithms try to deal with it, but the best
    solution is for the server to not try to push out so much data in the
    first place if the target's *PHYSICAL* infrastructure doesn't have
    the bandwidth.  But that's another issue.
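
    To put numbers on how fast such a backlog can build (again, just an
    illustrative sketch with assumed rates):

        # GigE traffic funneling into a 45 Mbit/s WAN: the queue at the
        # border router grows at the difference of the two rates.
        in_rate_Bps  = 1_000_000_000 / 8     # sender bursting at ~125 MB/s
        out_rate_Bps = 45_000_000 / 8        # WAN drains ~5.6 MB/s
        backlog_B    = 1_000_000             # 1 megabyte of queued packets
        print(backlog_B / (in_rate_Bps - out_rate_Bps))
        # ~0.008 s: a full megabyte piles up in under 10 ms of bursting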

					-Matt
					Matthew Dillon 
					<dillon@xxxxxxxxxxxxx>


