DragonFly users List (threaded) for 2006-11

Re: dual port EM nic wedging under load


From: Matthew Dillon <dillon@xxxxxxxxxxxxxxxxxxxx>
Date: Sun, 26 Nov 2006 08:42:43 -0800 (PST)

:Using polling and fastfwd on, I am able to get about 300Kpps in a 
:unidirectional blast and still see that rate even with 10 poorly 
:written ipfw rules !?!

    Are you sure the ipfw ruleset is actually being executed?  :-) ...
    whenever something looks too good to be true....
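    One quick way to check (assuming the stock ipfw(8) userland, which
    keeps per-rule packet and byte counters) is to zero the counters,
    run the blast again, and see whether the counts climb:

ipfw zero	# reset the per-rule packet/byte counters
# ... run the unidirectional blast ...
ipfw -a list	# list the rules along with their accumulated counters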

:Also it handles the load quite smoothly for the bi-directional test
:
:Here is the output of ifstat -b as seen from the box acting as router
:
:You can see the first stream starting up, and then the second on the 
:opposite stream.  Rates remain constant throughout, which is quite 
:different from FreeBSD. ipfw on Dragonfly has no ill effect for some reason.
:...

    I'm not sure if you are overloading the system intentionally or not.
    There seems to be an excessive number of ENOBUFS failures during packet
    processing, assuming the counters started at 0 when the test began.

:em0: Missed Packets =    221709682	<<<<<<
:em0: Receive No Buffers = 53994896	<<<<<< This seems excessive to me
:em0: Receive length errors = 0
:em0: Receive errors = 0
:em0: Crc errors = 0
:em0: Alignment errors = 0
:em0: Carrier extension errors = 0
:em0: RX overruns = 55192		<<<<<<
:...
:em0: Good Packets Rcvd = 187109273
:em0: Good Packets Xmtd =  30553884

:em1: Missed Packets =     17508539	<<<<<<
:em1: Receive No Buffers = 14148594	<<<<<<
:em1: Receive length errors = 0
:em1: Receive errors = 0
:em1: Crc errors = 0
:em1: Alignment errors = 0
:em1: Carrier extension errors = 0
:em1: RX overruns = 0			<<<<<<
:em1: Good Packets Rcvd =  30553871
:em1: Good Packets Xmtd = 187031079

    One really nice thing about DragonFly's NETIF polling code is that you
    can control the polling frequency (kern.polling.pollhz) and other
    parameters on the fly with sysctls.  You can actually change the polling
    frequency on a live system!

    Try playing with the parameters.  My guess is that the burst rate and/or
    the polling frequency is too low.  I don't know how high a polling
    frequency you can set and still have the system run efficiently, but I
    think it is worth playing with both parameters (kern.polling.each_burst
    and kern.polling.pollhz) to find optimal values.  5000Hz would be a
    polling period of around 200us, and you could try increasing
    kern.polling.each_burst to 20... the system should be able to support
    that.
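    Concretely, that experiment would look something like this on the live
    system (a sketch; 5000 and 20 are just the starting points suggested
    above, not tuned values):

sysctl kern.polling.pollhz=5000		# 1/5000Hz = 200us between polls
sysctl kern.polling.each_burst=20	# up to 20 packets processed per poll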

    If there is still an ENOBUFS issue, it could be related to the object
    cache parameters used to initialize the mbuf system.  We might not
    have enough mbuf clusters.  It is also possible that the ENOBUFS was
    a burst issue at the beginning of the test... the objcache does try
    to adjust to load to some degree.
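    netstat -m is the standard BSD way to see whether the mbuf/cluster
    pools are actually being exhausted; the counters for denied or
    delayed allocations should stay near zero on a healthy box:

netstat -m	# mbuf and cluster usage, peaks, and allocation failures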

kern.polling.burst: 5
kern.polling.each_burst: 5
kern.polling.burst_max: 150
kern.polling.user_frac: 50
kern.polling.reg_frac: 20
kern.polling.short_ticks: 0
kern.polling.lost_polls: 0
kern.polling.pending_polls: 0
kern.polling.residual_burst: 0
kern.polling.handlers: 0
kern.polling.enable: 0
kern.polling.phase: 0
kern.polling.suspect: 0
kern.polling.stalled: 0
kern.polling.pollhz: 2000

    I expect the smooth flow is probably due to the polling burst size being
    too small.  You get smooth flow, but the processing isn't necessarily
    going to be very efficient.

    This is all on UP, of course.  The BGL is still present in the network
    path, so SMP isn't going to help a whole lot on DragonFly yet.

					-Matt
					Matthew Dillon 
					<dillon@xxxxxxxxxxxxx>



