
FairQ ALTQ for PF - Patch #1


From: Matthew Dillon <dillon@xxxxxxxxxxxxxxxxxxxx>
Date: Thu, 3 Apr 2008 21:28:22 -0700 (PDT)

    Ok, this is my first attempt at adding a fairq feature to ALTQ/PF.
    It isn't perfect yet, but it appears to work reasonably well.

	fetch http://apollo.backplane.com/DFlyMisc/fairq01.patch

    It isn't hierarchical (at least not yet), but you can specify multiple
    queues for each interface as long as you give them different priorities.

    Here is an example configuration:

altq on vke0 fairq bandwidth 500Kb queue { normal, fair }
queue fair priority 1 bandwidth 100Kb fairq(buckets 64) qlimit 50
queue normal priority 2 bandwidth 400Kb fairq(buckets 64, default) qlimit 50

pass out on vke0 inet proto tcp from any to any keep state queue normal
pass out on vke0 inet proto tcp from any to 216.240.41.28 keep state queue fair
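
    To load the example rule set and check the resulting queues, something
    along these lines should work (assuming the rules live in /etc/pf.conf;
    'pfctl -s queue -v' prints per-queue statistics):

	pfctl -f /etc/pf.conf
	pfctl -s queue -v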

    Here is how the queue selection works (a rough sketch in C follows
    the list):

    * The queues are scanned from highest priority to lowest priority.

    * If the packet bandwidth on the queue does not exceed the bandwidth
      parameter and a packet is available, a packet will be chosen from
      that queue.

    * If a packet is available but the queue has exceeded the specified
      bandwidth, the next lower priority queue is scanned (and so forth).

    * If no queue that is under its bandwidth limit has a packet ready
      (i.e. every queue with packets waiting has exceeded its specified
      bandwidth), a packet is taken from the highest priority queue with
      a packet ready.

    * Packet rate can exceed the queue bandwidth specification (but
      will not exceed the interface bandwidth specification, of course),
      but under full saturation the average bandwidth for any given
      queue will be limited to the specified value.
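
    Here is a rough user-space sketch of the selection logic described in
    the list above.  The structure and function names are made up for
    illustration; the actual patch operates on ALTQ class structures inside
    the kernel:

#include <stdio.h>

/*
 * Sketch only: pick the queue to dequeue from.  Queues are assumed to be
 * sorted from highest to lowest priority.
 */
struct fakeq {
	int	prio;		/* higher value = higher priority */
	int	npackets;	/* packets waiting in this queue */
	long	rate;		/* measured bandwidth, bits/sec */
	long	bandwidth;	/* configured bandwidth limit, bits/sec */
};

static int
pick_queue(struct fakeq *qs, int nqueues)
{
	int i, first_ready = -1;

	for (i = 0; i < nqueues; ++i) {
		if (qs[i].npackets == 0)
			continue;
		if (first_ready < 0)
			first_ready = i;	/* highest prio queue with work */
		if (qs[i].rate <= qs[i].bandwidth)
			return (i);		/* under its limit, use it */
	}
	/*
	 * Every queue with packets ready is over its bandwidth limit;
	 * fall back to the highest priority one so the link stays busy.
	 */
	return (first_ready);
}

int
main(void)
{
	struct fakeq qs[2] = {
		{ 2, 10, 450000, 400000 },	/* "normal": over its 400Kb limit */
		{ 1,  3,  20000, 100000 },	/* "fair": under its 100Kb limit */
	};

	/* prints 1: "normal" is over its limit, so "fair" is chosen */
	printf("dequeue from queue index %d\n", pick_queue(qs, 2));
	return (0);
}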

    Here is how the fair queueing works (a sketch follows this list):

    * You MUST specify 'keep state' in the related rules.

    * Each keep-state 'connection' is given a fingerprint hash code which
      is used to enqueue its mbufs in one of the N buckets (64 in our
      example) for that fair queue.

    * When PF requests a packet from the fairq, a packet is selected from
      the 64 buckets in round-robin fashion.

      Thus if you have a very hungry connection, it will not be able to
      steal all the bandwidth (or queue up tons of packets to the actual
      interface) from other connections within the queue.
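
    And a similarly rough sketch of the per-queue fair queueing: each
    keep-state connection hashes to one of the buckets, and dequeueing walks
    the buckets round-robin.  Again, all names here are hypothetical:

#include <stdint.h>
#include <stdio.h>

#define FAIRQ_BUCKETS	64		/* matches fairq(buckets 64) above */

struct bucket {
	int	npackets;		/* stand-in for a real packet list */
};

struct fairq {
	struct bucket	buckets[FAIRQ_BUCKETS];
	int		rr_index;	/* round-robin scan position */
};

/* Hash a connection 4-tuple into a bucket index. */
static unsigned int
conn_hash(uint32_t saddr, uint32_t daddr, uint16_t sport, uint16_t dport)
{
	uint32_t h = saddr ^ daddr ^ ((uint32_t)sport << 16) ^ dport;

	h ^= h >> 16;
	return (h % FAIRQ_BUCKETS);
}

/* Enqueue: the packet lands in the bucket chosen by its connection hash. */
static void
fairq_enqueue(struct fairq *fq, unsigned int hash)
{
	fq->buckets[hash % FAIRQ_BUCKETS].npackets++;
}

/*
 * Dequeue: resume the round-robin where we left off, so one busy
 * connection cannot monopolize the queue.
 */
static int
fairq_dequeue(struct fairq *fq)
{
	int i;

	for (i = 0; i < FAIRQ_BUCKETS; ++i) {
		int b = (fq->rr_index + i) % FAIRQ_BUCKETS;

		if (fq->buckets[b].npackets > 0) {
			fq->buckets[b].npackets--;
			fq->rr_index = (b + 1) % FAIRQ_BUCKETS;
			return (b);
		}
	}
	return (-1);			/* queue is empty */
}

int
main(void)
{
	static struct fairq fq;

	/* two packets from one connection, one from another */
	fairq_enqueue(&fq, conn_hash(0x0a000001, 0xd8f0291c, 50000, 80));
	fairq_enqueue(&fq, conn_hash(0x0a000001, 0xd8f0291c, 50000, 80));
	fairq_enqueue(&fq, conn_hash(0x0a000002, 0xd8f0291c, 50001, 80));

	/* the two connections (usually) alternate instead of going FIFO */
	printf("%d %d %d\n", fairq_dequeue(&fq), fairq_dequeue(&fq),
	    fairq_dequeue(&fq));
	return (0);
}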

    Caveats and issues:

    (1) The qlimit is per-bucket.  So 64 buckets x 50 packets is, worst case,
	3200 packets.  It's unlikely this would ever occur, but it's an issue
	that I haven't dealt with yet.

    (2) Due to limitations on the number of buckets, multiple connections
	can end up in the same bucket.  If one of those connections is a
	heavy hitter, the others will suffer.

	This could probably be fixed with further sorting or perhaps a
	different topology (e.g. a tree instead of a fixed array).

    Please Test!  I have this running on my router box right now and
    it appears to work very well.

						-Matt



