DragonFly users List (threaded) for 2008-02

Re: Dragonfly Routers


From: "Adrian Chadd" <adrian@xxxxxxxxxxx>
Date: Sat, 23 Feb 2008 15:31:19 +0900

On 20/02/2008, Bill Hacker <wbh@conducive.org> wrote:

>  Routing and firewalling is a specialty that has become a very
>  high-volume hardware/ASIC/RTOS field where any router a PC could at one
>  time match on speed has become so cheap and flexible off-the-shelf it is
>  no longer worth the bother to roll yer own *and maintain it* for any
>  serious throughput.

Thing is, there are people who report pushing 10GbE at (almost) line
rate with 64-byte packets on current PC hardware, with the "right"
combination of PCIe, decent chipsets, and crazily tuned forwarding
code complete with prefetching.

It's just not being done in open source.

I've seen a few 10GbE PC routers recently which can do 10GbE with
1500-byte frames on out-of-the-box Linux/FreeBSD. It's not that hard.

What the PC/FOSS world is missing is a small group of people focused
on high-pps forwarding code. There's no reason a current-generation PC
with PCIe, MSI, and decent Ethernet drivers can't push > 1 million
pps. The bus and cards certainly support it.
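
(For calibration, my back-of-the-envelope wire-rate numbers, counting
the 20 bytes of preamble + inter-frame gap each frame costs on the wire:

  gige,  64-byte frames:    1e9  / ((64 + 20) * 8)   = ~1.49 Mpps
  10GbE, 64-byte frames:    1e10 / ((64 + 20) * 8)   = ~14.9 Mpps
  10GbE, 1518-byte frames:  1e10 / ((1518 + 20) * 8) = ~813 kpps
         (1518 = 1500-byte MTU + 14-byte header + 4-byte FCS)

So "> 1 million pps" on gige is about two thirds of line rate at
minimum-size frames, and the 1500-byte 10GbE case above needs "only"
~813 kpps.)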

Now, to answer the GP's question: the reason it's "bad" to overload
PCI buses requires some understanding of the actual PCI architecture.
I don't pretend to be an EE, but desktop motherboards tended to whack
all the PCI slots on the same bus, so they're effectively on the same
channel, limiting throughput. Reading/writing data on the PCI bus also
has minimum latencies regardless of transfer size, so even in burst
mode you can only do a fixed number of transactions per bus before
things get out of hand.

Empirically, on 32-bit, 33 MHz PCI (standard PCI) you can push a
single gige NIC to about 70,000 pps. That's 35,000 in, 35,000 out. I
couldn't get it faster than that on FreeBSD. I -think- that was
saturating the PCI bus; I'd have to sit down and do the math to see
exactly what the ideal situation is. PCI-X et al. are "just" faster,
with some restrictions (I think the PCI-X spec required each port to
be on a separate bridge, no hubbing allowed), and PCIe is a completely
new beast entirely.
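
(Doing that math roughly, and assuming full-size frames with each
forwarded packet crossing the bus twice, once in and once out:

  32-bit PCI @ 33 MHz:  4 bytes/clock * 33.3e6  = ~133 MB/s peak
  70,000 crossings/s * ~1500 bytes              = ~105 MB/s packet data

That's ~80% of the theoretical peak before you count descriptor
fetches, writebacks, register MMIO, and arbitration, so 70 kpps at
full-size frames really is about the bus's wall. At 64-byte packets
the same 70 kpps is only ~4.5 MB/s, which says the small-packet limit
is per-transaction latency rather than raw bandwidth, exactly the
burst-mode point above.)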

If someone would like a fun project: sit down with some decent,
well-documented gige hardware (e.g. the e1000 cards) and see how fast
you can tx/rx packets from the driver. Ignore routing table lookups,
queueing, touching mbufs, etc. The results might be fun. :)
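
As a zeroth-order baseline before touching the driver, here's a
minimal userland blaster sketch: plain sendto() through the full UDP
stack, so it measures strictly *less* than the in-driver test above
would. The destination-IP argument and the report-every-million
interval are just my choices.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/time.h>

int main(int argc, char **argv)
{
	int s;
	struct sockaddr_in dst;
	char payload[18];	/* 14 eth + 20 IP + 8 UDP + 18 + 4 FCS
				 * = a minimum 64-byte frame on the wire */
	unsigned long sent = 0;
	struct timeval start, now;
	double secs;

	if (argc != 2) {
		fprintf(stderr, "usage: %s dest-ip\n", argv[0]);
		return 1;
	}
	s = socket(AF_INET, SOCK_DGRAM, 0);
	if (s < 0) {
		perror("socket");
		return 1;
	}
	memset(&dst, 0, sizeof(dst));
	dst.sin_family = AF_INET;
	dst.sin_port = htons(9);	/* discard port */
	if (inet_pton(AF_INET, argv[1], &dst.sin_addr) != 1) {
		fprintf(stderr, "bad address\n");
		return 1;
	}
	memset(payload, 0, sizeof(payload));

	gettimeofday(&start, NULL);
	for (;;) {
		if (sendto(s, payload, sizeof(payload), 0,
		    (struct sockaddr *)&dst, sizeof(dst)) < 0)
			continue;	/* ENOBUFS when the queue fills; retry */
		if (++sent % 1000000 == 0) {
			gettimeofday(&now, NULL);
			secs = (now.tv_sec - start.tv_sec) +
			    (now.tv_usec - start.tv_usec) / 1e6;
			printf("%lu packets, %.0f pps\n", sent, sent / secs);
		}
	}
}

Point it at a host behind the NIC under test and watch the pps
counter; the driver-level test ought to beat this number handily since
it skips the whole socket and mbuf path.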

2c,


Adrian

-- 
Adrian Chadd - adrian@freebsd.org


