DragonFly BSD
DragonFly users List (threaded) for 2005-03
[Date Prev][Date Next]  [Thread Prev][Thread Next]  [Date Index][Thread Index]

Re: Re[2]: PCIX confusion


From: EM1897@xxxxxxx
Date: Mon, 14 Mar 2005 10:32:39 -0500

In a message dated 3/14/2005 4:07:00 AM Eastern Standard Time, Gabriel Ambuehl <gaml@xxxxxx> writes:

>Hi Boris Spirialitious,
>you wrote:
>
>BS> But what you say about PCIe makes me think you
>BS> don't know about hardware either. Only PCIe x1
>BS> cards are available, which is not fast. What
>BS> PCIe cards have you tested? Why would you buy
>BS> a machine with PCIe? Just because it's new? What
>BS> you say makes no sense.
>
>
>Actually, seeing that PCIe is point to point, the bandwidth offered by
>even PCIe x1 is vastly superior to conventional PCI. And once you get
>to the x4 or even x8 range, much less x16 (which pushes more data than
>the FSB on many current CPUs can handle, IIRC), you outperform PCI-X
>with ease. Electrically, the links are much easier to route, which
>should lower board prices (no more 8-layer server boards) and even
>increase stability.
>
>Lack of contention and lower latency on point to point links will also
>prove beneficial over the long term
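A rough sketch of the numbers being argued over here, using the usual textbook figures for PCIe 1.0 (2.5 GT/s per lane, 8b/10b encoding) and the shared-bus totals for conventional PCI and PCI-X; protocol overhead on the link is ignored:

```python
# Back-of-the-envelope bandwidth comparison, PCIe 1.0 era (2005).
# PCIe: 2.5 Gb/s raw per lane, 8b/10b encoding -> 10 bits per byte,
# so 250 MB/s usable per direction per lane, per device.
# PCI and PCI-X figures are totals shared by every device on the bus.

PCIE_LANE_MBPS = 2500 / 10  # MB/s per direction per lane

shared_buses = {
    "PCI 32-bit/33 MHz":    32 // 8 * 33,   # 132 MB/s, shared
    "PCI-X 64-bit/133 MHz": 64 // 8 * 133,  # 1064 MB/s, shared
}

for lanes in (1, 4, 8, 16):
    print(f"PCIe x{lanes:<2}: {lanes * PCIE_LANE_MBPS:5.0f} MB/s per direction, per device")
for name, mbps in shared_buses.items():
    print(f"{name}: {mbps} MB/s total, shared by all slots")
```

On these numbers an x1 link already roughly doubles a shared 32-bit/33 MHz PCI bus, and x8 edges past a fully loaded PCI-X bus, which is the comparison being made above.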

Gabe, gabe, gabe. You have quite an imagination. As long as
your CPU has a dedicated path to each device and you don't
need to access stuff like, say, memory, you're in!

The reality of today is x1 ethernet, so that's where any
discussion has to be. You also have to consider the MB
design and your requirements. One PCI-E MB that I have
used shares its PCI-E bandwidth with the PCI-X bus.
So it's a more complicated discussion than just PCI vs.
PCI-X, which is just a slam dunk.
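For the x1 ethernet case specifically, a quick sketch of whether a single PCIe 1.0 lane can carry gigabit ethernet at line rate (ignoring TLP/protocol overhead on the link, which eats into the margin in practice):

```python
# Does a PCIe 1.0 x1 NIC have headroom for gigabit ethernet?
# Simplistic sketch: compare per-direction link bandwidth against
# the ethernet line rate, with link protocol overhead ignored.

pcie_x1_mbytes = 250               # MB/s per direction, PCIe 1.0 x1
gige_line_mbytes = 1000 / 8        # 1 Gb/s = 125 MB/s

headroom = pcie_x1_mbytes / gige_line_mbytes
print(f"x1 link offers {headroom:.1f}x gigabit line rate per direction")
```

So on paper a x1 link has about 2x headroom per direction over a single gigabit port; the real-world margin is thinner once descriptor traffic and TLP overhead are counted, which is part of why the motherboard's lane and bus layout still matters.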




