Re: FBSD 1GBit router?
--- Ingo Flaschberger <firstname.lastname@example.org> wrote:
> >> I have a 1.2GHz Pentium-M appliance, with 4x 32bit, 33MHz PCI
> >> Intel e1000 cards. With maximum tuning I can "route" ~400Mbps
> >> with big packets and ~80Mbps with 64-byte packets. That is
> >> around 100kpps, which is not bad for a PCI bus.
> >> To reach higher bandwidths, better busses are needed;
> >> PCI-Express cards are currently the best choice. One dedicated
> >> PCI-Express lane (1.25Gbps) has more bandwidth than a whole
> >> 32bit, 33MHz PCI bus.
> > Like you say, routing 400Mb/s is close to the max of the PCI
> > bus, which has a theoretical max of 33*4*8 ~ 1Gbps. Now routing
> > is 500Mb/s in, 500Mb/s out, so you are within 80% of the bus
> > max, not counting memory access and other overhead.
> > PCI Express will give you a bus per PCIe device into a central
> > hub, thus upping the limit to the speed of the front-side bus in
> > Intel architectures, which at the moment is a lot higher than
> > what a single PCI bus does.
> That's why my next router will be based on this box:
> Hopefully there will be NICs connected directly to the memory bus
> in the future (HyperTransport-connected NICs).
> > What it does not explain is why you can only get 80Mb/s with
> > 64-byte packets, which would suggest other bottlenecks than
> > just the bus.
To clarify this, you seem to leave out PCI-X, which is an 8Gb/s bus
(64bit at 133MHz) and certainly able to route or bridge a gigabit of
traffic. PCIe x4 has an unencoded data rate of 8Gb/s per direction
and PCIe x8 is 16Gb/s, which is more than enough to do gigabit and
10gig.
PCI can BURST to 1Gb/s and PCI-X can BURST to 8Gb/s, but bursts are
limited in length, so it's not possible to sustain full bandwidth.
The more devices on the bus, the less throughput you'll get due to
contention.
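To put rough numbers on the bus limits above, here's a
back-of-the-envelope sketch. The clock/width figures are the
standard ones for each bus; the point is that these are burst
ceilings, not sustained rates:

```python
# Theoretical peak bandwidth of the busses discussed above.
# Sustained throughput is lower due to arbitration, bus turnaround,
# and contention between devices sharing the bus.

def parallel_bus_gbps(width_bits, clock_mhz):
    """Peak rate of a shared parallel bus: width * clock."""
    return width_bits * clock_mhz * 1e6 / 1e9

pci  = parallel_bus_gbps(32, 33)    # classic PCI:   ~1.06 Gb/s
pcix = parallel_bus_gbps(64, 133)   # PCI-X 133:     ~8.51 Gb/s

# PCIe 1.x: 2.5 GT/s per lane with 8b/10b encoding, i.e. 2 Gb/s of
# actual data per lane, per direction, on a dedicated link.
def pcie_gbps(lanes, gt_per_s=2.5, encoding=8 / 10):
    return lanes * gt_per_s * encoding

print(f"PCI 32bit/33MHz : {pci:.2f} Gb/s (shared)")
print(f"PCI-X 64/133    : {pcix:.2f} Gb/s (shared)")
print(f"PCIe x4         : {pcie_gbps(4):.1f} Gb/s per direction")
print(f"PCIe x8         : {pcie_gbps(8):.1f} Gb/s per direction")
```

The key architectural difference: PCI and PCI-X split their figure
across every device on the bus, while each PCIe link is dedicated.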
A more limiting factor for routing is packets per second, not raw
throughput. It's foolhardy to use a NIC that doesn't have enough
bandwidth, so "usually" the limiting factor is the CPU's ability to
process the packets, regardless of their size.
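The pps-versus-throughput point is easy to make concrete: on the
wire, each Ethernet frame also costs 8 bytes of preamble and a
12-byte inter-frame gap, so a fully loaded gigabit link delivers
wildly different packet rates depending on frame size. A quick
sketch (the per-packet framing overhead is standard Ethernet; the
CPU-cost-per-packet argument is the assumption):

```python
# Packets/second a saturated gigabit link delivers, by frame size.
# Wire cost per frame = frame bytes + 8 (preamble) + 12 (IFG).
LINK_BPS = 1_000_000_000
OVERHEAD_BYTES = 8 + 12  # preamble + inter-frame gap

def pps(frame_bytes):
    return LINK_BPS / ((frame_bytes + OVERHEAD_BYTES) * 8)

print(f"64-byte frames  : {pps(64):>9,.0f} pps")    # ~1.49M pps
print(f"1518-byte frames: {pps(1518):>9,.0f} pps")  # ~81k pps
```

So gigabit line rate at minimum frame size means ~1.49 million
packets per second, which is why a box that routes 400Mbps of big
packets can still choke at 80Mbps of 64-byte packets.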
We bridged 1 million PPS on a FreeBSD 4.x machine with a 2.8GHz
Opteron and PCI-X Intel cards. A 7.0 system can do about 20% less
(of course, you don't lose the keyboard on a 7.0 system as you do
on the 4.x system!). So I'd assume a 3GHz Xeon could likely do
close to 1 million pps on a 7.0 system. Routing will be a bit
slower.
Also be advised that implementation is an issue with bus
throughput. I've tested systems with both PCIe and PCI-X, and the
PCI-X gave higher throughput even though PCIe is theoretically
faster. Motherboard/chipset design is a factor in how much of the
bus you can actually utilize.