# What hardware would FreeBSD need to be an effective SMB 10GbE router?



## Avery Freeman (Jul 3, 2018)

Hey,

I've got a D-Link DGS-3427 24-port switch with 10GBase-CX4 uplinks I'd like to use to start a 10G network.  The problem is, most modern network hardware uses either (Q)SFP+ or 10GBase-T.  So I looked into media converters for CX4 to RJ45 and was absolutely shocked at how appallingly expensive they are.

I was thinking that instead of using media converters, I could just outfit an amd64 server with both types of 10GbE NICs using some hardware I have lying around.  But I'm not really sure what kind of processing power would be necessary for that kind of bandwidth.

I currently have a cheap Lenovo J1800 motherboard (it was literally $15) running pfSense 2.4 on a couple of SLC mSATA SSDs mirrored in ZFS, with a Supermicro 4-port PCIe x8 NIC that barely ever goes past 10% processor utilization.  I might use a VPN client with it now and then, but I don't need to serve VPN connections or do anything fancy beyond just filtering and routing packets.

iperf tests are 930Mbps-ish TCP send/receive across the 1GBase-T network, if my memory serves me correctly (haven't tested it in a while).
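For reference, ~930Mbps is roughly what the framing-overhead math predicts for TCP over gigabit Ethernet. A rough sketch (my numbers, assuming a standard 1500-byte MTU and no TCP options):

```python
# Rough sanity check for the ~930Mbps iperf figure: per-frame overhead on
# gigabit Ethernet caps TCP goodput well below the 1000Mbps line rate.

LINE_RATE_MBPS = 1000           # 1GBase-T line rate
MTU = 1500                      # standard Ethernet MTU
IP_TCP_HEADERS = 20 + 20        # IPv4 + TCP headers, no options
ETH_OVERHEAD = 14 + 4 + 8 + 12  # Ethernet header + FCS + preamble + inter-frame gap

payload_per_frame = MTU - IP_TCP_HEADERS   # 1460 bytes of TCP payload
wire_bytes_per_frame = MTU + ETH_OVERHEAD  # 1538 bytes actually on the wire

goodput = LINE_RATE_MBPS * payload_per_frame / wire_bytes_per_frame
print(f"theoretical TCP goodput: {goodput:.0f} Mbps")  # ~949 Mbps
```

So ~930Mbps measured against a ~949Mbps theoretical ceiling means the J1800 is already pushing close to line rate at 1GbE.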

My environment is small: 10-15 VMs and 3-5 physical computers running at a time, max.  Two of the VMs are WS2012 DCs, one on each of two E3-1230v2 servers running ESXi.  10GBase-T or CX4 would likely just be used for the servers to start with, and I'd make 10GBase-T upgrades to the desktop machines over time.

Could I get away with the J1800 board if I moved up to 10GbE and that's all it's used for, or should I use one of the E3 v2 boards I have lying around instead, and why?

Thanks!

Edit:  Just to be clear, I am not asking for pfSense support, I am considering moving from pfSense to FreeBSD for this project.


----------



## Phishfry (Jul 3, 2018)

Chelsio 10G CX4 S320E cards are cheap and abundant on eBay.
The problem with SoCs like the J1800 is PCIe lanes.
You want a minimum of x8 PCIe 2.0 for 10GbE cards.
Look hard at the actual wiring or electrical spec for your PCIe slot.
Many SoCs are quite pathetic, offering only x4 electrical lanes, sometimes in a physical x16 slot.
That alone is reason to use an E3 Xeon. Plus AES-NI and VT-d.
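A quick sketch of the lane math (my numbers, just the standard per-lane rates minus line-encoding overhead: 8b/10b on gen 1/2, 128b/130b on gen 3):

```python
# Usable PCIe bandwidth per slot configuration, compared against what a
# 10GbE card needs at line rate. Per-lane rates account for line-encoding
# overhead only; real TLP overhead shaves off a bit more.

USABLE_GBPS_PER_LANE = {
    1: 2.5 * 8 / 10,    # PCIe 1.0: 2.5 GT/s, 8b/10b  -> 2.0 Gbps
    2: 5.0 * 8 / 10,    # PCIe 2.0: 5.0 GT/s, 8b/10b  -> 4.0 Gbps
    3: 8.0 * 128 / 130, # PCIe 3.0: 8.0 GT/s, 128b/130b -> ~7.9 Gbps
}

def slot_gbps(gen, lanes):
    """Usable bandwidth of a slot with the given electrical lane count."""
    return USABLE_GBPS_PER_LANE[gen] * lanes

for gen, lanes in [(2, 1), (2, 4), (2, 8)]:
    bw = slot_gbps(gen, lanes)
    verdict = ("ok for dual-port 10G" if bw >= 20 else
               "ok for one 10G port" if bw >= 10 else
               "too slow for 10G")
    print(f"PCIe {gen}.0 x{lanes}: {bw:.1f} Gbps usable -- {verdict}")
```

Which is why an x1 electrical slot is hopeless for 10G, x4 at PCIe 2.0 only covers a single port, and a dual-port card really wants x8 PCIe 2.0.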


----------



## Phishfry (Jul 3, 2018)

I considered trying a new Supermicro SoC board with four Intel LAN ports. Reasonably priced.
https://www.supermicro.com/products/motherboard/X11/X11SBA-LN4F.cfm
Now look at the PCIe slot: it's a physical x8 slot that's only x1 electrically. Wow.
Can you imagine putting 10G on an x1 slot? What's the sense in that?
I am going through the same thing with Chelsio T420-BT2 10GBase-T cards.


----------



## Avery Freeman (Jul 3, 2018)

Phishfry said:


> Chelsio 10G CX4 S320E cards are cheap and abundant on eBay.
> The problem with SoCs like the J1800 is PCIe lanes.
> You want a minimum of x8 PCIe 2.0 for 10GbE cards.
> Look hard at the actual wiring or electrical spec for your PCIe slot.
> ...



Oh, duh - PCIe lanes.  I totally should have thought of that.

So it's probably worth using my X9SCL-F, which has two PCIe 3.0 x8 slots that would use the E3's full 16 lanes.  That seems like a pretty tight fit, but getting more than that would require buying new hardware, which is what I'm trying to avoid.

Edit: I looked this up and I was wrong; the C202 chipset in the X9SCL-F has an eight-lane PCIe limit.  Oh well, it's still better than the x1 speed of my J1800's x16 slot.

Cool, well, thanks for the feedback!  Have you looked at any of the Denverton boards?  I think they have a limit of PCIe x4 on their expansion port, but most do have integrated 10GbE.  Servethehome has a lot of great information: https://www.servethehome.com/tag/denverton/

The cheapest 2-core boards are under $300 and the 16-core boards are around $1000 (they don't have hyperthreading).  Power consumption specs I've seen for the 16-core were 50W under load, 25W idle.  Awesome features like assignable PCIe groups and SR-IOV for SoC peripherals.


----------



## Avery Freeman (Jul 3, 2018)

I wonder if just a cable like this would work? 

https://www.datastoragecables.com/cx4/CX4-SFP+/

I'm excited looking at hardware, though.  I'll probably still take my X9SCL-F and make a router out of it with a 4-port Pro/1000 and an X540-T2.  I have gigabit internet, which could scale up, and my cable modem has four 1G RJ45 ports.  I'd like to get the most out of it that I can.


----------



## Phishfry (Jul 3, 2018)

I thought you had CX4 on the switch? So CX4 to CX4. The Chelsios I referenced are the old T3 series and use the cxgb(4) driver.
The cable you are showing is CX4 to SFP+ MiniGBIC, not CX4 to CX4.
You don't need anything so fancy. CX4 cables are cheap.
I must say I have never used CX4. They are so cheap I have been tempted.


----------



## Phishfry (Jul 3, 2018)

As I understand it, CX4 copper was originally used for InfiniBand, which was a storage medium, not networking.
Now we have mixed-mode CX4, which means it can do both. The Chelsio adapter S320E-CXA is mixed mode.
A lot of the eBay cards are NetApp OEM cards.
So the thing you need to look at is your switch.
From my eBay D-Link DGS-3427 search, the 10G modules in the back take clip/latch-style CX4 connectors.
Some CX4 cable ends are different: they use clips to fasten the connector instead of the screw posts found on some CX4 cables from eBay.
The Chelsio cards also use the latching-style CX4, not the screw type, so both ends need to be the same.
Here is what you need:
Here is what you need:
https://www.ebay.com/itm/113058007470
https://www.ebay.com/itm/253015479686


----------



## Phishfry (Jul 4, 2018)

I see two different models for the Chelsio CX4 cards: N320 and S320. Both are based on the Terminator 3 chip, but the N320 is a networking adapter and the S320 is a storage adapter. So I am not sure if these are mixed mode. Maybe buy a single N320 and one cable and try them.


----------



## Phishfry (Jul 5, 2018)

I noticed that the FreeBSD wiki covers Mellanox cards.
Interesting that you can set a sysctl for the mlx4 driver to switch between InfiniBand and Ethernet mode.
https://wiki.freebsd.org/InfiniBand


----------



## Avery Freeman (Jul 5, 2018)

Phishfry said:


> I noticed that the FreeBSD wiki covers Mellanox cards.
> Interesting that you can set a sysctl for the mlx4 driver to switch between InfiniBand and Ethernet mode.
> https://wiki.freebsd.org/InfiniBand



That's cool, I'll see if they are in the ESXi HCL; if I'm buying new stuff I'll probably try to stick to hardware that's supported.  I'm definitely not going with InfiniBand because its support under ESXi has all but been removed.  I know the X540-T2 is well supported, so it's likely to be one of the two 10GbE NICs.

I am still trying to game this out, so I apologize if I didn't explain the reasoning behind the CX4-to-SFP+ cable (which I still haven't found, only QSFP+). It could pretty much eliminate the need for the router to have two different 10GbE cards.  So the big picture could be something like:

 1Gb internet w/ 4-port LAGG modem --->> 4-port x8-lane NIC on FreeBSD router w/ CX4 NIC --->> DGS-3427 uplink #1 -- uplink #2 --->> CX4 to SFP+ -->> 10GBase-T to computers, servers, etc.

But I was originally thinking something that wouldn't require an adapter cable, such as: 

1Gb internet w/ 4-port LAGG modem --->> 4-port x4-lane NIC on FreeBSD router w/ CX4 NIC --->> DGS-3427 uplink
and also SFP+ NIC -->> 10GBase-T switch

Thanks for the good ideas on cards to look out for.  Any info on SR-IOV support?  I'm thinking that if I'm going to be throwing all this compute power at my router, I'm going to virtualize a few things on it.

Thanks


----------



## Phishfry (Jul 6, 2018)

I remember pfSense all abuzz about QAT and SR-IOV being the best thing since whipped cream.
https://forums.freebsd.org/threads/sr-iov-and-igb-driver.62069/
Looks like SR-IOV on the 10G interfaces doesn't work either.
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=229435

Coming soon:
https://www.servethehome.com/quickassist-driver-freebsd-pfsupport-coming/


----------



## Avery Freeman (Jul 9, 2018)

Gah! 

OK, well, most feedback I've gotten about using a computer for routing says the approach is flawed, in that it will be far slower than dedicated equipment.

Therefore, I've focused more on just getting a cable that will allow me to connect my two switches.  

I found the CX-4 to SFP+ adapter cable that I linked earlier, but it may not be electrically compatible; I am going to email the site to see what they say.

If it's not, it looks like I'll have to get an XFP expander for my 24-port 1GbE switch (the DGS-3427), but I can still cheap out on CX-4 cards for the gateway/firewall computer.

Re: Chelsio cards, you mentioned you have the T420-BT2, so that's PCIe 2.0 x8, right?  I am trying to figure out their naming convention.  I see the S310E-xx are single-port PCIe 1.0 x8 cards and the S320E-xx are two-port, so I'll probably just get the S310E-CXA and connect it to the DGS-3427, unless it makes more sense to get the -CR, connect that to the SFP+ uplink on the XS708E, and then go from 10GBase-T to CX-4.  I'll have to get more info on which is most electrically compatible with CX-4 (10GBase-T or SFP+).

Is there any circumstance you can imagine where I would need a PCIe 2.0 card?  8 lanes of PCIe 1.0 have a raw signalling rate of 20Gbps, which works out to about 16Gbps (2GB/s) usable after 8b/10b encoding.  Seems like that should be fast enough for a single 10GbE port, unless I'm missing something.  I think the T520-xx is PCIe 3.0.  WTF would anybody need that for 10GbE?
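Checking my own math here (note the units: the raw rate is 20Gbps, not 20GBps, and 8b/10b encoding takes 20% off the top):

```python
# PCIe 1.0 x8: raw signalling is 8 lanes x 2.5 GT/s = 20 Gbps, but 8b/10b
# encoding leaves only 16 Gbps (2 GB/s) usable. That covers the single-port
# S310E at line rate, but falls short of feeding both ports of an S320E.

LANES = 8
RAW_GTPS = 2.5      # PCIe 1.0 transfer rate per lane
ENCODING = 8 / 10   # 8b/10b line-encoding efficiency

usable_gbps = LANES * RAW_GTPS * ENCODING
print(f"usable: {usable_gbps:.0f} Gbps")         # 16 Gbps
print("single 10G port ok:", usable_gbps >= 10)  # True
print("dual 10G ports ok:", usable_gbps >= 20)   # False
```

So for the single-port S310E-CXA, PCIe 1.0 x8 really is fast enough.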

Oooh, they have x16 QSFP+ 100GbE cards... *drool*  I'm sure that'll be a slow standard in like a decade...

Now to try and do some benchmarks with different software/OS.  So far, I'm thinking of trying:

FreeBSD 12.0-CURRENT
OpenBSD 6.3
pfSense 2.4.3
IPFire 2.15
openSUSE Leap 15 w/ BPF
Debian Stretch w/ Cilium


My test computers will be:
Supermicro X9SCL-F w/ E3-1230v2 (4 cores / 8 threads)
Intel DH67GD w/ i5-3570k (4 cores, no HT) 

I was actually thinking of trying the DH67GD first since the 3570K has no hyperthreading.  I read somewhere that FreeBSD routing performance is better with hyperthreading disabled, although I guess I could just disable it in the BIOS.

Mostly, the difference between the two is that one processor tops out at 65W and the other at 77W, but besides that they're mostly identical.  It'll be interesting to see how raw single-core performance affects throughput.

I was thinking when I got into this that I could get by with something governed around 20W like an E3-1220L, but the more I read about this, the less likely that seems to be the case...


----------

