# NIC teaming using FreeBSD 9-stable via crossover



## papelboyl1 (Mar 2, 2012)

I have four server-quality NICs here (two gigabit ports per card, Intel-based, PCIe 4x). I want to install one NIC in my gaming PC and the other NIC in my fileserver (will be using FreeBSD 9-stable). I don't have the server set up yet because my board died and I'm waiting for the new one I bought online.

I'm hoping for faster file transfer between my gaming and file server. So I'm wondering if what I want is possible?

Thank you.


----------



## SirDice (Mar 2, 2012)

papelboyl1 said:

> I'm hoping for faster file transfer between my gaming and file server.


You'll get pretty much the same speed if both machines are connected to the same switch. It would only make a real difference if there are more machines on that switch and they're all pushing a lot of bandwidth.



> So I'm wondering if what I want is possible?


Yes, it's possible but I doubt there will be a speed increase.


----------



## papelboyl1 (Mar 2, 2012)

SirDice said:


> You'll get pretty much the same speed if both machines are connected to the same switch. It would only make a real difference if there are more machines on that switch and they're all pushing a lot of bandwidth.
> 
> 
> Yes, it's possible but I doubt there will be a speed increase.



Well I'm hoping to avoid using the switch because my switch doesn't have teaming features. It's your regular unmanaged switch. Hence the reason why I mentioned crossover.


----------



## SirDice (Mar 2, 2012)

papelboyl1 said:


> Well I'm hoping to avoid using the switch because my switch doesn't have teaming features. It's your regular unmanaged switch. Hence the reason why I mentioned crossover.



Makes sense. It should work; networking doesn't require a switch, you only need one to connect more than two machines.

Handbook: 32.6 Link Aggregation and Failover


----------



## phoenix (Mar 2, 2012)

Note:  NIC bonding/teaming/aggregation will not make a single file transfer faster.  A single TCP connection between two hosts will only go across 1 NIC, so you still top out at 1 Gbps per transfer.  The only benefit you get from bonding/teaming/aggregation is that you can do multiple transfers at full wirespeed.  IOW, you can't transfer 1 file at 2 Gbps.  But you can transfer 2 files, each at 1 Gbps.

Note also, that depending on the bonding/aggregation protocol you use and how it hashes connections, all transfers between two hosts may go across a single NIC, meaning you may still be limited to 1 Gbps.

NIC bonding/aggregation is really only useful for fail-over setups (one NIC dies, so other one takes over automatically) or for one-to-many setups (server has bonded NIC so can do multiple full-speed transfers to separate clients).  Direct connections between two systems via cross-over cables most likely won't gain you anything speed-wise.


----------



## bbzz (Mar 2, 2012)

What about per-packet load balancing? I know this is mostly deprecated, but can it be set up?


----------



## jalla (Mar 2, 2012)

In my experience with link aggregation on other systems, interface selection is simply done with something like `intout = (src-ip + dst-ip) mod N`, where N is the number of available interfaces.
Is FreeBSD actually smarter? (Adding the port number into the calculation, perhaps?)
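
Just to illustrate the formula (a sketch; using the last octets of the two addresses from this thread and N=2 lagg ports, though real implementations hash the full headers):

```shell
# Pick an outgoing interface from a flow's addresses, as in
# intout = (src-ip + dst-ip) mod N.
src=15; dst=13; N=2
iface=$(( (src + dst) % N ))
echo "flow hashes to laggport em${iface}"
```

Since the sum is fixed for a given host pair, every flow between the same two machines lands on the same port, so a two-host setup may never touch the second link.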

In theory you can also use round-robin for load balancing, but that introduces out-of-order packets and may just make things worse.


----------



## papelboyl1 (Mar 3, 2012)

Set up my Windows 7 PC to use dynamic link aggregation with IP 192.168.2.15. I'm using Intel drivers (from their website).

I set up my FreeBSD box with:

```
ifconfig_em0="up"
ifconfig_em1="up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto lacp laggport em0 laggport em1 192.168.2.13/24"
```
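
One way to check whether LACP actually negotiated (a sketch; interface names are from the config above):

```shell
# With laggproto lacp, each laggport line should show flags like
# ACTIVE,COLLECTING,DISTRIBUTING once negotiation with the peer succeeds.
ifconfig lagg0
ifconfig lagg0 | grep laggport
```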
I can transfer files using scp (via the WinSCP software) but transfer speed varies between 4 and 5 MBps.

I tried using jumbo frames. 4088 and 9014 are the available choices in Windows, so I set the MTU of the Intel interfaces to one of those numbers. The MTU is set to match between my Windows and FreeBSD PCs. Transfer speed dropped to less than 100 KBps, and the link isn't very stable.
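
For reference, the matching-MTU setup on the FreeBSD side would look something like this (9000 is an assumption; Windows' 9014 figure likely includes the 14-byte Ethernet header, so it may correspond to a 9000-byte MTU here):

```shell
# /etc/rc.conf (hypothetical): the MTU has to be set on the member
# NICs as well as on lagg0 itself.
ifconfig_em0="up mtu 9000"
ifconfig_em1="up mtu 9000"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto lacp laggport em0 laggport em1 192.168.2.13/24 mtu 9000"
```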

I'm using regular CAT6 cables (non-crossover) but I don't think this matters.

Any suggestions?


----------



## Uniballer (Mar 3, 2012)

Those numbers sound like you've saturated a CPU.  Why are you benchmarking with scp (it uses a lot of CPU time to encrypt/decrypt data)?  What hardware are you running?
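
To take encryption out of the equation, you could measure raw TCP throughput with something like benchmarks/iperf from ports (a sketch; the address is the lagg address from earlier in this thread):

```shell
# On the FreeBSD box (server side):
iperf -s

# On the Windows box (client side), run two parallel streams for 30s
# to see whether more than one link ever gets used:
iperf -c 192.168.2.13 -t 30 -P 2
```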

I have an AMD Phenom II X4 840 (the one with no L3 cache) running 8-stable serving Samba shares to Windows clients.  A Win7 Llano A8-3850 box 2 gigabit switches away averages over 25MBps writing to the share for Windows Backup.  Reading is generally faster.  Switches are Linksys SGE-2000P with jumbo frames disabled.  Both systems are using on-board NICs (probably RealTek - nothing fancy).  Performance seems to be limited mainly by disk/filesystem performance so link aggregation is unlikely to provide any improvement.


----------



## papelboyl1 (Mar 3, 2012)

Core 2 E4500 for FreeBSD, FX-8120 for the gaming PC. I'll try Samba shares tomorrow; the scp test was just from trying things out.

Thanks for the reply though.


----------

