# Netflix report on their work with FreeBSD and HTTPS



## drhowarddrfine (Apr 16, 2015)

https://lists.w3.org/Archives/Public/www-tag/2015Apr/0027.html
https://people.freebsd.org/~rrs/asiabsd_2015_tls.pdf
https://2015.asiabsdcon.org/timetable.html.en#P7A



> I'm pleased to report we have made good progress on that and we presented our FreeBSD work at the Asia BSD conference. We now believe we can deploy HTTPS at a
> cost that, whilst significant, is well justified by the privacy returns for
> our users.
> 
> ...


----------



## roddierod (Apr 16, 2015)

I would love something like this geared toward marketing and managerial types.


----------



## SirDice (Apr 17, 2015)

Both ArsTechnica and The Register have articles about it:
http://www.theregister.co.uk/2015/04/17/netflix_house_of_cards_fortified_with_https/
http://arstechnica.com/security/201...-will-soon-use-https-to-secure-video-streams/

A little light on details, but this is nice to know:


> The Netflix OpenConnect Appliance is a server-class computer based on an Intel 64bit Xeon CPU and running FreeBSD 10.1 and Nginx 1.5. Each server is designed to hold between 10TB and 120TB of multimedia objects, and can accommodate anywhere from 10,000 to 40,000 simultaneous long-lived TCP sessions with customer client systems. The servers are also designed to deliver between 10Gbps and 40Gbps of continuous bandwidth utilisation. Communication with the client is over the HTTP protocol, making the system essentially into a large static-content web server.



Does anybody know what kind of network cards they're using? I'd really like to know which ones are capable of sustaining that 10-40 Gbps traffic.


----------



## usdmatt (Apr 17, 2015)

> Does anybody know what kind of network cards they're using? I'd really like to know which ones are capable of sustaining that 10-40 Gbps traffic.



https://openconnect.itp.netflix.com/hardware/index.html
Click on the updated storage or I/O appliance for full specs.

Spoiler: Chelsio quad-port 10Gig using LAGG
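For reference, link aggregation on FreeBSD is handled by the lagg(4) driver. A minimal rc.conf sketch for bonding four 10Gig ports into one 40Gbps aggregate might look like the following — the `cxl0`–`cxl3` interface names are what the Chelsio cxgbe(4) driver assigns to T5 ports, and the whole config is my guess at the shape, not Netflix's published setup:

```shell
# /etc/rc.conf fragment -- hypothetical 4x10Gig LACP aggregate.
# Interface names cxl0-cxl3 are the cxgbe(4) driver defaults for
# Chelsio T5 ports; Netflix's actual configuration isn't published.

ifconfig_cxl0="up"
ifconfig_cxl1="up"
ifconfig_cxl2="up"
ifconfig_cxl3="up"

# Create the lagg interface and attach all four ports using LACP,
# which requires matching configuration on the switch side.
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto lacp laggport cxl0 laggport cxl1 laggport cxl2 laggport cxl3 DHCP"
```

LACP hashes each TCP flow onto one member port, so a single connection tops out at 10Gbps, but tens of thousands of long-lived streaming sessions spread across the links nicely.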


----------



## usdmatt (Apr 17, 2015)

> "It’s not clear why that was, but I’m guessing it had to do with the way their servers were configured, the types of cipher suites they were using, lack of hardware, etc.," Matt Green, a Johns Hopkins University professor and encryption expert, told Ars. "The fact that they’ve made so much progress in only six months probably means that the improvements were probably not so hard to make."



This comment from an "encryption expert" intrigues me. I wonder if he actually bothered to read the technical document, or has ever tried to serve 40Gbps of TLS traffic from a single server?
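For what it's worth, the bulk-encryption side of 40Gbps TLS is at least tractable on paper once handshakes are amortised over long-lived sessions. A rough back-of-envelope (the per-core AES-GCM throughput here is my assumption for an AES-NI-capable Xeon of that era, not a figure from the paper):

```python
# Back-of-envelope: CPU cores needed just for bulk TLS encryption
# at 40 Gbps. The per-core AES-GCM rate is an assumed ballpark for
# an AES-NI Xeon core circa 2015, not a measured Netflix number.

target_gbps = 40
target_bytes_per_sec = target_gbps * 1e9 / 8       # 40 Gbps == 5 GB/s

assumed_aes_gcm_bytes_per_core = 1.5e9             # ~1.5 GB/s per core

cores_for_crypto = target_bytes_per_sec / assumed_aes_gcm_bytes_per_core
print(f"~{cores_for_crypto:.1f} cores for bulk encryption alone")
```

The point being: raw crypto throughput isn't really the hard part. Per the AsiaBSDCon paper, the expensive bit is that TLS breaks the zero-copy sendfile(2) path a static-content server relies on, so the data has to be touched by the CPU on its way out — which is exactly what their FreeBSD work addresses.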


----------

