We are pleased to announce that Cumulus is back as a gold sponsor for
netdevconf 0x14! Cumulus has sponsored _every_ single netdev
conference since inception. This conference continues to thrive
because of such generosity. Thank you Cumulus!
More info about Cumulus:
https://netdevconf.info/0x14/news.html?gold-sponsor-cumulus-networks
Reminder, registration is now open and early bird is still in effect.
https://netdevconf.info/0x14/registration.html
cheers,
jamal
Hello everyone,
I have two guest rooms on hold at the Pinnacle Harbourfront Hotel, where our conference is taking place.
These are first come, first served at a rate of $259 CAD per night. If you are interested, please send me an email.
Best regards,
Christie Geldart
Event Coordinator, NETDEV 0x14
https://netdevconf.info/0x14/
If you are a big shop like Google you are always looking to squeeze
all the performance juice you can out of your CPUs. Given the large
number of machines deployed, a 1% performance improvement could mean
millions of $$ in savings in both capital and operational costs.
In this talk, monsieur Eric Dumazet - the man sometimes referred to in
passing as some form of ninja - will describe changes introduced by
Google that help double TCP receiver efficiency in their datacentres.
Eric will discuss various issues that had to be overcome and a
variety of tuning that was needed to achieve these performance
improvements.
More info:
https://netdevconf.info/0x14/session.html?talk-the-path-to-tcp-4k-mtu-and-r…
cheers,
jamal
Imagine you have only one more bit left in the world! Scratch
that. Imagine there is only a half bit left in the world.
Then imagine there are multiple causes vying for this half bit.
Who would you give it to and why?
Yes, these things happen;-> A lot easier when it is between developers
and a lot harder when deciding a standard that shall be cast
in stone. The Internet Engineering Task Force (IETF) Transport Area
Working Group (tsvwg) has such a struggle going.
There are presently two proposals for the last unused code point
in the IPv4/6 header - one from the Low Latency, Low Loss, Scalable
throughput (L4S) folks (see Netdev 0x13 for L4S talks) and another
from the Some Congestion Experienced (SCE) folks.
In this talk, Rodney Grimes will describe the two solutions and their
prototype kernel implementations; what changes are made to both TCP and
IP layers, etc.
Rod is then going to pose the $1M question and solicit feedback:
is one approach better than the other or do we need a new outlook?
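For context, the "half bit" in question is a codepoint of the two-bit ECN field in the IP header (RFC 3168): Not-ECT (00), ECT(0) (10), ECT(1) (01) and CE (11). ECT(1) is the last codepoint without a firmly settled meaning, and it is what both camps are vying for. A minimal sketch of pulling the ECN codepoint out of the IPv4 TOS / IPv6 Traffic Class byte (the constant names here are mine, purely illustrative):

```python
# The ECN field is the two low-order bits of the IPv4 TOS /
# IPv6 Traffic Class byte (RFC 3168).
ECN_MASK = 0x03
NOT_ECT, ECT1, ECT0, CE = 0b00, 0b01, 0b10, 0b11  # the four codepoints

def ecn_codepoint(tos: int) -> int:
    """Extract the ECN codepoint from a TOS/Traffic Class byte."""
    return tos & ECN_MASK

# ECT(1) - binary 01 - is the contested "half bit".
print(ecn_codepoint(0x01) == ECT1)
```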
More info:
https://netdevconf.info/0x14/session.html?talk-what-to-do-with-half-a-bit-L…
Reminder, registration is now open and early bird is still in effect.
https://netdevconf.info/0x14/registration.html
cheers,
jamal
Being able to marry new Linux technology with existing "legacy"
infra is valuable to allow for continuity. XDP is a relatively
new technology introduced in Linux networking which enables
more efficient packet processing. A common desire is to allow
"legacy" packet processing functions to take advantage of the
speed advantages offered by XDP.
In addition to the efficiency factor, XDP also provides a
time-to-market advantage: new network packet processing features
can be added without needing to go through the complexity of the
upstreaming process.
Toshiaki Makita and William Tu are still in pursuit of getting
Open vSwitch to use XDP. In this talk Toshiaki and William attempt
to marry OVS to XDP via:
1. Attaching XDP from the kernel through a UMH (user mode helper)
2. Reusing the TC HW acceleration interface
3. Attaching XDP from the OVS userspace daemon
The authors will describe their approaches and challenges faced.
More info:
https://netdevconf.info/0x14/session.html?talk-XDP-in-open-vswitch
Reminder, registration is now open and early bird is still in effect.
https://netdevconf.info/0x14/registration.html
cheers,
jamal
The Multipath TCP community started its engagement with netdev conf
0.1 (https://netdevconf.info/0.1/sessions/16.html) and has shown
up at multiple netdev confs since to solicit feedback and
report progress. After all that hard work MPTCP has finally been merged!
In this talk Mat Martineau, Ossama Othman and Peter Krystad
will provide advice to developers, ops and distros on how to make
good use of the upstreamed MPTCP.
They will discuss kernel considerations, report on upstream merge
progress to date, and introduce the Multipath TCP Daemon (mptcpd) to
the Linux networking community.
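For readers who want to experiment once they are on an MPTCP-enabled kernel, creating a multipath socket is a one-line change: pass IPPROTO_MPTCP (protocol number 262 on Linux) to socket(). A hedged sketch - on kernels built without CONFIG_MPTCP the call simply fails and you fall back to plain TCP:

```python
import socket

IPPROTO_MPTCP = 262  # Linux protocol number for MPTCP

try:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM, IPPROTO_MPTCP)
    s.close()
    mptcp_supported = True
except OSError:
    # Kernel without CONFIG_MPTCP (or MPTCP disabled): fall back to TCP.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.close()
    mptcp_supported = False

print("MPTCP sockets available:", mptcp_supported)
```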
More info:
https://netdevconf.info/0x14/session.html?talk-using-upstream-MPTCP-in-linu…
Reminder, registration is now open and early bird is still in effect.
https://netdevconf.info/0x14/registration.html
cheers,
jamal
Kubernetes (k8s) Container Network Interface (CNI) is a specification for
managing network resources on a Kubernetes cluster. CNI enables
plugin-based networking solutions for containers, ranging in functionality
from IP address management to access control policy management, QoS,
etc. Operators can pick and choose from the many packaged and open
CNI implementations that exist, or create custom CNIs to serve
their needs.
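To make the spec a little more concrete: a CNI network is described by a small JSON document that the container runtime hands to the plugin. The sketch below builds an illustrative configuration; the network name, the `bridge` plugin and the `host-local` IPAM choice are generic examples, not anything specific to this talk:

```python
import json

# Illustrative CNI network configuration (fields per the CNI spec).
cni_conf = {
    "cniVersion": "0.4.0",
    "name": "examplenet",        # example network name
    "type": "bridge",            # plugin binary the runtime invokes
    "ipam": {                    # delegated IP address management
        "type": "host-local",
        "subnet": "10.10.0.0/16",
    },
}
print(json.dumps(cni_conf, indent=2))
```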
In this talk Rony Efraim and Liel Shoshan describe their approach
to enabling hardware network offload in k8s.
Rony and Liel will illustrate offloading with a bunch of CNIs that
can be used in conjunction with OVS. In addition, they will
describe how to offload other k8s use cases such as pod-to-pod
intra networking and enhancing ingress service load balancing via
dp_hash. And last but not least the speakers will describe
challenges faced and future work.
More info:
https://netdevconf.info/0x14/session.html?talk-hardware-offload-for-k8s-con…
Reminder, registration is now open and early bird is still in effect.
https://netdevconf.info/0x14/registration.html
cheers,
jamal
As the commercials say - 5G (Fifth Generation) networking is here.
Why is that a big deal?
One appealing aspect: How do you like getting multi-gigabit
bandwidth to your mobile device(s)?
How does that work?
For the first time in commercial mobile networking history
a vast untapped spectrum has been made available for mass user
consumption in the millimeter wavelength range; in the 5G world
it is referred to as "mmWave".
The spectrum in mmWave links makes possible the multi-Gbps data rates
on 5G cellular networks.
The problem with mmWave links is that they are, like most high-frequency
signals, susceptible to blockage. Basic obstructions like trees, snow,
rain, buildings, etc. interfere with the signal. A technique called
beamforming helps but doesn't solve the problem entirely.
To put this in context:
Think of having a link that is very high speed but is constantly
fluctuating in capacity. Then the question is: "How does TCP congestion
control work in this kind of setup?"
In this talk Feng Li, Jae Won Chung and Jamal Hadi Salim[1]
will present results of a study evaluating how various Linux TCP
congestion control algorithm implementations fare over mmWave links.
The authors claim this is _the first ever_ such study on a real
commercial 5G network! The study was carried out on the Verizon 5G
deployment network.
Feng et al. will present data comparing popular TCP congestion
control algorithms, including NewReno, Cubic, BBR and BBRv2
(prepatch). The results show that the performance of TCP on
mmWave links is still highly dependent on the combination of TCP
algorithm and socket buffer sizes.
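Readers can reproduce part of such a setup on any Linux box: the congestion control algorithm is selectable per socket via the TCP_CONGESTION socket option. A minimal sketch, assuming a Linux kernel with the usual `cubic` module available (the call is skipped gracefully elsewhere):

```python
import socket

# Linux value for the TCP_CONGESTION option; exposed by Python where available.
TCP_CONGESTION = getattr(socket, "TCP_CONGESTION", 13)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    # Select the algorithm for this socket, then read it back.
    s.setsockopt(socket.IPPROTO_TCP, TCP_CONGESTION, b"cubic")
    raw = s.getsockopt(socket.IPPROTO_TCP, TCP_CONGESTION, 16)
    algo_name = raw.split(b"\x00", 1)[0].decode()
except OSError:
    algo_name = None  # algorithm not built in, or non-Linux host
finally:
    s.close()

print("congestion control:", algo_name)
```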
Without a doubt mmWave links impose new challenges on future transport
layer design and the authors hope this talk will incentivize
more discussions in the community.
[1] Referred to in the third person.
More info:
https://netdevconf.info/0x14/session.html?talk-preliminary-evaluation-of-TC…
Reminder, registration is now open and early bird is still in effect.
https://netdevconf.info/0x14/registration.html
cheers,
jamal
It is turtles^wbatching all the way down, folks!
Several parts of the kernel network stack use batching
for performance enhancement.
Maciej Fijałkowski, Bjorn Topel and Krzysztof Kazimierczak
feel we could do a little more. They borrow ideas from other high
performance packet processing solutions, such as DPDK,
and adapt those ideas to an "XDP first" design - meaning drivers
that are optimized for the case when all packets are processed by XDP.
In this talk, they describe how they enabled batching at both the driver
and XDP level to improve performance for two sample Intel drivers,
i40e and ice.
Maciej et al. will share the performance improvements gained and
propose ideas for how these batching techniques can equally be applied
at other BPF hooks (socket send/recv, traffic control, etc.).
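The actual work lives in C inside the drivers, but the amortization idea itself is simple enough to sketch in userspace Python (function name and batch size are purely illustrative): pay the fixed per-call costs (locks, doorbells, cache refills) once per batch instead of once per packet.

```python
def process_in_batches(packets, handler, batch_size=64):
    """Hand packets to `handler` in fixed-size batches so per-call
    overhead is paid once per batch rather than once per packet."""
    results = []
    for i in range(0, len(packets), batch_size):
        results.extend(handler(packets[i:i + batch_size]))
    return results

# Toy handler standing in for a driver's per-batch processing routine.
doubled = process_in_batches(list(range(10)),
                             lambda batch: [p * 2 for p in batch],
                             batch_size=4)
print(doubled)
```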
More info:
https://netdevconf.info/0x14-staging/session.html?talk-it-is-batching-all-t…
cheers,
jamal
Existing performance enhancing mechanisms such as TCP auto-tuning
or programmatic async (epoll, etc.) events such as "ready to send"
help applications sustain high throughput even under high
Bandwidth-Delay Product scenarios.
But: the folks at the Tor project have found that when you have
thousands of active TCP sockets transmitting high volumes of data,
many of them simultaneously (as is the case in the Tor anonymity
network), these mechanisms are insufficient.
Local buffer bloat becomes a hindrance.
In this talk, David Goulet and Rob Jansen introduce a new
async event that helps applications overcome these issues.
The new event supplements and extends the current write
"ready to send" event that triggers when a socket buffer
has free space.
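For readers unfamiliar with the existing event, here is a minimal Linux-only sketch of the "ready to send" notification the proposal builds on: registering a socket for EPOLLOUT, which fires while the socket's send buffer has free space.

```python
import select
import socket

# A connected pair; a freshly created socket has send-buffer space.
a, b = socket.socketpair()
a.setblocking(False)

ep = select.epoll()
ep.register(a.fileno(), select.EPOLLOUT)

# poll() returns (fd, eventmask) pairs; EPOLLOUT means "ready to send".
events = ep.poll(1)
ready = any(fd == a.fileno() and mask & select.EPOLLOUT
            for fd, mask in events)
print("send-buffer space available:", ready)

ep.unregister(a.fileno())
ep.close()
a.close()
b.close()
```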
David and Rob will present data, give a more detailed
description of the problem, and then show the effect such
a change could have on performance through a small-scale
simulation.
More info:
https://netdevconf.info/0x14-staging/session.html?talk-reducing-kernel-queu…
Reminder, registration is now open and early bird is still in effect.
https://netdevconf.info/0x14/registration.html
cheers,
jamal