Imagine you have only one more bit left in the world! Scratch
that. Imagine there is only a half bit left in the world.
Then imagine there are multiple causes vying for this half bit.
Who would you give it to and why?
Yes, these things happen ;-> Such decisions are a lot easier when
they are between developers, and a lot harder when they set a
standard that shall be cast in stone. The Internet Engineering Task
Force (IETF) Transport Area Working Group (TSVWG) has such a
struggle going on.
There are presently two proposals before the working group for the
last unused code point in the IPv4/IPv6 header: one from the Low
Latency, Low Loss, Scalable throughput (L4S) folks (see Netdev 0x13
for the L4S talks) and another from the Some Congestion Experienced
(SCE) folks.
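For those who have not followed the debate, the "half bit" is the
last free ECN codepoint: the two ECN bits in the IP header encode
four codepoints, three of which are already assigned by RFC 3168,
leaving only ECT(1) up for grabs. A sketch in C (the values are per
RFC 3168; the enum names here are just illustrative):

  /* The two ECN bits in the IPv4/IPv6 header encode four
   * codepoints. Three are taken (RFC 3168); only ECT(1) is left -
   * hence "half a bit": one spare codepoint, not a whole free bit. */
  enum ecn_codepoint {
          ECN_NOT_ECT = 0x0,  /* 00: not ECN-capable transport */
          ECN_ECT_1   = 0x1,  /* 01: the contested codepoint   */
          ECN_ECT_0   = 0x2,  /* 10: ECN-capable transport     */
          ECN_CE      = 0x3,  /* 11: congestion experienced    */
  };

Roughly speaking, L4S wants the sender to set ECT(1) as an
identifier for L4S traffic, while SCE wants the network to set it as
a finer-grained, earlier congestion signal.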
In this talk, Rodney Grimes will describe the two proposals and
their prototype kernel implementations: what changes are made to
both the TCP and IP layers, and so on.
Rod is then going to pose the $1M question and solicit feedback:
is one approach better than the other or do we need a new outlook?
More info:
https://netdevconf.info/0x14/session.html?talk-what-to-do-with-half-a-bit-L…
Reminder, registration is now open and early bird is still in effect.
https://netdevconf.info/0x14/registration.html
cheers,
jamal
Being able to marry new Linux technology with existing "legacy"
infra is valuable because it allows for continuity. XDP is a
relatively new technology in Linux networking that enables more
efficient packet processing. A common desire is to allow "legacy"
packet processing functions to take advantage of the speed offered
by XDP.
In addition to the efficiency factor, XDP also provides a
time-to-market advantage: new network packet processing features can
be added without needing to go through the complexity of the
upstreaming process.
Toshiaki Makita and William Tu are still in pursuit of getting Open
vSwitch (OVS) to use XDP. In this talk Toshiaki and William attempt
to marry OVS to XDP via:
1. Attaching XDP from the kernel through a UMH (user mode helper),
2. Reusing the TC HW acceleration interface, and
3. Attaching XDP from the OVS userspace daemon.
The authors will describe their approaches and challenges faced.
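For readers new to XDP, here is the canonical minimal XDP program
(not from the talk; just the standard skeleton) showing what drivers
ultimately run per packet:

  /* minimal_xdp.c - the smallest useful XDP program: inspect
   * nothing, let every packet continue up the stack. Build with:
   *   clang -O2 -target bpf -c minimal_xdp.c -o minimal_xdp.o */
  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  SEC("xdp")
  int xdp_pass(struct xdp_md *ctx)
  {
          /* other verdicts: XDP_DROP, XDP_TX, XDP_REDIRECT, ... */
          return XDP_PASS;
  }

  char _license[] SEC("license") = "GPL";

Such an object can be attached with, e.g., "ip link set dev eth0 xdp
obj minimal_xdp.o sec xdp"; the OVS work above is about doing that
attachment (plus the necessary map setup) from a kernel UMH or from
the OVS userspace daemon instead.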
More info:
https://netdevconf.info/0x14/session.html?talk-XDP-in-open-vswitch
Reminder, registration is now open and early bird is still in effect.
https://netdevconf.info/0x14/registration.html
cheers,
jamal
The Multipath TCP community started its engagement at netdev conf
0.1 (https://netdevconf.info/0.1/sessions/16.html) and has shown up
at multiple netdev confs since, to solicit feedback and report
progress. After all that hard work, MPTCP has finally been merged!
In this talk Mat Martineau, Ossama Othman and Peter Krystad will
advise developers, ops and distros on how to make good use of the
upstreamed MPTCP.
They will discuss kernel considerations, report on upstream merge
progress to date, and introduce the Multipath TCP Daemon (mptcpd) to
the Linux networking community.
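For the impatient: with the upstream code, an application opts in to
MPTCP by asking for it at socket creation time. A minimal sketch
(assumes a kernel built with CONFIG_MPTCP; the IPPROTO_MPTCP define
may be missing from older libc headers):

  #include <stdio.h>
  #include <sys/socket.h>
  #include <netinet/in.h>

  #ifndef IPPROTO_MPTCP
  #define IPPROTO_MPTCP 262  /* not yet in older libc headers */
  #endif

  int main(void)
  {
          /* Ask for an MPTCP socket instead of plain TCP; this
           * fails if the running kernel lacks CONFIG_MPTCP. */
          int fd = socket(AF_INET, SOCK_STREAM, IPPROTO_MPTCP);

          if (fd < 0) {
                  perror("socket(IPPROTO_MPTCP)");
                  return 1;
          }
          printf("MPTCP socket created, fd=%d\n", fd);
          return 0;
  }

From there, bind()/connect()/listen() work as with plain TCP, with
path management handled by the kernel and, optionally, mptcpd.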
More info:
https://netdevconf.info/0x14/session.html?talk-using-upstream-MPTCP-in-linu…
Reminder, registration is now open and early bird is still in effect.
https://netdevconf.info/0x14/registration.html
cheers,
jamal
Kubernetes (k8s) Container Network Interface (CNI) is a
specification for managing network resources on a Kubernetes
cluster. CNI enables plugin-based networking solutions for
containers, ranging in functionality from IP address management to
access control policy management, QoS, etc. Operators can pick and
choose from the many packaged and open CNI implementations that
exist, or create custom CNIs to serve their needs.
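For readers new to CNI, a plugin is selected via a small JSON
network configuration that the container runtime hands to the plugin
binary. A minimal illustrative example using the standard bridge
plugin with host-local IPAM (the name, bridge and subnet below are
made up):

  {
      "cniVersion": "0.4.0",
      "name": "examplenet",
      "type": "bridge",
      "bridge": "cni0",
      "ipam": {
          "type": "host-local",
          "subnet": "10.22.0.0/16"
      }
  }

A hardware-offload-capable CNI plugs into this exact same interface;
the difference is in what the plugin programs underneath.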
In this talk Rony Efraim and Liel Shoshan describe their approach to
enabling hardware network offload in k8s.
Rony and Liel will illustrate offloading with a number of CNIs that
can be used in conjunction with OVS. In addition, they will also
describe how to offload other k8s use cases, like pod-to-pod
networking and enhancing ingress service load balancing via
dp_hash. And last but not least, the speakers will describe
challenges faced and future work.
More info:
https://netdevconf.info/0x14/session.html?talk-hardware-offload-for-k8s-con…
Reminder, registration is now open and early bird is still in effect.
https://netdevconf.info/0x14/registration.html
cheers,
jamal
As the commercials say - 5G (Fifth Generation) networking is here.
Why is that a big deal?
One appealing aspect: How do you like getting multi-gigabit
bandwidth to your mobile device(s)?
How does that work?
For the first time in commercial mobile networking history, a vast
untapped spectrum has been made available for mass user consumption
in the millimeter wavelength range; in the 5G world it is referred
to as "mmWave".
The mmWave spectrum is what makes the multi-Gbps data rates on 5G
cellular networks possible.
The problem with mmWave links is that they are, like most
high-frequency signals, susceptible to blockage. Basic obstructions
like trees, snow, rain, buildings, etc. interfere with the signal. A
technique called beamforming helps but doesn't solve the problem
entirely.
To put this in context:
Think of having a link that is very high speed but is constantly
fluctuating in capacity. Then the question is: "How does TCP congestion
control work in this kind of setup?"
In this talk Feng Li, Jae Won Chung and Jamal Hadi Salim[1] will
present the results of a study evaluating how various Linux TCP
congestion control algorithm implementations fare over mmWave links.
The authors claim this is _the first ever_ such study on a real
commercial 5G network! The study was carried out on Verizon's
deployed 5G network.
Feng et al. will present data comparing popular TCP congestion
control algorithms, including NewReno, CUBIC, BBR and BBRv2
(pre-patch). The results show that the performance of TCP on mmWave
links is still highly dependent on the combination of TCP algorithm
and socket buffer sizes.
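For readers who want to run similar comparisons, the congestion
control algorithm is selectable per socket via the standard
TCP_CONGESTION socket option (a minimal sketch; which algorithms are
available depends on the running kernel, see
/proc/sys/net/ipv4/tcp_available_congestion_control):

  #include <stdio.h>
  #include <string.h>
  #include <sys/socket.h>
  #include <netinet/in.h>
  #include <netinet/tcp.h>

  int main(void)
  {
          int fd = socket(AF_INET, SOCK_STREAM, 0);
          const char *algo = "bbr";  /* must be built into or
                                      * loaded in the kernel */

          if (fd < 0) {
                  perror("socket");
                  return 1;
          }
          /* Switch this socket from the system default (often
           * cubic) to the requested congestion control algorithm. */
          if (setsockopt(fd, IPPROTO_TCP, TCP_CONGESTION,
                         algo, strlen(algo)) < 0) {
                  perror("setsockopt(TCP_CONGESTION)");
                  return 1;
          }
          return 0;
  }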
Without a doubt, mmWave links impose new challenges on future
transport layer design, and the authors hope this talk will
incentivize more discussion in the community.
[1] Referred to in the third person.
More info:
https://netdevconf.info/0x14/session.html?talk-preliminary-evaluation-of-TC…
Reminder, registration is now open and early bird is still in effect.
https://netdevconf.info/0x14/registration.html
cheers,
jamal
It is turtles^wbatching all the way down folks!
Several parts of the kernel network stack use batching
for performance enhancement.
Maciej Fijałkowski, Björn Töpel and Krzysztof Kazimierczak
feel we could do a little more. They borrow ideas from other
high-performance packet processing solutions, such as DPDK, and
adapt them to an "XDP first" design - meaning drivers that are
optimized for the case where all packets are processed by XDP.
In this talk, they describe how they enabled batching at both the
driver and XDP level to improve performance for two sample Intel
drivers, i40e and ice.
Maciej et al. will share the performance improvements gained and
propose some ideas on how these batching techniques can equally be
applied at other BPF hooks (socket send/recv, traffic control,
etc.).
More info:
https://netdevconf.info/0x14-staging/session.html?talk-it-is-batching-all-t…
cheers,
jamal
Existing performance enhancing mechanisms such as TCP auto-tuning,
or programmatic async events (epoll, etc.) such as "ready to send",
help applications sustain high throughput even in high
bandwidth-delay product scenarios.
But: the folks at the Tor project have found that when you have
thousands of active TCP sockets transmitting high volumes of data,
many of them simultaneously (as is the case in the Tor anonymity
network), these mechanisms are insufficient.
Local bufferbloat becomes a hindrance.
In this talk, David Goulet and Rob Jansen introduce a new
async event that helps applications overcome these issues.
The new event supplements and extends the current write
"ready to send" event that triggers when a socket buffer
has free space.
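For context, the baseline being extended is the classic EPOLLOUT
notification, which fires whenever the socket send buffer has room -
regardless of how much data the kernel already holds queued below
the application. A minimal sketch of that baseline (the proposed
supplemental event is the subject of the talk and is not shown):

  #include <sys/epoll.h>

  /* Register a socket for the classic "ready to send" event. */
  int watch_writable(int epfd, int sock)
  {
          struct epoll_event ev = {
                  .events = EPOLLOUT,  /* asserted while the send
                                        * buffer has free space */
                  .data.fd = sock,
          };
          /* With thousands of bulk-sending sockets, EPOLLOUT stays
           * asserted almost continuously even when plenty of data
           * is already queued - the problem described above. */
          return epoll_ctl(epfd, EPOLL_CTL_ADD, sock, &ev);
  }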
David and Rob will present data, describe the problem in more
detail, and then show the effect such a change could have on
performance through a small-scale simulation.
More info:
https://netdevconf.info/0x14-staging/session.html?talk-reducing-kernel-queu…
Reminder, registration is now open and early bird is still in effect.
https://netdevconf.info/0x14/registration.html
cheers,
jamal
There are times when you want to recreate a network test that would
typically take a while to complete. In addition, when you run the
tests, you want the results to be consistent across runs.
In this talk, Johannes Berg and Richard Weinberger describe their
solution to this requirement. They introduce a mechanism to do "time
travel" in User Mode Linux (UML) with a virtual clock.
The time travel mode allows for reproducible testing at
significantly faster execution times; as an example, on a relatively
slow laptop, two simulated machines in such a setup can run 61
pings, at the default 1-second interval, in about 1.6 seconds of
real time (as opposed to 61 seconds).
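For those who want to try it, the mode is driven from the UML kernel
command line (a sketch, assuming a UML kernel built with
CONFIG_UML_TIME_TRAVEL_SUPPORT=y; the memory and rootfs arguments
are illustrative):

  # Boot a UML instance whose clock is fully virtual: time only
  # advances when the simulation needs it to, so idle waits (e.g.
  # ping intervals) cost essentially no real time.
  ./linux mem=256M ubd0=rootfs.img time-travel=inf-cpu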
More info:
https://netdevconf.info/0x14/session.html?talk-time-travel-linux-network-si…
Reminder, registration is now open and early bird is still in effect.
https://netdevconf.info/0x14/registration.html
cheers,
jamal
The iNet Wireless Daemon, aka iwd, was created as an alternative to
wpa_supplicant. Iwd provides a more complete solution set for WiFi
than wpa_supplicant - with a much longer list of features.
Marcel Holtmann will provide insights into the iwd architecture and
how it takes advantage of kernel features to provide a good user
experience. In addition, Marcel will go over lessons learned in the
5 years since iwd's inception.
More info:
https://netdevconf.info/0x14/session.html?talk-5-years-of-iwd-lessons-learn…
Reminder, registration is now open and early bird is still in effect.
https://netdevconf.info/0x14/registration.html
cheers,
jamal
New virtualization deployments call for high-density virtual
functions with more frequent lifetime recycling. Unfortunately, PCIe
SR-IOV has a limited function count and large instantiation
overhead. A solution to these challenges is to take a PCIe device
and "split" it into multiple subdevices. Each subdevice gets its own
virtual port(s) and queues, as well as named resources; combined
with TC and switchdev offloads, this approach overcomes the SR-IOV
limitations.
In this talk Parav Pandit introduces devlink enhancements to manage
such subfunctions.
Parav first discusses how devlink is used to life-cycle, configure
and deploy accelerated subfunctions with eswitch offload support. He
then discusses the plumbing done using virtbus to achieve persistent
naming of netdevices and RDMA devices. Parav will also cover how
this model addresses the SmartNIC use case, where a subfunction NIC
is hot-plugged into the host system in a secure manner.
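As a flavour of the kind of workflow under discussion, a sketch of
subfunction life cycling from the devlink CLI (the exact syntax of
the proposed interface may differ; the PCI address, port index and
numbers below are made up):

  # Create a subfunction port on PF 0 of the given PCI device.
  devlink port add pci/0000:06:00.0 flavour pcisf pfnum 0 sfnum 88
  # Give the new function a MAC address, then activate it, which
  # hot-plugs the subfunction device into the host.
  devlink port function set pci/0000:06:00.0/32768 hw_addr 00:00:00:00:88:88
  devlink port function set pci/0000:06:00.0/32768 state active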
More info:
https://netdevconf.info/0x14/session.html?talk-devlink-enhancements-for-sub…
Reminder, registration is now open and early bird is still in effect.
https://netdevconf.info/0x14/registration.html
cheers,
jamal