Traditional host network performance measurements are based on analyzing the correlation between the two classical metrics of throughput and latency against CPU utilization. This approach leaves out an important metric: power utilization. The huge influx of AI infrastructure - which ends up consuming network resources - has brought much-needed attention to power consumption as a variable in network infrastructure. With the operational cost of power rising (significantly in some parts of the world with the ongoing crisis), we cannot ignore networking infrastructure's contribution not just to cost but also to the environmental harm that comes with high power use.
Nabil Bitar, Jamal Hadi Salim and Pedro Tammela feel that Linux, which dominates data center deployments, deserves special attention. There is not much literature or shared experience in this space, so they hope to inspire a discussion by sharing their experiences with the community: How would one go about measuring power utilization for network workloads? How do we correlate metrics such as perf, throughput, etc. to the power utilized? How would one go about saving power while still achieving an application's stated end goals?
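As one illustrative starting point for the first question, here is a minimal sketch that samples CPU package energy around a network benchmark and derives joules per bit. It assumes an x86 host exposing Intel RAPL counters through the powercap sysfs interface; the iperf3 invocation, peer address, and throughput figure are placeholders, not a prescribed methodology.

```python
#!/usr/bin/env python3
# Sketch: sample package energy via the Linux powercap (RAPL) sysfs interface
# around a network benchmark, then derive energy per bit moved.
# Assumes /sys/class/powercap/intel-rapl:0/energy_uj is readable (typically root).

import subprocess
import time

RAPL_ENERGY = "/sys/class/powercap/intel-rapl:0/energy_uj"
RAPL_MAX = "/sys/class/powercap/intel-rapl:0/max_energy_range_uj"


def read_uj(path: str) -> int:
    with open(path) as f:
        return int(f.read().strip())


def energy_around(cmd: list[str]) -> tuple[float, float]:
    """Run cmd and return (elapsed seconds, package energy in joules)."""
    max_uj = read_uj(RAPL_MAX)
    start_uj, start_t = read_uj(RAPL_ENERGY), time.monotonic()
    subprocess.run(cmd, check=True)
    end_uj, end_t = read_uj(RAPL_ENERGY), time.monotonic()
    # The counter wraps at max_energy_range_uj; handle a single wrap.
    delta_uj = (end_uj - start_uj) % max_uj
    return end_t - start_t, delta_uj / 1e6


if __name__ == "__main__":
    # Hypothetical benchmark: 10 seconds of iperf3 towards a peer at 192.0.2.1.
    secs, joules = energy_around(["iperf3", "-c", "192.0.2.1", "-t", "10"])
    # Suppose the run averaged ~10 Gbit/s; substitute the bytes iperf3 reports.
    bits_moved = 10e9 * secs
    print(f"elapsed {secs:.1f}s, package energy {joules:.1f} J "
          f"(~{joules / secs:.1f} W), ~{joules / bits_moved * 1e9:.2f} nJ/bit")
```

Note the obvious caveats: package energy includes everything the CPU is doing, not just the network workload, and it says nothing about the NIC or switch power draw, which is part of what makes this an open discussion.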
If you would like to share your experiences or solutions on this important topic, please reach out to me. The agenda is still open.
cheers, jamal