
Using Kubernetes with Data Processing Units to Offload Infrastructure

2021-10-15

Authors:   Tom Golway, Thomas Phelan


Summary

Using Data Processing Units (DPUs) with Kubernetes to offload software infrastructure
  • Enterprises are reshaping their business innovation strategies around digital awareness
  • Application architectures are shifting towards a more disaggregated model that offers greater agility, supports elasticity, and provides greater control for software quality assurance
  • DPUs can be used to offload core Kubernetes software infrastructure components from the main CPU onto the processing units
  • DPUs can also offload network packet tracing functionality and service mesh components
  • Cloud-native infrastructure is required to support the optimal placement of workloads and to ensure performance, security, manageability, and access to data
  • DPUs can help enable greater optimization of cloud-native application architecture while ensuring the usage of CPU cores and memory is maximized to support applications
As enterprises innovate on the business side, data and application architectures are shifting toward a more disaggregated model that offers greater agility and supports elasticity. However, this shift brings new challenges, particularly in security, orchestration, and meeting service levels. Application topologies and flows become more complex, which increases the potential for latency, jitter, and disrupted traffic flow. Small degradations in performance may seem inconsequential for an individual microservice, but when aggregated at the application level they can impact business-process service levels. This requires a cloud-native infrastructure that supports the optimal placement of workloads and ensures performance, security, manageability, and access to data. DPUs can help enable greater optimization of cloud-native application architectures while maximizing the CPU cores and memory available to applications.
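One way the placement described above is commonly expressed in Kubernetes is with node labels, taints, and tolerations: the DPU's Arm cores join the cluster as a distinct node, a taint keeps application workloads off it, and infrastructure pods tolerate the taint. The sketch below is illustrative only; the label `node-role.example.com/dpu`, the taint key `example.com/dpu`, and the image name are hypothetical, not taken from the talk.

```yaml
# Hypothetical pattern: reserve a DPU's Arm cores for infrastructure pods
# so application workloads keep the host CPU. Assumes the DPU registers as
# an arm64 node that an operator has labeled and tainted as shown.
apiVersion: v1
kind: Pod
metadata:
  name: cni-agent
spec:
  nodeSelector:
    kubernetes.io/arch: arm64
    node-role.example.com/dpu: "true"      # hypothetical DPU node label
  tolerations:
    - key: "example.com/dpu"               # hypothetical taint reserving DPU nodes
      operator: "Exists"
      effect: "NoSchedule"
  containers:
    - name: cni-agent
      image: registry.example.com/cni-agent:latest  # placeholder image
```

With a `NoSchedule` taint on the DPU node, ordinary application pods are repelled automatically, so only pods that explicitly tolerate the taint (the infrastructure components) land on the DPU's cores.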

Abstract

Application architectures are shifting toward a more disaggregated model that offers greater agility, supports elasticity, and provides greater control for software quality assurance. This has increased the complexity of application topologies, flows, and security.

In this session, we will describe some novel work related to offloading core Kubernetes software infrastructure components from the main CPU onto the processing units of DPUs (data processing units). We will show a vendor-neutral way to offload not only the implementation of a Kubernetes CNI (container network interface) plugin, but also network packet tracing functionality, such as Jaeger, and service mesh components, such as Envoy.

In this session, you will learn about CNIs, SmartNICs, Arm CPUs, and how to run software somewhere other than the main CPU. You will also learn why this is becoming increasingly important in the quickly evolving world of DPUs.
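For components like Envoy or Jaeger agents, one common deployment shape for such an offload is a DaemonSet pinned to the DPU nodes, so the data-plane work runs once per DPU's Arm cores instead of as a per-pod sidecar on the host CPU. This is a minimal sketch under assumed conventions: the DPU node label is hypothetical, and the Envoy image tag is only an example of a multi-arch image with arm64 support.

```yaml
# Hypothetical DaemonSet placing an Envoy proxy on each DPU node so
# service-mesh data-plane work runs on the DPU's Arm cores rather than
# on the host CPU. Labels and image are illustrative, not from the talk.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: envoy-dpu
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: envoy-dpu
  template:
    metadata:
      labels:
        app: envoy-dpu
    spec:
      nodeSelector:
        kubernetes.io/arch: arm64
        node-role.example.com/dpu: "true"  # hypothetical DPU node label
      hostNetwork: true                    # observe host traffic for tracing/mesh
      containers:
        - name: envoy
          image: envoyproxy/envoy:v1.20.0  # example multi-arch (arm64) image
```

Because the DPU sits in the server's network path, a node-level proxy or tracer deployed this way can observe and shape all traffic entering and leaving the host without consuming host CPU cycles.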


