Authors: Dawn Chen, Sergey Kanzhelev, Mrunal Patel, Derek Carr
2023-04-21

There is a lot happening in SIG Node: new and exciting features are on the way, and existing features are graduating. Come to our maintainers track session to catch up with everything happening in SIG Node. SIG Node owns the components that control interactions between pods and host resources, including the kubelet, the Container Runtime Interface (CRI), and the Node API. SIG Node is responsible for the Pod's lifecycle from allocation to teardown, as well as for liveness checks and shared resource management. We work with various container runtimes, kernels, networking, storage, and more; anything a pod touches is SIG Node's responsibility! We will talk about sidecar containers (see the sketch below), kubelet resource management improvements, and other current topics. We will also reflect on recent changes to SIG Node leadership and on our efforts to increase participation in SIG Node activities.
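To make the sidecar discussion concrete, here is a minimal sketch of the native sidecar pattern proposed for the kubelet (KEP-753): an init container with restartPolicy: Always starts before the main containers and keeps running for the Pod's whole lifetime. Image names are hypothetical.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  initContainers:
    - name: log-shipper
      image: registry.example.com/log-shipper:1.0  # hypothetical image
      restartPolicy: Always  # marks this init container as a sidecar: it is
                             # started before the main containers and is not
                             # required to exit before they start
  containers:
    - name: app
      image: registry.example.com/app:1.0          # hypothetical image
```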
Authors: Alexander Kanevskiy, Swati Sehgal, David Porter, Sascha Grunert, Evan Lezar
2023-04-19

tldr - powered by Generative AI

The presentation discusses the importance of resource management in Kubernetes and highlights new features and enhancements in the ecosystem, such as the Container Device Interface (CDI) and cgroups v2.
  • CDI allows GPUs and other devices to be shared across containers and pods, dynamically partitioned, and mixed and matched (a minimal spec sketch follows this list).
  • Topology-aware scheduling is not the only use case for Node Resource Interface (NRI) plugins; top-level attributes can be used for other capabilities as well.
  • cgroups v2 provides new resource management capabilities, such as memory QoS and pressure stall information (PSI) metrics, and there are plans to explore I/O isolation and network QoS guarantees (see the memory QoS sketch below).
  • The speaker encourages feedback from the audience on resource management challenges and desired features.
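As a concrete illustration of CDI, here is a minimal device specification sketch; the vendor/class pair, device name, and device node path are all hypothetical. A CDI-aware runtime (containerd or CRI-O, for example) resolves the fully qualified device name vendor.example.com/gpu=gpu0 and applies these edits to the container.

```yaml
cdiVersion: "0.5.0"
kind: vendor.example.com/gpu           # hypothetical vendor/class pair
devices:
  - name: gpu0
    containerEdits:
      deviceNodes:
        - path: /dev/vendor-gpu0       # hypothetical device node to inject
      env:
        - VENDOR_VISIBLE_DEVICES=gpu0  # hypothetical env var set in the container
```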
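And to make the memory QoS point concrete, a sketch of how the alpha MemoryQoS feature gate maps container memory resources onto cgroup v2 interface files; the exact throttling formula is version-dependent, and the image name is hypothetical.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: memory-qos-demo
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0  # hypothetical image
      resources:
        requests:
          memory: 512Mi  # with MemoryQoS enabled, the kubelet maps the request
                         # to cgroup v2 memory.min, protecting it under pressure
        limits:
          memory: 1Gi    # enforced via memory.max; a throttling threshold below
                         # the limit may also be written to memory.high
```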
Authors: Justin Santa Barbara, Ciprian Hacman
2023-04-19

tldr - powered by Generative AI

Breaking up the Kubernetes monorepo enabled the project to support a larger ecosystem while maintaining reliability and ease of use.
  • AWS and GCP tests were kicked off on every single PR to ensure that neither cloud provider functionality nor core Kubernetes functionality was broken.
  • Both technical issues and people issues led to the decision to break up the monorepo.
  • Breaking up the monorepo allowed for architectural improvements and a larger ecosystem.
  • The burden of testing falls on each component repo, but there is an expectation that the code has worked at some stage.
  • The goal is to achieve reliability and ease of use while supporting a larger ecosystem.