
Orchestrating Interconnected Apps Across Geographically Distributed Kubernetes Clusters

2022-10-27

Authors:   John Belamaric


Summary

The presentation discusses the use of configuration as data for managing edge applications and the creation of the Nephio project to implement this vision.
  • Templating systems intermingle code and configuration, making it difficult to scale and create reusable pieces of code
  • Configuration as data adds structure to configurations, enabling highly reusable tools
  • The Nephio project implements this vision by building an orchestration cluster whose APIs sit on top of the storage layer to manage configurations
  • Kubernetes controllers operate on these configurations, which are packaged as KRM (Kubernetes Resource Model) resources and can be cloned and modified by both automation and human input
  • Cloning packages while tracking their upstream enables day-two automation and allows individual automations that each understand only a narrow task (see the sketch after this list)
  • Together, configuration as data and the Nephio project enable highly scalable and efficient management of edge applications
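
The following is a minimal, hypothetical Go sketch of the clone-with-upstream-tracking idea; the type and field names are illustrative and are not the actual Nephio or Porch APIs. It shows how recording the upstream package and revision for each site-specific clone lets a narrow automation detect which clones lag behind and propose day-two updates.

package main

import "fmt"

// Upstream identifies the package a clone was derived from, so that
// changes published upstream can later be merged into each clone.
// These types are a hypothetical sketch, not the actual Nephio/Porch API.
type Upstream struct {
	Repo     string
	Package  string
	Revision string
}

// PackageRevision models one site-specific clone of a configuration package.
type PackageRevision struct {
	Name     string
	Site     string
	Upstream Upstream
}

// needsUpdate reports whether a clone lags behind the latest upstream
// revision, which is the signal a day-two automation would act on.
func needsUpdate(p PackageRevision, latest string) bool {
	return p.Upstream.Revision != latest
}

func main() {
	clones := []PackageRevision{
		{Name: "edge-dns-site-a", Site: "site-a", Upstream: Upstream{Repo: "catalog", Package: "edge-dns", Revision: "v2"}},
		{Name: "edge-dns-site-b", Site: "site-b", Upstream: Upstream{Repo: "catalog", Package: "edge-dns", Revision: "v3"}},
	}
	const latest = "v3"
	for _, c := range clones {
		if needsUpdate(c, latest) {
			fmt.Printf("%s (site %s) tracks %s@%s; propose update to %s\n",
				c.Name, c.Site, c.Upstream.Package, c.Upstream.Revision, latest)
		}
	}
}

Because each clone carries its own upstream reference, the automation never needs to understand the whole fleet; it only compares one clone against one upstream revision at a time.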
The speaker gives an example of a resource that declares it needs an IP address: a controller picks up that configuration, allocates an address from a central IP address management (IPAM) system, and injects it back into the configuration, repeating this 10,000 times across 10,000 clusters. The same kind of automation can be applied to every workload deployed across an entire organization, making management highly efficient and scalable.
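
As a rough illustration of that pattern, here is a self-contained Go sketch, not actual Nephio code, in which a controller-style reconcile function fills in a missing IP address from a central allocator. WorkloadConfig and IPAM are hypothetical stand-ins for a KRM resource and an IPAM backend.

package main

import (
	"fmt"
	"net/netip"
)

// WorkloadConfig is a hypothetical stand-in for a KRM resource that declares
// it needs an IP address; Address is empty until a controller injects one.
type WorkloadConfig struct {
	Cluster string
	Address string
}

// IPAM is a toy central allocator handing out consecutive addresses.
type IPAM struct {
	next netip.Addr
}

func (p *IPAM) Allocate() string {
	addr := p.next
	p.next = p.next.Next()
	return addr.String()
}

// reconcile mimics what a Kubernetes controller would do for each config:
// if the requested address has not been filled in yet, allocate one from
// the central IPAM and inject it back into the configuration.
func reconcile(cfg *WorkloadConfig, ipam *IPAM) {
	if cfg.Address == "" {
		cfg.Address = ipam.Allocate()
	}
}

func main() {
	ipam := &IPAM{next: netip.MustParseAddr("10.0.0.1")}
	configs := []WorkloadConfig{{Cluster: "edge-001"}, {Cluster: "edge-002"}}
	for i := range configs {
		reconcile(&configs[i], ipam)
		fmt.Printf("cluster %s -> %s\n", configs[i].Cluster, configs[i].Address)
	}
}

Run against 10,000 such configs, the same reconcile function would specialize each one without any per-site templating.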

Abstract

Imagine deploying a set of complex, interconnected workloads across a fleet of geographically distributed Kubernetes clusters. How do you decide where to run each workload? How do we specialize the configs for each site? How do we make sure those configs conform to our policies? How do we deliver the configs to the right clusters, and make sure they don't drift? What happens when we add a site - how do we know which interconnected workloads need to be reconfigured? How do we know what to change in each of those workloads? Do we just need to change Kubernetes manifests, or do the configuration files of the workloads themselves need to be changed? How do we do that? Can we really automate all this? Linux Foundation’s Nephio project (https://nephio.org) uses Kubernetes-based automation to solve these problems with an extensible platform for large scale, multi-site workload orchestration and configuration management. Come learn how we’re doing it!

Materials:

