Edge-Native: The New Paradigm For Operating And Developing Edge Apps

2022-10-27

Authors:   Frank Brockners, Krisztián Flautner


Summary

Edge native brings a cloud-like operating experience to the edge and eases the transition as the pendulum swings back from the cloud toward the edge. The main trend is to replace vertical-specific stacks with a platform of horizontal solutions that work across verticals.
  • Edge native allows for a cloud-like experience at the edge and a smooth transition from cloud to edge
  • Federated learning enables machine learning at the edge while preserving privacy, by keeping training local
  • Event-driven displays suit digital signage, video surveillance, and similar applications
  • Great Bear is the foundational infrastructure on which scalable, robust edge systems can be built
  • Machine-learning workloads may need to be adapted for the edge, for example by shrinking models or using federated learning
  • Aji is an example of an AI model that can be deployed at the edge, though it may need adjusting for device size and speed
Federated learning trains a single global model without requiring global access to the data: each participant trains a model locally on its own data in a way that preserves privacy, and only the resulting models are aggregated back into the global model. This makes it useful in privacy-sensitive domains such as healthcare, where the goal is to extract insight from data without granting direct access to it. Great Bear provides the foundational infrastructure for building scalable, robust edge applications, such as event-driven displays for digital signage and video surveillance. Machine-learning workloads may still need to be adapted for the edge, for example by shrinking models down or by applying federated learning.
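The federated-learning flow described above (local training on private data, followed by aggregation into a global model) can be sketched in plain NumPy. This is an illustrative simulation with three simulated clients, a simple logistic-regression model, and FedAvg-style weighted averaging; it is not the system demonstrated in the talk:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: logistic regression via gradient descent.
    The raw data never leaves the client; only the updated weights are shared."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1 / (1 + np.exp(-X @ w))       # sigmoid predictions
        grad = X.T @ (preds - y) / len(y)      # gradient of the logistic loss
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """Server-side aggregation: average client models, weighted by data size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Simulate three clients, each holding a private data shard
rng = np.random.default_rng(0)
shards = []
for _ in range(3):
    X = rng.normal(size=(100, 3))
    y = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(float)  # synthetic labels
    shards.append((X, y))

global_w = np.zeros(3)
for _ in range(10):                            # communication rounds
    updates = [local_update(global_w, X, y) for X, y in shards]
    global_w = federated_average(updates, [len(y) for _, y in shards])
```

Each round only exchanges model weights, never the training data, which is exactly the property that makes the approach attractive for privacy-sensitive edge deployments.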

Abstract

“Cloud native”? Check! Apply the same principles at the edge? Hmmm! How do I operate apps across thousands of locations, which are often hidden behind layers of NAT? How do I run AI apps on nodes that are too small to fit the AI model? How do I make it operationally simple? Let's discuss and demo!

We're all familiar with “cloud native”, but once we start to operate applications at the edge, we have to adopt a new set of principles and evolve our cloud-native paradigms. We deploy apps at the edge to achieve lower latency or higher performance, to comply with data-sovereignty regulations, to reduce transit cost, or to perform near-real-time decision making on local data sources. Developing and operating edge apps requires us to answer questions like: How do I operate apps across thousands of locations, which are often hidden behind layers of NAT and have spotty cloud connectivity? How do I run computation-heavy tasks, such as AI apps, on a set of nodes where no single node has sufficient CPU and memory to run the entire model? How do I deal with a heterogeneous environment of x86- and ARM-based devices? Which additional tools do I need to ensure compliance with data-privacy rules, run AI models that simply don't fit on a single compute element, or perform federated learning efficiently?
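The summary mentions "shrinking down models" as one way to make machine-learning workloads fit on small edge nodes. A standard technique for this is post-training quantization; the NumPy sketch below is illustrative only (it is not the method from the talk) and shows symmetric int8 quantization of a layer's weights:

```python
import numpy as np

def quantize_int8(w):
    """Map float32 weights to int8 plus a single scale factor (symmetric PTQ)."""
    scale = np.abs(w).max() / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights for inference."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(42)
weights = rng.normal(scale=0.5, size=1000).astype(np.float32)  # stand-in for a layer

q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

compression = weights.nbytes / q.nbytes              # 4x: float32 -> int8
max_error = float(np.abs(weights - restored).max()) # bounded by scale / 2
```

Storing int8 weights cuts the memory footprint by 4x at the cost of a small, bounded rounding error, which is often an acceptable trade-off on memory-constrained edge devices.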

Materials: