The talk proposes a novel approach to building AIoT applications using Kubernetes and ML Ops to bring machine intelligence closer to the edge. The emergent AIoT behaviors and architecture patterns are explained, along with a comprehensive reference architecture and a live demo.
- Bringing machine intelligence closer to the edge offers significant advantages for IIoT and IoMT solutions
- The computational complexity of running machine learning on embedded or resource-constrained devices is a big challenge
- Optimization strategies, such as using Kubernetes to control and manage AIoT machine learning pipelines on edge devices, can help address these challenges (a sketch of one such pipeline step follows this list)
- The reference architecture for running AIoT ML Ops at the edge comprises four hardware tiers: training, platform, inference, and IoT
- The platform tier hosts various services, including a private container registry and an over-the-air (OTA) ML code repo
- The inference and IoT tiers use MQTT- and Kafka-based services for protocol bridging and communication (see the bridge sketch after this list)
- A live demo simulating an industrial IoT setting shows how ML pipelines measure model drift and then re-train and re-deploy the model (a drift-check sketch also follows below)
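As a minimal sketch of how Kubernetes can control a pipeline step at the edge, the snippet below uses the official Kubernetes Python client to launch a retraining run as a Job pinned to the training tier. The image name, namespace, and node label are illustrative assumptions, not details from the talk.

```python
# Sketch: launching a retraining step of an AIoT ML pipeline as a Kubernetes Job.
# Image, namespace, and node label are illustrative placeholders.
from kubernetes import client, config


def launch_retrain_job():
    config.load_kube_config()  # or config.load_incluster_config() when running in-cluster

    container = client.V1Container(
        name="retrain",
        image="registry.local/aiot/retrain:latest",  # hypothetical private-registry image
        args=["--data-window", "24h"],
    )
    pod_spec = client.V1PodSpec(
        restart_policy="Never",
        containers=[container],
        node_selector={"tier": "training"},  # pin the step to the training tier
    )
    job = client.V1Job(
        api_version="batch/v1",
        kind="Job",
        metadata=client.V1ObjectMeta(name="aiot-retrain"),
        spec=client.V1JobSpec(
            template=client.V1PodTemplateSpec(spec=pod_spec),
            backoff_limit=2,
        ),
    )
    client.BatchV1Api().create_namespaced_job(namespace="aiot", body=job)


if __name__ == "__main__":
    launch_retrain_job()
```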
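A protocol bridge between those two layers can be sketched in a few lines. The example below assumes the paho-mqtt (1.x-style callback API) and kafka-python client libraries, plus illustrative broker addresses and topic names; it forwards MQTT sensor telemetry into a Kafka topic, and the talk's actual bridge implementation may differ.

```python
# Sketch of an MQTT-to-Kafka protocol bridge: sensor readings published over MQTT
# are forwarded to a Kafka topic for downstream ML pipelines.
# Broker addresses and topic names are illustrative assumptions.
import paho.mqtt.client as mqtt
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers="kafka.platform.local:9092")


def on_message(client, userdata, msg):
    # Re-publish the raw MQTT payload onto Kafka, keyed by the MQTT topic.
    producer.send("sensor-telemetry", key=msg.topic.encode(), value=msg.payload)


mqtt_client = mqtt.Client()
mqtt_client.on_message = on_message
mqtt_client.connect("mqtt.edge.local", 1883)
mqtt_client.subscribe("factory/+/telemetry")
mqtt_client.loop_forever()
```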
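The talk does not spell out the demo's drift metric, so as one common choice the sketch below flags drift with a two-sample Kolmogorov-Smirnov test from SciPy, comparing a reference window of readings against a recent window; the threshold and window sizes are assumptions for illustration.

```python
# Sketch of a simple drift check: compare the distribution of recent sensor
# readings against a reference window with a two-sample Kolmogorov-Smirnov test.
# The significance threshold and window sizes are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp


def drift_detected(reference: np.ndarray, recent: np.ndarray, alpha: float = 0.01) -> bool:
    result = ks_2samp(reference, recent)
    return result.pvalue < alpha  # reject "same distribution" -> treat as drift


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.normal(loc=0.0, scale=1.0, size=5000)  # training-time distribution
    recent = rng.normal(loc=0.5, scale=1.2, size=1000)     # shifted live data
    if drift_detected(reference, recent):
        print("Drift detected: trigger re-training and re-deployment")
```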
The speaker spent many weeks building the hardware for the live demo, which involved containerizing workloads and using hardware-accelerator-aware pod placement strategies, sketched below. The lessons learned from this journey are reflected in the reference architecture.
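As a rough illustration of accelerator-aware placement (not the speaker's actual manifests), the snippet below requests a device-plugin resource and uses a node selector so the inference container only lands on accelerator-equipped nodes; the nvidia.com/gpu resource name, node label, and image are assumed for the example.

```python
# Sketch: accelerator-aware placement of an inference pod via the Kubernetes
# Python client. The device-plugin resource, node label, and image are
# illustrative assumptions.
from kubernetes import client, config


def deploy_inference_pod():
    config.load_kube_config()

    container = client.V1Container(
        name="inference",
        image="registry.local/aiot/inference:latest",
        resources=client.V1ResourceRequirements(
            limits={"nvidia.com/gpu": "1"},  # exposed by the accelerator's device plugin
        ),
    )
    pod = client.V1Pod(
        api_version="v1",
        kind="Pod",
        metadata=client.V1ObjectMeta(name="aiot-inference", labels={"tier": "inference"}),
        spec=client.V1PodSpec(
            containers=[container],
            node_selector={"accelerator": "nvidia-jetson"},  # only schedule on accelerator nodes
        ),
    )
    client.CoreV1Api().create_namespaced_pod(namespace="aiot", body=pod)


if __name__ == "__main__":
    deploy_inference_pod()
```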