KEDA is a better way to scale container applications in Kubernetes clusters: it scales based on the events that drive CPU and memory consumption, rather than reacting to CPU and memory usage after the fact.
- KEDA integrates with over 55 event sources, including Prometheus, RabbitMQ, Kafka, AWS SQS, Azure Event Hubs, GCP Pub/Sub, and Postgres.
- KEDA enables smarter scaling by considering the events that cause CPU and memory consumption.
- KEDA integrates seamlessly into any architecture on any Kubernetes cluster.
- KEDA scales container applications more efficiently than reactive scaling based on CPU and memory usage.
The speaker used the analogy of providing pizza for a party to illustrate the difference between reactive scaling and event-based scaling. Reactive scaling is like showing up with one pizza and waiting for it to run out before getting more, while event-based scaling is like finding out how many people are coming and bringing enough pizza to feed them all.
Event-driven architectures are exploding in popularity, often coupled with the desire to make them real time. These applications enable us to design and develop scalable, distributed, and flexible systems. Kubernetes brings flexibility and a distributed platform, but it doesn't provide any built-in way to deal with event-driven scaling properly and in real time. KEDA is one of the fastest-growing CNCF projects that addresses these needs. Scaling based on CPU and/or memory usage doesn't fit well with event-driven processes, and current autoscaling solutions are usually complex, with a scope too attached to a specific provider. KEDA provides a simple way to gather metrics from external sources (such as queues, streams, and databases) and translates them into Kubernetes metrics to drive event-driven autoscaling. During this session, two of the current KEDA maintainers and creators will introduce KEDA: what it is, how it works (with demos), and future development plans.
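As a sketch of how this looks in practice, a minimal KEDA `ScaledObject` points at a workload and declares an event-source trigger; the deployment name, queue name, and connection string below are hypothetical placeholders:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: order-processor-scaler        # hypothetical name
spec:
  scaleTargetRef:
    name: order-processor             # hypothetical Deployment to scale
  minReplicaCount: 0                  # KEDA can scale to zero between events
  maxReplicaCount: 20
  triggers:
    - type: rabbitmq                  # one of KEDA's many scalers
      metadata:
        queueName: orders             # hypothetical queue to watch
        mode: QueueLength
        value: "10"                   # target messages per replica
        host: amqp://guest:guest@rabbitmq.default.svc:5672/
```

With this in place, KEDA exposes the queue length as an external metric and drives the Horizontal Pod Autoscaler from it, so replicas track the backlog of events rather than lagging CPU or memory signals.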