Sizing a Kubernetes deployment can be tricky. How many pods should it have? How much CPU and memory does each pod need? Is it better to run a few large pods or many small pods? What's the best way to ensure stable performance as the load on the application changes over time? Luckily for anyone asking these questions, Kubernetes provides rich, flexible options for autoscaling deployments. This session covers the following topics:

- Factors to consider when sizing your Kubernetes application
- Horizontal vs. vertical autoscaling
- How, when, and why to use the Kubernetes custom metrics API
- Practical demo: autoscaling with application metrics from Prometheus, Linkerd, and Pixie (request throughput/latency, number of shoes purchased in my web store)
- Impractical demo: a Turing-complete autoscaler!
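As a small taste of the horizontal case, here is a minimal sketch of a HorizontalPodAutoscaler manifest using the standard `autoscaling/v2` API; the deployment name and thresholds are illustrative, not from the talk:

```yaml
# Hypothetical example: scale a "webstore" Deployment between 2 and 10
# replicas, targeting 70% average CPU utilization across its pods.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: webstore-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: webstore
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Swapping the `Resource` metric for a `Pods` or `External` metric backed by the custom metrics API is how application-level signals like request latency (or shoes purchased) can drive scaling instead of raw CPU.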