Autoscaling Elasticsearch for Logs on Kubernetes


Authors: Ciprian Hacman, Radu Gheorghe


Best practices for scaling Elasticsearch clusters
  • Use metrics from inside Elasticsearch for accuracy
  • Scale in larger increments to reduce noise
  • Force index rotation to evenly spread load across nodes
  • Judge cluster size based on disk usage and search latency
  • Use local SSDs for better I/O latency
  • Consider hot-warm-cold architecture for data management
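As a rough sketch of the "judge cluster size based on disk usage and search latency" practice above, a scale-up decision could be expressed as a simple threshold check. The function name and the threshold values are illustrative assumptions, not from the talk:

```python
def should_scale_up(disk_used_pct: float, search_latency_ms: float,
                    disk_threshold: float = 75.0,
                    latency_threshold: float = 500.0) -> bool:
    """Return True when either disk usage or search latency exceeds
    its threshold. Thresholds are illustrative defaults; in practice
    they would come from metrics gathered inside Elasticsearch."""
    return disk_used_pct > disk_threshold or search_latency_ms > latency_threshold

# Example: 80% disk used, 120 ms latency -> scale up on disk pressure
print(should_scale_up(80.0, 120.0))  # True
```

Scaling in larger increments, as the list suggests, would mean acting on this signal by adding several nodes at once rather than one at a time.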
When scaling up an Elasticsearch cluster, it is important to distribute load evenly across all nodes. Forcing index rotation achieves this: the new index is created with its shards spread across the whole cluster, including the freshly added nodes. When scaling down, nodes should be properly drained before shutdown to avoid an imbalanced cluster. Cluster size should be judged by disk usage and search latency, and local SSDs help with I/O latency. Finally, a hot-warm-cold architecture can be useful for managing log data as it becomes less relevant over time.
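The drain step mentioned above maps to Elasticsearch's shard allocation filtering: excluding a node by name makes the cluster relocate its shards elsewhere before the node is shut down. A minimal sketch of building that request body (the node name and helper function are hypothetical):

```python
def drain_node_settings(node_name: str) -> dict:
    """Build the transient cluster-settings body that tells
    Elasticsearch to move all shards off the given node
    (shard allocation filtering by node name)."""
    return {
        "transient": {
            "cluster.routing.allocation.exclude._name": node_name
        }
    }

# This body would be PUT to /_cluster/settings, and the node shut
# down only once its shards have relocated.
body = drain_node_settings("es-data-3")
print(body["transient"]["cluster.routing.allocation.exclude._name"])  # es-data-3
```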


Elasticsearch (and its fork, OpenSearch) is the go-to storage for logs. As with any storage, the cluster likely needs to scale to keep up with changes in load. But autoscaling Elasticsearch isn't trivial: indices and shards need to be well sized and well balanced across nodes. Otherwise the cluster will have hotspots, and scaling it further will become less and less efficient. This talk focuses on two aspects:
  • best practices around scaling Elasticsearch for logs and other time-series data
  • how to apply them when deploying Elasticsearch on Kubernetes
In the process, a new (open-source) operator will be introduced (yes, there will be a demo!). This operator autoscales Elasticsearch while keeping the load well balanced. It does so by changing the number of shards in the index template and rotating indices when the number of nodes changes.
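The operator's rebalancing idea described above can be sketched as: set the index template's primary-shard count to the current node count, then roll the write index over so the new index spreads one primary shard per node. The endpoint paths in the comments are real Elasticsearch APIs, but the template and alias names, and the one-shard-per-node policy shown here, are illustrative assumptions:

```python
def index_template_body(node_count: int, patterns=("logs-*",)) -> dict:
    """Build a composable index template whose primary-shard count
    matches the current number of data nodes, so a freshly rotated
    index has one primary shard per node."""
    return {
        "index_patterns": list(patterns),
        "template": {
            "settings": {"index.number_of_shards": node_count}
        },
    }

# After PUT-ing this body to /_index_template/logs, a
# POST /logs-write/_rollover would create the evenly spread index.
body = index_template_body(5)
print(body["template"]["settings"]["index.number_of_shards"])  # 5
```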

