
Crack the FaaS Cold Start and Scalability Bottleneck

2022-05-18

Authors:   Rui Zang, Cathy Zhang


Summary

The presentation discusses an enhanced snapshot-based approach to the challenges of creating new function instances and supporting fast auto-scaling in response to burst traffic. The approach breaks the original function code image into essential and non-essential code blocks, regenerates the set of unique data associated with a specific micro VM instance, and adjusts the resource boundaries of existing running micro VMs to create more function containers.
  • The enhanced snapshot-based approach reduces the cold start latency of new function instances and supports fast auto-scaling in response to burst traffic.
  • The original function code image is broken into essential and non-essential code blocks; because only a small portion of the image data is actually used during a function test run, the essential blocks are much smaller than the full image.
  • A small program regenerates the set of unique, instance-specific data associated with a particular micro VM instance.
  • The resource boundaries of existing running micro VMs are adjusted to create more function containers.
  • The snapshot file must be downloaded before the container starts, but the snapshot file and essential code blocks combined are much smaller than the original image, yielding a shorter download time and a shorter cold start time.
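The intuition behind the bullet points above can be sketched with a toy latency model. All sizes, bandwidth, and timing constants below are illustrative assumptions, not measurements from the talk; the point is only that downloading a snapshot plus essential code blocks, then restoring memory and regenerating instance-specific data, avoids both the full image download and the runtime initialization cost.

```python
# Toy model of the two cold-start paths. All constants are hypothetical,
# chosen only to illustrate the structure of the comparison.
FULL_IMAGE_MB = 500          # original function code image
SNAPSHOT_MB = 80             # memory snapshot of an initialized instance
ESSENTIAL_BLOCKS_MB = 60     # code blocks actually touched during a test run
BANDWIDTH_MB_PER_S = 100     # assumed download bandwidth

def cold_start_from_full_image() -> float:
    """Existing way: pull the whole image, then initialize the runtime."""
    download_s = FULL_IMAGE_MB / BANDWIDTH_MB_PER_S
    runtime_init_s = 0.7     # language/runtime startup, skipped when restoring
    return download_s + runtime_init_s

def cold_start_from_snapshot() -> float:
    """Snapshot way: pull snapshot + essential blocks, restore memory,
    then run a small program that regenerates per-instance unique data."""
    download_s = (SNAPSHOT_MB + ESSENTIAL_BLOCKS_MB) / BANDWIDTH_MB_PER_S
    regenerate_unique_data_s = 0.05
    return download_s + regenerate_unique_data_s
```

Under these assumptions the snapshot path wins on both terms: it downloads far less data and replaces runtime initialization with a cheap memory restore plus a small fix-up of instance-specific state.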
The presentation compares the existing way of creating a new function instance with the snapshot-based way. For both, timing is measured from right after the runtime sandbox is created until just before the function code executes. The existing way costs about 1700 milliseconds, while the snapshot-based way costs about 630 milliseconds, saving about 60% of the time. Because only a small portion of the image data is actually used during the function test run, the essential code blocks are small; the snapshot file and essential code blocks combined are much smaller than the original image, resulting in a shorter download time and a shorter cold start time.
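As a quick check on the reported numbers, the roughly 60% saving follows directly from the two measurements:

```python
# Measured cold start times from the talk's comparison (milliseconds).
existing_ms = 1700
snapshot_ms = 630

# Fraction of the cold start time saved by the snapshot-based path.
savings = (existing_ms - snapshot_ms) / existing_ms
print(f"{savings:.0%}")  # prints "63%"
```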

Abstract

FaaS provides many benefits to end-users, such as zero maintenance and on-demand auto-scaling. As with any new technology, the benefits come with challenges. There are two major ones: cold start latency and auto-scaling speed in response to bursty traffic. Cold start latency refers to the time it takes to create a new function instance and get it ready to start execution. Auto-scaling refers to automatically adjusting the number of running function instances to meet traffic demand. This talk provides a detailed analysis of what causes the cold start latency and the auto-scaling bottleneck. It then presents a new approach that reduces cold start latency by instantiating a new function instance from a combination of its memory snapshot and its essential code chunks. The authors share their learnings and test results. On the auto-scaling part, the authors share their insight into using an elastic function sandbox to boost auto-scaling speed.
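The elastic function sandbox mentioned above can be sketched as follows. This is a minimal toy model under assumed names and numbers, not the authors' implementation: the idea is that, instead of paying a fresh cold start per new instance, a running micro VM's resource boundary is grown and additional function containers are packed into it.

```python
from dataclasses import dataclass, field

@dataclass
class ElasticSandbox:
    """Toy model of an elastic function sandbox. Instead of booting a new
    micro VM for each burst, grow the resource boundary of a running VM
    and create more function containers inside it. All names and numbers
    are illustrative."""
    cpu_millicores: int
    per_container_millicores: int = 250
    containers: list = field(default_factory=list)

    def capacity(self) -> int:
        """How many containers fit within the current resource boundary."""
        return self.cpu_millicores // self.per_container_millicores

    def scale_for_burst(self, needed: int) -> int:
        """Handle burst traffic: widen the VM's resource boundary if
        necessary (e.g. via cgroup limits), then add containers."""
        required = needed * self.per_container_millicores
        if required > self.cpu_millicores:
            self.cpu_millicores = required  # adjust the boundary in place
        while len(self.containers) < needed:
            self.containers.append(f"fn-container-{len(self.containers)}")
        return len(self.containers)

sandbox = ElasticSandbox(cpu_millicores=500)   # room for 2 containers
running = sandbox.scale_for_burst(4)           # boundary grows, 4 containers run
```

Creating a container inside an already-running VM avoids the VM boot and image download entirely, which is why resizing an existing sandbox can be much faster than spinning up a new instance.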
