The presentation discusses the development of Parca, a tool for profiling and analyzing performance in software applications. The focus is on the storage architecture and the process of handling write requests.
- Parca is a tool for profiling and analyzing performance in software applications
- The storage architecture of Parca is designed to handle stack traces as first-class citizens
- Write requests arrive as protobuf payloads and are validated, including their metadata label sets (see the sketch after this list)
- The metadata store is backed by SQLite, but the implementation can work with any SQL database
- The end result of a write request is a set of location IDs and corresponding sample values
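The bullets above describe the write path only at a high level; a minimal sketch of that flow in Go might look like the following. All type and function names here (`WriteRequest`, `MetadataStore`, `GetOrCreateLocationID`, and so on) are illustrative assumptions for this summary, not Parca's actual API.

```go
package ingest

import (
	"errors"
	"fmt"
)

// WriteRequest is an illustrative stand-in for the protobuf-decoded
// payload: a label set identifying the target plus raw stack-trace samples.
type WriteRequest struct {
	Labels  map[string]string // e.g. {"job": "api", "instance": "10.0.0.1:8080"}
	Samples []Sample
}

// Sample pairs a stack trace (a list of memory addresses) with a value,
// e.g. CPU time or allocated bytes.
type Sample struct {
	Addresses []uint64
	Value     int64
}

// MetadataStore abstracts the SQL-backed metadata store: it deduplicates
// locations and hands back stable IDs for them.
type MetadataStore interface {
	GetOrCreateLocationID(addr uint64) (int64, error)
}

// validateLabels rejects requests without the minimum identifying labels.
func validateLabels(labels map[string]string) error {
	if labels["job"] == "" {
		return errors.New("label set must contain a non-empty 'job' label")
	}
	return nil
}

// Ingest validates a write request and resolves every address to a
// location ID, producing the (location IDs, sample value) pairs that the
// bullets describe as the end result of a write request.
func Ingest(store MetadataStore, req WriteRequest) ([][]int64, []int64, error) {
	if err := validateLabels(req.Labels); err != nil {
		return nil, nil, fmt.Errorf("invalid write request: %w", err)
	}
	locationIDs := make([][]int64, 0, len(req.Samples))
	values := make([]int64, 0, len(req.Samples))
	for _, s := range req.Samples {
		ids := make([]int64, 0, len(s.Addresses))
		for _, addr := range s.Addresses {
			id, err := store.GetOrCreateLocationID(addr)
			if err != nil {
				return nil, nil, err
			}
			ids = append(ids, id)
		}
		locationIDs = append(locationIDs, ids)
		values = append(values, s.Value)
	}
	return locationIDs, values, nil
}
```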
The speakers demonstrate how Parca can compare profiles to show which functions are allocating more bytes, and how CPU samples collected by the agent can be used to analyze the time spent in the Parca agent itself. They explain how Parca's storage architecture was built from scratch to treat stack traces as first-class data and to improve computation and throughput, and they walk through how write requests are handled and how metadata is ingested into the metadata store.
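Comparing profiles requires two snapshots of the same process. A minimal, self-contained way to capture comparable heap profiles with Go's standard runtime/pprof package is sketched below; the file names and the toy workload are arbitrary. The resulting files can be diffed with `go tool pprof -diff_base=before.pb.gz after.pb.gz`, which is the same kind of comparison the speakers demonstrate in Parca's UI.

```go
package main

import (
	"log"
	"os"
	"runtime"
	"runtime/pprof"
)

// writeHeapProfile captures the current heap allocation profile to a file.
func writeHeapProfile(path string) {
	f, err := os.Create(path)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	runtime.GC() // get up-to-date allocation statistics
	if err := pprof.WriteHeapProfile(f); err != nil {
		log.Fatal(err)
	}
}

func main() {
	writeHeapProfile("before.pb.gz")

	// Run the workload under investigation (here: a toy allocation loop).
	work := make([][]byte, 0, 1024)
	for i := 0; i < 1024; i++ {
		work = append(work, make([]byte, 4096))
	}
	_ = work

	writeHeapProfile("after.pb.gz")

	// Compare with: go tool pprof -diff_base=before.pb.gz after.pb.gz
	// to see which functions allocated more bytes between the snapshots.
}
```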
For years, Google has consistently been able to cut multiple percentage points from its fleet-wide resource usage every quarter, using techniques described in its “Google-Wide Profiling” paper. Ad-hoc profiling has long been part of the developer’s toolbox for analyzing the CPU and memory usage of a running process; however, through continuous profiling, the systematic collection of profiles over time, entirely new workflows become possible. Matthias and Kemal will start this talk with an introduction to profiling with Go and demonstrate, via Conprof, an open-source continuous profiling project, how continuous profiling allows for an unprecedented fleet-wide understanding of code at runtime. Attendees will learn how to continuously profile Go code to help guide building robust, reliable, and performant software and to reduce cloud spend systematically.

https://www.parca.dev/
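Continuous profilers like Conprof collect profiles by periodically scraping the standard pprof HTTP endpoints, so the usual prerequisite is exposing them in the target process. A minimal sketch follows; the port 6060 is an arbitrary but conventional choice.

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/* handlers on the default mux
)

func main() {
	// Expose the pprof endpoints so a continuous profiler (e.g. Conprof)
	// can scrape CPU, heap, goroutine, and other profiles periodically.
	log.Fatal(http.ListenAndServe("localhost:6060", nil))
}
```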