Using machine learning can improve results in studying subatomic particles, as demonstrated by the jet energy regression example.
Kubeflow can help run machine learning workloads.
Challenges in implementing the demo included finding the correct version of the Triton Inference Server image and customizing TensorBoard.
Possible improvements include replicating profiles across multiple clusters, making pipelines namespace-scoped, and adding LimitRange resources to profiles.
The jet energy regression example showed that using deep learning can lead to a 10% improvement in energy resolution and a 3x improvement in flavor dependence, making it a success for this application.
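One of the improvements mentioned above is adding LimitRange resources to Kubeflow profiles, so that workloads in a profile's namespace get sane resource defaults. As a minimal sketch, this is what such a resource could look like; the name, namespace, and resource values below are illustrative assumptions, not details from the talk:

```yaml
# Hypothetical LimitRange for a Kubeflow profile's namespace.
# Values are placeholders; tune them to the cluster's capacity.
apiVersion: v1
kind: LimitRange
metadata:
  name: profile-limits
  namespace: my-profile      # the namespace owned by the Kubeflow profile
spec:
  limits:
  - type: Container
    default:                 # limits applied when a container sets none
      cpu: "2"
      memory: 4Gi
    defaultRequest:          # requests applied when a container sets none
      cpu: 500m
      memory: 1Gi
```

A LimitRange like this keeps notebook and pipeline pods from landing without resource requests, which helps the scheduler and prevents a single user from starving the namespace.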
The Large Hadron Collider is the world’s largest particle accelerator, measuring 27 km in circumference. It accelerates beams of particles in opposite directions to nearly the speed of light before making them collide. The particles emerging from the collisions are then measured in large detectors such as the Compact Muon Solenoid. Especially important objects of study are so-called jets, composed of multiple particles shooting out in the same direction from the collision point. Data-driven methods are used to correct the energy values for these jets, and what we present here is the use of Kubeflow to enable state-of-the-art corrections based on graph neural networks. Kubeflow’s pipeline component allows us to define our machine learning workflow in a well-structured and reproducible manner, and its built-in training operators are used to scale up the training with ease. This work is expected to pave the way for future adoption of Kubeflow in the physics community at CERN.
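The abstract mentions using Kubeflow’s built-in training operators to scale up the training. As a sketch of how a distributed GNN training job could be declared with the Kubeflow Training Operator, here is a minimal PyTorchJob manifest; the job name, image, and replica counts are assumptions for illustration, not the configuration used in the talk:

```yaml
# Hypothetical PyTorchJob for distributed training of the jet energy
# regression model. Image and replica counts are placeholders.
apiVersion: kubeflow.org/v1
kind: PyTorchJob
metadata:
  name: jet-energy-regression
spec:
  pytorchReplicaSpecs:
    Master:
      replicas: 1
      restartPolicy: OnFailure
      template:
        spec:
          containers:
          - name: pytorch          # the operator requires this container name
            image: registry.example.com/jet-gnn-train:latest
            resources:
              limits:
                nvidia.com/gpu: 1
    Worker:
      replicas: 3
      restartPolicy: OnFailure
      template:
        spec:
          containers:
          - name: pytorch
            image: registry.example.com/jet-gnn-train:latest
            resources:
              limits:
                nvidia.com/gpu: 1
```

The operator creates one master and three worker pods and injects the environment variables PyTorch’s distributed launcher expects, so scaling the training out is mostly a matter of changing the `replicas` field.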