Authors: Le Tran
2022-10-26

tldr - powered by Generative AI

The speaker discusses the importance of open learning tools and platforms in growing the Kubernetes community, and shares her personal experience of becoming a member of the community.
  • Growing the Kubernetes community is essential for its future success
  • Open learning tools and platforms can eliminate barriers to entry and make the community more welcoming
  • The speaker personally relied on free and beginner-friendly resources to learn about Kubernetes
  • The speaker highlights a free site called learnitocasm.io, whose standout feature is its engaging, high-quality hands-on labs
  • The site was created for the community by the community, and continues to feature projects from the community
  • The site is being transformed into kubecon.io, and will involve learning partners to create an even bigger ecosystem of learning materials
Conference:  Transform X 2022
Authors: Susan Zhang, Faisal Siddiqi, Bryan Catanzaro, Erhan Bas, Elliot Branson
2022-10-19

Join this enterprise-focused, spirited discussion on how best to train, use, and fine-tune foundation models in the enterprise. Elliot Branson, Director of Machine Learning & Engineering at Scale AI, will moderate the panel with industry experts from AWS, NVIDIA, Netflix, and Meta.

Erhan Bas, formerly an Applied Scientist at Amazon Web Services and now at Scale, shares his perspective on training large language models (LLMs). Bryan Catanzaro, Vice President of Applied Deep Learning Research at NVIDIA, shares how the GPU manufacturer is targeting foundation models as a core workflow for enterprise customers. Faisal Siddiqi, Director of Machine Learning Platform at Netflix, will share how his company is using foundation models to analyze highly produced video content. Susan Zhang, Researcher at Facebook AI Research (FAIR), a division of Meta, will share insights from training and fine-tuning Meta's OPT model.

Members of the panel will share how they scale training across multiple nodes, avoid overfitting by mitigating data-quality issues early on, and address bias in models trained on a large internet-based text corpus. The panelists will also discuss the compute cost of training an LLM from scratch, how to avoid costly and tedious hyperparameter optimization, how to mitigate training-failure risk in clusters with thousands of GPUs (for example, by sticking to synchronous gradient descent), and the need for extremely fast storage devices to save and load training checkpoints.
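The checkpointing concern the panelists raise can be illustrated with a minimal sketch. This is not any panelist's actual pipeline: real LLM checkpoints are sharded across many nodes and fast storage devices, and the function names here are hypothetical. The core idea shown is atomic writes, so a crash mid-save never corrupts the last good checkpoint.

```python
import os
import pickle

def save_checkpoint(path, step, state):
    """Atomically write a training checkpoint (illustrative sketch).

    Dump to a temporary file first, then rename over the target, so a
    crash during the write never corrupts the previous good checkpoint.
    """
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump({"step": step, "state": state}, f)
    os.replace(tmp, path)  # atomic rename on POSIX filesystems

def load_checkpoint(path):
    """Restore the most recent checkpoint to resume after a failure."""
    with open(path, "rb") as f:
        return pickle.load(f)
```

At the scale the panel describes (thousands of GPUs), the same save/restore cycle happens frequently enough that checkpoint I/O bandwidth becomes a first-order design constraint, which is why the storage requirement comes up at all.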
Conference:  Transform X 2022
Authors: Laura Major
2022-10-19

tldr - powered by Generative AI

Motional's approach to developing autonomous vehicles involves continuous learning and data sharing across the industry.
  • Motional uses a continuous learning framework to mine on-road driving data and discover rare scenarios or areas where performance falls short.
  • They upsample these scenarios and incorporate more of them into their training data to improve autonomy performance.
  • Motional recognizes the need for richer development of data sets and sharing of those data sets to fuel the development across the industry.
  • They have pioneered a data sharing culture that has now extended across the industry.
  • Motional's approach involves not just increasing the volume of data, but getting the right data, including finding rare objects and identifying challenging scenarios.
  • Their focus is on improving their autonomy performance to achieve true driverless capability.
Authors: Christian Heckelmann
2021-10-13

tldr - powered by Generative AI

Best practices for running workloads in Kubernetes
  • Proper validation and policies should be implemented to ensure security and stability
  • Developers should be familiar with local development tools and avoid using 'latest' tags for images
  • Private registries and base images should be used for better control and security
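The advice against mutable 'latest' tags can be made concrete with a small check one might run in a validation policy. This is a simplified sketch, not the speaker's tooling: real admission policies (e.g. via a policy engine) parse full OCI image references, and the heuristics here are illustrative.

```python
def uses_mutable_tag(image_ref):
    """Return True if an image reference is not pinned (illustrative sketch).

    A reference pinned by digest (@sha256:...) is immutable. Otherwise,
    a missing tag defaults to ':latest', which is mutable, as is an
    explicit ':latest' tag.
    """
    if "@sha256:" in image_ref:
        return False  # pinned by content digest, safest option
    name, sep, tag = image_ref.rpartition(":")
    if not sep or "/" in tag:
        # No tag at all, or the colon belonged to a registry host:port.
        return True
    return tag == "latest"
```

A check like this, applied at CI or admission time, enforces the bullet above: workloads reference a known, reproducible image rather than whatever 'latest' happens to point at today.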