The speaker discusses the advancements in machine learning and AI over the past six months, including self-supervised models and pre-trained models, and their potential implications for various domains.
- Self-supervised models are becoming better at performing tasks and supporting a broader range of tasks as they scale up
- Pre-trained models can be used as software engineering objects to solve a variety of problems
- Self-supervised learning techniques are being applied to various domains, including vision and graphs
- The use of these models has potential implications for fields such as molecular dynamics and protein folding
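
The second bullet, treating a pre-trained model as a reusable software engineering object, can be sketched in miniature. This is an illustrative toy, not code from the talk: the tiny "encoder" and its frozen weights stand in for a large self-supervised model, and the two task heads show the same frozen features being reused for different downstream problems.

```python
# Minimal sketch of "pre-trained model as a software object":
# one frozen encoder, built once, reused by multiple task heads.
# All names and weights here are illustrative, not from the talk.

class PretrainedEncoder:
    """Stands in for a large self-supervised model with frozen weights."""

    def __init__(self):
        # Pretend these were learned during large-scale pre-training.
        self.weights = [0.5, -0.25, 1.0]

    def encode(self, xs):
        # Fixed feature map: one weighted sum of the input per weight.
        return [w * sum(xs) for w in self.weights]


def sentiment_head(features):
    # Downstream task A: classify by thresholding the pooled features.
    return "positive" if sum(features) > 0 else "negative"


def magnitude_head(features):
    # Downstream task B: reuse the same features for a different output.
    return max(abs(f) for f in features)


encoder = PretrainedEncoder()          # built (or downloaded) once...
feats = encoder.encode([1.0, 2.0])     # ...then shared across tasks
print(sentiment_head(feats))           # task A consumes the features
print(magnitude_head(feats))           # task B consumes the same features
```

The point of the pattern is that the expensive artifact (the encoder) is produced once and composed like any other library component, while each new problem only adds a small task-specific head.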
Researchers at MSR have developed a self-supervised learning technique called Graphormer, a Transformer model for graphs that can be applied to molecular dynamics and protein folding problems. This is an exciting development in the field of AI, with potential implications for drug discovery and other areas of molecular biology.