
AI to Automatically Fix Bugs in Code by Google: Virtual Talk

Plus: Video lecture on best strategies for distributed training of 175B and 1T size LLMs

Hi, fellow human! This is Sophia. As usual, I want to give you a heads-up on our upcoming virtual talks. I hope you'll be able to attend one of them – it would be great to meet you.

Table of Contents:

  • Upcoming virtual talk (April 11th): AI for automated vulnerability fixes, presented by an ML engineer at Google.

  • Video recording: Best strategies for distributed training of LLMs by a research scientist from Oak Ridge National Laboratory.

Virtual talk: AI for automated vulnerability fixes

Our guest speaker, Jan Nowakowski, a Machine Learning Software Engineer at Google, will discuss the lessons learned from using AI to scale bug-fixing capabilities across C/C++, Java, and Go codebases at Google.

This lecture will highlight some of the findings of the Google team in this area, as well as the potential of automatic vulnerability fixes.

We are hosting the talk virtually on April 11th at 10 am PT – wherever you are on this planet, you can make it. And if for some reason you can't attend, I'll share a video recording in a couple of weeks.

If you have some time this Thursday, please join the talk about AI that can provide forecasts of future events, presented by the UC Berkeley research team. Check it out here.

Video recording: Best strategies for distributed training of LLMs

Our guest, Sajal Dash, a Research Scientist at Oak Ridge National Laboratory, shared with the BuzzRobot community the strategies he identified for distributed training of large language models (LLMs) at the 175B- and 1T-parameter scale.

In his talk, he delves into how techniques like data and model parallelism (including tensor, pipeline, sharded, and hybrid approaches) can be implemented and combined to achieve the best model performance.
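To make those terms concrete, here is a minimal, hypothetical Python sketch (not the speaker's actual code) of how the two basic partitioning ideas differ: data parallelism splits the batch across workers, while tensor parallelism splits a weight matrix itself. The function names and the toy nested-list "tensors" are illustrative assumptions only.

```python
def shard_batch(batch, data_parallel_ranks):
    """Data parallelism: each rank gets an equal slice of the batch."""
    per_rank = len(batch) // data_parallel_ranks
    return [batch[i * per_rank:(i + 1) * per_rank]
            for i in range(data_parallel_ranks)]


def shard_weight_columns(weight, tensor_parallel_ranks):
    """Tensor parallelism: split a weight matrix column-wise across ranks."""
    per_rank = len(weight[0]) // tensor_parallel_ranks
    return [[row[r * per_rank:(r + 1) * per_rank] for row in weight]
            for r in range(tensor_parallel_ranks)]


if __name__ == "__main__":
    batch = list(range(8))                 # 8 training samples
    weight = [[0, 1, 2, 3], [0, 1, 2, 3]]  # toy 2x4 weight matrix

    # Two data-parallel ranks each see half the batch.
    print(shard_batch(batch, 2))           # [[0, 1, 2, 3], [4, 5, 6, 7]]

    # Two tensor-parallel ranks each hold half the columns of the weight.
    print(shard_weight_columns(weight, 2))
```

In real hybrid setups (as discussed in the talk), these axes compose: each data-parallel replica is itself a group of tensor- and pipeline-parallel ranks, and sharded approaches additionally partition optimizer state.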

The lecture also covers the optimal parameters for training 175B- and 1T-parameter models. The strategies were tuned for Frontier, the Oak Ridge National Laboratory supercomputer built on AMD GPUs.
