
Graph Neural Networks for Weather Forecasting by Google DeepMind: Video Recording

Plus: Neural MMO 2.0, a massively multi-agent environment for RL research – upcoming talk. A simulation platform to study human-robot tasks by Meta – written summary of the talk

Hello, fellow human! I’m Sophia. Sharing some updates with you.

Table of Contents:

  • Upcoming Talk: Neural MMO 2.0 – A Massively Multi-Agent Environment for Reinforcement Learning (RL) Research.

  • Video Recording: Talk by Ferran Alet, a Research Scientist at Google DeepMind, on Applying Graph Neural Networks for Skillful Weather Predictions.

  • Written Summary: Talk by Xavier Puig, a Research Scientist at FAIR (AI at Meta), on Habitat 3.0, a Simulation Platform for Studying Human-Robot Tasks in Home Environments.

Upcoming Talk: Neural MMO 2.0

On Thursday, March 7th, we are hosting a talk on Neural MMO 2.0 – a massively multi-agent environment for reinforcement learning (RL) research. In this environment, simulated agents compete for resources, develop skills necessary for survival, and create a competitive world. David Bloomin, an AI researcher and one of the authors of this work, will share insights about Neural MMO 2.0.

What excites me most about this talk is that David will share details about the engineering decisions he and his collaborators made to make this simulated environment performant.

Video Recording: Talk by Ferran Alet, a Research Scientist at Google DeepMind

The video recording of a talk by Ferran Alet, a Research Scientist at Google DeepMind, is now live on our YouTube channel. He discusses how Graph Neural Networks (GNNs) are being applied in weather forecasting.

In recent years, the deep learning approach has surpassed traditional 'numerical weather prediction' (NWP) methods in deterministic global weather forecasting and is showing promising results in probabilistic forecasting. In this talk, Ferran dives deep into his recent research work, which has shown powerful results in weather prediction and was featured in Science Magazine.

Written Summary: Talk by Xavier Puig, a Research Scientist at FAIR

We recently hosted a talk by Xavier Puig, a Research Scientist at FAIR (AI at Meta), who introduced Habitat 3.0, a simulation platform for studying human-robot tasks in home environments.

If you are interested in how reinforcement learning (RL) is applied to human-robot interaction in home environments and would like to catch up with the lecture, we’ve prepared a summary of the talk (see below).

The video recording is also available on our YouTube channel.

Summary of Xavier Puig’s talk:

Imagine a world where robots seamlessly assist with everyday tasks, navigating your home, interacting with objects, and even adapting to your individual preferences. This vision forms the core of Xavier Puig's lecture, where he dives deep into the technical challenges and promising possibilities of human-robot collaboration within indoor environments, and more specifically, Habitat 3.0.

The goal is to develop robots that can assist humans with various tasks in their homes. However, several technical hurdles stand in the way. First, robots need the ability to generalize their skills across diverse tasks and environments, which requires large amounts of diverse interaction data. Second, collecting that data in real-world settings raises safety and efficiency concerns. Finally, robots must adapt not only to various tasks and environments but also to different human behaviors, preferences, and interaction styles.

One promising approach to address these challenges is the use of simulation technologies. By leveraging simulators, researchers can efficiently gather interaction data across diverse environments, overcoming the limitations of real-world testing. Puig's group, in particular, has focused on developing Habitat 3.0, a cutting-edge simulator designed for fast interaction and training robot policies for navigation and object manipulation in virtual environments. This platform has a physics engine for realistic robot-environment interactions, a reward system designed for task-oriented learning, and a modular architecture that allows for easy customization and integration of new tasks and environments.

While commanding robots with language to perform tasks is definitely a big step in the right direction, it lacks true human-robot interaction. The robots, as observed in demonstrations, receive instructions without actively engaging with the human. To overcome this limitation, Puig emphasizes the need for simulators equipped with human models. This is where Avatar 3.0 comes in.

Avatar 3.0 builds upon Habitat 3.0 and incorporates a number of features and techniques, including:

  • Body model: A model trained on real human scans allows for diverse body shapes and genders, making interactions more realistic.

  • Motion capture data: Pre-recorded clips of human motion are used for navigation and basic interactions, while a neural network generates more complex motions based on textual descriptions.

  • Physics-based simulation: This ensures realistic interactions between robots, humans, and objects in the environment.

  • Reinforcement learning: This allows robots to learn optimal policies for interacting with humans and objects through trial and error within the simulation.
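To make the last point concrete, here is a minimal sketch of the trial-and-error loop that reinforcement learning relies on. The toy corridor environment and tabular Q-learning below are purely illustrative stand-ins for the simulator and the robot policy; they are not Habitat's or Avatar 3.0's actual API.

```python
import random

random.seed(0)  # make the toy run reproducible

class ToyHomeEnv:
    """Hypothetical 1-D corridor: the agent must walk right to reach the goal cell."""
    def __init__(self, size=5):
        self.size = size
        self.pos = 0

    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):
        # action: 0 = left, 1 = right
        self.pos = max(0, min(self.size - 1, self.pos + (1 if action == 1 else -1)))
        done = self.pos == self.size - 1
        reward = 1.0 if done else -0.01  # small step penalty, bonus at the goal
        return self.pos, reward, done

def train(episodes=200, alpha=0.5, gamma=0.9, eps=0.1):
    env = ToyHomeEnv()
    q = [[0.0, 0.0] for _ in range(env.size)]  # tabular Q-values: q[state][action]
    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            # epsilon-greedy: mostly exploit, occasionally explore
            a = random.randrange(2) if random.random() < eps else max((0, 1), key=lambda x: q[s][x])
            s2, r, done = env.step(a)
            # Q-learning update: nudge the estimate toward the observed return
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train()
```

After training, the greedy action in every non-terminal cell is "right" (toward the goal), which is the same learn-by-interacting principle the simulator applies at far larger scale.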

Avatar 3.0 also provides tools for real people to interact with simulated robots, bridging the gap between simulation and real-world scenarios. These tools give researchers the freedom to observe and evaluate how robots perform in response to human input and behavior, which in turn allows them to refine their policies before deploying them in real-world settings.

To measure the effectiveness of robot training within the aforementioned environments, researchers utilize two key benchmark tasks:

Social Navigation: This task assesses a robot’s ability to effectively follow a human within an environment, mimicking real-world scenarios where you might lead it to the kitchen or instruct it to follow you while you clean. Through repeated interactions in the simulator, the robot learns to navigate alongside humans, anticipating their movements and adjusting its own accordingly.
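As an illustration of how a following behavior like this can be shaped, here is a toy reward function that encourages a robot to stay within a comfortable "following band" behind a human. This is a hypothetical sketch, not Habitat's actual reward; the band limits and penalty are invented for the example.

```python
import math

def social_nav_reward(robot_xy, human_xy, min_dist=1.0, max_dist=2.0):
    """Illustrative social-navigation reward: positive inside the band
    [min_dist, max_dist], with a penalty that grows the farther the
    robot strays outside it (too close crowds the human, too far loses them)."""
    d = math.dist(robot_xy, human_xy)
    if min_dist <= d <= max_dist:
        return 1.0
    # Penalize proportionally to how badly the band is violated.
    violation = (min_dist - d) if d < min_dist else (d - max_dist)
    return -violation

print(social_nav_reward((0.0, 0.0), (1.5, 0.0)))  # in-band -> 1.0
print(social_nav_reward((0.0, 0.0), (4.0, 0.0)))  # too far -> -2.0
```

Maximizing a signal of this shape over many simulated episodes is what pushes the policy to anticipate the human's movements rather than trail them rigidly.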

Social Rearrangement: Going deeper into human-robot collaboration, this task evaluates how the robot works with a human to rearrange objects - for instance, helping move furniture or tidy up a room. The simulator allows researchers to assess how well the robot understands your instructions, coordinates its actions with yours, and even adapts to unforeseen situations - say, a vase dropping to the floor.

The lecture also touched upon future directions in this field. Combining language models and human behavior datasets within simulators holds promise for even more realistic human interactions.
