
Deployable Robotic Learning Systems by Stanford AI Lab: Virtual Talk

Plus: a video recording of 'Path to AGI: Massively Multi-Agent Environment for Reinforcement Learning Research.'

Hello, fellow human! This is Sophia with some updates on the AI research talks we are hosting soon.

Table of Contents

Upcoming virtual talk (March 28th): ‘Towards Deployable Robot Learning Systems’ presented by Zipeng Fu from Stanford AI Lab.

Video Recording: ‘Path to AGI: Massively Multi-Agent Environment for Reinforcement Learning (RL) Research.’

Upcoming Virtual Talk: Towards Deployable Robot Learning Systems by Stanford AI Lab

Our guest, Zipeng Fu from Stanford AI Lab, will talk about recent advances in learning-based methods for robot systems, with applications in areas like navigation, locomotion, and drones.

The talk will focus on achieving scalability and robust deployability of robot learning systems to solve real-world problems.

This talk is especially timely considering recent advancements in robotics, for example, the partnership of Figure with OpenAI.

It’s a virtual talk, next Thursday, March 28th. Register here.

And if you have a bit of time this Thursday, please also join our other virtual talk, on distributed training of LLMs.

Video Recording: Path to AGI: Massively Multi-Agent Environment for Reinforcement Learning Research

David Bloomin, an AI researcher who built large-scale infrastructure at Google, Meta, and Asana, gave a talk about Neural MMO 2.0 – a massively multi-agent environment for reinforcement learning research, which is part of his broader endeavors to explore how to achieve AGI (Artificial General Intelligence).

In short: to understand what general intelligence is, David believes we need to break out of the prisoner's dilemma equilibrium. How?

Through a reward-sharing mechanism, or as he calls it, kinship (family relationship) – where all AI agents perceive each other as close relatives: brothers and sisters, for example.

This is how it could look: let's say Agent A's goal is to gather five food items. We then give Agent B a reward function in which 5% of its reward comes from Agent A obtaining those five food items.
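To make that concrete, here is a minimal sketch of what such a kinship reward-sharing rule could look like in Python. Everything in it (the function, the agent names, a flat 5% weight applied uniformly across all kin) is a hypothetical illustration of the idea, not Neural MMO's actual API.

```python
# Minimal sketch of kinship-style reward sharing (hypothetical, not Neural MMO's API).
KINSHIP_WEIGHT = 0.05  # fraction of each agent's reward that comes from its "kin"

def shared_rewards(base_rewards: dict[str, float]) -> dict[str, float]:
    """Mix each agent's own reward with a share of its relatives' rewards."""
    total = sum(base_rewards.values())
    return {
        agent: (1 - KINSHIP_WEIGHT) * own + KINSHIP_WEIGHT * (total - own)
        for agent, own in base_rewards.items()
    }

# Agent A just gathered its five food items (reward 1.0); Agent B did nothing.
print(shared_rewards({"A": 1.0, "B": 0.0}))
# -> {'A': 0.95, 'B': 0.05}: B profits from A's success, so helping kin pays off.
```

Because each agent holds a nonzero stake in its relatives' outcomes, behavior that helps the "family" is directly rewarded.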

This lets agents learn cooperation. When an agent with a certain set of skills encounters other agents in the environment, it knows the other agent is its "family". And, as in any family, there may be arguments and disagreements, but ultimately the agents "know" they are family, sharing the same origin and depending on each other, so they have to learn to collaborate.

As David said, the reward-sharing (kinship) approach to some extent addresses two of the holy grails of AI research: understanding general intelligence and alignment.

Watch the video recording of the talk on our YouTube channel.
