Exploring Vulnerabilities in LLMs Coming from GPU Local Memory Leaks. Virtual Talk

Plus: the recording of our recent talk by a Stanford AI Lab researcher on advancements in autonomous robots

Hi, fellow human! I'm Sophia, the founder of BuzzRobot. Each week we bring in top AI researchers from Google DeepMind, OpenAI, Meta, Stanford, and other leading labs to share their latest research with our community. Our talks are virtual, so no matter where you are on this planet, you can join us and learn directly from these researchers about cutting-edge AI projects.

Here's what's on the agenda🙃 

  • Upcoming virtual talk on April 18th: Exploring vulnerabilities in LLMs stemming from a leak in GPU local memory.

  • Video recording of our recent talk: Zipeng Fu from the Stanford AI Lab shares recent advancements in deployable robotics systems.

Exploring vulnerabilities in LLMs stemming from a leak in GPU local memory

Calling all security experts and those interested in AI security – this talk is for you. Tyler Sorensen, a security researcher and Assistant Professor at UC Santa Cruz, has discovered a GPU vulnerability that attackers can exploit to reconstruct LLM responses from data leaked in GPU local memory.

The vulnerability affects a wide range of GPUs, including Apple, AMD, and Qualcomm devices. Our guest will share the details of the vulnerability and discuss strategies for mitigating these security risks.
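To give a rough flavor of this class of issue (not the specific exploit Tyler will present), here is a minimal sketch in Python using Numba's CUDA bindings: a "victim" kernel stages data in GPU local (shared) memory, and a later "listener" kernel reads that same memory without ever writing to it. Whether stale values actually survive is GPU- and driver-dependent, and the real finding concerns Apple, AMD, and Qualcomm devices rather than this CUDA setup, so treat the snippet purely as an illustration of uninitialized local memory.

```python
# Conceptual illustration only: a kernel reading GPU local (shared) memory
# it never initialized. This is NOT the exploit from the talk; the real
# vulnerability concerns Apple/AMD/Qualcomm GPUs, not this CUDA example.
import numpy as np
from numba import cuda, float32

THREADS = 256  # threads per block; also the size of the shared scratch buffer


@cuda.jit
def writer(secret):
    # "Victim" kernel: stages data (e.g. intermediate LLM activations)
    # in shared memory and exits without clearing it.
    scratch = cuda.shared.array(shape=THREADS, dtype=float32)
    t = cuda.threadIdx.x
    scratch[t] = secret[t]
    cuda.syncthreads()


@cuda.jit
def listener(out):
    # "Attacker" kernel: dumps shared memory it never wrote to.
    scratch = cuda.shared.array(shape=THREADS, dtype=float32)
    t = cuda.threadIdx.x
    out[t] = scratch[t]  # may contain whatever the previous kernel left behind


secret = cuda.to_device(np.arange(THREADS, dtype=np.float32))
leak = cuda.device_array(THREADS, dtype=np.float32)

writer[1, THREADS](secret)    # victim runs first
listener[1, THREADS](leak)    # listener may observe stale values afterwards
print(leak.copy_to_host()[:8])
```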

The video lecture by Zipeng Fu from Stanford AI Lab about recent advancements in deployable robotics systems

We recently hosted a talk by Zipeng Fu from the Stanford AI Lab, who shared with the BuzzRobot community two approaches in robotics he is actively working on: Robot Parkour Learning and Imitation Learning (learning from human demonstrations).

Here I'll highlight some key insights from the talk, focusing on the Imitation Learning approach, which is currently trending in robotics and has already produced impressive results in what robots can do autonomously.

In Imitation Learning, humans teleoperate the robot to demonstrate tasks, which is how the high-quality data needed for training is collected. Researchers need about 50 demonstrations for each task (parents with small kids understand the importance of repetition – robots learn in a similar way).
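For intuition only, here is a hypothetical sketch of what a single teleoperated demonstration could look like as data, along with the "about 50 per task" collection loop. The field names, array shapes, and placeholder values are my assumptions, not the Stanford lab's actual format.

```python
# Hypothetical data layout for one teleoperated demonstration; field names
# and shapes are illustrative assumptions, not the lab's real pipeline.
from dataclasses import dataclass
import numpy as np


@dataclass
class Demonstration:
    task: str                  # e.g. "wipe_table"
    rgb_frames: np.ndarray     # (T, num_cameras, H, W, 3) images from the rig
    joint_actions: np.ndarray  # (T, num_joints) operator commands per timestep


def collect_dataset(task: str, num_demos: int = 50) -> list:
    # Roughly 50 demonstrations per task, as mentioned in the talk.
    demos = []
    for _ in range(num_demos):
        # In reality a human teleoperates the robot here; we use placeholders.
        T, cams, H, W, joints = 100, 2, 96, 96, 14
        demos.append(Demonstration(
            task=task,
            rgb_frames=np.zeros((T, cams, H, W, 3), dtype=np.uint8),
            joint_actions=np.zeros((T, joints), dtype=np.float32),
        ))
    return demos
```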

Once the training data is collected, researchers need to find a good algorithm for training the policy. The Stanford team found that a Transformer-based architecture (the same architecture Large Language Models are built on) works very well: the encoder takes in RGB images collected during teleoperation, and the decoder outputs a short sequence of actions.
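Here is a minimal PyTorch sketch of that encoder-decoder idea: camera images are embedded into tokens, a Transformer encoder processes them, and the decoder predicts a short chunk of future actions. The backbone, layer sizes, chunk length, and action dimension below are illustrative assumptions, not the exact architecture from the talk.

```python
# Sketch of an encoder-decoder Transformer policy for imitation learning.
# Backbone choice, dimensions, and chunk length are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision.models import resnet18


class ChunkedActionPolicy(nn.Module):
    def __init__(self, action_dim=14, chunk_len=20, d_model=256):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()             # 512-d feature per image
        self.backbone = backbone
        self.proj = nn.Linear(512, d_model)     # image feature -> token
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=8,
            num_encoder_layers=4, num_decoder_layers=4,
            batch_first=True,
        )
        # Learned queries, one per future action in the predicted chunk.
        self.action_queries = nn.Parameter(torch.randn(chunk_len, d_model))
        self.action_head = nn.Linear(d_model, action_dim)

    def forward(self, images):
        # images: (batch, num_cameras, 3, H, W) RGB observations
        b, n, c, h, w = images.shape
        feats = self.backbone(images.view(b * n, c, h, w))    # (b*n, 512)
        tokens = self.proj(feats).view(b, n, -1)               # (b, n, d_model)
        queries = self.action_queries.unsqueeze(0).expand(b, -1, -1)
        decoded = self.transformer(src=tokens, tgt=queries)    # (b, chunk, d_model)
        return self.action_head(decoded)                       # (b, chunk, action_dim)


policy = ChunkedActionPolicy()
obs = torch.zeros(1, 2, 3, 224, 224)   # one timestep from two cameras
actions = policy(obs)                   # predicted short sequence of actions
print(actions.shape)                    # torch.Size([1, 20, 14])
```

One motivation for predicting a short chunk of future actions at once, rather than a single step at a time, is to reduce how small prediction errors compound as the robot executes the task.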

To simplify data collection and make it more accessible to other researchers, Zipeng and his collaborators developed Mobile ALOHA, a mobile bimanual manipulation setup that enables data collection via teleoperation.

With Imitation Learning, researchers have taught robotic systems to autonomously perform tasks such as cooking, wiping spilled water off a table, calling an elevator, and giving humans high fives.

Here's a short demo showcasing the skills robots have acquired.
