
Exploring fundamental problems of LLM safety with a Cambridge University researcher

Hello, fellow human! If you remember, according to Buddhists, life is suffering. As AI becomes more advanced, more capable, and eventually agentic, human suffering may increase due to fundamental AI safety issues, potentially even leading to human extinction.

On that positive note (demonic laugh), I'd like to invite you to join our virtual talk this Thursday, August 1st, with a researcher from Cambridge University who co-led a paper with over 35 researchers from the NLP, AI safety, and AI ethics fields. The talk will explore the fundamental challenges of LLM safety and whether these issues can be addressed.

We’ve been thinking of starting programming live streams on our YouTube channel, focused on "How To" topics such as implementing an LLM from scratch, with practical step-by-step guidance. To make our live streams more useful for you, we’ve prepared this short survey where you can share what practical aspects of AI you'd like to learn. All your ideas and suggestions are very welcome.

Check out previous talks by our guests on the BuzzRobot YouTube channel.

Join our Slack community to connect with fellow developers, AI engineers, researchers, and practitioners.
