Founder of the AI Responsibility Lab, Ramsay Brown on AI Safety, AI Coworkers & ChatGPT doing DMT

ChatGPT & DALL-E today - misinformation, bias & worker displacement tomorrow. In a world where artificial intelligence is becoming increasingly advanced, it's more important than ever to ensure that AI systems are aligned with our values and behave in ways that are safe for humanity. But what exactly does it mean to align an AI system, and how do we go about ensuring its safety?

Here to answer these questions and more (including the most important question: will AIs take psychedelics?) is Ramsay Brown. Ramsay is the Founder of the AI Responsibility Lab (AIRL). AIRL builds venture-scale deep tech solutions for AI Safety and AI Resilience. AIRL's flagship product, Mission Control, integrates Responsible AI training, AI Risk Management, and AI Governance orchestration to drive fairness, explainability, and trust throughout the entire AI lifecycle.

Check out AIRL here:

Follow Ramsay on Twitter:

5:30 - What are the major concerns in AI safety?

10:30 - Examples of AI gone bad

16:00 - What is the hardest part of AI alignment?

21:00 - What is Ramsay’s unique angle on AI alignment?

24:00 - Is the world ready for conversations about responsible AI?

25:30 - Who is responsible for responsible AI?

32:00 - What regulations would Ramsay put into place?

41:00 - Eliezer Yudkowsky

47:00 - The era of synthetic labor

55:00 - The Turing Test is a red herring

1:04:00 - What jobs are safe from the AI takeover?

1:14:00 - Will machines do psychedelics?