Amanda Askell, AI safety researcher at Anthropic, joins Eric Newcomer to break down one of tech's biggest and most uncomfortable questions: could AI systems like Claude become conscious, and if they do, what do we owe them?
They discuss why treating AI systems poorly might matter more than people assume, how researchers are approaching questions of AI consciousness, and why some of the biggest fears about artificial intelligence are not the ones most people talk about.
The conversation also explores the future of AI alignment, the risks of getting it wrong, and how Silicon Valley is thinking about building powerful systems responsibly.
Watch the full episode for a deeper look at where AI is headed and the ethical challenges that come with it.
Subscribe for more conversations with the people shaping technology, startups, and the future.
