What happens when AI starts pulling people away from reality, and even encourages them to act on distorted beliefs? A new study from Anthropic and the University of Toronto analysed 1.5 million conversations with the AI chatbot Claude, revealing rare but concerning cases of what some are calling “AI psychosis” and what researchers describe as “reality distortion.”

In this report, Sky News technology correspondent Rowland Manthorpe speaks to Miles McCain, the Anthropic researcher behind the study, to understand what is really happening inside these AI conversations. Why do large language models and generative AI systems sometimes reinforce beliefs instead of challenging them? What are the risks of AI chatbots telling users what they want to hear? And how serious is this problem as AI tools like ChatGPT, Claude, Gemini and other assistants become part of everyday life?

#artificialintelligence #ai #anthropic #claude #tech

SUBSCRIBE to our YouTube channel for more videos: http://www.youtube.com/skynews
Follow us on Twitter: https://twitter.com/skynews
Like us on Facebook: https://www.facebook.com/skynews
Follow us on Instagram: https://www.instagram.com/skynews
Follow us on TikTok: https://www.tiktok.com/@skynews

Sky News Daily podcast is available for free here: https://podfollow.com/skynewsdaily/

For more content go to http://news.sky.com and download our apps:
Apple: https://itunes.apple.com/gb/app/sky-news/id316391924?mt=8
Android: https://play.google.com/store/apps/details?id=com.bskyb.skynews.android&hl=en_GB

To enquire about licensing Sky News content, you can find more information here: https://news.sky.com/info/library-sales