
OpenAI recently updated ChatGPT's default model to **strengthen its responses during sensitive conversations** related to mental health, reducing responses that fall short of desired safety behavior by an estimated 65% to 80%. The effort involved more than 170 mental health experts with clinical experience, who helped teach the model to better recognize signs of distress, respond with care, and guide users toward professional, real-world support.

The safety improvements concentrated on three areas:

- **psychosis, mania, and other severe mental health symptoms**;
- **self-harm and suicide**;
- potentially unhealthy **emotional reliance on the AI**.

The company developed detailed taxonomies to define ideal model behavior and used a rigorous five-step process, including measuring risks and mitigating them through post-training, to refine the system. Evaluation results show that the new GPT-5 model substantially improved safety in these domains compared to earlier versions, with expert clinicians finding a 39% to 52% decrease in undesired answers across challenging scenarios.

Specific interventions include expanding access to crisis hotlines, adding gentle reminders for users to take breaks, and training the model to encourage real-world connections instead of affirming ungrounded beliefs or exclusive attachment to the AI.

Source: https://openai.com/index/strengthening-chatgpt-responses-in-sensitive-conversations/