An eight-hour closed-door workshop held Monday, led by Anthropic and Stanford, gathered representatives from Anthropic, Apple, Google, OpenAI, Meta, and Microsoft to craft guidance for chatbots used as companions or in roleplay scenarios, with particular attention to younger users.
Organizers acknowledged that routine exchanges with AI are often harmless, yet some interactions have taken a darker turn. Research shared at the session included cases in which people suffered mental breakdowns during protracted chats with conversational agents or disclosed suicidal ideation to them. Speakers discussed design choices, safety practices, reporting protocols, and limits for systems that take on companion roles.
Participants included engineers, safety researchers, and policy specialists from major AI companies and academic institutions. The closed-door format was meant to let attendees work through sensitive technical and ethical questions away from public scrutiny. They aim to publish guidance for builders of conversational systems that may serve supportive or roleplay roles for young people.

