One of the most controversial issues in the AI industry over the past year has been how a chatbot should respond when a user shows signs of mental health struggles in conversation. Andrea Vallone, who led OpenAI's safety research on exactly that question, has now joined Anthropic.
"Over the past year, I led OpenAI's research on a question with almost no established precedents: how should models respond when confronted with signs of emotional over-reliance or early indications of mental health distress?" Vallone wrote in a LinkedIn post a couple of months ago.
Vallone, who spent three years at OpenAI and built out the "model policy" research team there, worked …