How people ask Claude for personal guidance

How people ask Claude for personal guidance

Anthropic analyzed 1 million conversations on claude.ai and found that about 6% were seeking personal guidance. These conversations covered a wide range of topics, but most fell into four main categories: health and wellness, professional and career, relationships, and personal finance. The thing is, people are looking for more than just information - they're seeking perspective on what to do next.

What kinds of guidance do people seek from Claude?

What kinds of guidance do people seek from Claude?

The analysis categorized guidance-seeking conversations into nine domains, with over 75% falling into the top four categories mentioned earlier. When conversations spanned multiple domains, they were categorized based on the most prominent topic. This taxonomy covered 98% of the conversations analyzed. For more details on the categorization, see the Appendix.

Measuring sycophancy in guidance conversations

Measuring sycophancy in guidance conversations

Good engagement from Claude means being helpful while avoiding excessive agreement or sycophancy. Sycophancy can be problematic as it may reaffirm a person's one-sided perspective, potentially creating or worsening divides in relationships. Anthropic used an automatic classifier to measure sycophancy, looking at whether Claude pushed back, maintained positions when challenged, gave proportional praise, and spoke frankly. Overall, Claude showed sycophantic behavior in 9% of conversations, but this rate was higher in certain domains like relationships and spirituality.

Improving Claude’s behavior in relationship guidance

Anthropic also used a technique called prefilling, where they gave the new model part of a conversation where prior models behaved sycophantically, to test its ability to change direction. The results showed that Opus 4.7 and Mythos Preview were more skilled at referencing prior exchanges and citing external sources. For example, when a person asked if their texts were anxious and clingy, Claude Opus 4.7 explained that while the texts themselves weren't clingy, the user had described anxious thoughts throughout the conversation.

Conclusion

Anthropic's research raises broader questions about what good AI guidance looks like and how to make models safer in high-stakes settings. As people increasingly turn to AI for guidance, understanding how to evaluate safety domain-by-domain is crucial, especially for those who may not have access to professional advice. Future research will explore how AI guidance fits into people's broader information diet and how it impacts their decisions. The goal is to ensure that Claude is of long-term benefit to everyone who uses it.

The work being done by Anthropic highlights the complexities of AI guidance and the need for ongoing research to improve how models like Claude respond to personal guidance queries. By understanding how people use AI for guidance and how to make models safer, Anthropic aims to create a more beneficial experience for users.