People of ACM - Ashish Sharma
July 8, 2025
How did you initially become interested in human-AI collaboration?
For me, the true potential of AI isn't just about training more powerful models, but also about understanding how these systems can genuinely help humans. Human-AI collaboration opens up incredible possibilities for leveraging the complementary strengths of both humans and AI to augment human functioning. One of my key motivations came from J.C.R. Licklider, who in the 1960s envisioned Man-Computer Symbiosis, predicting a future where “human brains and computing machines will be coupled together very tightly and that the resulting partnership will think as no human brain has ever thought.” This fascinating concept led me to question how we can develop AI systems specifically trained to collaborate efficiently and productively with humans on real-world tasks. I found this collaborative approach especially crucial in critical applications like healthcare. In such fields, human-AI collaboration could prove more effective and reliable than attempting to fully replace humans with AI.
Will you provide two examples of how someone with a mental health challenge would use your technology:
A. Someone with a mental health challenge who is working with a human peer?
B. Someone with a mental health challenge who is alone?
When a human peer is supporting a person with a mental health challenge, our technology acts as a powerful tool to help that peer communicate more effectively. Human peers are strongly motivated by their own experiences to help others, but they often lack the training of a professional therapist, particularly in skills like expressing empathy. We developed a reinforcement learning-based method that can understand, measure, and give real-time feedback on how empathy is expressed in online peer-to-peer mental health support platforms. For example, if someone shares that their job is increasingly stressful, a peer might respond with “Don't worry.” Our system would identify this as a potentially invalidating phrase and suggest an alternative, like “It must be a real struggle,” to better acknowledge the person's feelings. It might also prompt the peer to ask questions, such as “Have you tried talking to your boss?” to encourage a deeper exploration of the person's experience rather than offering a quick dismissal of their concerns.
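To make this concrete, here is a minimal sketch of that feedback loop in Python. The keyword-based scorer and the canned suggestion are hypothetical stand-ins for the trained empathy model and rewriting agent described above, not the actual system.

```python
# Illustrative sketch only: a placeholder scorer and a fixed suggestion
# stand in for the learned models described in the interview.
from dataclasses import dataclass
from typing import Optional

# Phrases that tend to dismiss rather than validate feelings (hypothetical list).
DISMISSIVE_PHRASES = ("don't worry", "it could be worse", "just relax")

@dataclass
class Feedback:
    empathy_score: float           # 0.0 (low) to 1.0 (high), placeholder scale
    flagged_phrase: Optional[str]  # phrase that triggered the feedback, if any
    suggestion: Optional[str]      # proposed more empathic rewrite, if any

def score_empathy(response: str) -> tuple:
    """Stand-in for a learned model that rates how empathy is expressed."""
    lowered = response.lower()
    for phrase in DISMISSIVE_PHRASES:
        if phrase in lowered:
            return 0.2, phrase
    return 0.8, None

def empathy_feedback(response: str) -> Feedback:
    """Return in-the-moment feedback on a peer's draft response."""
    score, flagged = score_empathy(response)
    if score < 0.5:
        return Feedback(score, flagged,
                        "It must be a real struggle. Have you tried talking to your boss?")
    return Feedback(score, None, None)

print(empathy_feedback("Don't worry."))
```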
When someone is alone and dealing with a mental health challenge, our technology enhances self-guided interventions through human-AI collaboration. While self-guided mental health interventions, such as tools to journal and reflect on negative thoughts, are promising, their effectiveness without the assistance of a professional therapist is limited. We focus on Cognitive Restructuring—an evidence-based self-guided intervention for overcoming negative thinking—and use AI to make it less cognitively challenging and less emotionally triggering. Here’s how it works:
A person first describes a negative thought, the situation that caused it, and their emotions. For instance, after a research project failed, a PhD student might think, “I’ll never complete my PhD.” Next, instead of asking them to identify complex “thinking traps” from a long list, our AI analyzes their thought and suggests the most likely ones (e.g., “catastrophizing”). Finally, AI helps them rewrite their negative thought. It generates several thought reframes, such as, “This project was a setback, but it is just one step in my PhD journey.” The person can then choose one, edit it to feel more personal, or ask the AI for more help to make the new thought more actionable or empathic.
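As a rough illustration of this flow, the sketch below wires the three steps together around a generic `generate` callable that could be backed by any language model; the prompts and the trap list are simplified assumptions, not the actual prompts or taxonomy used in the dissertation.

```python
# Illustrative sketch of the three-step flow: describe the thought,
# suggest likely thinking traps, then propose reframes the person can edit.
from typing import Callable, List

# A few common thinking traps (abbreviated, hypothetical subset).
THINKING_TRAPS = ["catastrophizing", "all-or-nothing thinking",
                  "mind reading", "overgeneralization"]

def suggest_thinking_traps(generate: Callable[[str], str],
                           situation: str, thought: str) -> str:
    prompt = (f"Situation: {situation}\nNegative thought: {thought}\n"
              f"Which of these thinking traps most likely apply? {THINKING_TRAPS}")
    return generate(prompt)

def suggest_reframes(generate: Callable[[str], str],
                     thought: str, traps: str, n: int = 3) -> List[str]:
    prompt = (f"Negative thought: {thought}\nLikely thinking traps: {traps}\n"
              f"Write {n} brief, believable reframes, one per line.")
    return [line.strip() for line in generate(prompt).splitlines() if line.strip()]

# The person then picks a reframe, edits it to feel more personal, or asks
# for variants that are more actionable or more empathic.
```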
In this work, you are credited with developing new machine learning models and algorithms which demonstrate human understanding. Without going into too much detail, what was your key technical insight in inventing these new algorithms?
The key technical insight behind these new algorithms was fundamentally about bridging the principles of human psychology with the mechanics of machine learning models.
Specifically, for making conversations more empathetic through machine learning models, we turned to established psychological theories of empathy. The core innovation was translating these theories into a quantitative reward signal that a Reinforcement Learning model could use. By creating a reward model that could differentiate between low- and high-empathy conversations based on these psychosocial frameworks, we could then train an AI agent to learn precisely which types of edits would make a conversation more empathetic.
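For illustration, a reward of this general shape could look like the following sketch: an edit is rewarded for how much it raises predicted empathy, with a penalty for drifting too far from the original response. The scoring functions here are placeholders, not the models used in the actual work.

```python
# Hypothetical sketch: reward an edit by the empathy it adds, minus a
# penalty for straying too far from the original response.
import difflib

def empathy_score(text: str) -> float:
    """Placeholder for a model trained on low- vs. high-empathy conversations."""
    validating = ("must be", "sounds really", "i hear you", "that is hard")
    return 1.0 if any(p in text.lower() for p in validating) else 0.0

def similarity(a: str, b: str) -> float:
    """Placeholder for a semantic-similarity measure (here, character overlap)."""
    return difflib.SequenceMatcher(None, a, b).ratio()

def edit_reward(original: str, edited: str, drift_penalty: float = 0.5) -> float:
    gain = empathy_score(edited) - empathy_score(original)
    drift = 1.0 - similarity(original, edited)
    return gain - drift_penalty * drift

print(edit_reward("Don't worry.", "It must be a real struggle."))
```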
Similarly, when helping users reframe negative thoughts, our insight was to define what makes a reframe helpful based on psychological principles and data from human experts. By training the model on expert examples, we were able to create methods that can control specific linguistic attributes of the generated text, ensuring that the output is relatable, helpful, and easy to remember.
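One common way to realize this kind of attribute control, shown here purely as an assumption-laden sketch rather than the exact scheme used in the work, is to condition generation on attribute "control codes" learned from expert-labeled examples.

```python
# Hypothetical sketch of attribute-controlled generation: prepend desired
# attribute codes to the model input and fine-tune on expert reframes
# labeled with those attributes. The attribute names and tag format are
# illustrative assumptions.
def build_controlled_input(thought: str, attributes: dict) -> str:
    control = " ".join(f"<{name}={level}>" for name, level in attributes.items())
    return f"{control} Negative thought: {thought} Reframe:"

print(build_controlled_input(
    "I'll never complete my PhD.",
    {"actionability": "high", "empathy": "high", "memorability": "high"},
))
```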
Speaking more broadly, how do you see this technology advancing in terms of developing even deeper human understanding and/or being applied in other areas?
A critical pathway I see for advancing this technology would be through developing methods that can deliver sustained, longer-term impact. Current human-AI collaboration systems excel as short-term, in-the-moment assistants. As shown in my dissertation work, they can assist individuals in short-term tasks such as rewriting messages to make them more empathetic or suggesting more helpful ways of thinking. However, for mental health and well-being, long-term outcomes are critical. While an intervention might initially regulate emotions and reduce the symptoms of a mental illness, it is the sustained impact over time that truly reveals its effectiveness. Developing a system that delivers long-term intervention would require a deeper understanding of a user's evolving skills, intentions, and goals over time, offering a personalized experience that prevents symptoms from resurfacing.
A major part of this deeper understanding would also involve creating more robust safety frameworks. Current AI safety benchmarks are too broad and don't account for the nuanced risks faced by individuals with mental health challenges. A phenomenon called “pathological helpfulness” can occur, where an LLM's response, while seemingly helpful, could inadvertently reinforce harmful behaviors—for example, giving weight-loss advice to someone with an eating disorder. Future technology must focus on fine-grained safety evaluations that consider individual vulnerabilities to prevent such harm.
Also, my thesis modeled key human-AI collaboration behavior for mental health and well-being. I envision this research being adapted to other domains, such as to create tools that provide nuanced feedback on writing and communication, facilitate personalized skill development for students, combat the spread of misinformation, and even unlock new levels of workplace productivity and creativity. Furthermore, the underlying principles are not limited to text-based interactions and can be extended to in-person contexts. For instance, our system is designed to help people express empathy more effectively, which in turn could be used to improve face-to-face communication. Similar human-AI collaboration approaches could support students with personalized feedback as they learn new skills.
Ashish Sharma is a Senior Applied Scientist at the Microsoft Office of Applied Research. He recently earned his PhD at the University of Washington. His work explores ways to model and understand the behaviors and skills of both humans and AI systems, combining techniques from natural language processing, reinforcement learning, data science, psychology, and mental health.
Sharma recently received the ACM Doctoral Dissertation Award for his dissertation "Human-AI Collaboration to Support Mental Health and Well-Being." In his dissertation, he made fundamental advances in natural language processing to positively impact the mental health of large groups of people. Sharma’s work has been used by several organizations, including Mental Health America.