When Data Meets Judgment: The Ethics of AI in Subjective Medicine
Introduction
Artificial intelligence has changed modern medicine by excelling in data-intensive areas where patterns are clear and outcomes can be measured. From imaging diagnostics to glucose forecasting, machine learning systems enhance, and in some cases exceed, human capacity for processing structured, quantifiable information.
However, this computational strength reveals a critical limitation when AI enters areas of care that are interpretive and relational. Fields such as psychiatry, which rely on empathy, contextual understanding, and moral judgment, cannot be reduced to data points alone.
This tension between calculation and care highlights the broader challenge of integrating AI into areas of healthcare that depend not on measurable precision but on human understanding.
Why AI Works Well in Data-Heavy Tasks but Struggles in Interpretive Care
AI performs extremely well in data-heavy tasks because these areas rely on large volumes of structured, quantifiable information that can be statistically modelled and optimized. When patterns are clear and outcomes can be verified, such as in imaging analysis or glucose forecasting, machine learning algorithms can efficiently detect correlations and generate accurate predictions far beyond human capacity.
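To make the contrast concrete, here is a minimal sketch of the kind of pattern-based prediction AI excels at: a toy linear model that forecasts the next glucose reading from the last few measurements. The readings, lag length, and model are illustrative assumptions, not drawn from any clinical system.

```python
import numpy as np

# Hypothetical glucose readings (mg/dL), sampled at regular intervals.
glucose = np.array([110, 112, 115, 119, 124, 128, 131, 133, 134, 133,
                    131, 128, 126, 125, 124, 124, 125, 127, 130, 134], dtype=float)

def make_lagged_dataset(series, n_lags=3):
    """Turn a 1-D series into (features, target) pairs of lagged values."""
    X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
    y = series[n_lags:]
    return X, y

X, y = make_lagged_dataset(glucose, n_lags=3)

# Ordinary least squares with an intercept term: the model simply learns
# how the last three readings correlate with the next one.
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Forecast the next reading from the three most recent values.
latest = np.append(glucose[-3:], 1.0)
print(f"Predicted next reading: {latest @ coef:.1f} mg/dL")
```

The point is not the model's sophistication but that every input and output is a number whose accuracy can be checked against the next measurement, exactly the conditions the following paragraphs argue interpretive care lacks.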
However, AI struggles in interpretive or relational care, where success depends on contextual understanding, empathy, and ethical judgment rather than stable data patterns. Interpretive care requires an understanding of patient emotions, social dynamics, and cultural meaning, none of which can be reduced to numbers. Unlike structured data tasks, there are no fixed “right” answers, and the nuances of communication, trust, and moral reasoning do not translate easily into computational terms.
So, while AI thrives in areas driven by patterns, it falters where human connection and interpretive understanding are essential.
The Limits of Algorithmic Understanding in Emotional and Moral Decision-Making
Unlike many medical specialties that rely primarily on measurable biological indicators (e.g., lab tests, imaging, biomarkers), psychiatry works with thoughts, emotions, behaviours, and relationships: phenomena that cannot be directly observed or objectively quantified. Diagnosis and treatment therefore depend heavily on interpretation, clinical judgment, and the patient’s own story.
Dr. Laura Sikstrom’s research on predictive care in emergency psychiatry highlights how algorithmic systems, while capable of processing large amounts of clinical data, remain limited in capturing the moral and emotional dimensions that shape human judgment. Sikstrom and colleagues note that machine learning models trained on electronic health records (EHRs) risk amplifying existing social and clinical biases, especially against marginalized groups such as racialized patients or those brought in by police for risk assessment (the process of evaluating how likely a patient is to harm themselves or others) in acute psychiatric settings. Because such models turn behavioural and emotional cues into “risk factors,” they reduce complex relational and ethical contexts to simple data points, erasing the interpretive nuance clinicians depend on when assessing distress or intent.
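To illustrate the kind of reduction described above, the sketch below shows a hypothetical risk-scoring function that collapses behavioural and emotional cues into weighted binary flags. The flags, weights, and triage note are invented for illustration; they are not taken from Sikstrom’s models or any deployed clinical tool.

```python
from dataclasses import dataclass

# Illustrative only: a toy risk score that flattens behavioural and emotional
# cues into binary flags. The features and weights are invented for
# demonstration, not taken from any real clinical model.
RISK_FLAGS = {
    "agitated": 2.0,
    "brought_in_by_police": 1.5,
    "prior_incident": 2.5,
    "refused_medication": 1.0,
}

@dataclass
class TriageNote:
    flags: set        # which binary "risk factors" were checked off
    narrative: str    # the clinician's free-text account of context and intent

def risk_score(note: TriageNote) -> float:
    """Sum the weights of whichever flags are present.

    Note what is ignored: everything in note.narrative, including the
    patient's own account, the circumstances of arrival, and the
    clinician's sense of intent.
    """
    return sum(RISK_FLAGS.get(flag, 0.0) for flag in note.flags)

note = TriageNote(
    flags={"agitated", "brought_in_by_police"},
    narrative="Distressed after an eviction; calmed quickly once a family member arrived.",
)
print(f"Risk score: {risk_score(note):.1f}")  # 3.5; the narrative never enters the calculation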
Using computational ethnography, Sikstrom exposes the gap between data-driven prediction and the lived realities of psychiatric care, demonstrating that moral understanding in psychiatry depends on empathy, context, and dialogue, forms of knowledge that algorithms cannot replicate.
While AI can identify correlations, it lacks the capacity to engage with the affective, uncertain, and ethically charged work that defines human care.
The Unequal Impact of AI in Healthcare
The ethical challenges of AI use in psychiatry are not distributed evenly. Those most impacted by algorithmic decision making tend to be individuals whose experiences fall outside the linguistic, cultural, and demographic norms represented in the training data.
Much of the AI used in mental-health contexts relies on natural language processing (NLP), which attempts to infer emotional states or mental conditions from large datasets of expressive language collected from social media, Wikipedia, and other digital platforms. However, language is never neutral; it is shaped by personality, gender, culture, and social context. The same symptom may therefore be expressed very differently around the world.
Every culture has its own “idioms of distress,” familiar ways of referring to pain or similar symptoms within a community. In many communities, mental-health symptoms are communicated through bodily sensations such as heaviness, buzzing, or fatigue. These expressions can fall outside the vocabulary of Western mental-health care and, as a result, be missed by AI systems supporting it.
As AI models often rely on English-language data, they struggle to recognize culturally specific ways of communicating mental distress. This means that people whose symptoms do not align with the model’s training data are at greater risk of being misdiagnosed or not diagnosed at all.
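A deliberately crude sketch can show how this failure mode arises. The keyword screen below, with an invented vocabulary and invented example statements, recognizes distress only when it is phrased in a narrow set of English clinical terms, so somatic idioms of distress pass through unflagged. Real NLP models are far more sophisticated, but the underlying risk, a vocabulary bounded by the training data, is the same.

```python
# A deliberately simple keyword screen of the kind the article warns about:
# it only recognizes distress phrased in a narrow, English clinical vocabulary.
# The keyword list and example sentences are invented for illustration.
DISTRESS_KEYWORDS = {"depressed", "anxious", "hopeless", "panic", "suicidal"}

def flags_distress(text: str) -> bool:
    """Return True if any known distress keyword appears in the text."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & DISTRESS_KEYWORDS)

statements = [
    "I've been feeling anxious and hopeless for weeks.",   # matches the model's vocabulary
    "There is a heaviness pressing on my chest all day.",  # somatic idiom of distress
    "My head keeps buzzing and I am tired all the time.",  # somatic idiom of distress
]

for s in statements:
    print(f"{flags_distress(s)!s:>5}  {s}")
# Only the first statement is flagged; the somatic idioms pass unnoticed.
```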
Who Takes Responsibility When AI Influences Life-Altering Decisions?
In psychiatric practice, decisions can change a person’s safety, wellness, and life. Although AI systems are not currently making final decisions about mental-health care, the possibility of this happening in the future is real and growing, especially as AI-driven chatbots and support tools are increasingly introduced as quasi–mental-health interventions. This is why it is crucial to address these issues now, before AI becomes further integrated into mental-health systems without proper oversight and begins influencing decisions with serious clinical and ethical consequences.
As AI use increases, the question of who is responsible for its diagnoses, suggestions, and classifications becomes prominent. Responsibility appears diffused among the clinician, the institution deploying the tool, and the developer. Most AI regulations today focus on data and privacy, but few rules address how AI affects human relationships or emotions, which are at the core of mental-health care.
From an ethics of care perspective, developers of an AI tool must take on a continuing responsibility: monitoring its outputs, adapting the tool when it becomes harmful or biased, and involving patients and clinicians in the design process. When AI is used in interpretive care, it can respond to and direct emotions, which invites manipulation if not regulated adequately.
Recent recommendations on the use of emotional (or interpretive) AI suggest the following policies: respect human dignity, do not exploit users’ trust, do not manipulate their emotions, do not assume that past emotional expressions reliably predict future mental states, and recognize that emotions and their expressions are culturally diverse.
Ultimately, these guidelines make clear that AI in interpretive care demands careful, intentional implementation if it is to improve mental-health care rather than compromise it.
Conclusion
The use of AI in medicine offers powerful possibilities, but its limitations become clear when it is deployed in interpretive care, which depends on emotional cues, cultural associations, and human relationships. If developed with accountability, cultural awareness, and oversight, AI can enhance mental-health care. Without these safeguards, it risks reproducing the inequities it aims to solve. With no formal regulations yet proposed by the Canadian government, a clear question remains: how can we regulate AI in health care while protecting the vulnerabilities at the core of interpretive care?
Professional Opinions:
“Artificial intelligence (AI) is poised to radically change healthcare, but comes with its challenges, which can translate into harm if not carefully developed and deployed. We’ve already seen tremendous benefits of the application of AI in the healthcare setting, from AI scribes that save clinicians time as well as patient monitoring systems that have been shown to reduce unexpected mortality. However, we’ve also seen harm, including amplification of racial bias through widespread use of biased AI solutions and human suicides that have been linked to AI chatbots. A thoughtful, responsible approach to developing and deploying AI solutions is encouraged so we can capitalize on its tremendous potential to help society rather than harm it.”
Muhammad Mamdani, PharmD, MA, MPH (he/him)
Director – University of Toronto Temerty Centre for Artificial Intelligence Research and Education in Medicine (T-CAIREM)
Clinical Lead (AI) – Ontario Health
Vice President – Data Science and Advanced Analytics, Unity Health Toronto
Faculty Affiliate – Vector Institute
Professor – University of Toronto
“This article offers an outstanding and timely synthesis of the limits of today’s AI systems in mental health care, a topic deeply resonant with my own work at the forefront of AI in medicine. The authors clearly articulate why current models fall short in relational, interpretive, and emotionally complex clinical contexts, precisely the areas where trust and nuance matter most. Their analysis underscores the urgent need for deeper human-AI interface research so that future adaptive AI models can flexibly align with diverse clinical realities in a way that feels safe, trustworthy, and genuinely helpful to clinicians.
I particularly applaud the authors for illuminating the inequities and ethical tensions surrounding AI in psychiatry, and for summarizing the state of the field with clarity and rigour. Most importantly, this piece serves as an excellent springboard for the next question we must all confront: now that we understand the limitations so clearly, how do we meaningfully address them so mental health care can finally realize the value AI promises?”
Noah Crampton, MD, MSc, CCFP
CEO Mutuo Health Solutions Inc, a Wellstar Company
“I think that AI has a lot of potential to benefit patients, health care providers, and health systems, matched by a lot of potential for harm. One of the biggest issues to watch out for is whether we are trying to solve complex social challenges in simplistic ways with technological tools – an approach sometimes called “technological solutionism”. We shouldn’t be blinded to the hard work of fixing systems of health care just because a new class of technologies is proving to do some amazing things. We also need to pay attention to the power we are offering to big technology companies as they become increasingly embedded in health care. I think that if we work on co-designing AI tools with community members and emphasize health systems that work for everyone, then we will be able to avoid some big mistakes.”
Jay Shaw, PT, PhD (He/Him)
Canada Research Chair in Responsible Health Innovation
Associate Professor, Department of Physical Therapy, University of Toronto
Associate Professor (Cross-Appointed), Institute of Health Policy, Management & Evaluation, University of Toronto
Research Director, Artificial Intelligence, Ethics & Health, Joint Centre for Bioethics, University of Toronto
Scientist, Women’s College Hospital
References
Alegre, Susie. 2025. “Need for Regulation Is Urgent as AI Chatbots Are Being Rolled Out to Support Mental Health.” Centre for International Governance Innovation, July 18, 2025. https://www.cigionline.org/articles/need-for-regulation-is-urgent-as-ai-chatbots-are-being-rolled-out-to-support-mental-health/
Arai, Maggie. 2025. “Therapy Bots: Regulating the Future of AI-Enabled Mental Health Support.” Schwartz Reisman Institute, November 6, 2025. https://srinstitute.utoronto.ca/news/2025-therapy-bots-brief
Desai, Geetha, and Santosh K. Chaturvedi. 2017. “Idioms of Distress.” Journal of Neurosciences in Rural Practice 8 (S01): S094–97. https://doi.org/10.4103/jnrp.jnrp_235_17
García-Gutiérrez, M. S. 2020. “Biomarkers in Psychiatry: Concept, Definition, Types and Relevance to Clinical Reality.” Biomarkers and Neuropsychiatry 11 (432): 1–14.
McStay, Andrew, and Pamela Pavliscak. 2019. “Emotional Artificial Intelligence: Guidelines for Ethical Use.” EmotionalAI.org.
Montemayor, Carlos. 2022. “In Principle Obstacles for Empathic AI: Why We Can’t Replace Human Empathy in Healthcare.” AI & Society 37: 1353–1359.
Paul, Debleena. 2021. “Artificial Intelligence in Drug Discovery and Development.” Drug Discovery Today 26 (1): 80–93.
Sikstrom, Laura. 2023. “Predictive Care: A Protocol for a Computational Ethnographic Approach to Building Fair Models of Inpatient Violence in Emergency Psychiatry.” Open Access, 1–10.
Straw, Isabel, and Chris Callison-Burch. 2020. “Artificial Intelligence in Mental Health and the Biases of Language-Based Models.” PLoS ONE 15 (12): e0240376. https://doi.org/10.1371/journal.pone.0240376
Tavory, Tamar. 2024. “Regulating AI in Mental Health: Ethics of Care Perspective.” JMIR Mental Health 11: e58493. https://doi.org/10.2196/58493
A Note From Us
Thank you for reading this edition of our newsletter, Synapse. We’re grateful for your continued support and excitement as we grow our community.
1. Syn·apse: a junction between two nerve cells, consisting of a minute gap across which impulses pass by diffusion of a neurotransmitter.
In other words, where connection happens: between neurons, between ideas & subjects, and between people. Through BUSA, we connect peers, philosophy, and science in ways that spark thought, dialogue, and discovery.
Until next month,
– BUSA
BUSA’s Newsletter Team
Lauren Alberstat – Layout & Design
Sophia Fernandes – Research Coordinator
Lucas Makhlouf – Research Coordinator
Ugo Izuka – Information Analyst & Editor






