BACK TO THE FUTURE
Ethical AI in mental healthcare: why we need a care-based approach

Predictions made by algorithms and artificial intelligence are likely to be biased against the individuals who were overlooked when statistically useful patterns were established. This calls for care ethics.
My whirling mix of encounters with artificial intelligence (AI) these past few months felt like the lengthy centre of G Willow Wilson’s novel The Bird King: escape, pursuit, capture, escape, (more) pursuit…
The journalist who believed the news they didn’t write. The encounter with the clock that gifts a new poem every minute of the day. The woman who envisioned – and conjured – utopian cannabis-themed scenes in the style of Vivian Maier.
Every now and then, I find myself in a theoretical chokehold, loosened only by the odd flurry of doubt seeping through the pronouncements of contemporary heralds.
You see, The Bird King is a meta-textual Bildungsroman for the ages, a historical fantasy about alternative communities, discrimination and the price of freedom.
It is also the only metaphor I could muster for my gripping passage through several streams of consciousness, slipping in and out of the fascinating and wholly bewildering wilderness that is AI.
Statistical discrimination
It’s safe to say that AI has “emerged” and has been silently structuring our lives ever since. In many spheres of public life, human autonomy has already been replaced by algorithmic decision-making. Whether in part or in full, machines are now deployed to draw on data from previous patterns and to make predictions (think near-prehistoric predictive analytics) that could alter your life. Yes, AI can be the ultimate arbiter, deciding who gets the job, the home loan and even parole.
In these scenarios, it is argued, the outcome of any one particular “application” depends on whether or not the applicant “fits” a multitude of previously established patterns created from existing data.
This is where scholars in the field of ethics are starting to object – and with good reason. Scholars such as Carolina Villegas-Galaviz, Solon Barocas and Andrew Selbst argue that data mining can constitute a form of “statistical discrimination”, whereby the use of AI reproduces past prejudices when it identifies those previously established patterns.
“When new data is analysed according to a specific model, it may be ignoring vulnerabilities and specific circumstances that could be essential to decide morally,” says Villegas-Galaviz.
This means that using AI to determine who ought to benefit from funding allocations, treatment provision or development programmes may adversely affect some of the world’s most vulnerable people.
Researchers are already pointing out the “disparate impact” that AI decisions create when people “do not fit into the pattern” and some, like Villegas-Galaviz, are presenting care ethics as a moral grounding for the AI era.
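To make “disparate impact” concrete: it is often quantified as the ratio of favourable-outcome rates between two groups, and the “four-fifths rule” used in US employment-selection guidelines treats a ratio below 0.8 as a red flag. The following minimal Python sketch – with entirely hypothetical numbers, not drawn from any study cited here – shows how a model that merely echoes historically biased approval data would fail that test.

```python
# Minimal sketch of measuring "disparate impact" in algorithmic decisions.
# All decision data below is hypothetical, for illustration only.

def selection_rate(decisions):
    """Fraction of applicants who received a favourable outcome (1 = yes)."""
    return sum(decisions) / len(decisions)

# Hypothetical loan decisions (1 = approved, 0 = denied) that a model
# trained on historically biased data might simply reproduce.
group_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]  # historically favoured group
group_b = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]  # historically disadvantaged group

rate_a = selection_rate(group_a)  # 0.8
rate_b = selection_rate(group_b)  # 0.3

# Disparate impact ratio: the disadvantaged group's approval rate
# divided by the favoured group's approval rate.
ratio = rate_b / rate_a

print(f"Group A approval rate: {rate_a:.0%}")   # 80%
print(f"Group B approval rate: {rate_b:.0%}")   # 30%
print(f"Disparate impact ratio: {ratio:.2f}")   # 0.38

if ratio < 0.8:  # the four-fifths threshold
    print("Flag: possible disparate impact - the model may be reproducing past bias")
```

A check like this can only detect the statistical pattern after the fact; the care-ethics argument sketched below is that developers should also ask why the pattern exists and who was missing from the data in the first place.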
The ethics of care
To many people, ethics can be an obscure subject. But what makes the ethics of care – or care ethics, as some scholars refer to it – compelling in the context of a subject like AI is that it is premised entirely on the inherently relational and responsive aspects of being human.
Psychologist Carol Gilligan, the earliest documented proponent of care ethics, proposed it as a moral theory in which aspects related directly to the human condition, such as “personal relationships”, “vulnerability” and “responsibility”, are identified as cornerstone tenets.
Gilligan argued that the general idea of care is to “understand responsibility and morality in the context of relationships”. In other words, even moral dilemmas must take account of the fact that we are vulnerable and (in many ways, inherently) dependent on one another.
A theory that puts vulnerability and relationships in the foreground would “better identify wrongs of AI decision-making” and thereby the moral implications of algorithms, posits Villegas-Galaviz.
A natural phenomenon
Care as an ethic is actually quite a natural thing: the origin of ethical action can be found in the natural caring sentiment, as well as in our earliest memory of being cared for.
At the outset, we are dependent on one another to develop our most basic abilities, and throughout our lives most of us receive care in many forms – so much so that, logically, we become obliged to care about other people in one form or another.
More than a decade ago, the World Health Organisation identified people living with mental illnesses as “vulnerable”, owing in large part to the fact that barriers to accessing and affording mental healthcare services fall especially hard on this section of the world’s population. It has also been pointed out that such groups share common challenges, including stigma and discrimination.
It is inevitable that AI will be deployed to find ways to close the gaps for people living with significant medical and social support needs. If AI is going to assist future healthcare practitioners with their research – or, as one psychiatrist put it, with “sorting through data to find new patterns that may help us better understand how mental illnesses develop” – it is essential that statistical discrimination is eradicated at the outset.
To this end, researchers argue for a care-based grounding that can account for “all kinds of patterns”, including recognising those considered vulnerable. In healthcare settings, AI can be developed and deployed on a care ethics basis from the outset, so that it also attends to “the voices being silenced”, to the “interdependent relationships” affected, and to whether the data used could imply “exploiting the vulnerabilities” of those affected by any one algorithm.
If we are to develop tools that will significantly enhance the full spectrum of human life, we have to look at the patterns we know and consider that which we don’t know.
We have to ask the kinds of questions that can only be answered by those who hardly speak. We have to base our every action on the notion of care. A great number of care ethics scholars posit that the hidden paths folding the distance between here and our next destination can only be found through this human principle. DM168
Florence de Vries is a communications specialist and journalist whose primary research interests are in the fields of mental health and the ethics of care.
This story first appeared in our weekly Daily Maverick 168 newspaper, which is available countrywide for R25.
