Access to health information has increased rapidly, but we may be more misinformed than ever. Recently, I met my favourite e-patient, someone so health-literate and involved in her own care that even an experienced clinician like me can feel outpaced. But she is the exception.
Many patients are gaining confidence, but often misunderstand complex medical information.
Millions of people turn to search engines like Google and AI platforms like ChatGPT with health questions ranging from simple to complex. This has created “instant health experts”: people with access to information but not necessarily the tools to interpret it safely. AI tools can assist clinicians by analysing symptoms, summarising medical literature, and enabling earlier detection of rare diseases. They can also contribute to personalised treatment planning and accelerate drug discovery, highlighting the value of AI in healthcare.
The myth of ‘smart’ information
An e-patient is not just someone who Googles symptoms; they are an electronically empowered health consumer who actively takes part in their own medical care, often using the internet, digital tools, and health data to stay informed and engaged. This distinction matters because the “e” stands for: Equipped, Enabled, Empowered, Engaged, Electronic and an Equal partner in care.
Empowered patients actively seek credible information and collaborate with clinicians, while uninformed consumers rely on unverified, algorithm-driven outputs for self-diagnosis and treatment. However, access to information does not equal understanding. The digital world prioritises visibility over accuracy, and the top-ranking results are not always correct.
These convincing tools boost confidence more than competence. With millions turning to AI for health advice, this widespread behaviour has important public health consequences, especially within a system characterised by low health literacy, high demand, and many health needs that are met late or not at all. AI systems can sometimes “hallucinate”, producing responses that seem confident and authoritative but may be entirely fabricated, outdated or incorrect.
Evidence of harm: what global cases are showing
Recent global reports expose the real-world effects of misplaced trust in digital health tools. A Guardian article highlighted cases in which AI systems failed to recognise medical emergencies, leading to delayed care or inappropriate advice. NBC News reported on a family claiming that interactions with OpenAI’s ChatGPT contributed to a teenager’s suicide.
These are not isolated incidents. Users should remember that AI systems lack clinical accountability; they cannot perform physical examinations or connect the dots by interpreting social cues or body language. When these tools are treated as replacements for clinical care, without professional human oversight, the risks increase and can be fatal.
Privacy, oversharing and the ‘online disinhibition’ trap
Beyond clinical risk, there’s a quieter, more insidious threat: psychological and informational vulnerability. The online disinhibition effect explains why people share more personal information online than in person. Anonymity, invisibility and perceived safety create a false sense of intimacy.
In healthcare, this risk becomes especially significant when individuals share sensitive medical and psychological information, believing it is confidential within a false professional-patient relationship. Platforms collect, store and may use this data in ways that could be exploitative, and privacy is not guaranteed. Essentially, people might engage with some of these platforms as if they were healthcare professionals, but they could actually be vulnerable to data exploitation and missing out on proper therapeutic care.
The South African context
In South Africa, these risks are heightened by structural challenges such as youth vulnerability, an overstretched public health system, unequal access to mental health services, and limited digital literacy. South Africans spend a daily average of 9 hours and 24 minutes online, placing the country at the top of global internet-use rankings. It’s among the top 10 users of ChatGPT worldwide, and these factors expose millions to both the benefits and the unchecked dangers.
This creates a high-risk environment where digital tools may fill a gap, but not safely.
From access to accountability
We’re at a crucial point. The question is no longer whether people will adopt digital health tools, but whether we’ll ensure their use is safer.
To achieve this, we need coordinated action:
- Integrate AI literacy into existing health education programmes.
- Strengthen regulation and accountability for AI use in healthcare.
- Embed mental health and privacy protections into AI design.
- Equip health professionals to engage patients as informed partners, teaching practical online safety skills such as fact-checking, recognising red flags, and knowing when to seek clinical care.
Information is abundant, but our safety depends on the depth of our questions and the safeguards we put in place to protect the vulnerable. As we move into a digital future, we must remember that technology is not neutral, but serves its creators and owners, and that we must use it critically and cautiously.
Access doesn’t equal understanding, and without the skills to interpret information, it can mislead rather than empower. In healthcare, that gap can be dangerous. DM
