
Narancic ’27: We all asked ChatGPT for health advice — here’s why that’s risky


Every day, we ask AI what to eat for dinner, how to structure an essay and even how to navigate breakups. It is no surprise, then, that more and more people are turning to ChatGPT for something far more serious: medical advice. In fact, a 2024 KFF survey found that about one in six U.S. adults now use AI chatbots at least once a month to find health information, and the user base of these platforms has only grown since then. While it may be tempting to turn to AI for medical advice, we must remember that our questions are going to algorithms, not trained professionals, and with that comes significant risk.

College students are particularly vulnerable to this trend. Moving away from home for the first time means leaving behind the family doctors and pediatricians we grew up with. Suddenly, students are expected to manage their health independently. In this new environment, waiting times at campus clinics may be long, health care costs may feel overwhelming and personal issues are often hard to share face-to-face. It is only logical, then, that students increasingly rely on AI to interpret symptoms, suggest medications and answer questions that would normally require a doctor. Yet there is an uncomfortable reality lurking beneath that convenience.

Studies suggest that AI medical triage advice, which assesses how urgent a medical issue is and what the next step should be, can be unsafe in as many as 40% of cases. Even one wrong answer can mean the difference between timely treatment and a life-threatening delay. The same issues apply to diagnosis, with generative AI models achieving significantly lower diagnostic accuracy than expert physicians. The danger lies not only in AI's limited accuracy but in the confidence with which it delivers its diagnoses. Patients may accept the top recommendation as fact because it is conveyed with authority, and if that recommendation is wrong, the repercussions could be dire.

Using chatbots for mental health care is equally concerning. As Generation Z continues to struggle with alienation and a lack of social connection, it is unsurprising that students are turning to digital solutions like AI to address their personal anxieties instead of seeking out face-to-face resources. A recent Stanford study found that AI therapists can show stigma toward specific mental health conditions and are ill-equipped to respond to serious problems like suicide. The risks of receiving poor advice are enormous. This was seen in late August, when a California teenager took his own life after ChatGPT validated his suicidal thoughts.


AI tools are clearly not yet on par with human medical experts and carry immense risk. If students treat AI answers as adequate substitutes for professional judgment, we risk delayed diagnoses, inappropriate medication use, worsened mental health and false reassurance. And when something goes wrong, AI cannot be held accountable. If we entrust our minds and bodies to AI in our most fragile moments, who takes responsibility when the consequences are life or death? The answer is no one. AI can produce plausible but incorrect answers with almost no disclaimers, and the only one who will bear the consequences of misguided advice is the user.

It is impossible to ignore that this issue disproportionately affects minority communities who have been historically mistreated by the medical industry. The gap in care that pushes us, especially students, toward AI is proof that we need health care infrastructure that is more accessible, affordable and reliable, one that serves people rather than the financial interests of companies. AI is not the solution to the problem but an indicator of it.

Of course, the story is not all bleak. AI can and certainly will have a role in health care, but that role must be limited. AI can provide general education, create symptom checklists and generate reminders to seek professional care. It can be integrated into health care systems to increase accessibility and support. In due time, as large language models grow more advanced, we may see greater accuracy in other uses as well. For now, AI can complement professional care, but it cannot replace it.

So what should we do? Students must approach AI health advice with skepticism, understanding both its usefulness and its limits. Universities should openly address these risks, perhaps even incorporating digital health literacy into orientation programs. And we should demand that companies, not users, bear responsibility for closing the gaps in access that push us toward risky shortcuts in the first place.

Convenience is appealing, but it should never come at the expense of our safety. AI is not a substitute for licensed medical care, nor should it be. So the next time you ask ChatGPT for a medical answer, remember: Your health is too important to leave to an algorithm.

Goran Narancic ’27 can be reached at goran_narancic@brown.edu. Please send responses to this column to letters@browndailyherald.com and other columns to opinions@browndailyherald.com.


