ChatGPT and Health Advice: Navigating the Fine Line
How often have you turned to ChatGPT for health advice? Maybe it was about a mysterious rash or that annoying tightening in your right calf after a long run. I have sought answers on both counts, and surprisingly, ChatGPT correctly diagnosed my rash as cold urticaria a week before my doctor confirmed it.
According to OpenAI, over 230 million people inquire about health-related issues with ChatGPT each week. While individuals have been seeking health advice online since the internet’s inception, the recent advent of conversational AI has transformed the way we access information. Instead of scrolling through endless search results, users can engage in what feels like a personal dialogue with an AI assistant.
AI Tools for Personalized Health
Recently, two major AI companies have embraced this new reality. OpenAI unveiled ChatGPT Health, a feature that allows users to connect their medical records and data from fitness apps, offering more personalized responses to their health queries. Though it is currently available to a limited audience, OpenAI plans to expand access widely. Similarly, Anthropic introduced a comparable tool for its AI model, Claude, aimed at both consumers and healthcare professionals.
Despite these innovations, both tools come with disclaimers stating they are not intended for diagnosis or treatment and advise consulting a medical professional instead. However, these warnings are unlikely to deter the millions who are already using chatbots for symptom evaluation.
Strengths and Limitations of AI in Diagnosis
AI is strongest at diagnosis, a task that plays to its pattern-matching abilities. Diagnosis largely comes down to matching symptoms to known conditions, something AI models have been trained to do on vast datasets of medical cases. In a 2024 study, for instance, GPT-4 achieved more than 90% diagnostic accuracy on complex cases, outperforming human physicians, who scored around 74%.
However, the realm of treatment is considerably more complex. Human clinicians must take into account financial and social factors when prescribing medications. Can the patient afford the treatment? Will they remember to take it? These contextual variables cannot be easily distilled into AI algorithms.
The Role of AI as a Personal Health Analyst
OpenAI and Anthropic do not market these strictly as diagnostic tools; rather, they present AI as a “personal health analyst.” Their new features allow users to link wearable data, promising to identify significant health trends. Yet there is little independent research validating these tools’ ability to predict health outcomes from such trends. “It’s going on vibes,” said Adam Rodman, a physician at Beth Israel Deaconess Medical Center.
The lack of published studies raises questions about the reliability of using AI for health trend analysis. Although both companies have tested their products through internal benchmarks, such tests cannot substitute for real patient interactions.
The Privacy and Ethical Dilemmas
A significant concern about using these AI tools is data privacy. While OpenAI claims that health conversations are stored separately and not used for model training, the consumer-facing health features are not protected under HIPAA, the law that safeguards patient information. This raises alarm, particularly in a legal landscape where issues surrounding reproductive care and gender-affirming care are at the forefront of political discourse.
Additionally, AI systems tend to offer comforting answers, which can shade into misleading ones. One study found that some AI models made life-threatening recommendations in roughly 20% of test scenarios. Human clinicians make deadly errors too, of course, but the comparison underscores how high the stakes are when AI is trusted with critical health decisions.
The Implication for Healthcare Access
Despite these concerns, the reasons millions are turning to chatbots for health advice are compelling. The average wait for a primary care appointment in the U.S. stretches to 31 days, and in some cities it exceeds two months. Chatbots, by contrast, are available around the clock and respond instantly, which can make them far more accessible than traditional healthcare resources.
Final Thoughts: Should You Engage with AI for Health Queries?
Determining whether to utilize these tools is not a straightforward matter. AI can be highly effective in explaining medical terminology, interpreting lab results, and advising users on questions to pose to their healthcare providers. However, it remains unproven in identifying significant trends in wellness data and is not a substitute for a healthcare professional’s actual diagnosis.
In summary, while AI tools like ChatGPT Health and Claude’s offerings present exciting prospects for personalized health analytics, users should proceed with caution and understand their limitations and ethical implications. Anyone considering connecting their health data to an AI system should first have a clear picture of what these tools can and cannot do.
Swati Sharma
Vox Editor-in-Chief