ChatGPT Health wants your sensitive medical records so it can play doctor
Summary
OpenAI has launched ChatGPT Health, an invitation-only feature (US-only integrations for now) that lets users upload medical records and Apple Health data so ChatGPT can provide personalised, health-related responses. OpenAI positions the product as a support tool — designed with physicians and explicitly “not intended for diagnosis or treatment” — to help users understand lab results, prepare questions for clinicians and receive wellness advice.
The company says Health conversations are encrypted and isolated, that they are not used to train foundation models by default, and that third-party apps can only access health data when users explicitly connect them. However, OpenAI retains the encryption keys, meaning data could be disclosed if required by law, and the feature is launching amid ongoing lawsuits and wider concerns about AI safety and medical mistakes.
Key Points
- ChatGPT Health allows users to upload files, photos and Apple Health data to generate personalised health guidance and summaries (US only; EEA, Switzerland and the UK currently excluded).
- OpenAI claims Health conversations are encrypted, isolated, and not used to train its core models by default.
- OpenAI holds the encryption keys, so data could be accessed or surrendered in response to legal proceedings or government requests.
- The feature is invitation-only and follows OpenAI research pitching AI as an ally for strained healthcare systems, even as legal challenges over harms linked to ChatGPT continue.
- Academics warn about recurring ethical problems — bias, privacy, transparency and safety — and there are documented cases where ChatGPT gave misleading medical advice that delayed critical care.
- OpenAI says Health is for support, not diagnosis or treatment, but the product’s design increases the risk that patients will over-rely on agreeable, confident-sounding AI answers.
Context and relevance
This launch sits at the intersection of increasing public use of large language models for health questions and heightened regulatory, legal and privacy scrutiny. Organisations and individuals should weigh convenience and potential utility against risks around data sovereignty, legal exposure, and the real-world harms from incorrect AI guidance. The rollout highlights broader trends: productising RAG-style personalisation, selective data-use promises from vendors, and friction between encryption claims and operational key control.
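The key-control point is the crux of that friction: "encrypted" does not mean "inaccessible to the provider" unless the user holds the keys. Purely as an illustration (not a description of OpenAI's actual design, and with hypothetical names throughout), the sketch below contrasts provider-held keys with client-held keys using Python's `cryptography` library.

```python
# Illustrative sketch only -- hypothetical names, not OpenAI's actual architecture.
from cryptography.fernet import Fernet


def provider_held_encryption(record: bytes) -> None:
    """Provider generates and keeps the key: data is encrypted at rest,
    but the provider can decrypt it at any time (e.g. to satisfy a lawful
    request), because it controls both the ciphertext and the key."""
    provider_key = Fernet.generate_key()           # lives on provider infrastructure
    ciphertext = Fernet(provider_key).encrypt(record)
    # Later, without any user involvement:
    plaintext = Fernet(provider_key).decrypt(ciphertext)
    assert plaintext == record


def client_held_encryption(record: bytes) -> bytes:
    """User generates and keeps the key: the provider stores only
    ciphertext and cannot decrypt it, so it has nothing readable to
    hand over even if compelled."""
    user_key = Fernet.generate_key()               # never leaves the user's device
    ciphertext = Fernet(user_key).encrypt(record)
    return ciphertext                              # all the provider ever sees


if __name__ == "__main__":
    provider_held_encryption(b"HbA1c: 6.1%")
    opaque_blob = client_held_encryption(b"HbA1c: 6.1%")
    print(len(opaque_blob), "bytes of ciphertext, unreadable without the user's key")
```

In both cases the data is "encrypted"; the difference that matters for legal exposure is who can be compelled to decrypt it.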
Author style
Punchy: this is a big deal. OpenAI is offering a product that asks people to hand over highly sensitive health data in exchange for convenience — and then promises not to train on it while still holding the keys. If you work in healthcare, privacy, compliance or run consumer-facing services, read this closely; the implications for consent, litigation and regulatory compliance are immediate.
Why should I read this?
Quick take: if you care about who can see or be compelled to hand over your medical info, this matters. ChatGPT Health could save time — or make bad calls seem convincing. Read it to know what OpenAI is promising, where the legal and privacy holes are, and why patients and providers should stay cautious.
