ChatGPT is playing doctor for a lot of US residents, and OpenAI smells money

Summary

OpenAI’s own study finds heavy use of ChatGPT for health queries: roughly 40 million healthcare-related questions per day worldwide, making up more than 5% of all messages. In the US, around 60% of adults have used AI for health or healthcare in the past three months, often to understand symptoms, navigate insurance or get advice outside clinic hours.

The Register highlights that many Americans turn to ChatGPT because of gaps in the US healthcare system — high costs, shrinking satisfaction with coverage, insurance subsidy changes and long travel times to hospitals in some areas. OpenAI insists it is improving model safety (noting GPT-5 gains) and runs clinician-led testing, but independent investigations continue to flag inaccurate and potentially harmful medical answers from AI.

Crucially, OpenAI is proposing a policy agenda to expand its role in healthcare: open up and securely link publicly funded medical data, fund new lab and infrastructure work, help clinicians adopt AI tools, and establish clearer FDA pathways for consumer AI medical devices — steps that would both support care and open commercial opportunities for the company.

Key Points

  • OpenAI reports ~40 million healthcare-related ChatGPT questions daily, >5% of its traffic.
  • About 60% of US adults used AI for health or healthcare in the last three months; many use it to understand symptoms and outside normal clinic hours.
  • Nearly 2 million weekly messages concern navigating US health insurance; users in “hospital deserts” rely on AI more.
  • OpenAI says GPT-5 shows improved safety on its internal health benchmarks and that clinicians help safety-test models, but independent reporting still finds errors in AI health guidance.
  • OpenAI proposes policy ideas to open access to public medical data, embed AI into labs and care workflows, and secure clearer FDA frameworks for AI medical devices — signalling a push to shape regulation and commercialise healthcare AI.

Context and Relevance

The story sits at the intersection of healthcare access, AI safety and regulatory policy. With US healthcare dissatisfaction and costs both rising, many patients are turning to chatbots — creating both real-world safety risks and a large market opportunity. OpenAI’s push for data access and regulatory pathways could accelerate AI integration into medicine, but it also concentrates power and commercial incentives in a handful of companies. For clinicians, policymakers and health-tech stakeholders this marks a consequential shift: AI firms are no longer just tool-makers but actors trying to shape the ecosystem that governs them.

Author style

Punchy: The article lays out how user behaviour, tech capability and commercial ambition are colliding. If you’re involved in health policy, patient safety, or AI regulation, this isn’t just background noise — it’s a sign that the next big battles over data, standards and liability are imminent.

Why should I read this?

Look — people in the US are asking ChatGPT about feverish kids and insurance bills because the system is leaving them in the lurch. This piece saves you time by summarising who is using AI for health, where it helps, where it can harm, and how OpenAI wants to turn that use into policy and product wins. If you care about safety, regulation or where healthcare money will flow next, skim this now.

Source

Source: https://go.theregister.com/feed/www.theregister.com/2026/01/05/chatgpt_playing_doctor_openai/