OpenAI launches ChatGPT for healthcare


OpenAI has launched ChatGPT for healthcare and an updated API, a specialized set of tools for healthcare professionals. ChatGPT for healthcare aims to help medical organizations integrate AI while remaining compliant with HIPAA guidelines. The tool seeks to address the administrative burden on doctors, fragmented medical data, and rising demand for healthcare.
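For teams curious what an integration with the updated API might look like, below is a minimal sketch. It assumes the healthcare suite is reachable through the standard OpenAI Python SDK and that a model identifier like "gpt-5.2" is available; the model name and the prompts are illustrative assumptions, not confirmed details of the product.

```python
# Minimal sketch: querying a clinical model through the standard OpenAI Python SDK.
# The model name "gpt-5.2" and the prompts are assumptions for illustration;
# consult OpenAI's healthcare documentation for the actual identifiers and terms.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-5.2",  # hypothetical healthcare-tuned model identifier
    messages=[
        {"role": "system", "content": "You are a clinical decision-support assistant. "
                                      "Cite sources and flag uncertainty."},
        {"role": "user", "content": "Summarize first-line treatment options for "
                                    "newly diagnosed type 2 diabetes in adults."},
    ],
)

print(response.choices[0].message.content)
```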

ChatGPT for healthcare is built on OpenAI’s latest model, GPT-5.2. The models were refined using feedback from over 260 doctors, totaling more than 600,000 feedback points, in a bid to ensure that the healthcare suite provides more accurate clinical information than the standard models.

The healthcare suite is built around evidence-backed responses. Unlike standard models, the healthcare model prioritizes “transparent citations”, pulling information from millions of clinical journals so that doctors can verify its sources.
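Below is a sketch of what a citation-verification step might look like in practice. This is an illustrative workflow, not the suite’s documented API: it assumes the model can be prompted to return its answer and supporting citations as JSON, which a clinician or downstream tooling then checks before anything reaches a chart.

```python
# Illustrative sketch of a citation-verification step. The JSON shape requested
# from the model is an assumption; the actual healthcare suite may expose
# citations differently.
import json
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-5.2",  # hypothetical model identifier
    response_format={"type": "json_object"},  # ask for machine-readable output
    messages=[
        {"role": "system", "content": "Answer clinical questions as JSON with keys "
                                      "'answer' and 'citations' (a list of objects "
                                      "with 'title', 'journal', 'year', 'doi')."},
        {"role": "user", "content": "What is the evidence for SGLT2 inhibitors in "
                                    "heart failure with reduced ejection fraction?"},
    ],
)

payload = json.loads(response.choices[0].message.content)
print(payload["answer"])
for cite in payload["citations"]:
    # Every citation should be manually resolved (e.g. via its DOI) before use;
    # models can hallucinate plausible-looking references.
    print(f"VERIFY: {cite['title']} ({cite.get('journal', '?')}, "
          f"{cite.get('year', '?')}) doi:{cite.get('doi', 'missing')}")
```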

The healthcare suite also enables administrative automation. It includes templates for tedious tasks such as writing discharge summaries, clinical letters, and prior authorization requests.
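As a sketch of what template-driven automation could look like, the example below fills a discharge-summary prompt from structured encounter data. The template text and field names are assumptions for illustration; OpenAI has not published the suite’s actual templates.

```python
# Sketch of template-driven drafting for a discharge summary. The template text
# and field names are illustrative assumptions, not OpenAI's published templates.
from openai import OpenAI

DISCHARGE_TEMPLATE = """Draft a discharge summary for clinician review.
Patient: {age}-year-old {sex}
Admission diagnosis: {diagnosis}
Hospital course: {course}
Discharge medications: {medications}
Follow-up: {follow_up}
Keep it factual; do not add information that is not provided above."""

encounter = {
    "age": 67,
    "sex": "male",
    "diagnosis": "community-acquired pneumonia",
    "course": "Treated with IV ceftriaxone and azithromycin; afebrile by day 3.",
    "medications": "amoxicillin-clavulanate 875/125 mg BID x 5 days",
    "follow_up": "Primary care in 1 week; repeat chest X-ray in 6 weeks.",
}

client = OpenAI()
draft = client.chat.completions.create(
    model="gpt-5.2",  # hypothetical healthcare-tuned model identifier
    messages=[{"role": "user", "content": DISCHARGE_TEMPLATE.format(**encounter)}],
)

# The output is a draft only; a clinician must review and sign off before filing.
print(draft.choices[0].message.content)
```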

The healthcare suite has privacy and control built in. OpenAI provides a business associate agreement for HIPAA compliance, and data shared within the healthcare suite is not used to train OpenAI’s standard models.

OpenAI highlighted that its GPT-5.2 model outperforms previous models on clinical benchmarks such as HealthBench and GDPval, and has also outperformed human baselines. Early adopters of the tool include Mayo Clinic, Cedars-Sinai, UCSF, Abridge, and Moderna.

OpenAI’s healthcare suite aims to modernize medicine, but many ethical questions still linger. AI is not 100% accurate and can hallucinate, generating medical citations that do not exist at all. This could mislead clinicians and result in misdiagnosis.

Another concern is underrepresentation. AI is trained on past data, and if a particular demographic is not adequately represented in that data, the result could be the marginalization of minorities. OpenAI’s models are trained on Western medical standards, which may not apply to societies with different medical modalities and cultural health beliefs. For instance, traditional Chinese medicine may be an overlooked sector in OpenAI’s models.

Reliance on AI could also erode physicians’ critical thinking skills and clinical acumen. Patient trust could suffer as well: if patients believe that a machine is diagnosing them, they may not trust the results, and the doctor-patient relationship is undermined.

AI can be a very useful tool, but in the realm of healthcare there are many facets that have to be addressed. Guardrails must be put in place to ensure patient safety and accurate diagnosis. Would you trust a doctor using AI to treat you?
