AI-simulated clinical consultations: Assessing the potential of ChatGPT to support medical training.
All Authors
Saggar, A.
Dimitrova, V.
Sarikaya, D.
Hogg, D.
Darling, JC.
LTHT Author
Darling, Jonathan
LTHT Department
Leeds Children's Hospital
Children's Services
Non Medic
Publication Date
2026
Item Type
Journal Article
Subject
ARTIFICIAL INTELLIGENCE; PAEDIATRICS; EDUCATION, MEDICAL
Abstract
BACKGROUND: Simulated medical scenarios are useful for evaluating and developing clinical competencies, but scheduling them is expensive and time-consuming. Large language models show promise in role-playing tasks. We investigated the fidelity with which ChatGPT can mimic patients, clinicians and examiners in educational settings.
OBJECTIVE: To determine the realism with which ChatGPT can portray patient, doctor and examiner roles, and the utility of these agents in clinical education.
METHOD: We selected four paediatric scenarios from mock objective structured clinical examinations (OSCEs) and set up separate patient, doctor and examiner ChatGPT agents for each. The patient and doctor agents conversed with each other in written format. The examiner agent marked the doctor agent based on this conversation. Patients and clinicians familiar with the OSCE assessed the dialogues.
RESULTS: The patient agent was judged to be true to character most of the time and good at expressing emotion. The doctor agent was reported to be an effective communicator but occasionally used jargon. Both agents tended to produce repetitive responses, which undermined realism. The examiner agent's marks correlated well with those of human clinicians. There was moderate support for using the simulated interactions for educational purposes.
CONCLUSION: Although the realism of the agents can be improved, ChatGPT can generate plausible proxies of participants in medical scenarios and could be useful for complementing standardised patient-based training.
Journal
Archives of Disease in Childhood