Translating the machine: An assessment of clinician understanding of ophthalmological artificial intelligence outputs.
All Authors
Wysocki, O.
Mak, S.
Frost, H.
Graham, DM.
Landers, D.
Aslam, T.
LTHT Author
Mak, Sammie
LTHT Department
Doctors' Rotation
Ophthalmology
Non Medic
Publication Date
2025
Item Type
Journal Article
Abstract
INTRODUCTION: Advances in artificial intelligence offer the promise of automated analysis of optical coherence tomography (OCT) scans to detect ocular complications of anticancer drug therapy. To explore how such AI outputs are interpreted in clinical settings, we conducted a survey-based interview study with 27 clinicians, comprising 10 ophthalmic specialists, 10 ophthalmic practitioners, and 7 oncologists. Participants were first introduced to core AI concepts and realistic clinical scenarios, then asked to assess AI-generated OCT analyses using standardized Likert-scale questions, allowing us to gauge their understanding, trust, and readiness to integrate AI into practice.
METHODS: We developed a questionnaire through literature review and consultations with ophthalmologists, computer scientists, and AI researchers. A single investigator interviewed 27 clinicians across three specialties and transcribed their responses. Data were summarized as medians (ranges) and compared with Mann-Whitney U tests (α = 0.05).
RESULTS: We noted important differences in the impact of various explainability methods on trust, depending on the nature of the clinical or AI scenario and on staff expertise. Explanations of AI outputs increased trust in the AI algorithm when outputs simply reflected ground-truth expert opinion. When clinical scenarios were complex and AI outputs incorrect, explainability correctly reduced trust among experienced clinicians but drew mixed responses from less experienced clinicians. Clinicians broadly agreed that they currently lack the knowledge needed to interact with AI and want more training.
CONCLUSIONS: Clinicians' trust in AI algorithms is affected by explainability methods and by factors including the AI's performance, personal judgement, and clinical experience. Development of clinical AI systems should take these factors into account, and such responses should ideally be factored into real-world assessments. This study's findings could help improve the real-world validity of medical AI systems by enhancing human-computer interaction, with preferred explainability techniques tailored to specific situations.
Journal
International Journal of Medical Informatics