
Assessing ChatGPT-4 as a clinical decision support tool in neuro-oncology radiotherapy: a prospective comparative study

Tini, Paolo; Novi, Federica; Donnini, Flavio; Perrella, Armando; Bagnacci, Giulio; Mazzei, Maria Antonietta; Minniti, Giuseppe
2025-01-01

Abstract

Background and purpose: Large language models (LLMs) such as ChatGPT-4 have shown potential for medical decision support, but their reliability in specialized fields remains uncertain. This study aimed to evaluate ChatGPT-4's performance as a clinical decision support tool in neuro-oncology radiotherapy by comparing its treatment recommendations for patients with central nervous system tumors against a multidisciplinary tumor board's decisions, an independent specialist's opinion, and published guidelines.

Materials and methods: We prospectively collected 101 neuro-oncology cases (May 2024–May 2025) presented at a tertiary-care tumor board. Key case details were entered into ChatGPT-4 with a standardized query asking whether to recommend radiotherapy and, if so, the target volumes and dose. The AI's recommendations were recorded and compared to the tumor board's consensus, a blinded radiation oncologist's recommendation, and ESMO guideline indications when applicable. Concordance rates (percentage agreement) and Cohen's kappa were calculated. Sensitivity and specificity were assessed using the reference decisions as ground truth. McNemar's test was used to evaluate any bias in discordant recommendations.

Results: ChatGPT-4 matched the tumor board's radiotherapy recommendations in 76% of cases (κ = 0.61). Agreement with the independent specialist was 79% (κ = 0.58). In 61 low-complexity cases with clear guidelines, ChatGPT-4 concurred with guideline-based indications in 76.7% of cases, missing some recommended treatments (sensitivity 73%, specificity 100%). In intermediate-complexity scenarios, concordance with the tumor board was 70.8%, with most discrepancies due to the AI recommending treatment that experts did not (sensitivity 85.7%, specificity 64.7%). In high-complexity cases, agreement was 90.9% (sensitivity 100%, specificity 83.3%). Overall, ChatGPT-4 showed an overtreatment bias, more often recommending radiotherapy when the human experts chose observation (p < 0.05 for AI vs. tumor board discordances). Its overall agreement (76%) was lower than that of the human specialist (90%).

Conclusion: ChatGPT-4 can reproduce many expert radiotherapy decisions in neuro-oncology, reflecting substantial absorption of standard clinical practice. However, it cannot substitute for human judgment: the AI omitted some indicated treatments in straightforward cases and suggested unnecessary therapy in some borderline cases, indicating a lack of nuanced clinical reasoning. Careful human oversight is essential if such models are to be used for clinical decision support.
2025
Tini, P., Novi, F., Donnini, F., Perrella, A., Bagnacci, G., Mazzei, M.A., et al. (2025). Assessing ChatGPT-4 as a clinical decision support tool in neuro-oncology radiotherapy: a prospective comparative study. JOURNAL OF NEURO-ONCOLOGY, 176(1) [10.1007/s11060-025-05254-z].

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11365/1302395
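The statistics named in the abstract's Methods (percentage agreement, Cohen's kappa, and McNemar's test on discordant pairs) can be sketched for paired binary recommendations (1 = radiotherapy, 0 = observation). This is an illustrative implementation only; the input data below are hypothetical, not the study's:

```python
from math import comb

def agreement_and_kappa(a, b):
    """Percent agreement and Cohen's kappa for two binary raters
    over the same cases (a, b: equal-length lists of 0/1)."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n        # observed agreement
    p_a1 = sum(a) / n                                  # rater A's "yes" rate
    p_b1 = sum(b) / n                                  # rater B's "yes" rate
    p_e = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)        # chance agreement
    return p_o, (p_o - p_e) / (1 - p_e)

def mcnemar_exact(a, b):
    """Two-sided exact (binomial) McNemar p-value, computed on the
    discordant pairs only -- tests whether disagreements are biased
    in one direction (e.g. AI recommending treatment when experts
    chose observation)."""
    n01 = sum(x == 0 and y == 1 for x, y in zip(a, b))  # A no, B yes
    n10 = sum(x == 1 and y == 0 for x, y in zip(a, b))  # A yes, B no
    n, k = n01 + n10, min(n01, n10)
    return min(1.0, 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n)

# Hypothetical toy data: tumor board vs. AI over 8 cases.
board = [1, 1, 1, 0, 0, 0, 1, 0]
ai    = [1, 1, 1, 1, 1, 0, 1, 0]
p_o, kappa = agreement_and_kappa(board, ai)
p_value = mcnemar_exact(board, ai)
```

Note that kappa corrects raw agreement for chance, which is why the paper reports both (e.g. 76% agreement but κ = 0.61), and McNemar's test ignores concordant cases entirely, looking only at the direction of disagreements.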