Document Type

Article

Publication Title

Nurse Educator

Abstract

Background:

ChatGPT, an artificial intelligence (AI) text generator trained to predict the next word in a sequence, can provide answers to questions but has shown mixed results in answering medical questions.

Purpose:

To assess the reliability and accuracy of ChatGPT in providing answers to a complex clinical question.

Methods:

A question formatted using the Population, Intervention, Comparison, Outcome, and Time (PICOT) framework was posed to ChatGPT, along with a request for references. Full-text articles were reviewed to verify the accuracy of the evidence summary provided by the chatbot.

Results:

ChatGPT was unable to provide a verifiable response to the PICOT question. The references cited as evidence included incorrect journal information, and many study details summarized by ChatGPT proved to be patently false, including fabricated data.

Conclusions:

ChatGPT provides answers that appear legitimate but may be factually incorrect. The system is not transparent about how it gathers the data behind its answers and sometimes fabricates plausible-looking information, making it an unreliable tool for answering clinical questions.

DOI

https://doi.org/10.1097/NNE.0000000000001436

Publication Date

4-28-2023

Keywords

artificial intelligence, ChatGPT, information literacy, information storage and retrieval, machine learning, natural language processing

Disciplines

Library and Information Science | Nursing

Comments

This is a non-final version of an article published in final form in:

Branum, Candise, MLS; Schiavenato, Martin, PhD, RN. Can ChatGPT Accurately Answer a PICOT Question? Assessing AI Response to a Clinical Question. Nurse Educator. April 28, 2023. DOI: 10.1097/NNE.0000000000001436

ISSN

0363-3524

Branum_Can ChatGPT_Accessible Tables.docx (25 kB)
Tables in Accessible Format (.docx)
