New research from the University of Kansas Life Span Institute reveals key vulnerabilities to misinformation generated by artificial intelligence, and potential ways to counter it.
The study, published in the Journal of Pediatric Psychology, found that when authorship was unknown, parents seeking medical information for their children trusted AI-generated text as much as, or more than, text from medical professionals, rating it as credible, moral, and trustworthy.
“When we started this study, it was shortly after ChatGPT first launched. We were concerned about how parents would use this new, easy way to gather health information for their children,” said lead author Carissa Leslie-Miller, a KU doctoral student in clinical child psychology. “Parents often turn to the internet for advice, so we wanted to understand what happens when they use ChatGPT and what they need to worry about.”
Leslie-Miller and colleagues conducted a cross-sectional study in which 116 parents, ages 18 to 65, were given health-related texts, such as information about infant sleep training and nutrition. Participants reviewed content generated by both ChatGPT and medical professionals but were blinded to authorship.
“Participants rated the texts based on perceived morality, trustworthiness, expertise, accuracy, and how likely they were to rely on the information,” Leslie-Miller said.
According to KU researchers, in many cases parents were unable to differentiate between content generated by ChatGPT and content generated by experts. Where there was a significant difference in ratings, ChatGPT was rated as more reliable, accurate, and trustworthy than the expert-authored content.
“This result was a surprise to us, especially since this study was conducted early in ChatGPT’s availability,” said Leslie-Miller. “We are beginning to realize that AI is becoming integrated in ways that are not immediately obvious, and people may not even know when they are reading AI-generated text rather than expert content.”
Leslie-Miller said the findings raise concerns because many responses that appear to come from an app or a website are now powered by generative AI, meaning users may in effect be conversing with an AI without realizing it.
“During the study, some early iterations of the AI output contained incorrect information,” she said. “This is alarming because, as we know, AI tools like ChatGPT are prone to ‘hallucinations,’ errors that occur when the system lacks sufficient context.”
While ChatGPT works well in many cases, the AI model is not an expert and can generate incorrect information, Leslie-Miller said.
“It is critical that we address this issue because it can have serious implications for children’s health,” she said. “We are concerned that people will come to rely on AI for health advice without proper professional supervision.”
Leslie-Miller’s co-authors were Stacey Simon of Children’s Hospital Colorado and the University of Colorado School of Medicine in Aurora, Colorado; Kelsey Dean of the Center for Healthy Lifestyles and Nutrition at Children’s Mercy Hospital in Kansas City, Missouri; Nadine Mokarati of Alta Sciences Clinical in Overland Park, Kansas; and Christopher Cushing, associate professor of clinical child psychology at KU and associate scientist at the Life Span Institute.
“Results show that prompt-engineered ChatGPT is capable of influencing behavioral intentions for medication, sleep, and diet decisions,” the authors report.
Leslie-Miller said the high stakes of children’s health information heighten the problem, but the risk extends to every topic: AI-generated output can be wrong, and users need expertise to identify the inaccuracies.
She suggested that consumers of AI-generated information be cautious and rely only on information consistent with expertise available from non-AI sources.
“There are still disparities in the credibility of sources,” she said. “Look for AI that is integrated into systems with a layer of expert review. Just as we are constantly told to be careful with Wikipedia because it is not always verified, the same goes for AI: not all platforms are equal, so choose one you have reason to trust.”
In fact, Leslie-Miller said AI could be a boon for parents seeking health information, as long as they understand the need to also consult a medical professional.
“We believe AI has great potential to be leveraged. Specifically, it can generate far more information than was previously possible,” she said. “However, it is important to recognize that AI is not an expert, and the information it provides does not come from one.”