December 06, 2024

Could AI Help Curb Conspiracy Theory Beliefs?


Image by Alexandra_Koch from Pixabay

By Claudia López Lloreda

In September, a new study came out with a promising finding: An AI chatbot, nicknamed the “DebunkBot,” had successfully nudged people away from believing in conspiracy theories. The effects were large compared to other interventions — individuals’ confidence in a conspiracy was reduced by about 20 percent on average — and the research was among the first to use artificial intelligence to reduce conspiracy beliefs, which can be notoriously resistant to outside intervention.

The publication was well-timed: Worries around misinformation and disinformation — the unintended or intentional spreading of false information — have dominated the election cycle, as fake content about presidential nominees and other polarizing issues gets passed around on social media. The DebunkBot hasn’t been alone in its attempts to combat such falsehoods. Social scientists and psychologists have tried to develop interventions, particularly for conspiracy theories, but many so far have had practical limitations and meager effects.

“It has always been very difficult to have this change of opinion, change of mind in the participants,” said Gianluca Demartini, a data scientist who studies misinformation at the University of Queensland, who was not involved with the study. “So this seems to be, so far, the strongest intervention that has an impact in addressing conspiracy theorists.”

However, the DebunkBot has to prove itself further, experts say. Researchers are still untangling why exactly it seems to work better than other attempts. Meanwhile, in general, using AI as a means of persuasion raises concerns, particularly about possible misuse by individuals with ill intentions.

Careful research on societal impact, along with human oversight, remains crucial if AI is to combat disinformation effectively without backfiring, said Federico Germani, a disinformation researcher at the University of Zurich: “We need to understand it deeply.”


As artificial intelligence has advanced in recent years, experts have become increasingly concerned about the associated spread of misinformation. Some researchers, though, have proposed the technology as a tool to detect and tackle fake news. Many of them point to chatbots built on large language models — which are trained on vast amounts of data — as a way to provide individuals with factual information. “AI has a great potential to do that,” said Germani.


Several recent studies have found promising results: One study co-authored by Germani last year found that AI models like GPT-3 can communicate accurate information that’s easy to understand, often better than humans can. Another study, from Brazil and also published last year, deployed a chatbot named CoronaAI to disseminate Covid-19 facts through WhatsApp, and found that users asked the bot to verify potential fake news, among other things. And in a recently published study, ChatGPT addressed myths around Alzheimer’s disease in an accurate way that satisfied physicians.

"This seems to be, so far, the strongest intervention that has an impact in addressing conspiracy theorists.”

However, while such research has shown promise in addressing general misinformation, conspiracy theories, which hold that events are the result of secret plots, tend to be more persistent and tougher to overcome. One idea is that such beliefs are strongly held because they fulfill psychological needs, such as providing explanations for complex or distressing events, giving people a sense of understanding and control in uncertain situations, or providing a sense of community.

Concerned by the lack of effective interventions, some researchers turned to technology to solve the problem. One 2022 study started to give a glimpse into AI’s potential: A team compared how individuals’ beliefs changed after receiving counter-conspiracy information about climate change, either through news articles or through chatbots with different levels of empathy. The empathetic bots were more effective at reducing conspiracy beliefs than a news article, but only for the participants who were comfortable with uncertainty. The researchers did a similar comparison for information on Covid-19, but found the news article to be more effective.

Now, the DebunkBot study reinforces the idea that AI can in fact help move the needle on conspiratorial beliefs, with effects lasting up to two months after the intervention. Researchers from MIT, Cornell University, and American University used GPT-4, the large language model behind ChatGPT, to try to break down these strongly held ideas. First, individuals reported their belief in a particular conspiracy theory (for example, the idea that 9/11 was a planned act by the U.S. government) and wrote a brief blurb explaining why they believed it. The bot then summarized the theory, and the participants were asked to rate their belief in it on a scale of 0 to 100.

Prompted with the instruction to debunk the theory, the bot then had three rounds of back-and-forth with each individual, offering facts on why the participant’s original statements were inaccurate; a control group instead chatted with the bot about unrelated topics. After the conversation, individuals were again asked to rate how much they believed the conspiracy on the same 0 to 100 scale.
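In outline, the intervention is a short, structured dialogue: the participant’s own written rationale becomes the prompt, and the model is instructed to rebut it over a fixed number of turns. The sketch below is purely illustrative and is not the study authors’ code; it assumes the OpenAI Python client, a placeholder model name, and a simplified system prompt, just to show how such a loop might be wired together.

```python
# Illustrative sketch only -- not the DebunkBot study's actual implementation.
# Assumes the OpenAI Python client (`pip install openai`) and an API key in
# the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4"  # placeholder; the exact model version used in the study is not assumed here


def debunk_session(conspiracy_blurb: str, rounds: int = 3) -> list[str]:
    """Run a short debunking dialogue about one participant-supplied conspiracy blurb."""
    messages = [
        {
            "role": "system",
            "content": (
                "The user believes the conspiracy theory described in their message. "
                "Respond politely and persuasively with accurate, specific facts that "
                "address their stated reasons for believing it."
            ),
        },
        {"role": "user", "content": conspiracy_blurb},
    ]
    replies = []
    for _ in range(rounds):
        response = client.chat.completions.create(model=MODEL, messages=messages)
        reply = response.choices[0].message.content
        replies.append(reply)
        messages.append({"role": "assistant", "content": reply})
        # In the study, the participant typed a response each round; here we
        # simply read one from the command line to keep the loop going.
        messages.append({"role": "user", "content": input("Your reply: ")})
    return replies


if __name__ == "__main__":
    blurb = input("Describe the conspiracy theory you believe and why: ")
    for turn in debunk_session(blurb):
        print("\nBot:", turn)
```

In the actual experiment, belief ratings were collected before and after this conversation, which is how the roughly 20 percent drop in confidence was measured.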

Individuals who chatted with the bot about a conspiracy theory rated their confidence in it about 20 percent lower after the intervention, an effect that surpasses those seen in previous intervention studies, while those who chatted about an unrelated subject remained steady in their convictions. The DebunkBot also decreased participants’ confidence in other conspiracy theories beyond the one they had discussed with the bot (gauged by a survey asking about 15 different conspiracies), a sort of spillover effect.

These large effects “are not something you see in the conspiracy theory literature,” said Javier Granados Samayoa, a social psychologist at the University of Pennsylvania. The researchers are “the first ones to show these large effects on conspiracy beliefs,” he added.

Researchers have varying ideas as to why that might be. Conspiracy theories differ from individual to individual, said Thomas Costello, a psychologist at American University and the lead author of the study. “They’re kind of like snowflakes, where no two belief systems are alike.” But the bot, with vast amounts of data at hand, can personalize its response and provide tailored facts to individuals in a way that previous interventions couldn’t. “When you look inside at what a person really believes, it’s pretty diverse, and that wasn’t accounted for by the interventions and treatments that we were using,” he added.

While research has shown promise in addressing general misinformation, conspiracy theories tend to be more persistent and tougher to overcome.

To Costello, the DebunkBot’s success also says something about the underlying psychology of conspiracy believers. "Human beings are really pretty rational, with some exceptions, and if you provide them with information that speaks to exactly their position, they'll change their mind."


But some scholars, such as Germani, suspect that there is something more to it. “Probably it is not about providing factual information, per se, but it’s about how this information is conveyed to the participants in the study.”

The participants also reported increased intentions to ignore or unfollow social media accounts espousing the focal conspiracy. And when researchers returned to the participants two months later, the changes had persisted. Researchers say this only begins to reveal the scope of the effect, since it remains to be seen whether it lasts longer than two months and whether it actually changes behavior.

Further studies may help explain why people accept corrections to such beliefs when they come from a bot (is it because a chatbot seems more neutral and objective, for example?) and could help improve other interventions, researchers say. By tweaking different aspects of the model, such as the length of the bot’s responses or how friendly it is, researchers can start to understand the factors that underlie individuals’ positions and ultimately integrate that knowledge into real-world interventions.


Costello and others envision that, in the future, the DebunkBot — or a similar intervention — could be integrated into online search or social media platforms, so that when someone searches for information on conspiracy theories online, they’ll be prompted to interact with the AI. But while using such a tool to combat misinformation could have many practical benefits — like scalability — it also has limitations.

For example, in California, the Department of Public Health, in collaboration with Meta, released a chatbot to offer reliable information about Covid-19 in both English and Spanish through WhatsApp. But such an intervention typically requires access to smartphone technology, which could exacerbate existing inequalities in information access and belief change efforts, said Lotte Pummerer, a social psychologist at the University of Bremen, in Germany.

Additionally, convincing die-hard conspiracy theorists might be much harder. In the DebunkBot study, individuals who stated that their belief was important to their understanding of the world reported a smaller reduction in confidence after the intervention. And the effectiveness of an intervention relies heavily on the participants' willingness to interact with the AI, said Demartini. Individuals need to engage with the bot for it to be able to do its work.

The bot, with vast amounts of data at hand, can personalize its response and provide tailored facts to individuals in a way that previous interventions couldn’t.

Meanwhile, it remains to be seen whether this works for other types of misinformation, particularly that surrounding political issues. “I think it will be harder when the topic falls along partisan lines and is polarized,” Costello said.

“For something that starts to involve values or is aligned with the broader sense of the world,” he continued, “I would be less optimistic, but it’s worth trying out.” Costello is now running experiments that take the DebunkBot beyond conspiracies to other unfounded beliefs, such as belief in ghosts.

Others are concerned that AI’s powers of persuasion might backfire: In the hands of the wrong people, chatbots could be used to convince people of conspiracy theories or strengthen their beliefs. The 2023 study that found GPT-3 can often communicate accurate information better than humans, for example, also found that the model could produce persuasive disinformation. And as others’ work has shown, AI is already able to generate convincing false text, images, and videos, including fake election photos: “You could envision adversarial agents using this type of model to have conversations, to convince people of anything else, even of conspiracy theories or to vote for someone at the next elections,” said Demartini.

"We need to reckon with AI persuasion as a thing that might have a real impact on what people think.”

So while AI might serve as a helpful tool to understand human psychology and potentially address problems of mis- and disinformation such as conspiracy beliefs, experts agree that the field still has to evaluate what deploying such technology in the real world might mean.

“This is a good use case,” said Costello. “But also, I think we need to reckon with AI persuasion as a thing that might have a real impact on what people think.”

This article was originally published on Undark. Read the original article here.

