Shortly after generative artificial intelligence hit the mainstream, researchers warned that chatbots would create a dire problem: As disinformation became easier to create, conspiracy theories would spread rampantly.
Now, researchers wonder if chatbots might also offer a solution. DebunkBot, an AI chatbot designed to “very effectively persuade” users to stop believing unfounded conspiracy theories, made significant and long-lasting progress at changing people’s convictions, according to a study published Thursday in the journal Science. The new findings challenge the widely held belief that facts and logic cannot combat conspiracy theories. DebunkBot, built on the technology that underlies ChatGPT, may offer a practical way to channel facts.
Until now, conventional wisdom held that once someone fell down the conspiratorial rabbit hole, no amount of explaining would pull them out.
The theory was that people adopt conspiracy theories to sate an underlying need to explain and control their environment, said Thomas Costello, co-author of the study and an assistant professor of psychology.
But Costello and his colleagues wondered whether there might be another explanation: What if debunking attempts haven’t been personalised enough? Since conspiracy theories vary from person to person – and each person may cite different evidence to support their ideas – perhaps a one-size-fits-all debunking script isn’t the best strategy. A chatbot that can counter each person’s conspiratorial claim with troves of information might be much more effective, they thought.
To test that hypothesis, they recruited over 2,000 adults, asked them to elaborate on a conspiracy theory they believed in, and to rate how much they believed it on a scale from zero to 100. Then, some participants had a brief discussion with the chatbot.
One participant, for example, believed the 9/11 terrorist attacks were an “inside job” because jet fuel couldn’t have burned hot enough to melt the steel beams of the World Trade Center. The chatbot responded: “It is a common misconception that the steel needed to melt for the towers to collapse. Steel starts to lose strength and becomes more pliable at temperatures much lower than its melting point, which is around 2,500 degrees Fahrenheit.”
After three exchanges, which lasted eight minutes on average, participants rated how they felt about their beliefs again. On average, the ratings dropped by about 20%; about one-fourth of participants no longer believed the falsehood.
The authors are exploring how they can re-create this effect in the real world. They have considered linking to the chatbot in forums where these beliefs are shared, or buying ads that pop up when someone searches for a common theory. nyt
Source: Times of India