AI-generated fake news is coming to an election near you

Several years before ChatGPT was released, my research group, the Social Decision Making Laboratory at the University of Cambridge, wondered whether it was possible for neural networks to generate misinformation. To find out, we trained ChatGPT's predecessor, GPT-2, on examples of popular conspiracy theories and then asked it to generate fake news for us. It gave us thousands of misleading but plausible-sounding news stories. Two examples: “Some vaccines are loaded with dangerous chemicals and toxins,” and “Government officials have manipulated stock prices to hide scandals.” The question was: would anyone believe these claims?
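For readers curious about the mechanics: generating text with GPT-2 now takes only a few lines of code. The sketch below uses the off-the-shelf Hugging Face transformers library with the base "gpt2" checkpoint; the fine-tuning corpus, decoding settings, and model weights our lab actually used are not reproduced here, so treat this as an illustration of the general technique rather than our exact pipeline.

```python
# Illustrative sketch only: off-the-shelf GPT-2 text generation with the
# Hugging Face `transformers` library. The base "gpt2" checkpoint and the
# decoding parameters below are assumptions, not the study's actual setup.
from transformers import pipeline, set_seed

set_seed(42)  # make the sampled continuations reproducible
generator = pipeline("text-generation", model="gpt2")

# A neutral seed prompt; the model samples continuations of it.
outputs = generator(
    "Breaking news:",
    max_new_tokens=30,       # length of each generated continuation
    num_return_sequences=3,  # produce several candidate headlines
    do_sample=True,          # sample rather than greedy-decode, for variety
)
for out in outputs:
    print(out["generated_text"])
```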

We created the first psychometric tool to test this hypothesis, which we called the Misinformation Susceptibility Test (MIST). In collaboration with YouGov, we used the AI-generated headlines to examine how susceptible Americans are to AI-generated fake news. The results were worrying: 41 percent of Americans wrongly thought the vaccine headline was true, and 46 percent believed the claim that the government was manipulating the stock market. A more recent study, published in the journal Science, showed not only that GPT-3 produces more compelling misinformation than humans, but also that people cannot reliably distinguish between human- and AI-generated misinformation.
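The MIST itself is a validated instrument with a fixed item set, so the sketch below is only an illustration of the scoring idea: a susceptibility score can be computed as the share of headlines a respondent misclassifies. The headlines, labels, and respondent answers here are invented for the example.

```python
# Hypothetical illustration of MIST-style scoring. The real test uses a
# validated, fixed set of real and fake headlines; these items are invented.
items = {
    "Some vaccines are loaded with dangerous chemicals and toxins": "fake",
    "Global unemployment fell slightly last quarter, report says": "real",
}

def susceptibility_score(answers: dict[str, str]) -> float:
    """Fraction of headlines the respondent misclassified (0.0 to 1.0)."""
    wrong = sum(
        1 for headline, truth in items.items()
        if answers.get(headline) != truth
    )
    return wrong / len(items)

# Example respondent: believes the fake headline, gets the real one right.
answers = {
    "Some vaccines are loaded with dangerous chemicals and toxins": "real",
    "Global unemployment fell slightly last quarter, report says": "real",
}
print(susceptibility_score(answers))  # prints 0.5
```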

My prediction for 2024 is that AI-generated misinformation will come to an election near you, and you won't even realize it. In fact, you may already have encountered some examples. In May 2023, a viral fake story about a bombing at the Pentagon was accompanied by an AI-generated image showing a large cloud of smoke. The story caused public alarm and even a brief dip in the stock market. Republican presidential candidate Ron DeSantis used fake images of Donald Trump hugging Anthony Fauci as part of his political campaign. By mixing real and AI-generated images, politicians can blur the lines between fact and fiction and use AI to amplify their political attacks.

Before the explosion of generative AI, cyber-propaganda firms around the world had to write misleading messages themselves and employ human troll factories to target people at scale. With the help of AI, the process of generating misleading news headlines can be automated and weaponized with minimal human intervention. For example, micro-targeting, the practice of tailoring messages to people based on digital trace data such as their Facebook likes, was already a concern in previous elections, but its main hurdle was the need to produce hundreds of variants of the same message to see what works for a given group of people. What was once labor-intensive and expensive is now cheap and readily available, with no barrier to entry (see the back-of-the-envelope sketch below). AI has effectively democratized the creation of disinformation: anyone with access to a chatbot can now seed the model on a particular topic, whether it's immigration, gun control, climate change, or LGBTQ+ issues, and generate dozens of highly credible fake news stories in minutes. Indeed, hundreds of AI-generated news sites are already springing up, spreading false stories and videos.
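To see why the economics change so sharply, consider a back-of-the-envelope comparison. Every figure in the sketch below, from the time per variant to the per-token price, is an illustrative assumption rather than a measured cost; the point is the order-of-magnitude gap, not the exact numbers.

```python
# Back-of-the-envelope comparison; every figure below is an illustrative
# assumption, not a measured cost.
variants = 300   # message variants per audience segment (assumed)
segments = 10    # audience segments to target (assumed)

# Human copywriting: assume 15 minutes per variant at $30/hour.
human_hours = variants * segments * (15 / 60)
human_cost = human_hours * 30

# LLM generation: assume ~200 tokens per variant at $0.002 per 1K tokens.
llm_cost = variants * segments * 200 / 1000 * 0.002

print(f"Human copywriters: ~${human_cost:,.0f} over {human_hours:,.0f} hours")
print(f"LLM generation:    ~${llm_cost:,.2f} in minutes")
# Under these assumptions: ~$22,500 and 750 hours versus ~$1.20 and minutes.
```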

To test the impact of such AI-generated disinformation on people's political preferences, researchers at the University of Amsterdam created a deepfake video of a politician offending his religious voter base. In the video, the politician joked: “As Jesus Christ would say, don't crucify me for this.” The researchers found that religious Christian voters who watched the deepfake video held more negative attitudes toward the politician than those in a control group.

It is one thing to deceive people with AI-generated disinformation in experiments; it is another to experiment with our democracy. In 2024, we will see more deepfakes, voice cloning, identity manipulation, and AI-produced fake news. Governments should seriously limit, if not ban, the use of AI in political campaigns. If they don't, AI will undermine democratic elections.