AI Could Present Political Peril for 2024 With Threat to Mislead Voters

Increasingly sophisticated AI tools can now create hyper-realistic images, cloned human voices, and audio and video in seconds, at minimal cost. Amplified by powerful social media algorithms, this digitally created fake content can spread quickly and target specific audiences, potentially taking dirty campaign tricks to a whole new low.

Generative AI can quickly produce targeted texts, videos, and campaign emails; it can also be used to impersonate candidates and mislead voters on a massive scale and at high speed. As a result, the implications for 2024 campaigns and elections are vast and troubling.

“We’re not prepared for this,” said the vice president of intelligence at the cybersecurity firm ZeroFox. “To me, the big leap forward is the audio and video capabilities that have emerged. When you can do that on a large scale and distribute it on social platforms, well, it’s going to have a major impact.”

AI experts can quickly list several alarming scenarios in which generative AI is used to create fake media that slanders candidates, incites violence, or confuses voters.

The possibilities include audio recordings of a candidate expressing racist views or confessing to a crime; fake images designed to look like news reports, falsely claiming a candidate had withdrawn from a race; automated robocall messages in a candidate’s voice telling voters to cast ballots on the wrong day; and video footage appearing to show a candidate giving an interview or speech they never delivered.

“What if Elon Musk personally calls you and tells you to vote for a certain candidate?” said Oren Etzioni, founding CEO of the Allen Institute for AI. “A lot of people would listen. But it’s not him.”

Warning: AI could be used to interfere with democracy and erode public trust

Petko Stoyanov, global chief technology officer at Forcepoint, a cybersecurity company, predicted that groups looking to interfere with U.S. democracy would use AI and synthetic media to erode trust.

“What happens if an international entity — a cybercriminal or a nation-state — impersonates someone? What is the impact? Do we have any recourse?” asked Stoyanov. “We’re going to see a lot more misinformation from international sources.”

AI-generated political disinformation has already gone viral online ahead of the 2024 election. In one incident, AI-generated images of Trump’s mug shot fooled some social media users, even though no mug shot was taken when the former president was booked and arraigned in Manhattan on charges of falsifying business records. Other AI-created images showed Trump resisting arrest, though that never occurred.

Legislation requiring candidates to label campaign advertisements created with AI was introduced in the House by Democratic Rep. Yvette Clarke of New York, who has also sponsored legislation that would require anyone creating synthetic images to add a watermark indicating they are AI-generated.

Some states have also offered their own proposals addressing concerns about deepfakes.

Rep. Clarke said her greatest fear is that AI could be used before the 2024 election to create audio and video that incites violence and pits Americans against each other.