The threat to elections from artificial intelligence and so-called deepfakes always seemed a year or two away, but now it’s here.
WASHINGTON — Computer engineers and tech-savvy political scientists have warned for years that cheap, powerful artificial intelligence tools will soon allow anyone to create fake images, videos and audio realistic enough to fool voters and possibly sway elections.
The synthetic images that emerged were often crude, unconvincing and expensive to produce, especially when other types of disinformation were so cheap and easy to spread on social media. The threat from AI and the so-called deepfakes it enables always seemed a year or two away.
Now, sophisticated generative artificial intelligence tools can clone human voices and create hyperrealistic images, video and audio in seconds, at minimal cost. When plugged into powerful social media algorithms, this fake and digitally created content can spread far and fast and target very specific audiences, potentially taking campaign dirty tricks to a new low.
The implications for campaigns and the 2024 election are as big as they are troubling: Generative AI can not only quickly create targeted campaign emails, text messages or videos, it can also be used to mislead voters, impersonate candidates and subvert elections at an unprecedented scale and speed.
“We’re not ready for this,” warned A.J. Nash, vice president of intelligence at the cybersecurity firm ZeroFox. “To me, the big leap forward is the audio and video capabilities that have emerged. If you can do that on a large scale and distribute it on social platforms, well, it’s going to have a major impact.”
AI experts can quickly rattle off a number of alarming scenarios in which generative artificial intelligence is used to create synthetic media to confuse voters, slander a candidate or even incite violence.
Here are a few: automated robocall messages, in a candidate’s voice, instructing voters to cast ballots on the wrong date; audio recordings of a candidate supposedly confessing to a crime or expressing racist views; video footage showing someone giving a speech or interview they never gave; and fake images designed to look like local news reports, falsely claiming a candidate has dropped out of the race.
“What if Elon Musk personally calls you and tells you to vote for a certain candidate?” said Oren Etzioni, founding CEO of the Allen Institute for Artificial Intelligence, who stepped down last year to found the nonprofit AI2. “A lot of people would listen. But it’s not him.”
Former President Donald Trump, who is running again in 2024, has shared AI-generated content with his social media followers. A doctored video of CNN anchor Anderson Cooper that Trump shared on his Truth Social platform on Friday, which distorted Cooper’s reaction to the CNN town hall last week with Trump, was created using an AI voice-cloning tool.
A dystopian campaign ad released last month by the Republican National Committee offers another glimpse of this digitally manipulated future. The online ad, which appeared after President Joe Biden announced his reelection campaign, begins with a strange, slightly distorted image of Biden and the text “What if the weakest president we’ve ever had was reelected?”
A series of AI-generated images follows: Taiwan under attack; boarded-up storefronts in the United States as the economy crumbles; soldiers and armored military vehicles patrolling local streets as tattooed criminals and waves of immigrants create panic.
“An AI-generated look at the possible future of the country if Joe Biden is re-elected in 2024,” the RNC’s description of the ad reads.
The RNC acknowledged its use of artificial intelligence, but others, including nefarious political campaigns and foreign adversaries, will not, said Petko Stoyanov, global chief technology officer at Forcepoint, a cybersecurity company based in Austin, Texas. Stoyanov predicted that groups looking to meddle with U.S. democracy will employ artificial intelligence and synthetic media as a way to erode trust.
“What happens if an international entity — a cybercriminal or a nation state — impersonates someone? What is the impact? Do we have any recourse?” Stoyanov said. “We’re going to see a lot more misinformation from international sources.”
Some AI-generated political disinformation already has gone viral online ahead of the 2024 election, from a doctored video of Biden appearing to give a speech attacking transgender people to AI-generated images of children supposedly learning satanism in libraries.
AI-generated images appearing to show Trump’s mug shot also fooled some social media users, even though the former president didn’t take one when he was booked and arraigned in a Manhattan criminal court for falsifying business records. Other AI-generated images showed Trump resisting arrest, though their creator was quick to acknowledge their origin.
Legislation that would require candidates to label campaign ads created with AI has been introduced in the House by Rep. Yvette Clarke, D-N.Y., who has also sponsored legislation that would require anyone creating synthetic images to add a watermark indicating that fact.
Some states have offered their own proposals for addressing concerns about deepfakes.
Clarke said her biggest fear is that generative AI could be used before the 2024 election to create video or audio that incites violence and turns Americans against each other.
“It’s important that we keep up with the technology,” Clarke told The Associated Press. “We’ve got to set up some guardrails. People can be deceived, and it only takes a split second. People are busy with their lives and they don’t have the time to check every piece of information. Artificial intelligence, when weaponized, could be extremely destructive during a political season.”
A trade association for political consultants in Washington earlier this month condemned the use of deepfakes in political ads, calling them a “scam” that “has no place in legitimate, ethical campaigns.”
Other forms of artificial intelligence have been a feature of political campaigning for years, using data and algorithms to automate tasks such as targeting voters on social media or tracking donors. Campaign strategists and tech entrepreneurs hope the latest innovations will offer some positives in 2024, too.
Mike Nellis, CEO of the progressive digital agency Authentic, said he uses ChatGPT “every day” and encourages his employees to use it as well, as long as any content drafted with the tool is reviewed by human eyes afterward.
Nellis’ newest project, in partnership with Higher Ground Labs, is an AI tool called Quiller. It will write, send and evaluate the effectiveness of fundraising emails, all typically tedious tasks on campaigns.
“The idea is that every Democratic strategist, every Democratic candidate will have a co-pilot in their pocket,” he said.
Swenson reported from New York.
The Associated Press receives support from several private foundations to improve its explanatory coverage of elections and democracy. See more about the AP Democracy Initiative here. This content is the sole responsibility of AP.