DHS warns of threats to election posed by artificial intelligence

May 20, 2024

Photo Illustration by Rafael Henrique/SOPA Images/LightRocket via Getty Images

(WASHINGTON) — The threat posed by artificial intelligence programs is real and creates a serious challenge going into the 2024 election, according to a new federal assessment.

The analysis, compiled by the Department of Homeland Security and obtained by ABC News, outlines how, with less than six months before Election Day, next-generation technologies meant to fuel advancement also offer opportunities for misuse that could threaten elections, the bedrock of the democratic system.

“As the 2024 election cycle progresses, generative AI tools likely provide both domestic and foreign threat actors with enhanced opportunities for interference by aggravating emergent events, disrupting election processes, or attacking election infrastructure,” the May 17 document said. Those tools, the bulletin said, can be wielded to “influence and sow discord” in upcoming U.S. elections by those who view them as “attractive” and “priority” targets.

“This is not a problem for the future. This is a problem of today,” said John Cohen, the former intelligence chief at the Department of Homeland Security, and now an ABC News contributor. “Foreign and domestic threat actors have fully embraced the internet, and they are increasingly using advanced computing capabilities like artificial intelligence to conduct their illegal operations.”

Already, those attempting to target elections have done so “by conducting cyber-enabled hack-and-leak campaigns, voice spoofing, online disinformation campaigns, and threatening or plotting attacks against symbols of the US elections,” the bulletin said.

And now, the analysis warns, the innovative abilities of generative AI can be used against future elections. Those tools can be exploited “to confuse or overwhelm voters and election staff to disrupt their duties” by creating or sharing “altered” or deepfaked pictures, videos or audio clips “regarding the details of Election Day, such as claiming that a polling station is closed or that polling times have changed, or to generate or promote other tailored false information online.”

On the eve of the New Hampshire primary in January, a robocall seeming to impersonate the voice of President Joe Biden circulated, encouraging recipients of the call to “save your vote” for the November general election, rather than participate in the state’s primary, according to audio obtained at the time by ABC News.

That “generative AI-created audio message” was specifically flagged in the DHS analysis, which also noted “the timing of election-specific AI-generated media can be just as critical as the content itself, as it may take time to counter-message or debunk the false content permeating online.”

“This may be one of the most difficult elections for Americans to navigate finding ground truth in our lifetimes,” said Elizabeth Neumann, a DHS assistant secretary during the first years of Donald Trump’s presidency, and now an ABC News contributor. “It’s not just whether a politician is telling you the truth, but you won’t even be able to trust your own eyes at the images you’re seeing in your social media feeds, in your email and possibly even traditional media, if they don’t do a good enough job vetting the material.”

The 2024 race has been marked by increasingly toxic rhetoric, and the intermingling of inflammatory campaign trail hyperbole and courtroom theatrics, as Trump faces four criminal cases in which he maintains his innocence. Hate speech, misinformation and disinformation are running rampant on social media and in real life — even as rapidly evolving technology remains vulnerable, experts said. Meanwhile, wars in the Middle East and Ukraine continue overseas, dividing Americans’ views on foreign policy, with the conflicts rippling out in protests at major U.S. college campuses from coast to coast.

“Threat actors can attempt to exploit deepfake videos, audio content, or other generative AI media to amplify discontent,” the DHS analysis said. “A well-timed deepfake or piece of AI-generated media for a targeted audience could spur individuals to take action that may result in violence or physical disruptions directed toward the elections or candidates.”

The threat landscape has grown “more diverse and more complex,” and protecting the integrity of U.S. elections faces more challenges than ever due to the accelerating sophistication of artificial intelligence, top intelligence officials told lawmakers last Wednesday.

“Using every tool we have is critical as the challenge is expanding,” Director of National Intelligence Avril Haines told a Senate committee holding a hearing focused on threats to the 2024 elections. “There are an increasing number of foreign actors, including non-state entities who are looking to engage in election influence activities,” she said, adding “relevant emerging technologies, particularly generative AI and big data analytics, are increasing the threat by enabling the proliferation of influence actors who can conduct targeted campaigns.”

“Innovations in AI have enabled foreign influence actors to produce seemingly authentic and tailored messaging more efficiently, at greater scale,” Haines added. “Even as the threat landscape is becoming increasingly complicated, it is my view that the U.S. government has never been better prepared to address the challenge,” in part thanks to lessons learned from the 2016 presidential election.

Authorities at every level need to be ready to defend against AI-generated disinformation at this delicate moment, experts said.

“One of the most important things we need to be doing now is educating and preparing the public. Because they will be the people who are being targeted with this content — the public is the intended target — and the objective is to influence how people behave,” Cohen said.

“State and local officials need to have a plan, so that when this content is detected, they can use trusted sources of communication to counteract and correct inaccurate information. Once that content hits, it’s going to spread across the online media ecosystem rapidly, and has to be counteracted immediately,” Cohen added. “Law enforcement and our security community have been slow to adapt to this rapidly evolving threat environment. We’re still using the strategies of yesterday to deal with a threat of today. In a way, it’s bringing a knife to a gunfight.”

Copyright © 2024, ABC Audio. All rights reserved.

