
How 2024 presidential candidates are using AI inside their election campaigns

  • With the 2024 presidential election less than a year away, AI is already an active participant in U.S. politics, largely inside government agencies and the campaign operations of candidates and elected officials.
  • Because AI can take over many tasks, experts say candidates anxious to shore up their war chests will no longer need to staff campaigns to the brim.
  • But given many Americans' distrust of the political establishment, the lack of AI regulations, and rising concern about deepfakes, these technological integrations are likely to remain behind the scenes, experts say.

Even with President Joe Biden's executive order on "safe, secure, and trustworthy" artificial intelligence, regulations to oversee AI may be slow to come. All the while, AI is moving ahead, for better and worse. One area it's seeping into is government and politics itself.

With the 2024 presidential election less than a year away, AI has become an active participant in the race, largely inside government agencies and the operations of candidates and elected officials. AI can potentially shift the fate of an election, but given many Americans' distrust of politicians and the lack of AI regulations, these technological integrations are likely to remain behind the scenes, experts say.

How will AI-assisted elections look over the next year and beyond?

Kevin Pérez-Allen, chief communications officer for the nonpartisan health-care advocacy organization United States of Care, said AI will help with data analysis of voting patterns, crafting messages to residents, and analyzing social media habits.

Pérez-Allen has decades of experience as a political campaign communications professional and has seen campaigning evolve with technology. ChatGPT, for instance, is already producing first drafts of speeches and campaign marketing materials, and it is being used in fundraising emails and texts, he said.

AI can replicate a lot of work on the campaigning front, Pérez-Allen said, with information gathering, data analysis and writing among the categories where it's already showing up. But it "can't replicate people walking districts, can't replicate that in-person voter engagement," he added.

Still, because of the ways in which AI can trim the fat off the work, Pérez-Allen said staffing campaigns to the brim will no longer be necessary.

Deepfakes on the campaign trail

Sinclair Schuller, co-founder and managing partner of AI implementation firm Nuvalence, who has helped governments integrate AI, can't help but look at the risks of AI in the election space. "We're going to see a lot of fiction being created both for and against candidates," Schuller said. "I think that's where a lot of confusion will emerge as a consequence."

Schuller is referring to deepfakes: AI-generated video, images and audio depicting things that never actually happened. He said political campaigns are often a case of "whoever shouts something first sticks," and that political operatives and followers on the margins could certainly create and publish deepfakes putting opponents in inappropriate situations.

It's already happening, from presidential campaigns down to local races.

During Chicago's February 2023 mayoral primary election, a deepfaked video surfaced of candidate Paul Vallas appearing to approve of police brutality. Vallas ultimately lost the race. It's impossible to say how much of an impact this video had.

"We will hopefully reach a spot where if you don't have an authenticated, direct line to the source, then we can't trust it," Schuller said. 

Meta recently updated its rules about ads during election season.

The battle between falsely generated content and the detection mechanisms that try to root it out will surely ramp up. Using AI itself to detect and label AI-generated content is better than retroactively fact-checking it, Schuller said, because the check can be applied at the moment content is posted rather than after people have already absorbed, and believed, the information.
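As a rough illustration of the post-time labeling Schuller describes, here is a minimal sketch in Python. The classifier is a hypothetical stand-in, not any platform's real detection system; the threshold and labels are assumptions for illustration only.

```python
# A minimal sketch of "detect at posting time" moderation, not any platform's
# real pipeline. The detector is a hypothetical placeholder; a production
# system would call an actual AI-content classifier here.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    label: str = "unlabeled"

def ai_generated_score(text: str) -> float:
    """Hypothetical classifier: 0-1 likelihood the text is AI-generated."""
    # Placeholder heuristic for illustration only.
    return 0.9 if "breaking:" in text.lower() else 0.1

def submit_post(post: Post, threshold: float = 0.8) -> Post:
    """Label content as it is posted, instead of fact-checking it afterward."""
    score = ai_generated_score(post.text)
    post.label = "likely AI-generated" if score >= threshold else "no AI label"
    return post

if __name__ == "__main__":
    print(submit_post(Post("user1", "BREAKING: candidate admits to fraud")).label)
    print(submit_post(Post("user2", "Town hall starts at 6 p.m. tonight")).label)
```

The design point is the ordering: the label travels with the post from the moment it appears, rather than arriving after the content has already spread.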

The need to police online content comes after a period in which many of the largest tech firms have been cutting staff devoted to fighting misinformation. "The 2024 elections are going to be a mess because social media is not protecting us from false generated AI," Eric Schmidt, former CEO of Google who co-founded Schmidt Futures, recently told CNBC.

Even Pérez-Allen, a self-proclaimed optimist when it comes to integrating AI into the election process, understands this reality. "We're already seeing allegations being used to curry favor one way or another in some of the military conflicts going on right now around the world," he said. "And we're only going to see increases in that type of communication as we lead into the 2024 election."

How AI in politics could be a positive

AI integration into the election space could lead to a positive trajectory — if regulation allows it.

Pérez-Allen talks about the voter bloc monolith narrative that comes out every cycle. "People tend to move Latinos all into the same monolith bloc, they move Black voters into the same monolith, and they move suburban women to this big monolith as though they all think the same," he said.

AI's potential to create hyper-localized, hyper-personalized political campaigns could eliminate that narrative.

Meanwhile, it could increase information accessibility. "Instead of just reading someone's policy positions on their website, there would be an AI chatbot with the platform that gives you the answers and, backed by data, makes it feel like having a direct line to the campaign," Pérez-Allen said.
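As a rough sketch of what such a chatbot could look like, the Python below answers questions only from a campaign's published positions rather than generating answers freely. The positions, topics and matching logic are hypothetical and purely illustrative.

```python
# A minimal sketch of the kind of policy chatbot Pérez-Allen describes:
# answers are drawn from a campaign's published positions, not invented.
POLICY_POSITIONS = {  # hypothetical published platform
    "health care": "Expand access to affordable coverage in rural counties.",
    "housing": "Increase funding for first-time homebuyer assistance.",
}

def answer(question: str) -> str:
    """Return the published position whose topic appears in the question."""
    q = question.lower()
    for topic, position in POLICY_POSITIONS.items():
        if topic in q:
            return f"On {topic}, the campaign's stated position is: {position}"
    return "No published position found; please contact the campaign directly."

print(answer("Where does the candidate stand on health care?"))
```

Grounding every answer in the campaign's own published material, rather than free-form generation, is what would make the chatbot feel like "a direct line to the campaign" instead of another source of unverified claims.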

He thinks of Latinos and others in diverse communities who could receive campaign messages in their own language and dialect. In-booth translation and transcreation, the latter of which weighs the nuances of the message as a whole rather than just word-for-word accuracy, could also benefit from AI.

Of course, none of this is possible without regulation. While Biden's executive order made headlines, there's no straight path to implementation.

Jordan Burris, vice president and head of public sector strategy for digital identity verification platform Socure and a board member of the Identity Theft Resource Center, sees hurdles ahead. "The directive's lofty goals will result in implementation complexity and paralysis without the right sense of urgency and alignment to execute," Burris said. "Beyond words on paper, there must be a change in budgets, culture, and practices if we are to be successful in turning the corner."

As for Congress' ability to address technology issues, Pérez-Allen points to its inaction on social media regulation as an example of its collective inability to make sense of emerging, and sometimes longstanding, technologies. Section 230 of the Communications Decency Act of 1996 remains the most recent federal regulation of social media.

Will Hurd, former CIA officer and Texas congressman, who outlined a plan for AI policy during his 2024 campaign, is out of the presidential race. Pérez-Allen has doubts that any of the remaining presidential candidates will seriously discuss it in a public forum "in the absence of a massive AI-caused disaster at home or abroad."

Given the criticism voting by mail has faced in recent years, despite being around for decades (many states eased absentee voting restrictions in the 1980s), integrating AI into actual elections right now would cause a "full-blown crisis," Pérez-Allen said.

Beyond politics, government agencies are already using chatbots to manage simple questions. Julie Su, Biden's acting secretary of labor, recently told CNBC that even as the Department of Labor works to ensure AI does not displace workers, it is using AI for certain job functions, such as benefits claims verification, and that the department, its workers and the public are seeing gains.

"In the next one to maybe two years, we'll see these chatbots attached to systems of record," said Schuller, which could help agencies use AI more complexly, such as for fraud detection in license applications and other use cases. He added that the large language model is "becoming more of a large anything model" that can interpret uploaded charts and other visuals to help guide queries in the right direction.

As the 2024 election nears, familiar factors like accessibility, personalization and information from the right sources may look different with AI in the loop.
