The Deepfake Election: AI-Generated Ads Are Already Reshaping the 2026 Midterms
At least 15 AI-generated political ads have run since November. There is no federal law against any of it. Republicans are using the technology more than Democrats. And now a $100 million pro-AI PAC is launching to shape the outcome.
What's Happening
A Texas Democratic Senate candidate appeared in a video that looked like a formal campaign address. He stood in front of a Texas flag, looking directly into the camera, and said: "Radicalized white men are the greatest domestic terrorist threat in our country." He never filmed that video. The clip was created entirely by artificial intelligence, produced by the National Republican Senatorial Committee, using old social media posts as the script.
The ad ran. It spread. And in the lower right-hand corner, in small, easy-to-miss type, were the words "AI generated."
That ad is one of at least 15 AI-generated campaign ads that have run in American elections since November 2025, according to NBC News. They've appeared in state, local, and federal races — from school board campaigns in Texas to the Massachusetts governor's race to the New York City mayoral contest. According to a Reuters review of publicly available ads, Republicans are using the technology more frequently than Democrats this cycle.
What the Ads Actually Do
The spectrum of AI use in political advertising ranges from the relatively benign to the potentially election-altering:
Voice cloning: In Massachusetts, the campaign of Republican gubernatorial primary candidate Brian Shortsleeve created a radio ad that used an AI-generated version of Democratic Governor Maura Healey's voice, making her appear to say things she never said about the state economy. The ad contained no explicit AI disclaimer — only a caption describing it as what her ads would sound like "if she was honest."
Deepfake video: The NRSC's ad depicting Texas Democratic Senate candidate James Talarico is one of three recent national Republican ads using deepfake technology. Separately, a campaign backing Republican Representative Mike Collins of Georgia deployed a deepfake video of Democratic Senator Jon Ossoff appearing to say: "I just voted to keep the government shut down. They say it would hurt farmers, but I wouldn't know. I've only seen a farm on Instagram." Ossoff's campaign declined to comment.
Cartoon and parody manipulation: Shortsleeve's campaign also released AI videos of Healey, one depicting her as the Grinch and another showing her hissing with red eyes. Neither contained an explicit AI disclaimer.
AI chatbot phone banks: Campaigns in Pennsylvania have used AI-generated voices to conduct automated voter outreach, with the AI mimicking candidate voices in interactive conversations with voters.
Among Democrats, California Governor Gavin Newsom has used AI-generated videos most visibly — primarily to mock President Trump — but national Democratic campaign committees have not yet deployed deepfake ads in midterm races to the extent national Republican groups have.
The Legal Vacuum
There is no federal law restricting the use of artificial intelligence in political advertising. The Federal Election Commission has not finalized any rules on the subject. What exists instead is a patchwork of state-level legislation that is largely untested in court.
As of March 2026, between 26 and 28 states (counts vary) have passed some form of legislation addressing AI use in political ads, according to the consumer advocacy group Public Citizen. Most of those laws require disclosure rather than prohibition — meaning campaigns can use deepfakes as long as they include a disclaimer somewhere in the ad. What constitutes adequate disclosure, and how prominently it must appear, varies by state.
Pennsylvania is currently considering a bill that would allow candidates to sue opposing campaigns, political parties, or PACs for "knowingly and intentionally" distributing deepfakes within 90 days of an election — unless the AI use is explicitly disclosed. The bill has not passed.
Social media platforms Meta and X have policies requiring labels on certain AI-generated political content, but both companies have eliminated professional fact-checking operations in favor of user-generated community notes systems, which operate more slowly and inconsistently.
The White House has itself released what Reuters describes as "scores of AI-generated videos and gaming-inspired memes" on official social media — depicting protesters, hyping the Iran war, and disparaging political opponents.
Do These Ads Work?
The evidence suggests yes. A 2025 peer-reviewed study published in the Journal of Creative Communications found that people struggle to identify deepfake videos, and that their political opinions are meaningfully influenced by the misinformation those videos contain.
The cost barrier that once limited sophisticated political advertising is collapsing. A professionally produced campaign ad has traditionally cost anywhere from roughly $1,000 for a simple spot to orders of magnitude more, depending on production complexity, talent, and distribution. AI-generated content can achieve comparable visual results at a fraction of that cost — putting it within reach not just of well-funded national committees, but of local campaigns with limited budgets.
"Anytime generative AI is used to create messaging or imagery that is misleading, I hope we can all agree that's a negative thing," said Mark Jablonowski, CEO of DSPolitical, a progressive advertising firm. "When you're trying to be deceitful or have something that never existed, that's a big issue."
The $215 Million AI Lobby That Wants to Keep It Unregulated
As AI deepfakes spread through campaigns, a parallel financial effort is underway to ensure AI remains deregulated — and to elect candidates who support that position.
A new political organization called Innovation Council Action announced on March 29 that it plans to spend at least $100 million in the 2026 midterms, according to the New York Times and Axios. The group is led by a former Trump administration official and focuses explicitly on promoting AI deregulation as a political agenda item.
It is not alone. Leading the Future, another pro-AI industry group, has reported raising $50 million from technology figures including Greg Brockman (OpenAI co-founder), Joe Lonsdale (Palantir co-founder), and Marc Andreessen (venture capitalist). Meta is backing a separate super PAC expected to spend approximately $65 million, focused on state-level races.
That is roughly $215 million in disclosed AI-industry midterm spending — dedicated in part to ensuring the regulatory environment for AI in politics stays exactly as permissive as it currently is.
The spending comes as John F. Kennedy's grandson Jack Schlossberg, running in a New York House race, has explicitly positioned himself as "the only candidate who won't take money from Super PACs, corporate PACs, or big AI companies" — a signal that AI industry funding has become a visible enough issue that candidates are running against it.
What's at Stake in November
The 2026 midterm elections will determine which party controls Congress for the final two years of President Trump's term. Democrats are currently favored to capture a majority in the U.S. House of Representatives but face longer odds in the Senate, where the map is unfavorable.
The Iran war — now in its second month — has dominated political conversation. Trump's approval ratings have held steady overall, but an AP-NORC poll released this week found 59% of Americans believe U.S. military action in Iran has been excessive, and 45% are worried about affording gasoline. Democrats have capitalized on both the No Kings protests and anti-war sentiment.
Into this environment, AI-generated political content is arriving without federal guardrails at a moment of unusually high political polarization and low institutional trust. A March 2026 NBC News poll found only 26% of Americans view AI favorably — less popular than ICE, Trump, and the Republican Party — while 56% said they had used AI in the past two months.
The combination — widespread adoption, deep distrust, and political weaponization — is what experts have described as uniquely dangerous heading into a contested midterm cycle.
The Numbers
- At least 15 AI-generated campaign ads have run in U.S. elections since November 2025 (NBC News count)
- 26–28 states have passed AI-in-politics legislation; most require disclosure only, not prohibition
- 0 federal laws currently restrict AI use in political advertising
- $100 million+ — Innovation Council Action's planned 2026 midterm spend (announced March 29)
- $50 million — Leading the Future's reported fundraising from tech figures (Brockman, Lonsdale, Andreessen)
- $65 million — Meta-backed super PAC expected midterm spend
- 26% — Americans who view AI favorably (NBC News, March 2026 poll)
- 56% — Americans who used AI in the past two months (same poll)