Beyond Deepfakes: The Dawn of Active AI Persuasion

The phone rang in homes across New Hampshire this past January, carrying a familiar voice. It was Joe Biden, or so it sounded, urging Democrats to “save your vote” by skipping the primary. The catch? It wasn’t him. That call was a fake, a digital ghost in the machine, generated by artificial intelligence. While the immediate reaction might be to scoff at such an obvious hoax, this incident was less a fluke and more a subtle tremor foreshadowing a seismic shift in our political landscape. The era of AI persuasion in elections isn’t just coming; it’s already here, and it’s far more sophisticated than a simple robocall.
For a while now, our fears about AI in politics have largely revolved around deepfakes – those hyper-realistic, yet utterly fabricated, videos or audio clips designed to put words in someone’s mouth or actions on their screen. And rightly so; tools like OpenAI’s Sora are making it frighteningly easy to create convincing synthetic media. We’ve all seen the headlines about AI-generated messages from politicians or even entire fake news clips flooding our feeds.
But here’s the crucial insight: the imitation game is only half the battle. The deeper, more insidious threat isn’t just that AI can imitate people; it’s that AI can actively *persuade* them. Forget merely mimicking a voice: modern systems hold conversations, read emotional cues, and tailor their tone with uncanny precision. Early research suggests this kind of interactive persuasion can far exceed the impact of traditional political advertising.
Imagine a political campaign not just crafting a message, but deploying a coordinated persuasion machine. One AI writes the message, another generates the perfect visuals to accompany it, and yet another distributes it across platforms, quietly observing which arguments resonate most with specific demographics. No human intervention needed. This isn’t science fiction; it’s a terrifyingly plausible reality, as AI can now direct other AIs to generate the most convincing content for each target.
The Alarming Accessibility and Affordability of Influence
A decade ago, influencing public opinion online required an army of people – call them “troll farms” or “meme brigades.” It was a labor-intensive, often visible, effort. Today, that kind of work can be automated, cheaply and invisibly. The very same technology that powers your customer service chatbots or helps your kids with their homework can be repurposed to subtly nudge political opinions or amplify a government’s preferred narrative.
And it’s not just about ads or robocalls. This influence can be woven into the very fabric of our digital lives: social media feeds, language learning apps, dating platforms, or even voice assistants. Malicious actors could leverage existing AI tools through their APIs, or build entirely new apps with persuasion baked in from the start. It’s a quiet, pervasive form of influence that bypasses traditional gatekeepers and often goes unnoticed.
Perhaps the most chilling aspect is the affordability. For less than a million dollars, anyone could generate personalized, conversational messages for every registered voter in America. Think about that. Assuming just ten brief exchanges per person at current API rates for advanced models, the total cost of such a vast reach comes out shockingly low (a back-of-envelope sketch follows below). The roughly 80,000 voters across Michigan, Pennsylvania, and Wisconsin whose margins decided the 2016 election? They could be targeted for less than $3,000. This isn’t just about big state actors anymore; the barrier to entry for large-scale influence has plummeted.
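To see why those numbers hold up, here is a minimal back-of-envelope sketch. Every input is an illustrative assumption rather than a quote from any particular provider: the voter count approximates public U.S. registration figures, the per-exchange token counts are guesses at short conversational turns, and the per-million-token prices are in the rough range of today’s budget-tier model APIs. The `campaign_cost` helper is hypothetical, written only for this estimate.

```python
# Back-of-envelope cost of conversational outreach at national scale.
# All inputs below are illustrative assumptions, not vendor quotes.

REGISTERED_VOTERS = 168_000_000   # approximate U.S. registered voters (assumption)
SWING_VOTERS = 80_000             # rough 2016 swing-state margin cited in the text
EXCHANGES_PER_VOTER = 10          # brief back-and-forth messages per person
TOKENS_IN_PER_EXCHANGE = 200      # assumed prompt + context tokens
TOKENS_OUT_PER_EXCHANGE = 150     # assumed response tokens

# Assumed API rates in dollars per million tokens, comparable to
# current budget-tier conversational models.
PRICE_IN_PER_M = 0.15
PRICE_OUT_PER_M = 0.60

def campaign_cost(voters: int) -> float:
    """Total API cost in dollars for a conversational campaign."""
    tokens_in = voters * EXCHANGES_PER_VOTER * TOKENS_IN_PER_EXCHANGE
    tokens_out = voters * EXCHANGES_PER_VOTER * TOKENS_OUT_PER_EXCHANGE
    return (tokens_in * PRICE_IN_PER_M + tokens_out * PRICE_OUT_PER_M) / 1_000_000

print(f"Every registered voter: ${campaign_cost(REGISTERED_VOTERS):,.0f}")   # ~ $201,600
print(f"2016 swing voters only: ${campaign_cost(SWING_VOTERS):,.2f}")        # ~ $96.00
```

Under these assumptions the national campaign lands near $200,000 and the swing-voter campaign under $100; even tripling every input keeps both figures comfortably below the thresholds above. That is the point: money is no longer the binding constraint.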
While this challenge looms globally, the stakes for the United States are uniquely high given the scale of its elections and the international attention they command. If we don’t move swiftly, the 2028 presidential election, or even the 2026 midterms, could become a contest won by whoever automates persuasion first.
A Policy Vacuum in a Rapidly Evolving Landscape
Despite the growing evidence of AI’s persuasive power – studies showing chatbots can shift voter attitudes by significant margins, even outperforming human experts – most policymakers in the U.S. have yet to catch up. The focus remains heavily on deepfakes, ignoring the broader, more subtle threat of AI-driven persuasion. It’s like putting a band-aid on a bullet wound.
Contrast this with global efforts. The European Union’s 2024 AI Act, for instance, classifies election-related persuasion as a “high-risk” use case, subjecting such systems to strict requirements. They understand that tools designed to shape political beliefs are fundamentally different from those optimizing campaign logistics.
Here in the U.S., we have drawn no meaningful lines. There are no binding rules on what constitutes a political influence operation, no external standards for enforcement, and no shared infrastructure to track AI-generated persuasion across platforms. Federal and state governments have made gestures – the FEC applying old fraud provisions, the FCC proposing narrow disclosure rules for broadcast ads – but these are piecemeal efforts that leave the vast digital campaigning landscape untouched.
The burden has largely fallen on private companies like Google and Meta, which have adopted their own disclosure policies for AI-generated political ads. But these rules are voluntary, cover only a fraction of content (paid, publicly displayed ads), and say nothing about the unpaid, private persuasion campaigns that could be most impactful. And let’s not forget the rapidly expanding ecosystem of open-source models, which determined actors can download and deploy off-platform, bypassing all these restrictions entirely. Foreign adversaries, already adept at covert influence, are perfectly positioned to supercharge their operations with these capabilities.
Charting a Path Forward: A Real Strategy
Let’s be clear: we don’t need to ban AI from political life entirely. Certain applications could even strengthen democracy. Imagine a well-designed candidate chatbot helping voters understand complex policies or answering questions directly. Research even suggests AI could help reduce belief in conspiracy theories. The goal isn’t prohibition, but protection.
So, what should a real strategy look like? First, we must actively guard against foreign-made political technology with built-in persuasion. This could be anything from a foreign-produced video game echoing political talking points to a social media platform whose algorithm subtly favors certain narratives. We need coordinated efforts among intelligence agencies, regulators, and platforms to spot and address these risks before they become widespread.
Second, the United States needs to lead in shaping the rules around AI-driven persuasion. This means tightening access to computing power for large-scale foreign persuasion efforts – because many actors will rent existing models or lease GPU capacity. It also means establishing clear technical standards for how AI systems generating political content should operate, especially during sensitive election periods. And domestically, we need to grapple with what kinds of disclosures apply to AI-generated political messaging, carefully navigating First Amendment concerns.
Finally, we need a robust foreign policy response. Adversaries will inevitably try to evade safeguards using offshore servers or intermediaries. Multilateral election integrity agreements should codify a basic norm: states that deploy AI to manipulate another country’s electorate risk coordinated sanctions and public exposure. This requires shared monitoring infrastructure, aligned disclosure standards, and the readiness to conduct coordinated takedowns of cross-border persuasion campaigns. We must treat AI persuasion not as an isolated tech problem, but as a collective security challenge at forums like the G7 and OECD.
The era of AI persuasion isn’t a distant threat; it’s a present fact. While America’s adversaries are preparing, our laws are outdated, our guardrails are too narrow, and oversight is largely voluntary. The last decade was shaped by viral lies and doctored videos. The next will be shaped by something far subtler: messages that sound reasonable, familiar, and just persuasive enough to quietly change hearts and minds, scaled to an unprecedented degree. We need to assess these risks soberly, put real standards in place, and build the infrastructure to enforce them. Because if we wait until we can clearly see it happening, it will already be too late.