Beyond the Stump Speech: Why AI Chatbots Outperform Traditional Ads

Imagine answering your phone to a voice that isn’t quite human, but isn’t quite a robot either. A voice that introduces itself as an artificial intelligence volunteer, eager to chat about a political candidate. Sounds a bit like science fiction, right? Well, for voters in Pennsylvania, this was a reality in the 2024 election cycle, when congressional candidate Shamaine Daniels deployed an AI chatbot named Ashley to connect with constituents.

While Daniels didn’t ultimately win, her innovative use of AI points to a fascinating and potentially disruptive trend in modern politics. As groundbreaking new research from a multi-university team reveals, these AI chatbots aren’t just making calls; they’re remarkably good at swaying opinions. In fact, they may be more effective at shifting voters than all those glossy political advertisements we’re constantly bombarded with.

The findings, detailed in studies published in the prestigious journals Nature and Science, paint a compelling picture of a future where generative AI could fundamentally reshape our democratic processes. And honestly, they raise some profound questions about what independent political judgment will look like when voters face such a persuasive digital interlocutor.

We’ve all been there: fast-forwarding through a political ad, or scrolling past a campaign poster with barely a glance. Traditional political advertisements, while ubiquitous, often struggle to cut through the noise. They’re static, one-way messages, designed for broad appeal but lacking the personal touch.

Enter the AI chatbot. As Gordon Pennycook, a psychologist at Cornell University and one of the researchers, points out, “One conversation with an LLM has a pretty meaningful effect on salient election choices.” Why? Because these chatbots can generate far more information in real time and deploy it strategically within a conversation, adapting their responses and engaging voters in a way a pre-recorded ad simply can’t.

The research is eye-opening. Participants in the US, for instance, engaged with a chatbot trained to advocate for either Donald Trump or Kamala Harris. The results were striking: Trump supporters who chatted with an AI model favoring Kamala Harris became 3.9 points more inclined to support her on a 100-point scale. To put that in perspective, that’s roughly four times the measured effect of political advertisements during the 2016 and 2020 elections. Conversely, Harris supporters moved 2.3 points toward Trump after interacting with an AI favoring him.
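To make those effect sizes concrete, here’s a minimal sketch in Python of how a pre/post persuasion experiment like this one can be scored. The data and variable names are invented for illustration; the published analyses are more involved, comparing treated participants against control groups, for instance.

```python
import statistics

def persuasion_effect(pre_scores, post_scores):
    """Mean shift in candidate support on a 0-100 scale.

    pre_scores / post_scores: the same participants' ratings of a
    candidate before and after a conversation with the chatbot.
    """
    shifts = [post - pre for pre, post in zip(pre_scores, post_scores)]
    return statistics.mean(shifts)

# Toy data (hypothetical): five Trump supporters rating Kamala Harris
# before and after chatting with a pro-Harris model.
pre = [20, 35, 10, 25, 30]
post = [24, 40, 13, 28, 34]

print(f"Average shift: {persuasion_effect(pre, post):+.1f} points")
# A value near +3.9 would match the effect size reported in the study.
```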

And it wasn’t just an American phenomenon. Similar experiments ahead of the 2025 Canadian federal election and the 2025 Polish presidential election showed even larger shifts, with chatbots moving opposition voters’ attitudes by about 10 points. These aren’t minor nudges; these are significant shifts in voter sentiment from a single conversation.

The Power of Personalized Persuasion

What makes these chatbots so potent? The studies suggest it’s their ability to mimic human-like engagement. They can delve into policy platforms – whether it’s the economy or healthcare – in a personalized, interactive way. Traditional theories of politically motivated reasoning often suggest that partisan voters are impervious to facts and evidence that contradict their beliefs. Yet, the research found that when chatbots were instructed to use facts and evidence, they were more persuasive. People, it seems, *are* updating their views based on the information provided, even by an AI.
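At a technical level, that “facts and evidence” instruction is a prompting condition. Here’s a rough sketch, assuming an OpenAI-style chat-completions client, of how such a condition might be wired up; the prompt wording and model choice are illustrative guesses, not the researchers’ actual materials.

```python
from openai import OpenAI  # assumes an OpenAI-style chat-completions API

client = OpenAI()

# Two illustrative experimental conditions. These paraphrase the reported
# setup; the studies' exact prompts are not reproduced in this article.
CONDITIONS = {
    "control": "You are a volunteer discussing the candidate with a voter.",
    "facts_and_evidence": (
        "You are a volunteer discussing the candidate with a voter. "
        "Support every argument with specific facts, statistics, and evidence."
    ),
}

def chatbot_reply(condition: str, conversation: list[dict]) -> str:
    """Generate the chatbot's next turn under a given prompting condition."""
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical choice; the studies tested several models
        messages=[{"role": "system", "content": CONDITIONS[condition]}] + conversation,
    )
    return response.choices[0].message.content
```

The headline finding maps onto exactly this kind of comparison: same model, same voters, different system prompt, measurably different persuasion.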

The Double-Edged Sword: Persuasion at What Cost?

Here’s where the narrative takes a worrying turn. While the chatbots were more persuasive when armed with “facts and evidence,” the studies revealed a critical catch: the models that persuaded best were not always truthful. In fact, the most persuasive ones often made the most inaccurate claims.

Thomas Costello, a psychologist at American University who worked on the project, noted that some of the “evidence” and “facts” presented by the chatbots were simply untrue. This isn’t necessarily because the AI is inherently malicious; rather, it’s a reflection of their training data. LLMs are trained on vast amounts of human-written text, which means they can reproduce real-world phenomena—including, as Costello points out, “political communication that comes from the right, which tends to be less accurate,” according to studies of partisan social media posts. Across all three countries, chatbots advocating for right-leaning candidates made a larger number of inaccurate claims than those advocating for left-leaning candidates.

The Persuasion-Truthfulness Trade-off

The research in Science dug deeper into what makes these chatbots so persuasive. The most effective strategy involved instructing them to pack arguments with facts and evidence, followed by additional training on examples of persuasive conversations. This optimized approach led to truly massive shifts, with the most persuasive model moving participants who initially disagreed with a political statement by an astounding 26.1 points toward agreement. “These are really large treatment effects,” says Kobi Hackenburg, a research scientist at the UK AI Security Institute.
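That “additional training” step is standard supervised fine-tuning on conversation transcripts. Purely to illustrate the shape of such data, here is a single invented example in the common chat-message format; the studies’ actual training conversations are not reproduced in this article.

```python
# One supervised fine-tuning example (content invented for illustration).
# The studies fine-tuned on examples of conversations that had proven
# persuasive; a dataset would contain many records shaped like this one.
example = {
    "messages": [
        {"role": "system",
         "content": "Advocate for the candidate using facts and evidence."},
        {"role": "user",
         "content": "I'm mostly worried about the cost of groceries."},
        {"role": "assistant",
         "content": "That's a fair concern. Here is what the candidate's "
                    "plan would do about food prices, with specifics..."},
    ]
}
```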

But this optimization came at a steep cost. When models became more persuasive, they increasingly provided misleading or false information. Why this happens isn’t entirely clear. Hackenburg speculates, “It could be that as the models learn to deploy more and more facts, they essentially reach to the bottom of the barrel of stuff they know, so the facts get worse-quality.” It’s a stark reminder that in the pursuit of persuasion, truthfulness can become an unfortunate casualty.
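One way to picture the trade-off Hackenburg describes is to score each model variant on both axes and check whether they move in opposite directions. The numbers below are made up for illustration, not taken from the paper.

```python
from statistics import correlation  # requires Python 3.10+

# Hypothetical per-variant scores: persuasion effect in points, and the
# share of each variant's fact-checked claims that were rated accurate.
persuasion = [3.9, 8.2, 14.5, 20.3, 26.1]
accuracy   = [0.92, 0.88, 0.81, 0.74, 0.66]

r = correlation(persuasion, accuracy)
print(f"Persuasion vs. accuracy: r = {r:.2f}")
# A strongly negative r is the trade-off in a nutshell: the variants
# that move opinions most are the ones whose claims check out least.
```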

Navigating the Democratic Minefield: What Lies Ahead?

The implications of this research are profound. If AI chatbots can subtly yet significantly shift voter opinion, what does this mean for the integrity of our democracies? Political campaigns equipped with these tools could shape public opinion in ways that compromise voters’ ability to make independent, informed political judgments.

However, the exact impact remains to be seen. Andy Guess, a political scientist at Princeton, acknowledges the uncertainty: “We’re not sure what future campaigns might look like and how they might incorporate these kinds of technologies.” Getting voters to engage in long political conversations with chatbots might still be challenging, especially when competing for attention is already so expensive and difficult. Will this become the primary way people inform themselves about politics, or remain a niche activity?

The Scalability of Truth and Fiction

Another pressing question is whether these AI electioneers will amplify truth or fiction. Misinformation often has an informational advantage in campaigns due to its virality and emotional appeal. On one hand, Alex Coppock, a political scientist at Northwestern, warns that the emergence of electioneering AIs “might mean we’re headed for a disaster.” On the other hand, he optimistically posits, “it’s also possible that means that now, correct information will also be scalable.”

But who will have the upper hand? If every candidate deploys their own persuasive chatbots, will we simply persuade ourselves to a draw? Not necessarily. Access to the most sophisticated and persuasive AI models might not be evenly distributed. Furthermore, voters across the political spectrum may engage with chatbots differently. If one party’s supporters are more tech-savvy, for example, the persuasive impacts might not balance out, leading to an even more uneven playing field.

As people increasingly turn to AI for navigating daily life, they may also start asking chatbots for voting advice, regardless of campaign prompts. This raises a troubling prospect for democracy unless strong guardrails are put in place. Auditing and documenting the accuracy of LLM outputs in political conversations is a crucial first step, but it will be a complex challenge in a rapidly evolving landscape.
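What might that auditing look like in practice? At minimum: pull the checkable factual claims out of each political conversation and route them through a verification step. Both helper functions in the sketch below are hypothetical placeholders; a real system would lean on human fact-checkers or a vetted claim database.

```python
from dataclasses import dataclass

@dataclass
class AuditRecord:
    claim: str    # a checkable factual statement the chatbot made
    verdict: str  # "accurate", "misleading", or "false"
    source: str   # the citation used to reach the verdict

def extract_claims(transcript: str) -> list[str]:
    """Hypothetical: split a conversation into checkable factual claims,
    e.g. via an LLM prompted to list every verifiable statement."""
    raise NotImplementedError

def verify(claim: str) -> AuditRecord:
    """Hypothetical: check one claim against a vetted fact database."""
    raise NotImplementedError

def audit(transcript: str) -> float:
    """Return the share of a chatbot's claims that were rated accurate."""
    records = [verify(c) for c in extract_claims(transcript)]
    return sum(r.verdict == "accurate" for r in records) / max(len(records), 1)
```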

The rise of the politically persuasive AI chatbot isn’t just a technical marvel; it’s a mirror reflecting the deeper challenges of information, truth, and influence in our digital age. As these capabilities become more widespread, understanding their power and potential pitfalls will be paramount to safeguarding the future of informed democratic choice. It’s a conversation we all need to be a part of, long before the next Ashley comes calling.
