New Studies Reveal How Easily AI Can Sway Voters—And Why That’s Bullish for Digital Assets
AI isn't just writing code—it's rewriting political playbooks. Fresh research confirms what many suspected: targeted AI-generated content can dramatically shift voter sentiment. The digital persuasion machine is here, and it operates at a scale and speed that makes traditional campaigning look like a town crier.
The Mechanics of Influence
Forget broad-stroke propaganda. Modern systems analyze individual data points—social footprints, purchase histories, even typing patterns—to craft hyper-personalized narratives. They test thousands of message variants in real-time, deploying only the most potent. The result? A persuasion engine that bypasses critical thinking and speaks directly to subconscious biases.
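The "test thousands of variants, deploy only the most potent" loop described above is essentially a multi-armed bandit problem. Here is a minimal epsilon-greedy sketch of how such a selection engine could work; the variant names and the persuasion-rate metric are hypothetical illustrations, not details from the studies:

```python
import random

def pick_variant(stats, epsilon=0.1):
    """Epsilon-greedy choice: usually exploit the best-performing
    message variant, occasionally explore another at random."""
    if random.random() < epsilon:
        return random.choice(list(stats))
    # Exploit: highest observed persuasion rate (successes / trials).
    return max(stats, key=lambda v: stats[v][0] / max(stats[v][1], 1))

def record(stats, variant, persuaded):
    """Update (successes, trials) counts after showing a variant."""
    s, n = stats[variant]
    stats[variant] = (s + (1 if persuaded else 0), n + 1)

# Hypothetical message variants with (successes, trials) tallies.
stats = {"economy_msg": (0, 0), "security_msg": (0, 0), "health_msg": (0, 0)}
record(stats, "economy_msg", True)
record(stats, "security_msg", False)
chosen = pick_variant(stats, epsilon=0.0)  # pure exploitation
```

At scale, the same loop runs continuously: every impression updates the tallies, and the engine converges on whichever framing moves its audience most.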
A New Frontier for Digital Sovereignty
This isn't just a political story; it's a financial wake-up call. When centralized platforms control both the data and the AI that manipulates it, they wield unprecedented power. It’s the ultimate argument for decentralized systems where influence isn't a product sold to the highest bidder. Think of it as an ad for self-custody—your attention and your assets deserve the same protection.
The finance jab? Legacy institutions are still trying to regulate last decade's social media ads while AI is building the next generation of behavioral futures markets. They're trading paper while the algorithm mines your mind.
The path forward demands tools that can't be weaponized by a single entity. The same cryptographic principles securing digital asset transactions—transparency, verifiability, user control—are becoming essential for protecting democratic discourse. The market for digital integrity is about to go parabolic.
In brief
- Studies show that AI chatbots can shift voting preferences by several points, up to around 15, after just a few exchanges.
- Their persuasive power relies mainly on public-policy arguments, but the more persuasive they are, the more factual errors and biases they produce.
- By conversing directly with voters, these chatbots shape how candidates' programs are perceived and quietly influence choices.
When a Few AI Messages Are Enough to Move a Vote
Researchers at Cornell University and the UK AI Security Institute tested a very simple setup: a voter, a candidate, and a political chatbot. First, participants rated a candidate. Then they chatted with an AI chatbot programmed to defend that candidate. Finally, they rated the candidate again. On the surface, nothing extraordinary: a brief conversation, a few arguments, a revised rating.
The results, however, are anything but trivial. In the United States, before the 2024 presidential election, a simple exchange of this kind was enough to shift a candidate’s rating by several points, especially when the bot supported the opposite camp to the participant’s initial preference.
The same pattern appears in Canada and Poland with shifts of up to about ten points on a 0 to 100 scale.
Above all, the effect is not symmetrical: a chatbot advocating for a candidate the participant already likes reinforces existing convictions, but one defending the "wrong" camp sometimes manages to crack their resistance. In other words, AI does not just comfort the convinced; it begins to undermine the certainties of opponents.
The More AI Talks Politics, the More Persuasive — and Error-Prone — It Becomes
Studies agree on one key point: the most persuasive messages are those centered on public policies (economic measures, taxation, security, or health) rather than on personality or storytelling. When the chatbot presents numerical arguments, program comparisons, and references to facts, real or supposed, the impact on voting intentions is markedly stronger.
But this power has a cost. Researchers note a harsh trade-off between persuasion and accuracy: the most convincing models are also those producing the highest number of inaccurate statements.
In several experiments, bots favoring right-wing candidates generated more errors or misleading claims than those aligned with left-wing candidates, revealing an imbalance in what the models “really know.”
Meanwhile, the second study, conducted on 19 AI language models and nearly 77,000 adults in the UK, shows that what matters is not so much model size as how the model is guided through prompts. Instructions that encourage models to introduce new information significantly increase persuasive power but again degrade factual accuracy. More arguments, more impact, less truth.
In this context, the rise of AI is no longer limited to political chatbots. Tether has just bet 70 million euros on Generative Bionics to accelerate the development of humanoid AIs, illustrating how these systems, virtual or embodied, are expected to interact ever more with the public and influence opinion at scale.