Researchers have discovered that AI chatbots can be more effective than traditional political advertisements in influencing voters’ opinions and preferences, according to a recent study. AI chatbots that already display a consistent leftist bias could play a key role in the 2026 midterm elections.
MIT Technology Review reports that a recent collaborative study conducted by researchers from multiple universities has revealed that AI chatbots possess a remarkable ability to sway voters’ political views, even surpassing the impact of conventional political advertisements. The findings, published in the prestigious journals Nature and Science, shed light on the potential of generative AI to reshape the landscape of elections in the near future.
The study involved over 2,300 participants who engaged in conversations with chatbots two months prior to the 2024 US presidential election. The AI models were trained to advocate for either of the top two candidates and demonstrated a surprising level of persuasiveness, particularly when discussing candidates’ policy platforms on crucial issues such as the economy and healthcare.
The results were striking: Donald Trump supporters who interacted with an AI model favoring Kamala Harris shifted 3.9 points toward supporting Harris on a 100-point scale, a shift approximately four times greater than the measured effect of political ads during the 2016 and 2020 elections. Similarly, the AI model championing Trump moved Harris supporters 2.3 points in his direction.
Experiments conducted in the lead-up to the 2025 Canadian federal election and the 2025 Polish presidential election yielded even more pronounced effects, with chatbots shifting opposition voters’ attitudes by around 10 points.
Breitbart News reported in 2024 on an academic study of the 24 most popular LLMs that demonstrated essentially all of them showed leftist political views when tasked with taking political orientation tests:
Results from the study revealed that all tested LLMs consistently produced answers that aligned with progressive, democratic, and environmentally conscious ideologies. The AI models frequently expressed values associated with equality, global perspectives, and “progress.”
To further investigate the phenomenon, Rozado conducted an additional experiment by fine-tuning GPT-3.5. He created two versions: LeftWingGPT, trained on content from left-leaning publications like the Atlantic and the New Yorker, and RightWingGPT, which was fine-tuned using material from right-leaning sources such as National Review and the American Conservative. The experiment demonstrated that RightWingGPT gravitated towards right-leaning regions in the political tests, suggesting that the political leanings of AI models can be influenced by the data used in their training.
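For readers curious about the mechanics, fine-tuning of this kind follows a standard pattern: partisan source text is packaged into training examples and submitted to a fine-tuning endpoint. The sketch below is a minimal illustration using OpenAI's chat fine-tuning file format, not Rozado's actual pipeline; the snippets, prompts, and model names are placeholder assumptions.

```python
import json

# Placeholder snippets standing in for excerpts from partisan publications;
# the actual corpora Rozado used are not reproduced here.
left_texts = ["Example passage in the style of a left-leaning outlet."]
right_texts = ["Example passage in the style of a right-leaning outlet."]

def to_chat_examples(texts, label):
    """Wrap raw passages in the chat format expected by fine-tuning APIs."""
    examples = []
    for text in texts:
        examples.append({
            "messages": [
                {"role": "system", "content": f"You are {label}GPT."},  # hypothetical persona
                {"role": "user", "content": "Comment on current politics."},
                {"role": "assistant", "content": text},
            ]
        })
    return examples

def write_jsonl(examples, path):
    """Fine-tuning services typically ingest one JSON object per line."""
    with open(path, "w") as f:
        for ex in examples:
            f.write(json.dumps(ex) + "\n")

write_jsonl(to_chat_examples(left_texts, "LeftWing"), "leftwing.jsonl")
write_jsonl(to_chat_examples(right_texts, "RightWing"), "rightwing.jsonl")

# With the files prepared, a job would then be launched against a live API,
# roughly like (requires an API key; shown for orientation only):
#   from openai import OpenAI
#   client = OpenAI()
#   f = client.files.create(file=open("rightwing.jsonl", "rb"), purpose="fine-tune")
#   client.fine_tuning.jobs.create(training_file=f.id, model="gpt-3.5-turbo")
```

The point of the experiment survives the simplification: nothing about the procedure is ideological, so whichever corpus goes into the training file determines which direction the resulting model leans.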
A separate study published in Science delved deeper into the factors contributing to the chatbots’ persuasiveness. Researchers deployed 19 large language models (LLMs) to interact with nearly 77,000 UK participants on over 700 political issues. They discovered that the most effective way to enhance the models’ persuasiveness was to instruct them to include facts and evidence in their arguments and provide additional training using examples of persuasive conversations. The most persuasive model shifted participants who initially disagreed with a political statement by an impressive 26.1 points toward agreeing.
However, as the models became more persuasive, they increasingly provided misleading or false information, raising concerns about the potential consequences for democracy. Political campaigns employing AI chatbots could shape public opinion in ways that compromise voters’ ability to make independent political judgments.
Breitbart News Social Media Director Wynton Hall, author of the upcoming book Code Red: The Left, the Right, China, and the Race to Control AI, underscores the importance of understanding how AI will be used to influence elections in the United States:
We’ve long known that LLMs are not neutral and overwhelmingly exhibit a left-leaning political bias. What this study confirms is that AI chatbots are also uniquely adept as political persuasion machines, and are willing to hallucinate misinformation if that’s what it takes to sway human minds. When you combine bias, AI hallucinations, and Ciceronian-style persuasiveness, that is clearly a wakeup call for conservatives heading into the midterm and presidential elections.
Read more at MIT Technology Review here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.