AI chatbots used inaccurate information to change people's political opinions, study finds
Sources: NBCNews.com and 1 more
- A new UK study involving nearly 80,000 participants and 19 AI models finds that chatbots can significantly shift political opinions.
- Researchers warn persuasiveness may come at the cost of truthfulness in AI outputs.
- Post-training tweaks and higher information density improved a model’s persuasiveness in the study.
- The research team notes that AI's ability to generate information rapidly could allow it to exceed some human persuaders.
- The findings show larger inaccuracies in outputs from newer frontier models like GPT-4.5 compared with older models.
- The UK AI Security Institute and partners collaborated with Oxford, LSE, MIT, and Stanford on the study.
- The study involved participants debating topics such as taxes, immigration, and cost of living.
- The research warns about manipulation risks if AI chatbots are used for political ends in real-world settings.
- Experts acknowledge both risks and legitimate uses of AI persuasion when transparency is applied.
- Data collection for the study concluded before OpenAI released its latest model, GPT-5.1.
- Overall, the research indicates AI's capacity for information generation could outpace human persuasion in some settings.