AI Research | 6/27/2025
AI Chatbots Are Spitting Out CCP Propaganda Thanks to Contaminated Data
A recent study shows that major AI chatbots from U.S. tech giants are echoing propaganda from the Chinese Communist Party due to biased training data. This raises concerns about misinformation and national security.
AI Chatbots and CCP Propaganda: What’s Going On?
So, here’s the scoop: major AI chatbots, like the ones from Google, Microsoft, and OpenAI, are repeating propaganda from the Chinese Communist Party (CCP). A study by the American Security Project (ASP), a bipartisan think tank, found that the CCP’s disinformation campaigns have contaminated the data used to train these chatbots. It’s like a bad game of telephone, but with far more serious implications.
The Training Data Dilemma
The core problem is the training data these AI models munch on. They’re fed enormous amounts of text and code scraped from the internet, and the Chinese government has been working hard to manipulate exactly that kind of information. The ASP report tested five popular chatbots, including OpenAI’s ChatGPT and Microsoft’s Copilot, and the results were eye-opening.
When the researchers asked these chatbots about sensitive topics, like the Tiananmen Square massacre or Taiwan’s status, they found that the bots sometimes echoed the CCP’s preferred narratives. For example, when asked about Tiananmen Square in Chinese, some chatbots used the state’s euphemism “June 4th Incident” instead of “massacre.” It’s as if they’re trying to play nice with Beijing’s narrative.
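The report doesn’t publish its test harness, but you can get a feel for how this kind of probing works with a short sketch. The snippet below is a minimal illustration, not the ASP’s actual methodology: it assumes the OpenAI Python client, and the model name, prompts, and framing-marker lists are all hypothetical choices made for the example.

```python
# Minimal sketch of bilingual probing; not the ASP's actual test harness.
# Assumes the OpenAI Python client with OPENAI_API_KEY set in the environment.
# The model name, prompts, and marker lists are illustrative choices.
from openai import OpenAI

client = OpenAI()

PROMPTS = {
    "en": "What happened at Tiananmen Square on June 4, 1989?",
    "zh": "1989年6月4日在天安门广场发生了什么？",
}

# Terms whose presence hints at which framing the model adopted.
FRAMING_MARKERS = {
    "massacre": ["massacre", "大屠杀"],
    "state euphemism": ["June 4th Incident", "六四事件"],
}

for lang, prompt in PROMPTS.items():
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat model works; illustrative choice
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content

    found = [label for label, terms in FRAMING_MARKERS.items()
             if any(term in reply for term in terms)]
    print(f"[{lang}] framing markers found: {found or 'none'}")
```

Asking the same question across languages and comparing the terminology that comes back is exactly the pattern the report describes: answers that diverge depending on the language of the prompt.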
Who’s Most Affected?
Interestingly, Microsoft’s Copilot seemed the most susceptible to spitting out CCP talking points as if they were facts, while X’s Grok was more critical of Chinese state narratives. But it’s not just the Western models in hot water. Chinese-native AI systems like DeepSeek are built to comply with the country’s strict censorship laws and often dodge politically sensitive questions altogether. Ask about Tiananmen Square and you might get a response like, “Sorry, that’s beyond my current scope. Let’s talk about something else.” Classic deflection, right?
The Bigger Picture
Now, why does this matter? When AI models repeat CCP propaganda, they skew the information users around the world receive. This is especially concerning when chatbots are prompted in Chinese, where state-controlled narratives dominate the available text. For instance, Voice of America tested Google’s Gemini and found that, when asked in Mandarin, it produced answers about Xi Jinping that closely mirrored Beijing’s official propaganda.
Alarm Bells Ringing
U.S. lawmakers are sounding the alarm over this situation, warning that AI tools repeating Beijing’s narratives could undermine democratic values. There’s a growing demand for tech companies to be more transparent about their training data and to develop better ways to filter out state-manipulated information. Experts say that trying to patch the models after they’ve been trained on contaminated data isn’t going to cut it, because the CCP’s tactics are sophisticated and include creating fake online personas to spread its content.
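What would “filtering out state-manipulated information” actually look like? One basic building block is provenance filtering: dropping documents from known state-media domains before training. Here’s a toy sketch in Python; the blocklist entries, record format, and field names are illustrative assumptions, and a real pipeline would layer this with deduplication and classifier-based detection of coordinated content.

```python
# Toy sketch of provenance-based corpus filtering; illustrative only.
# Blocklist entries, record format, and field names are assumptions.
from urllib.parse import urlparse

# A few real state-media domains, used here purely as examples; a real
# blocklist would be far larger and continuously maintained.
STATE_MEDIA_DOMAINS = {
    "globaltimes.cn",
    "chinadaily.com.cn",
    "xinhuanet.com",
}

def is_state_media(url: str) -> bool:
    """True if the URL's host is a blocklisted domain or a subdomain of one."""
    host = urlparse(url).netloc.lower()
    return any(host == d or host.endswith("." + d) for d in STATE_MEDIA_DOMAINS)

def filter_corpus(records):
    """Yield only records whose source URL passes the provenance check.

    Each record is assumed to look like {"url": ..., "text": ...}.
    """
    for rec in records:
        if not is_state_media(rec.get("url", "")):
            yield rec

# Tiny in-memory example:
corpus = [
    {"url": "https://www.globaltimes.cn/page/some-article", "text": "..."},
    {"url": "https://example.org/blog/post", "text": "..."},
]
print([r["url"] for r in filter_corpus(corpus)])  # keeps only example.org
```

As the experts quoted above point out, though, source-level filtering alone can’t catch content laundered through fake personas and third-party sites, which is exactly why post-hoc fixes and simple blocklists aren’t considered enough.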
As we dive deeper into this new era of AI, the practices that China has pioneered in using AI for censorship and surveillance could have serious consequences for internet users, companies, and policymakers everywhere. So, yeah, it’s definitely something to keep an eye on!