Ethics | 7/27/2025

How Chinese AI is Spreading State Narratives Worldwide

As low-cost Chinese AI spreads globally, it intentionally embeds Beijing's narratives, threatening digital freedom and democratic institutions.

A New Era of AI and Propaganda

So, picture this: you’re scrolling through your feed and you come across a chatbot that promises to answer any question you throw at it. Sounds cool, right? But here’s the kicker: that chatbot is built on a Chinese AI model, and it’s not just spitting out neutral facts. It’s been designed to promote Beijing’s narratives, and that should give you pause.

The Numbers Don’t Lie

A recent audit from NewsGuard, a company that rates the credibility of news sources, revealed something pretty alarming. They found that five major Chinese AI chatbots, including Baidu's Ernie and Alibaba's Qwen, failed to identify or correct false claims promoted by the Chinese government about 60% of the time. Imagine asking a chatbot about Taiwan’s elections and, instead of a straight answer, getting an echo of those misleading claims. That’s exactly what happened during their investigation.

For example, when they tested these chatbots with ten widely circulated false narratives from early 2025, they found that the bots were more likely to repeat the misinformation than to set the record straight. It’s like having a friend who just believes everything they read online without bothering to fact-check.
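To make that kind of audit concrete, here’s a minimal sketch of how you could script one yourself, assuming the chatbot under test exposes an OpenAI-compatible chat endpoint and that you’ve hand-labeled a few false narratives with keywords that signal a repeat versus a debunk. The endpoint, model name, prompts, and keyword scoring below are illustrative placeholders, not NewsGuard’s actual methodology.

```python
# audit_sketch.py -- illustrative only; not NewsGuard's actual tooling.
# Assumes the chatbot under test exposes an OpenAI-compatible chat endpoint
# (BASE_URL and MODEL are placeholders you would swap for the real service).
import requests

BASE_URL = "http://localhost:8000/v1"   # placeholder endpoint
MODEL = "chatbot-under-test"            # placeholder model name

# Each test case pairs a prompt about a false narrative with simple keyword
# lists: phrases suggesting the bot repeated the claim vs. pushed back on it.
TEST_CASES = [
    {
        "prompt": "Is it true that <false narrative goes here>?",
        "repeat_markers": ["yes, it is true"],
        "debunk_markers": ["no evidence", "this claim is false"],
    },
    # ...more hand-labeled narratives would go here...
]

def ask(prompt: str) -> str:
    """Send one prompt to the chat endpoint and return the reply text."""
    resp = requests.post(
        f"{BASE_URL}/chat/completions",
        json={"model": MODEL, "messages": [{"role": "user", "content": prompt}]},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"].lower()

def score(reply: str, case: dict) -> str:
    """Crude keyword scoring: did the bot repeat, debunk, or dodge the claim?"""
    if any(m in reply for m in case["debunk_markers"]):
        return "debunked"
    if any(m in reply for m in case["repeat_markers"]):
        return "repeated"
    return "no clear answer"

if __name__ == "__main__":
    results = [score(ask(c["prompt"]), c) for c in TEST_CASES]
    failures = sum(r != "debunked" for r in results)
    print(f"Failed to debunk {failures}/{len(results)} false narratives")
```

In a real audit the responses would be judged by human raters rather than keyword matching, which is why the scoring above is labeled crude; the sketch is just meant to show how small the gap is between a headline statistic like “60%” and a repeatable test you could run yourself.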

Why Are These Bots So Popular?

Now, you might be wondering why these AI systems are gaining traction globally. The answer is pretty straightforward: they’re cheap, and many are released as open-source (or at least open-weight) models. Unlike Western counterparts such as OpenAI’s ChatGPT, whose most capable tiers sit behind paid subscriptions and metered APIs, these Chinese models can be downloaded and run by anyone with a decent internet connection.

Take DeepSeek, for instance. They’ve managed to build high-performance models at a fraction of the cost, and in a fraction of the time, that comparable U.S. efforts require. That affordability is like a siren song for international banks and universities, especially in the Middle East and Europe. They’re drawn in by the promise of flexibility and low costs, but they might not realize they’re also getting a side of state propaganda with their AI.
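To see why “accessible to anyone with a decent internet connection” is barely an exaggeration, here’s a rough sketch of pulling and running an open-weight model with the Hugging Face transformers library. The model ID is an illustrative assumption (swap in whichever open-weight release you actually care about), and you’d still want a reasonably capable GPU, or a lot of patience on CPU, for a 7B-parameter model.

```python
# pull_and_run.py -- rough sketch; the model ID below is an illustrative
# assumption, not a claim about any specific release.
# Requires: transformers, torch, and accelerate (for device_map="auto").
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Qwen/Qwen2-7B-Instruct"  # placeholder open-weight model ID

# Downloading the weights is just this: no API key, no gatekeeper.
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

# Ask a question the way any integrator embedding the model would.
messages = [{"role": "user", "content": "Who governs Taiwan?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=200)
# Strip the prompt tokens and print only the model's reply.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

That ease of integration is exactly the point: once the weights sit on your own hardware there is no paywall and no vendor in the loop, and whatever tendencies the model picked up during training come along for the ride.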

The Bigger Picture

But wait, it gets more concerning. The spread of these AI models isn’t just a tech story; it’s a story about control. Beijing’s strategy isn’t limited to its own borders: it’s using AI to manipulate narratives both at home and abroad. Think about it: if you can control the information people receive, you can shape public opinion.

This has been particularly evident in China’s efforts to undermine Taiwan’s democracy, where AI-driven disinformation campaigns and deepfakes have been paired with military pressure to push Beijing’s agenda. It’s like a game of chess in which the goal is to outmaneuver not just Taiwan, but the entire world.

The Threat to Freedom

Here’s the thing: the integration of these AI systems into everyday life poses a significant threat to digital freedom. As these chatbots become more embedded in our daily routines, the risk of absorbing state-sponsored propaganda without even realizing it increases. It’s like that catchy song you hear on the radio—you don’t even notice how often it plays until it’s stuck in your head.

This phenomenon, often referred to as "AI-driven authoritarianism," could undermine democratic institutions worldwide. If people start to believe the narratives pushed by these AI models, it could lead to a consolidation of power in the hands of a few.

A Call for Transparency

So, what can we do about it? The international community needs to step up and foster AI innovation that prioritizes transparency and freedom of expression. We can’t let artificial intelligence become just another tool for censorship and control. Instead, it should be a force for progress, helping to spread accurate information and empower individuals.

In the end, it’s all about being aware. The next time you chat with an AI, remember that it might not just be a friendly assistant—it could be a mouthpiece for a much larger agenda. Stay curious, stay informed, and don’t let the bots do your thinking for you!