Policy | 7/31/2025
Google Backs EU's AI Code but Cautions Against Innovation Slowdown
Google's decision to endorse the EU's General Purpose AI Code of Practice highlights the ongoing tug-of-war between regulation and innovation in the tech world. While the move aligns Google with other AI giants, it comes with warnings about potential stifling of growth.
So, picture this: Google just threw its weight behind the European Union's General Purpose AI Code of Practice. It's a bit like a headline band joining a charity concert: great for the cause, but with some tension in the air about how it will all play out.
This isn’t just a casual agreement; it’s a big deal. Google is joining other heavyweights in the AI game, like OpenAI and, reportedly, Microsoft, in committing to a set of principles meant to guide the industry as it gears up for the EU AI Act, the world’s first comprehensive AI legislation. Imagine a roadmap for AI, but with a few bumps and detours that could slow down the journey.
Now, the GPAI Code of Practice, which Google is signing onto, isn’t just some vague set of guidelines. It was crafted by independent experts after consultation with a wide range of stakeholders. Think of it as a recipe that took a lot of trial and error to perfect. The framework, published in July 2025, aims to help companies get ready for the legally binding EU AI Act, which entered into force in August 2024 and whose rules for general-purpose AI models begin applying in August 2025.
Here’s the kicker: by following this Code, companies can gain some legal certainty and a reduced administrative burden. It’s like getting a VIP pass that might mean fewer inspections and less paperwork. The Code is built around three main pillars:
- Transparency: This means companies need to keep detailed records about their AI models. Imagine having to document every ingredient in your favorite dish and then sharing that recipe with your neighbors.
- Copyright: This pillar is all about making sure companies respect EU copyright laws when they’re gathering training data. It’s like making sure you don’t accidentally use someone else’s playlist for your party without asking.
- Safety and Security: This one’s crucial. It lays out best practices for managing risks tied to powerful AI systems. Think of it as putting up safety nets for a high-wire act—necessary for keeping everyone safe.
When Google’s President of Global Affairs, Kent Walker, announced this decision, he painted a picture of a bright future where AI could boost Europe’s economy by a whopping €1.4 trillion annually by 2034. But hold on a second—he also had some reservations. Walker warned that the AI Act and the Code might actually slow down Europe’s AI development. It’s like being excited about a new roller coaster but then realizing the safety checks might delay the opening.
Google’s concerns are pretty specific. They’re worried about potential changes to existing EU copyright laws, administrative hurdles that could bog down new tech approvals, and transparency requirements that might force them to reveal trade secrets. It’s a bit like a chef being asked to share their secret sauce recipe—nobody wants to give away their competitive edge.
Interestingly, Google isn’t alone in this cautious approach. Other big players like OpenAI and Anthropic are on board with the Code, but there’s a notable absence: Meta, the parent company of Facebook and Instagram. They’ve decided to sit this one out, claiming that the voluntary rules introduce too much legal uncertainty. It’s like a group of friends planning a trip, and one person decides not to go because they think the itinerary is too complicated.
This divide in the tech world shows just how complex the conversation around AI governance is. Some folks see this collaborative approach as a way to shape future regulations positively, while others fear it’s just another layer of red tape.
The EU isn’t just stopping at the Code. They’ve got a whole strategy to position themselves as leaders in AI regulation. The AI Act takes a risk-based approach, meaning stricter rules for systems that could pose significant risks. Plus, there’s the AI Pact, another voluntary initiative encouraging companies to adopt key principles early on. Over 100 companies have jumped on board, showing that there’s a lot of interest in engaging with the EU’s direction.
In conclusion, Google’s decision to endorse the EU’s General Purpose AI Code of Practice is a big step, but it’s not without its complications. It shows a willingness to work with policymakers but also highlights the ongoing tension between safety and innovation. As the EU rolls out these new regulations, everyone’s gonna be watching closely to see if they can strike the right balance. Can they create a responsible AI framework without stifling the innovation that could lead to incredible advancements? Only time will tell!