Industry News | 7/4/2025

UK Businesses Dive into AI but Forget Cybersecurity

As UK businesses rush to adopt AI, many are neglecting essential cybersecurity measures, exposing themselves to serious risks. A recent study reveals a troubling gap between AI innovation and the necessary defenses against cyber threats.

So, picture this: you’re at a café, sipping your favorite brew, and you overhear a couple of business folks chatting excitedly about how they’re jumping on the AI bandwagon. Sounds great, right? But here’s the kicker: while they’re all about innovation, they’re kinda ignoring the elephant in the room: cybersecurity.

A recent study by CyXcel, a cybersecurity consultancy, paints a pretty alarming picture. It turns out that a significant chunk of UK businesses is diving headfirst into AI without a solid plan for managing the risks that come with it. I mean, can you imagine driving a fancy new car without knowing how to steer? That’s what’s happening here.

The Numbers Don’t Lie

Let’s break it down. About a third of UK organizations see AI as one of their top three risks. But here’s the jaw-dropper: 31% of them don’t even have any AI governance policies in place. It’s like saying, “Yeah, I know I should probably wear a helmet while biking, but I’m just gonna wing it.” And nearly 29% of these businesses are only just now getting around to establishing their first AI risk strategy. Talk about a reactive approach!

This lack of foresight is creating a playground for cybercriminals. With data breaches and operational disruptions looming large, it’s not just about losing money; it’s about reputational damage that can stick around longer than a bad haircut.

The AI Gold Rush

Now, let’s zoom out a bit. The UK has the largest AI market in Europe, valued at over £72 billion, and it’s expected to grow even more. Some reports suggest that AI could add a whopping £550 billion to the UK’s GDP by 2035. That’s a lot of zeros! But with 95% of businesses either using or exploring AI, the attack surface for hackers is expanding faster than you can say “cybersecurity breach.”

While companies are busy leveraging AI for innovation and efficiency, they’re forgetting to build the necessary guardrails. It’s like putting a shiny new engine in a car but forgetting to install brakes.

The New Breed of Threats

Here’s where it gets really interesting—or scary, depending on how you look at it. The threats related to AI are not your run-of-the-mill cybersecurity issues. CyXcel’s research shows that nearly one in five companies are totally unprepared for AI data poisoning. Imagine this: an attacker messes with the data used to train an AI model, causing it to make bad decisions. For instance, a financial fraud detection model could be tricked into thinking fraudulent transactions are legit. Yikes!
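To make that concrete, here’s a minimal sketch of a label-flipping poisoning attack against a toy fraud classifier. Everything in it is hypothetical (the synthetic dataset, the scikit-learn model, the 30% flip rate); it just illustrates how corrupted training labels can quietly degrade fraud detection.

    # A hypothetical label-flipping attack on a toy fraud classifier.
    # The dataset, model, and 30% flip rate are illustrative assumptions.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Synthetic "transactions": label 1 = fraud, label 0 = legitimate.
    X, y = make_classification(n_samples=5000, n_features=20,
                               weights=[0.9], random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0)

    # The attacker relabels 30% of the fraud examples in the training
    # set as legitimate, teaching the model that fraud looks normal.
    rng = np.random.default_rng(0)
    y_poisoned = y_train.copy()
    fraud_idx = np.flatnonzero(y_train == 1)
    flipped = rng.choice(fraud_idx, size=int(0.3 * len(fraud_idx)),
                         replace=False)
    y_poisoned[flipped] = 0

    for name, labels in [("clean", y_train), ("poisoned", y_poisoned)]:
        model = LogisticRegression(max_iter=1000).fit(X_train, labels)
        # Fraud recall: the share of real fraud the model still catches.
        recall = model.predict(X_test[y_test == 1]).mean()
        print(f"{name} training data -> fraud recall {recall:.2f}")

In a toy setup like this, the poisoned run typically catches noticeably less fraud, even though not a single line of the model code changed. That’s what makes poisoning so insidious.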

And it doesn’t stop there. About 16% of businesses aren’t ready for incidents involving deepfakes or digital cloning. These AI-generated videos can be used to impersonate someone—like a CEO—authorizing a fraudulent fund transfer. That’s not just a bad day at the office; it could lead to multimillion-pound losses.

The Government’s Take

But wait, it’s not all doom and gloom. The UK government and its National Cyber Security Centre (NCSC) are stepping up to the plate. They’re advocating for a “Secure by Design” approach to AI. This means baking security into every step of the AI lifecycle—from design to deployment. They’ve even co-authored international guidelines that lay out best practices, like securing supply chains and developing robust incident response plans specifically for AI-related breaches.

Building a Culture of Security

So, what does an effective AI risk management framework look like? It’s about defining the AI system’s purpose, conducting thorough risk assessments, and implementing both technical and administrative controls. It’s not just about ticking boxes; it’s about fostering a culture of security that builds trust with customers and stakeholders.
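What might that look like in practice? Here’s a minimal, hypothetical sketch of an AI risk register in Python. The field names, the 1-to-5 scoring, and the sample entries are all assumptions for illustration, not a published standard, but they capture the shape of “define the purpose, assess the risk, attach controls.”

    # A hypothetical AI risk register. Field names, the 1-5 scoring
    # scale, and the sample entries are illustrative assumptions.
    from dataclasses import dataclass, field

    @dataclass
    class AIRisk:
        system: str        # which AI system the risk applies to
        purpose: str       # the system's defined business purpose
        threat: str        # e.g. data poisoning, deepfake fraud
        likelihood: int    # 1 (rare) to 5 (almost certain)
        impact: int        # 1 (negligible) to 5 (severe)
        controls: list[str] = field(default_factory=list)

        @property
        def score(self) -> int:
            return self.likelihood * self.impact

    register = [
        AIRisk("fraud-model", "flag suspicious transactions",
               "data poisoning", likelihood=3, impact=5,
               controls=["verify training-data provenance",
                         "monitor for model drift"]),
        AIRisk("video-kyc", "verify customer identity",
               "deepfake impersonation", likelihood=2, impact=5,
               controls=["liveness checks",
                         "out-of-band approval for large transfers"]),
    ]

    # Review the highest-scoring risks first.
    for risk in sorted(register, key=lambda r: r.score, reverse=True):
        print(f"[{risk.score:2d}] {risk.system}: {risk.threat} "
              f"-> {', '.join(risk.controls)}")

The point isn’t the code itself. It’s that once risks are written down, scored, and tied to specific controls, “fostering a culture of security” stops being a slogan and becomes something a board can actually review.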

The Consequences of Ignoring Risks

Ignoring these risks can lead to severe consequences. Public failures of AI systems—like biased hiring outcomes—can cause lasting reputational damage. And let’s not forget about regulatory penalties. With data protection authorities keeping a close eye on how organizations use AI, businesses that don’t have clear policies in place are walking a tightrope. They risk falling foul of existing regulations like GDPR and being unprepared for future AI-specific legislation.

The Bottom Line

At the end of the day, the race to adopt AI shouldn’t be a race to the bottom on security. The findings from CyXcel serve as a wake-up call. For UK businesses to harness the transformative potential of AI safely and responsibly, they need to shift gears towards proactive, structured, and board-level engagement with AI risk. It’s not just advisable; it’s absolutely imperative.

So next time you’re at that café, and you hear someone raving about their latest AI project, maybe throw in a little reminder about the importance of cybersecurity. After all, it’s better to be safe than sorry!