\n\n\n\n Generative AI Handed Hackers a Cheat Code, and Most Enterprises Aren't Ready - AgntBox Generative AI Handed Hackers a Cheat Code, and Most Enterprises Aren't Ready - AgntBox \n

Generative AI Handed Hackers a Cheat Code, and Most Enterprises Aren’t Ready

📖 4 min read•773 words•Updated Apr 26, 2026

A 2026 UK-wide survey found that 77% of organizational leaders believe AI has increased their company’s cyber risk — yet only 27% feel prepared for it. Read that again. Three quarters of the people running enterprises know they’re more exposed, and barely a quarter of them feel equipped to do anything about it. As someone who spends his days testing AI toolkits and writing honestly about what they can and can’t do, that gap doesn’t surprise me. But it does worry me.

I’ve reviewed a lot of AI tools on this site. Some are genuinely useful. Some are overhyped. But one thing I’ve noticed across almost every category — from code assistants to data pipeline tools — is that security is almost always an afterthought. It’s buried in a FAQ, mentioned vaguely in a terms of service, or left entirely to the user to figure out. And in 2026, that’s not a minor oversight. That’s a liability.

The Numbers Are Hard to Ignore

AI-enabled cyberattacks rose 89% this year. That’s not a rounding error — that’s a near-doubling of incidents in a single year. Researchers at Foresiet documented nine verified attack incidents from 2026 alone, including autonomous breaches and data leaks driven by AI systems operating with minimal human oversight. Meanwhile, projections suggest that global AI-driven cyberattacks in 2025 were on track to surpass 28 million incidents. Even enterprises that deployed AI-powered defenses still faced breaches in 29% of cases.

So the tools meant to protect you are also, in some configurations, the tools being used against you. That’s the uncomfortable reality of where we are right now.

What Generative AI Actually Opens Up for Attackers

When I test a generative AI toolkit, I’m usually asking: does it do what it claims? Is the output useful? Is it worth the price? But increasingly I’m also asking: what happens when someone uses this wrong, or uses it against me?

Generative AI introduces a specific set of attack surfaces that didn’t exist before:

  • Prompt injection — where malicious instructions are embedded in inputs to manipulate an AI’s behavior, sometimes causing it to leak data or execute unintended actions.
  • Data breaches through model outputs — AI systems trained on or connected to sensitive data can inadvertently surface that data in responses.
  • Malicious code generation — generative models can be coaxed into writing functional malware, sometimes with minimal effort from the attacker.

These aren’t theoretical edge cases. They’re documented, repeatable, and getting easier to execute as the tools themselves get more capable.

Shadow AI Is Making This Worse

One of the bigger cybersecurity trends flagged for 2026 is the rise of shadow AI — employees using AI tools that haven’t been vetted or approved by their IT or security teams. This is the enterprise equivalent of someone plugging a random USB drive into a work computer, except the USB drive can talk back and has access to your documents.

From a toolkit reviewer’s perspective, this is partly a product problem. A lot of the AI tools I test are designed to be frictionless to adopt. Sign up, connect your data, start generating. That ease of onboarding is a selling point. But it also means people are connecting sensitive business data to third-party AI systems without anyone in their organization knowing it’s happening.

What Enterprises Are Getting Wrong

The 77/27 split from that UK survey tells a clear story: awareness is not translating into action. Organizations understand the risk intellectually but haven’t built the internal processes, policies, or technical controls to actually address it.

Part of that is speed. AI adoption moved fast, and security programs didn’t keep up. Part of it is also that the vendors selling these tools haven’t made security a visible, central feature. When I review a toolkit and the security documentation is three paragraphs buried at the bottom of a help page, that’s a red flag I now call out explicitly.

What to Actually Look For

If you’re evaluating AI tools for your team or your business, here’s the honest checklist I now run through:

  • Does the vendor have a clear data retention and deletion policy?
  • Is there documentation on how the tool handles prompt inputs — especially in agentic or multi-step workflows?
  • Can you audit what data the tool accessed or generated?
  • Is there any access control, or does everyone with a login get everything?

Generative AI is genuinely useful. I wouldn’t run this site if I didn’t believe that. But useful tools can still be dangerous ones if they’re deployed without thinking through what they touch, what they expose, and who else might be able to use them against you. The 89% surge in AI-enabled attacks isn’t a reason to stop using these tools. It’s a reason to stop using them carelessly.

🕒 Published:

🧰
Written by Jake Chen

Software reviewer and AI tool expert. Independently tests and benchmarks AI products. No sponsored reviews — ever.

Learn more →
Browse Topics: AI & Automation | Comparisons | Dev Tools | Infrastructure | Security & Monitoring
Scroll to Top