
AI coding assistants went from experiment to enterprise standard faster than almost any technology in recent memory. In a recent StackHawk survey of 250+ AppSec stakeholders, 87% of organizations have adopted tools like GitHub Copilot, Cursor, or Claude Code. Over a third are already at widespread or full adoption.
The productivity gains are real. So are the security implications. But the conversation about AI coding risk remains stuck on whether AI "writes vulnerable code," which misses the deeper shifts in how software gets built and how it needs to be secured.
The Good
I think this one is obvious. Speed matters when it comes to product differentiation and innovation, and AI delivers it. Developers are producing significantly more code than they did six months ago. Features that used to take weeks now ship in days.
AI can also raise baseline code quality. Assistants trained on millions of repositories have internalized common patterns, including secure ones. For routine work like input validation, standard auth flows, and common API patterns, AI-generated code is often more consistent than what a junior developer writes from scratch. The "AI writes insecure code" narrative ignores that human-written code was never a security gold standard either.
And boilerplate security is getting automated. Parameterized queries, standard encryption patterns, OAuth scaffolding: these are exactly where AI assistants shine. The repetitive security hygiene that developers used to shortcut because it was tedious now gets generated correctly by default.
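To make the parameterized-query point concrete, here is a minimal sketch (the table, names, and payload are invented for illustration, not drawn from the survey). The placeholder binding is the pattern assistants reliably generate; the driver treats user input as data rather than SQL text.

```python
import sqlite3

def find_user(conn: sqlite3.Connection, username: str):
    # Placeholder binding: the "?" keeps the value out of the SQL text,
    # so a payload like "' OR '1'='1" is matched literally, not executed.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cur.fetchone()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")

print(find_user(conn, "alice"))        # → (1, 'alice')
print(find_user(conn, "' OR '1'='1"))  # → None: injection payload finds nothing
```

The insecure alternative, string-concatenating the username into the query, is exactly the tedious-to-avoid shortcut the paragraph describes; the parameterized form costs nothing extra once it's the default the assistant emits.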
The Bad
The context gap is real and growing. When you write code line by line, you develop intuition about how it works, what it touches, where the edge cases live. When you review AI-generated code, you're asking a different question: "Does this work?" Not "Is this secure?" Not "How does this interact with our authorization model?" Developers accepting full implementations without deeply understanding them is a fundamentally different risk profile than developers building those implementations themselves.
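The "works" vs. "secure" distinction can be sketched in a few lines (a hypothetical example; the data and handlers are made up). Both versions pass a functional test for a well-behaved caller; only one enforces the authorization model.

```python
# Toy document store: doc 2 belongs to bob.
DOCS = {
    1: {"owner": "alice", "body": "q3 roadmap"},
    2: {"owner": "bob", "body": "salary data"},
}

def get_doc_fast(user: str, doc_id: int):
    # Looks complete and passes "does this work?" review,
    # but never checks who is asking.
    return DOCS.get(doc_id)

def get_doc_checked(user: str, doc_id: int):
    # Answers "is this secure?": ownership is enforced
    # before the record is returned.
    doc = DOCS.get(doc_id)
    if doc is None or doc["owner"] != user:
        return None
    return doc

print(get_doc_fast("alice", 2))     # leaks bob's record
print(get_doc_checked("alice", 2))  # → None: access denied
```

A reviewer asking only "does this work?" approves either version; the missing ownership check is invisible until someone asks how the code interacts with the authorization model.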
Documentation and institutional knowledge suffer. AI-assisted development often means less time spent in the codebase. Developers understand features at a functional level but may not trace the security implications. That knowledge gap compounds; six months later, nobody quite remembers why a particular API endpoint exists or what data it can access.
Manual processes can't keep pace. When development velocity increases 5-10x, everything downstream breaks. Security reviews, architecture approvals, asset documentation, attack surface monitoring: any process that relies on humans keeping pace with development is now permanently behind. Our survey found "keeping up with rapid development velocity and AI-generated code" was the number one challenge cited by AppSec stakeholders.
The Ugly
The risk isn't the code; it's the confidence. The real danger isn't that AI writes vulnerable code (though it can). It's that organizations ship faster while understanding less about what they're shipping. Tests pass, code reviews approve, features deploy, but the security team's mental model of the application diverges further from reality with every AI-assisted sprint.
Shadow applications multiply faster than ever. That weekend proof-of-concept an engineer spun up "just to test something"? AI assistants make it trivially easy to build, which means trivially easy to forget. Our survey found only 30% of AppSec stakeholders are "very confident" they know 90%+ of their attack surface. AI-assisted development makes that number worse, not better.
Security teams are triaging, not securing. When code volume increases but AppSec headcount doesn't, something has to give. Our data shows 50% of AppSec teams spend 40% or more of their time just triaging and prioritizing findings, figuring out what's real before they can address what matters. That ratio was already unsustainable. AI development velocity breaks it entirely.
What This Means for Security Leaders
The organizations getting this right aren't trying to slow down AI adoption; that ship has sailed. They're adapting their security programs for a world where:
- Visibility is foundational. You can't secure what you don't know exists. Automated attack surface discovery from source code isn't a nice-to-have when developers ship faster than documentation can follow.
- Runtime validation matters more than ever. When developers have less context about the code they're shipping, you need testing that validates how applications actually behave, not just how code looks statically.
- Intelligence beats volume. The answer to 5x more code isn't 5x more findings to triage. It's smarter prioritization that connects vulnerabilities to business risk, so finite AppSec resources focus on what actually matters.
AI coding assistants aren't going away. The productivity benefits are too significant, and the adoption curve is already behind us. The question isn't whether to embrace them; it's whether your security program is built for the world they've created.