As this year comes to a close, many experts have begun to look ahead to next year. Here are several predictions for how companies will handle security in 2026.

Suja Viswesan, vice president of technology at IBM
Shadow agents will accelerate data exposure faster than we can detect it: As autonomous AI agents begin to operate independently across enterprise environments, often outside sanctioned workflows, they access sensitive data with minimal human oversight. These agents replicate and evolve without leaving clear audit trails or conforming to legacy security frameworks. They move faster than conventional monitoring can follow. This creates a new exposure problem: businesses will know data was exposed but won't know which agents moved it, where it went, or why. Systems that can trace agent data access across machine-to-machine interactions will become essential.
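The tracing capability Viswesan describes can be illustrated with a minimal sketch: an append-only audit trail that records which agent touched which resource in each machine-to-machine interaction, so exposure can later be attributed. The names here (AgentAuditLog, AccessEvent) are hypothetical illustrations, not any vendor's API.

```python
import json
import time
from dataclasses import dataclass, field, asdict

@dataclass
class AccessEvent:
    agent_id: str    # which autonomous agent acted
    resource: str    # what data it touched
    action: str      # read / write / forward
    downstream: str  # where the data went (a service or another agent)
    ts: float = field(default_factory=time.time)

class AgentAuditLog:
    """Append-only trail of agent data access (illustrative sketch)."""
    def __init__(self):
        self._events = []

    def record(self, event: AccessEvent):
        self._events.append(event)

    def trace(self, resource: str):
        """Answer: which agents moved this data, and where did it go?"""
        return [asdict(e) for e in self._events if e.resource == resource]

log = AgentAuditLog()
log.record(AccessEvent("agent-7", "crm/customers.csv", "read", "agent-9"))
log.record(AccessEvent("agent-9", "crm/customers.csv", "forward", "ext-api"))
print(json.dumps(log.trace("crm/customers.csv"), indent=2))
```

Given such a trail, the "which agents moved it, where it went, and why" question becomes a query rather than a forensic reconstruction.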

Gabrielle Hempel, security operations strategist at Exabeam
In 2026, as AI systems face more legal scrutiny and data governance becomes a boardroom issue, the organizations that thrive will be the ones with security and legal teams working as partners, not adversaries. The era of throwing incidents over the wall to legal after they've already spiraled is ending.
This convergence will result in a surge of Cybersecurity Legal Liaison roles: hybrid specialists who understand both the MITRE ATT&CK framework and the Federal Rules of Civil Procedure. This will ensure that SOC teams no longer operate in a legal vacuum. They will need to understand what's permissible, who's on the hook when things go wrong, and where disclosure obligations kick in.

Tom Findling, co-founder and CEO at Conifers
Hackers are using AI agents that can adapt to defenses and perform complex task sequences to enable an attack. These AI systems will move from experimental to fully operational by 2026. Agentic AI malware will explore environments, adapt to thresholds, and exploit vulnerabilities faster than any human-driven campaign, and will be able to run continuously to overwhelm static defenses. As a result, security teams relying on static thresholds or manual investigation will find their tools obsolete. The next generation of defenses will need to include AI systems that can learn, reason, and respond in real time.
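Findling's point about static thresholds can be made concrete with a toy sketch: a detector built on an exponentially weighted moving average (EWMA) adapts its baseline as traffic shifts, whereas a fixed cutoff either misses a slow ramp-up or drowns analysts in alerts. This illustrates the adaptive principle only; it is not any product's actual detection logic.

```python
class AdaptiveDetector:
    """Flags values that deviate sharply from a learned baseline (EWMA sketch)."""
    def __init__(self, alpha=0.2, k=3.0):
        self.alpha = alpha  # how quickly the baseline adapts to new traffic
        self.k = k          # alert when deviation exceeds k * typical deviation
        self.mean = None
        self.dev = 1.0

    def observe(self, value: float) -> bool:
        if self.mean is None:        # first sample seeds the baseline
            self.mean = value
            return False
        alert = abs(value - self.mean) > self.k * self.dev
        # update the baseline so tomorrow's "normal" tracks today's traffic
        self.dev = (1 - self.alpha) * self.dev + self.alpha * abs(value - self.mean)
        self.mean = (1 - self.alpha) * self.mean + self.alpha * value
        return alert

det = AdaptiveDetector()
normal = [100, 103, 98, 101, 99, 102, 100]
alerts = [det.observe(v) for v in normal] + [det.observe(900)]
print(alerts)  # only the final spike trips the detector
```

A static threshold tuned for last quarter's traffic would need manual retuning every time baseline volume changed; the adaptive version retunes itself on every observation.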

Renuka Nadkarni, chief product officer at Aryaka
AI adoption is creating entirely new classes of attack surfaces, spanning underlying infrastructure, sensitive data pipelines, and the models themselves. Each layer is vulnerable in different ways and demands its own defensive strategies. In practice, AI is simply a new class of traffic, and securing it demands the same foundational controls we apply to any critical workload: access enforcement, threat protection, data-loss prevention, and continuous monitoring.
No single point solution can span this entire landscape. But by treating AI as a new traffic class, unified SASE architectures can address a broad portion of these risks. SASE will play a central role in the future of AI security, delivering multi-layered, distributed protections embedded throughout the security stack rather than isolated in a standalone tool.

Mayur Upadhyaya, CEO and co-founder at APIContext
Gartner's prediction that over 40% of global organizations will suffer incidents from unauthorized AI tools by 2030 isn't just plausible; it's conservative if proactive measures aren't taken.
The real risk isn't just data leakage; it's the creation of unmonitored, persistent access points. Agentic tools using APIs to "self-serve" critical functions can easily connect to undocumented MCP endpoints, leaving no audit trail and bypassing existing security controls. Most enterprises don't yet have a strategy for managing this class of interaction, and that's where the danger lies.
Without guardrails for AI identity, scope, and delegation, these tools can quickly create systemic risk. Just as we learned to monitor user access and API usage, we now need the same discipline for autonomous agents. This isn't just about blocking tools; it's about making trusted access observable and enforceable.
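Under one set of assumptions, the guardrails Upadhyaya calls for could look like a scoped-token check in front of every agent API call: each agent carries an identity, an explicit scope list, and a delegation chain, and anything outside that scope is denied by default and logged. All identifiers below are hypothetical illustrations.

```python
from dataclasses import dataclass

@dataclass
class AgentToken:
    agent_id: str             # the agent's identity
    scopes: frozenset         # what it may do, e.g. {"read:invoices"}
    delegated_by: tuple = ()  # chain of principals that granted the access

denied_log = []  # out-of-scope attempts become observable, not silent

def authorize(token: AgentToken, required_scope: str) -> bool:
    """Allow the call only if the token explicitly carries the scope."""
    if required_scope in token.scopes:
        return True
    # deny by default and keep an audit record of the attempt
    denied_log.append((token.agent_id, required_scope, token.delegated_by))
    return False

tok = AgentToken("billing-bot", frozenset({"read:invoices"}), ("alice",))
print(authorize(tok, "read:invoices"))  # in scope: allowed
print(authorize(tok, "write:payouts"))  # out of scope: denied and logged
```

The design choice is the same one the quote names: access is not merely blocked, it is made observable (the denial log) and enforceable (the default-deny check).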

Frédéric Rivain, chief technology officer at Dashlane
Zero-knowledge architecture, a security framework that ensures only users have knowledge of and access to their data, is moving from a nice-to-have to a necessity. Customer and regulator expectations are converging. In 2026, enterprises will require zero-knowledge architectures in which the service provider cannot access customer data and private information stays with users. Not only does this improve the security of users' information, but it's also better business, reducing liability and building customer trust.
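The core idea behind a zero-knowledge architecture can be sketched in a few lines: the encryption key is derived on the client from the user's secret, so the provider stores only ciphertext it cannot read. This is a standard-library-only illustration (the keystream here is a toy SHA-256 counter construction, not a vetted cipher) and does not represent Dashlane's implementation; production systems use audited AEAD ciphers such as AES-GCM.

```python
import hashlib
import os

def derive_key(password: str, salt: bytes) -> bytes:
    # key is derived on the client from the user's secret; the server never sees it
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

def keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # toy SHA-256 counter-mode keystream -- illustration only, not production crypto
    out = bytearray()
    for block in range((len(data) + 31) // 32):
        ks = hashlib.sha256(key + nonce + block.to_bytes(4, "big")).digest()
        chunk = data[block * 32:(block + 1) * 32]
        out.extend(b ^ k for b, k in zip(chunk, ks))
    return bytes(out)

# client side: encrypt before anything leaves the device
salt, nonce = os.urandom(16), os.urandom(12)
key = derive_key("correct horse battery staple", salt)
ciphertext = keystream_xor(key, nonce, b"my vault secrets")

# the provider stores (salt, nonce, ciphertext); without the password it decrypts nothing
plaintext = keystream_xor(derive_key("correct horse battery staple", salt), nonce, ciphertext)
print(plaintext)
```

Because the provider holds only salt, nonce, and ciphertext, a breach of its storage exposes no plaintext, which is exactly the liability reduction the prediction points to.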