CRITICALZero Day
Global
AI security is repeating endpoint security’s biggest mistake
·Source: CSO Online
Updated:
Executive Summary
The security industry is experiencing déjà vu, and most teams haven’t recognized it yet. If you were in the trenches during the early 2000s, you remember the antivirus arms race. IT teams buried under signature updates. Configuration baselines checked obsessively. Patch cycles treated as the primary defense. Meanwhile, attackers pivoted. They wrote malware that matched no known signature and walke
Analysis
The security industry is experiencing déjà vu, and most teams haven’t recognized it yet. If you were in the trenches during the early 2000s, you remember the antivirus arms race. IT teams buried under signature updates. Configuration baselines checked obsessively. Patch cycles treated as the primary defense. Meanwhile, attackers pivoted. They wrote malware that matched no known signature and walked through the front door while the guards were checking outdated IDs. The posture-first approach revealed its limitations as the endpoint attack surface exploded. The industry faced visibility gaps and realized you cannot harden what you cannot fully see. The posture-first approach wasn’t wrong. It was incomplete. As the endpoint attack surface exploded, the industry realized that you cannot harden what you cannot fully see. Limited visibility hindered effective hardening, driving the shift toward behavioral detection as an operational necessity. AI security is at the beginning of that same arc. The teams that recognize it now get to skip the painful middle chapter. The endpoint era’s hard-won lesson The first generation of endpoint security asked answerable questions: Is antivirus installed? Are patches current? Does the configuration match the baseline? For a while, answering those questions felt like enough. Then the surface expanded. Laptops left the perimeter. Zero-days made signatures irrelevant at the moment they mattered most. The industry responded by building tools that stopped asking “does this file look bad?” and started asking “what is this process actually doing?”. That reframe changed everything. Instead of matching against lists of known bad, defenders began watching process trees, API call sequences, lateral movement patterns and privilege escalation chains. Behavior became the signal. Posture checks tell you what should be true. Behavioral detection tells you what is actually happening. Most AI security is still at the posture phase Look at where most organizations are with AI security today. Model cards, AI-specific SBOMs, input and output filters, prompt injection guardrails and access controls around model APIs. These are valuable controls, but they reflect a posture-based approach. To truly enhance security, organizations must recognize the importance of shifting to behavior-based strategies that monitor actual system actions. They’re brittle in the same ways, too. The AI surface is expanding faster than any team can harden it: open-source LLMs deployed without procurement review, third-party AI APIs embedded inside SaaS tools, autonomous agents granted broad system access, RAG pipelines sitting on top of sensitive internal data. The phrase “shadow AI” exists for the same reason “shadow IT” did before it. People adopt capabilities faster than policy can follow. The OWASP Top 10 for Agentic Applications 2026 is a welcome and necessary framework. But read it carefully and you’ll notice that most of its controls are posture-oriented: constrain scope, validate inputs, enforce least privilege. These are the right first steps, but they’re not a complete strategy. We know this because we’ve already lived through a version of this story. The core tension is identical to what endpoint defenders faced two decades ago. You can’t patch your way out of a system you don’t fully control. With AI, the surface is more dynamic, more opaque and more deeply embedded in business logic than endpoints ever were. An AI agent doesn’t just sit on a device. It calls APIs, retrieves internal data, takes actions across systems and generates outputs that ripple downstream. The blast radius of a compromised or misbehaving agent is a problem entirely different from that of a compromised laptop. Why behavioral detection becomes the lever While you may not control every AI surface, monitoring what these systems actually do empowers your team to stay ahead of threats and feel capable in managing AI risks. Behavioral signals are already being generated in environments that aren’t instrumented to catch them. This includes unusual data access patterns from a RAG pipeline, prompt injection artifacts surfacing in model outputs, unexpected tool calls from an agent operating outside its intended scope, token velocity anomalies pointing to automated abuse, and output drift that suggests something upstream has changed. None of these is hypothetical. They’re observable today. The parallel to EDR is direct: just as endpoint behavioral tools watch process trees and API call chains, AI behavioral monitoring watches action sequences, what data was retrieved, what tools were invoked, what was generated and in what order. A single anomalous output is noise. A sequence of anomalous actions is worth investigating. This is what gives SOC teams something to operate on. Posture is an audit checkpoint. Behavior gives you a triage queue. There’s a real difference between telling an analyst “This agent has broad permissions” and telling them, “This agent queried sensitive documents, formatted the output and initiated an outbound connection in a sequence it’s never run before.” The first is a finding. The second is an incident. A concrete path forward The endpoint era offers a practical sequence, not just a cautionary tale. Don’t abandon posture work. It’s table stakes, not a strategy. Keep the model inventory current, enforce access controls and implement the OWASP guardrails. Just don’t let posture become the ceiling of your program. Start logging AI system behavior now, even if you’re not fully analyzing it yet. Data debt compounds and having behavioral history is essential for future detection logic. Building a behavioral baseline early helps close gaps and prepares your organization for proactive AI security measures. Prioritize your highest-agency surfaces first ー autonomous agents with broad system access, RAG pipelines connected to sensitive internal data, any LLM feature that faces external users or triggers downstream automations ー these are your highest-risk surfaces and the right place to start. Think in sequences, not just single events. That’s the core lesson EDR already taught. An unusual API call is interesting, but an agent retrieving sensitive documents, formatting the output and making an unexpected outbound call forms a story. The sequence of actions provides the true signal for detection. Finally, close the gap between your AI security program and your SOC. Most AI security work today sits inside the AI governance function or the data team. That’s the wrong home for behavioral detection. The SOC has the triage muscle, the incident response playbooks and the tool integrations. Getting AI behavioral telemetry in front of SOC analysts is partly a technology problem. It’s mostly an organizational one. The signal is already there The endpoint security story didn’t end badly. It matured. The teams that invested in behavioral telemetry before they needed it built programs that held up when the threat model shifted. Those that doubled down on static controls had to rebuild from scratch when reality caught up with them. AI behavior is already generating signals in your environment. The question isn’t whether the shift from posture to behavioral detection will happen in AI security. It will, for the same reasons it happened at the endpoint. The question is whether your team will be ready to act on those signals when it counts. The window is open. It won’t stay that way. This article is published as part of the Foundry Expert Contributor Network. Want to join?