With artificial intelligence embedded in everything from internal copilots to production-facing agentic applications, the stakes for security teams have never been higher.
In just a few months, we've gone from experimenting with chatbots to deploying autonomous AI agents that can query databases, write code, and even execute workflows across sensitive systems.
The rise of AI is not just a story of innovation. It's also one of rapidly expanding risks.
This is exactly why the newly released Latio Q2 2025 AI Security Market Report is so valuable: it is one of the clearest, most detailed efforts to map this emerging terrain for security teams, from securing end-user access to protecting the runtime behavior of AI systems.
We’re proud that Oligo has been recognized by Latio as an innovator in securing AI for enterprise in this year’s report. More importantly, we are excited to see runtime protection, often a blind spot in security strategies, acknowledged as foundational to safe and sustainable AI adoption.
From Productivity Gains to Production Risk
The Latio report segments AI security into its most urgent use cases: end-user data control, infrastructure posture, vulnerability management, and runtime protection.
Early concerns around AI mostly centered on employee usage, such as copying sensitive code into ChatGPT or misconfiguring Microsoft Copilot. But as teams have moved from low-risk pilots to real AI-driven applications, a new class of threats has emerged: insecure agentic systems, malicious package behavior, and logic-layer attacks that manipulate how models interact with data.
In this landscape, runtime needs to be the front line.
AI usage in production isn’t just about answering questions anymore. It’s about taking actions – calling APIs, navigating internal tools, and influencing decisions. That’s why runtime protection, once seen as more of a niche concern, is now at the center of securing the AI applications we have come to rely on.
The bottom line is that this shift has created a visibility gap for security teams. While cloud and network layers are well-instrumented, the logic layer where AI operates – making decisions, handling data, and triggering actions – has largely been a black box. That’s no longer acceptable. With AI increasingly powering real-time, user-facing workflows, organizations need runtime insight that’s as dynamic and adaptive as the AI systems themselves.
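To make the idea of instrumenting the logic layer concrete, here is a minimal, hypothetical sketch of a runtime guard that checks an agent's tool calls against a policy as they happen, rather than trusting a one-time design review. The names (`ToolCall`, `POLICY`, `authorize`) are illustrative only and not taken from any product described in the report.

```python
from dataclasses import dataclass

@dataclass
class ToolCall:
    """An action an AI agent is about to take (illustrative model)."""
    tool: str
    argument: str

# Allowlist of tools the agent may invoke, with a per-tool check
# evaluated at runtime on the actual arguments the agent produced.
POLICY = {
    "search_docs": lambda arg: True,
    # Only read-only SQL is permitted from the agent.
    "run_sql": lambda arg: arg.strip().lower().startswith("select"),
}

def authorize(call: ToolCall) -> bool:
    """Return True only if the tool is known and its argument passes the check."""
    check = POLICY.get(call.tool)
    return bool(check and check(call.argument))

print(authorize(ToolCall("run_sql", "SELECT * FROM users")))  # True: read-only query
print(authorize(ToolCall("run_sql", "DROP TABLE users")))     # False: mutating SQL
print(authorize(ToolCall("shell", "rm -rf /")))               # False: unknown tool
```

A real runtime-protection layer operates far deeper than this (observing library and process behavior, not just declared tool calls), but the sketch captures the shift the report describes: decisions about what AI may do are enforced while it runs, not assumed in advance.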
A Turning Point for the Industry
The Latio report draws a compelling parallel between the current moment in AI security and the early days of cloud adoption. Teams struggled to keep up with a radically new deployment paradigm, and early adopters had to decide whether to wait for legacy vendors to catch up or lean into new, purpose-built solutions.
Just as the cloud-native era gave rise to a new generation of security tools, we’re now entering the AI-native era, where context-aware runtime tools become essential for every modern security stack.
In this new era, runtime isn’t just one layer of defense. It’s the foundation for everything else. Without visibility into how AI is behaving in production, every other control becomes guesswork.
How Oligo Takes a Different Approach to Securing AI
Agentic and generative AI applications can behave in unexpected ways. Oligo monitors these components in real time, alerting you to exploit attempts as they happen, so you can stop threats to AI infrastructure before they escalate.
AI posture management tools show what developers have considered. Oligo shows what’s actually in use. With runtime visibility into AI models and behaviors, security teams can drive smarter, faster outcomes grounded in reality, not assumptions.
With Oligo, security teams benefit from:
- Actionable Defense, Not Theoretical Risk: Legacy scanners can’t keep up with modern AI threats. Oligo monitors the live behavior of AI libraries to detect malicious activity as it unfolds – a shift from reactive scanning to proactive, real-time protection.
- Protection of Inference Servers and Production Workloads: Oligo continuously monitors inference servers and workloads for signs of exploits and attacks – regardless of whether the threat is a known CVE or a zero-day.
- Minimal overhead and maintenance: Oligo’s sensor uses less than 1% of CPU, delivering deep visibility without taxing your workloads.
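As a toy illustration of the difference between static scanning and observing live library behavior (this is not Oligo's implementation, which works at a much lower level), Python's standard-library audit hooks can flag when a process that should only serve inferences suddenly does something else, like spawning a shell:

```python
import sys
import subprocess

# Events an inference process normally shouldn't trigger; a static
# scan of dependencies would never see these, because they only
# exist at runtime.
SUSPICIOUS_EVENTS = {"subprocess.Popen", "os.system"}

alerts = []

def audit_hook(event, args):
    # In a real system this would raise an alert with full context;
    # here we just record the suspicious event name.
    if event in SUSPICIOUS_EVENTS:
        alerts.append(event)

sys.addaudithook(audit_hook)

# Simulate a compromised AI library unexpectedly shelling out.
subprocess.run(["echo", "hello"], capture_output=True)

print(alerts)  # the subprocess launch was observed as it happened
```

The point of the sketch is the detection model, not the mechanism: malicious behavior is caught as it unfolds in the running workload, which is exactly where scanners have no visibility.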
This Isn’t Theory: ShadowRay
While much of the industry hypothesizes about AI risks, Oligo has already uncovered and disclosed real, exploited vulnerabilities in AI infrastructure.
Take ShadowRay, a critical vulnerability we identified in the Ray distributed computing framework, commonly used to orchestrate AI workloads. Our research team observed the vulnerability being exploited in the first known attack campaign targeting AI workloads.
This wasn’t a lab demo or simulation. This was a real attack path in a widely adopted AI framework, discovered and stopped in the wild.
These are the kinds of risks that simply don’t show up in static analysis or scanning. They emerge at runtime, in the wild, and require deep observability to catch and stop them in their tracks.
The future is AI-driven. Runtime protection is the pit crew that makes sure you can fix issues on the fly.
Download the Full Report
To explore the full breakdown of the AI security market, including vendor comparisons, buyer guidance, and real-world deployment strategies, download the Latio Q2 2025 AI Security Market Report: https://www.oligo.security/lp/latio-ai-security-guide