AI Security Researcher

Description
About us
Oligo is a fast-growing cybersecurity startup transforming how organizations protect their applications, cloud environments, and AI systems at runtime. Backed by top-tier investors including Greenfield Partners, Red Dot Capital Partners, Lightspeed, Ballistic Ventures, and TLV Partners, we’re on a mission to make real-time security a reality.
Oligo’s industry-leading runtime security platform is built to stop attacks in real time without stopping the business. We transform security from passive visibility to active protection across applications, cloud services, workloads, and AI systems. By uncovering the deepest layers of what actually runs in production, Oligo helps organizations prioritize exploitable vulnerabilities, detect malicious behavior as it happens, and stop modern attacks in their tracks.
What You’ll Be Doing
As an AI Security Researcher at Oligo, you will play a key role in shaping the future of modern runtime AI security. You will investigate how real-world attacks unfold across AI-enabled applications, agents, and tools running in cloud environments, and turn those insights into innovative, production-grade security capabilities across our AI Security Posture Management (AI-SPM) and AI Detection & Response (AI-DR) products.
This role is ideal for a deeply technical researcher who is driven to understand AI attacks at their core - not just to identify suspicious prompts or surface-level indicators. You will research how prompt injection, tool misuse, agent drift, model abuse, and other emerging AI attack techniques translate into real runtime behavior and security impact. You will then apply that knowledge to build protections that are precise, resilient, scalable, and grounded in how attacks actually work in production.
Specifically, you will:
- Research real-world attacks targeting models, agents, MCP/tooling ecosystems, RAG applications, and AI-enabled production systems.
- Drive research end-to-end - explore, design, develop, validate, and deploy detection logic and protections based on runtime telemetry, exploit PoCs, technical analysis, and threat intelligence.
- Shape AI-SPM capabilities by identifying what telemetry, signals, and modeling are required to continuously discover AI components, classify AI behavior, map risk, and detect unsafe or anomalous usage in production.
- Shape AI-DR capabilities by researching how to detect AI-driven attacks, prove runtime impact, establish causality, and distinguish failed attempts from real compromise.
- Work closely with engineering and product teams to turn deep technical research into practical security capabilities that deliver clear customer value.
Requirements
Qualifications
- 5+ years of experience in security research, vulnerability research, detection engineering, threat research, application security, or a related technical security role.
- Strong understanding of modern attack techniques, including the ability to analyze vulnerabilities, exploitation flows, and attacker behavior in real systems.
- Familiarity with AI application architectures and concepts, including LLM-based applications, agents, tooling layers, models, and RAG.
- Experience designing and conducting hands-on technical research, including investigations, experiments, exploit analysis, and PoC development.
- Strong programming skills for code analysis, research tooling, proof-of-concept development, and building vulnerable or instrumented test environments.
- Experience working with runtime telemetry, behavioral signals, or detection-oriented datasets, and the ability to translate research into scalable detection logic.
- High degree of ownership and ability to independently navigate ambiguous technical problem spaces and turn them into structured research plans and actionable outcomes.
- Strong communication and teamwork skills, with a collaborative approach to problem-solving across research, engineering, and product teams.
We’ll be lucky if you have
- Experience with AI security research, including hands-on familiarity with attack techniques targeting AI models, agents, and AI-integrated applications.
- Knowledge of cloud-native attack surfaces, such as Kubernetes and modern distributed application environments.
- Familiarity with programming language internals, exploitation techniques, or low-level system behavior.
- Experience with data science, behavioral modeling, anomaly detection, or statistical analysis in the context of security research.