Inkog Red
An autonomous agent that attacks yours.
Red is a red teaming agent. It probes your AI agents for exploitable vulnerabilities, automatically. Prompt injection, tool poisoning, privilege escalation, data exfiltration. Not a test suite. An attacker.
Attack Categories
Red runs autonomous attack campaigns across five categories. Each attack adapts based on your agent's responses.
Prompt Injection
Crafts adversarial inputs that bypass system prompts, output filters, and guardrails. Multi-turn attack chains, not single-shot prompts.
Tool Poisoning
Attacks MCP servers and tool chains. Tests for tool poisoning, argument injection, and unauthorized invocation across permission boundaries.
Privilege Escalation
Probes multi-agent delegation chains for escalation paths. Tests whether agents can be tricked into actions above their authorization level.
Data Exfiltration
Attempts to extract sensitive data through indirect prompt injection, tool-mediated side channels, and memory poisoning.
Defense Validation
Turns Verify's static findings into proof-of-exploit evidence. Separates real risk from theoretical patterns.
Red teaming for agents, not just models
Most red teaming tools test model outputs with adversarial prompts. Red attacks the agent itself. Its tools, delegation chains, data flows, and permission boundaries.
| Red | Model testing tools | |
|---|---|---|
| Targets | Agent architecture, tools, data flows | Model outputs |
| Method | Autonomous multi-step campaigns | Single-shot prompt injection |
| Scope | MCP servers, tool chains, delegation | LLM input/output |
| Output | Proof-of-exploit evidence | Pass/fail results |
Verify + Red: Complete Coverage
Static analysis finds structural flaws. Adversarial testing proves they're real.
Be the first to know when Red launches.
Autonomous red teaming for AI agents. Early access launching soon.