Inkog Verify v1.2

Static Analysis for AI Agents

Find design flaws, infinite loops, and security gaps before they fail you in production.

Start Free Scan
inkog-cli — v1.0.4
inkog scan --diff --baseline main-baseline.json
Compliance:
EU AI Act · NIST · OWASP · ISO 42001

CI/CD Security Diff Mode for AI Agent Development

Inkog provides diff-based security scanning for CI/CD pipelines. Compare current scans against a baseline to detect only new vulnerabilities introduced by pull requests. Fail builds only on security regressions, not pre-existing issues. Perfect for GitHub Actions, GitLab CI, and Azure DevOps pipelines. Track security improvements over time with risk score deltas.
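
The gating logic can be sketched in a few lines of Python. The JSON schema below (a flat list of findings with "rule" and "file" fields) is a hypothetical stand-in for the actual report format, used only to illustrate the baseline comparison:

```python
import json

def new_findings(baseline_path: str, current_path: str) -> list:
    """Return findings present in the current scan but absent from the baseline."""
    def load(path):
        with open(path) as fh:
            # Hypothetical report schema: [{"rule": "CWE-770", "file": "agent.py"}, ...]
            return {(item["rule"], item["file"]) for item in json.load(fh)}
    return sorted(load(current_path) - load(baseline_path))

# In CI, exit nonzero only when new_findings(...) is non-empty,
# so pre-existing issues never fail the build.
```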

Token Burn Attack Detection for LangChain Agents

Inkog detects token burn attacks in LangChain and LangGraph agents, where unbounded API loops drain your budget. Static analysis flags LLM API calls inside while True loops that have no exit condition. CWE-770 vulnerability detection for LangChain, OpenAI, and enterprise AI applications.
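
A simplified version of this check can be written with Python's own ast module. The method names below are illustrative sinks, not Inkog's actual rule set:

```python
import ast

LLM_CALL_NAMES = {"complete", "invoke", "create"}  # illustrative, not Inkog's rules

def has_unbounded_llm_loop(source: str) -> bool:
    """Flag `while True:` loops that call an LLM-style method but contain no exit."""
    for node in ast.walk(ast.parse(source)):
        if not (isinstance(node, ast.While)
                and isinstance(node.test, ast.Constant)
                and node.test.value is True):
            continue
        descendants = list(ast.walk(node))
        calls_llm = any(isinstance(n, ast.Call)
                        and isinstance(n.func, ast.Attribute)
                        and n.func.attr in LLM_CALL_NAMES
                        for n in descendants)
        has_exit = any(isinstance(n, (ast.Break, ast.Return)) for n in descendants)
        if calls_llm and not has_exit:
            return True
    return False
```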

Infinite Loop Detection for n8n Workflows

Inkog scans n8n no-code automation workflows for infinite loops in agentic systems. Detects missing termination guards like Max Revisions checks in Writer-Reviewer agent cycles that cause stuck processes and 100% CPU resource drain. CWE-835 vulnerability detection for n8n, Flowise, and Langflow AI workflows.
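
In code form, the missing guard looks like this. draft_fn and review_fn are stand-ins for the writer and reviewer agents; the for-loop bound is the Max Revisions check:

```python
def writer_reviewer(draft_fn, review_fn, max_revisions=3):
    """Writer-Reviewer cycle with a Max Revisions guard so the loop always ends."""
    draft = draft_fn(None)                   # initial draft, no feedback yet
    for _ in range(max_revisions):           # the termination guard
        feedback = review_fn(draft)
        if feedback is None:                 # reviewer approved: normal exit
            return draft
        draft = draft_fn(feedback)           # revise and try again
    return draft                             # guard hit: return best effort, don't spin
```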

Code Injection and RCE Detection for CrewAI Agents

Inkog traces data flow in CrewAI agents to detect unvalidated code execution vulnerabilities. Identifies dangerous patterns like eval() calls with user or LLM-generated input without proper validation. CWE-94 vulnerability detection for CrewAI, AutoGPT, and Python AI agents.
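
One common remediation for this pattern: when the model is only supposed to return data, parse it as a literal instead of evaluating it as code. A minimal sketch using the standard library:

```python
import ast

def parse_llm_literal(llm_output: str):
    """Parse LLM output as a plain Python literal -- never evaluate it as code."""
    try:
        # literal_eval accepts numbers, strings, lists, dicts, etc., but no calls
        return ast.literal_eval(llm_output)
    except (ValueError, SyntaxError) as exc:
        raise ValueError(f"refusing to evaluate non-literal output: {llm_output!r}") from exc
```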

Why Inkog Verify?

Stop agent failures before they reach production.

This Happens More Than You Think

A multi-agent system shipped without termination checks:

while True:
    response = llm.complete(user_query) # No exit condition

# Result: Unexpected API charges in hours
Inkog Verify would have caught this before deployment.
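
The fix is a bounded loop with an explicit exit condition. llm_complete below is a stand-in for any LLM client call, and the "DONE" marker is an illustrative completion signal:

```python
MAX_TURNS = 5  # hard budget on LLM calls

def complete_with_budget(query, llm_complete, max_turns=MAX_TURNS):
    """Like the loop above, but with an explicit exit and a call budget."""
    for _ in range(max_turns):               # bounded, not `while True`
        response = llm_complete(query)
        if "DONE" in response:               # explicit exit condition
            return response
    raise RuntimeError(f"no completion after {max_turns} calls")  # fail closed
```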

Agent Development Lifecycle

BUILD
Create agents
YOU ARE HERE
VERIFY
Inkog Verify
DEPLOY
Ship to prod
SOON
GUARD
Inkog Runtime

AI agents fail in production

Traditional tools miss agent-specific issues. Infinite loops, runaway costs, and logic flaws slip through code review because they require understanding how LLMs interact with your codebase.

  • Unbounded API loops cause unexpected charges
  • Unvalidated inputs cause unpredictable behavior
  • Logic flaws create cascading failures
  • Missing guardrails lead to runaway costs

Inkog Verify catches issues early

Purpose-built static analysis for AI agents. AST parsing, taint tracking, and cross-file data flow analysis designed specifically for agentic architectures.

  • 20+ vulnerability patterns detected
  • Cross-file taint tracking
  • Works with 15+ frameworks
  • Global compliance reports

Integrates in Minutes

Choose your preferred installation method.

1

Install Inkog

bash
docker run -v $(pwd):/app ghcr.io/inkog-io/inkog:latest /app
2

Scan Runs Automatically

The container scans your mounted directory and outputs results.

3

Review Results

Fix issues before shipping.

Inkog Dashboard (LIVE)

  • Score: A (15%)
  • Projects: 12
  • Critical: 3 (-2 ↓)
  • Uptime: 97%

Recent Activity:

  • customer-agent (LangChain): Passed, 2m
  • sales-bot (CrewAI): 3 issues, 15m
  • support-agent (AutoGen): Passed, 1h

What We Detect

20+ anti-patterns and issues that cause agent failures.

Resource Exhaustion

CRITICAL · CWE-770

Token Bombing

Runaway API loops that drain budgets—the #1 cause of agent cost overruns.

CRITICAL · CWE-835

Infinite Loops

Missing termination conditions that cause agents to run indefinitely.

HIGH · CWE-400

Context Window Exhaustion

Unbounded message history accumulation causing memory overflow.
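
The usual mitigation for unbounded history growth is a hard cap on retained turns. A minimal sketch using deque's maxlen (the cap of 20 turns is an arbitrary example):

```python
from collections import deque

class BoundedHistory:
    """Keep the system prompt plus at most max_turns recent messages."""
    def __init__(self, system_prompt: str, max_turns: int = 20):
        self.system = {"role": "system", "content": system_prompt}
        self.turns = deque(maxlen=max_turns)  # oldest messages fall off automatically

    def add(self, role: str, content: str):
        self.turns.append({"role": role, "content": content})

    def messages(self) -> list:
        return [self.system, *self.turns]
```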

Input Handling Issues

CRITICAL · CWE-74

Prompt Injection

Unvalidated inputs that can hijack agent behavior or cause unexpected outputs.

CRITICAL · CWE-94

Code Injection (RCE)

eval() or exec() called with LLM-generated output.

CRITICAL · CWE-89

SQL Injection via LLM

LLM-generated SQL queries without parameterization.
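
The remediation is the same as for any injection: LLM-extracted values go through query placeholders, never string interpolation. A sketch with sqlite3 and a hypothetical orders table:

```python
import sqlite3

def lookup_orders(conn, customer_name: str):
    """Treat LLM-extracted values as data via placeholders, never as SQL text."""
    # Vulnerable: conn.execute(f"SELECT ... WHERE customer = '{customer_name}'")
    return conn.execute(
        "SELECT id, total FROM orders WHERE customer = ?",  # parameterized query
        (customer_name,),
    ).fetchall()
```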

Data Leaks & Privacy

CRITICAL · CWE-798

Hardcoded Credentials

API keys and secrets embedded in source code.

MEDIUM · CWE-532

Logging Sensitive Data

PII or secrets written to logs without sanitization.

HIGH · CWE-668

Cross-Tenant Leakage

Multi-tenant isolation failures in agent memory.
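
For the logging case, a common pattern is scrubbing secret-shaped substrings before a line reaches any log handler. The two key shapes below are illustrative; a real scanner uses a much larger rule set:

```python
import re

# Illustrative shapes for common secrets (OpenAI-style keys, AWS access key IDs).
SECRET_RE = re.compile(r"sk-[A-Za-z0-9]{8,}|AKIA[0-9A-Z]{16}")

def redact(line: str) -> str:
    """Scrub secret-shaped substrings before the line is logged."""
    return SECRET_RE.sub("[REDACTED]", line)
```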

Governance & Compliance

HIGH

AGENTS.md Governance Mismatch

Validates AGENTS.md declarations against actual code behavior. Catches when agents exceed stated boundaries.

HIGH

Missing Human Oversight

High-risk actions without approval gates. EU AI Act Article 14 compliance.

MEDIUM

Excessive Agency

Agents with overly broad permissions. OWASP LLM08 detection.

MCP & Multi-Agent Security

HIGH

MCP Server Audit

First tool to audit MCP servers before installation. Analyze tool permissions, data flow risks, and input validation.

CRITICAL

Infinite Delegation Loops

Detect circular delegation patterns in multi-agent systems that cause runaway execution.

HIGH

Privilege Escalation

Identify unauthorized capability transfers between agents in A2A communication.
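
Circular delegation of the kind described above reduces to cycle detection over a directed graph. A minimal DFS sketch, where the graph maps each agent to the agents it delegates to:

```python
def find_delegation_cycle(delegates):
    """DFS over a delegation graph {agent: [agents it delegates to]};
    returns one cycle as a list of agents, or None if the graph is acyclic."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}

    def dfs(agent, path):
        color[agent] = GRAY                       # on the current DFS path
        for nxt in delegates.get(agent, []):
            if color.get(nxt, WHITE) == GRAY:     # back edge: cycle found
                return path + [nxt]
            if color.get(nxt, WHITE) == WHITE:
                cycle = dfs(nxt, path + [nxt])
                if cycle:
                    return cycle
        color[agent] = BLACK                      # fully explored, no cycle here
        return None

    for agent in delegates:
        if color.get(agent, WHITE) == WHITE:
            cycle = dfs(agent, [agent])
            if cycle:
                return cycle
    return None
```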

Works With Your Stack

One scanner for 15+ agent frameworks. Python code and JSON workflows.

LangChain
LangGraph
CrewAI
AutoGen
OpenAI Agents
Semantic Kernel
LlamaIndex
Haystack
DSPy
Phidata
n8n
Flowise
Langflow
Dify
Smolagents

Same detection rules across all frameworks. No configuration needed.

Global Compliance Ready

Automated mapping to global AI governance frameworks.

ISO/IEC 42001

Global Standard

  • AI management system
  • Risk assessment
  • Certification-ready

NIST AI RMF

US Framework

  • MAP/MEASURE/MANAGE
  • Risk controls
  • Federal aligned

EU AI Act

European Union

  • Article 15 compliance
  • High-risk audits
  • Traceability

OWASP LLM

Top 10

  • Prompt Injection
  • Output Handling
  • Unbounded Consumption

Your secrets never leave your machine

Source code is redacted locally before transmission. Only the sanitized logic graph is analyzed remotely. API keys, credentials, and secrets stay on your machine.

Local Redaction · No Secrets Transmitted · SOC 2 Ready

Under the Hood

Built for precision. Powered by AST analysis and taint tracking.

AST Parsing

Tree-sitter-based parsing for Python, JavaScript, and TypeScript

Data Flow Graph

Cross-file taint tracking with source-to-sink analysis

Control Flow

Trace execution paths to find logic flaws

Universal IR

Framework-agnostic intermediate representation

Bayesian Calibration

Self-learning confidence scores from feedback

Audit Logging

Compliance trail for regulatory requirements

Semantic Detection

Pattern matching on normalized code structure

Memory Analysis

Detect context accumulation and leakage
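
Source-to-sink tracking of the kind listed above can be illustrated with a toy propagation pass over straight-line assignments. The source and sink names are illustrative; real cross-file analysis is far more involved:

```python
import ast

TAINT_SOURCES = {"input", "get_user_message"}  # illustrative taint sources
DANGEROUS_SINKS = {"eval", "exec"}             # illustrative dangerous sinks

def tainted_sink_calls(source_code: str) -> list:
    """Toy source-to-sink pass over straight-line, top-level statements."""
    tainted, hits = set(), []
    for stmt in ast.parse(source_code).body:
        call = stmt.value if isinstance(stmt, (ast.Assign, ast.Expr)) else None
        if not isinstance(call, ast.Call) or not isinstance(call.func, ast.Name):
            continue
        args_tainted = any(isinstance(a, ast.Name) and a.id in tainted
                           for a in call.args)
        if isinstance(stmt, ast.Assign):
            # Propagate taint: assignments from sources, or from tainted arguments.
            if call.func.id in TAINT_SOURCES or args_tainted:
                tainted.update(t.id for t in stmt.targets
                               if isinstance(t, ast.Name))
        elif call.func.id in DANGEROUS_SINKS and args_tainted:
            hits.append(call.func.id)          # tainted data reached a sink
    return hits
```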

Ready to Ship Reliable Agents?

Join teams using Inkog to build agents that actually work in production.

View on GitHub