Introducing Inkog Deep: Semantic Security Analysis for AI Agents
Inkog Deep goes beyond pattern matching — it understands your agent's purpose, maps its architecture, and explains why findings matter.
Most security tools stop at pattern matching. They find what's wrong but can't tell you why it matters, whether your agent's purpose makes it critical, or what to actually do about it.
Inkog doesn't work that way. We built two analysis engines — a deterministic static engine and a semantic engine we call Deep — and run them together on every scan. The static engine traces data flow, tracks tainted inputs across files, and catches code-level vulnerabilities. Deep reads your agent, understands what it's designed to do, and turns raw findings into context-aware security intelligence.
Deep is now in beta.
Why Two Engines
Static analysis is fast, deterministic, and reliable. It gives you the same results every time. It catches SQL injection, prompt injection, missing rate limits, unsafe environment access — the things that have clear code-level signatures.
But it has a blind spot. It can tell you that user input reaches a SQL query. It can't tell you whether your agent's purpose makes that a critical risk or a low-priority edge case. It can flag a missing human approval gate, but it can't explain that your financial trading agent specifically needs one before executing live trades.
Deep fills that gap. It reads your agent's code the way a senior security engineer would — understanding intent, assessing context, and producing explanations that compliance teams can actually use.
Neither engine alone is sufficient. Together, they cover the full surface.
What Deep Does
Understands What Your Agent Is For
A customer support bot, a code review assistant, a financial analyst — each has a different risk profile. A missing rate limit on an internal logging agent is noise. The same missing rate limit on a customer-facing agent with tool access to external APIs is a real problem.
Deep reads your agent and understands its purpose. Findings are prioritized based on what actually matters for your specific use case — not generic severity tables.
Detects Tool Poisoning
This is a threat class that static analysis cannot address. MCP servers and third-party tools expose descriptions that instruct agents how to use them. A poisoned tool description can contain hidden instructions, invisible Unicode characters, or overly broad input schemas designed to manipulate your agent's behavior.
Deep analyzes tool descriptions for:
- Hidden instructions — directive language designed to override agent behavior
- Invisible characters — zero-width Unicode used to embed concealed text
- Schema manipulation — overly broad input schemas that accept arbitrary data
- Permission escalation — tools that instruct the agent to invoke other tools
Your agent trusts its tools. If a tool description is compromised, your agent follows the attacker's instructions — and no static scanner will catch it because the vulnerability isn't in your code.
Explains Why Findings Matter
Here's what the same finding looks like from each engine, on a financial trading agent:
Static engine:
CRITICAL missing_human_oversight
File: agent/trader.py:47
Category: governance
OWASP: LLM09 - Overreliance
Message: Agent performs sensitive operations without human approval gateWith Deep:
CRITICAL missing_human_oversight
File: agent/trader.py:47
Agent: FinancialTradingAgent — executes buy/sell orders via Alpaca API
Risk: This agent places live market orders (execute_trade tool)
with no human confirmation step. A hallucinated ticker
symbol or misinterpreted market signal results in real
financial loss.
Context: The agent has access to 3 tools: execute_trade, get_portfolio,
analyze_market. Only execute_trade requires oversight — the
other two are read-only.
Fix: Add a human approval gate before execute_trade invocations.
The agent's run_loop (line 52) should yield for confirmation
when the selected tool has financial side effects.
EU AI Act: Article 14(1) — high-risk AI systems shall be designed to
be effectively overseen by natural persons
NIST: GOVERN 1.2 — Organizational processes for AI risk managementThat's the difference between a line item in a spreadsheet and an actionable remediation brief.
Maps to Compliance Frameworks
The static engine maps to OWASP LLM Top 10. Deep extends coverage to EU AI Act (Articles 14, 15), NIST AI RMF (GOVERN, MAP, MEASURE, MANAGE), GDPR (data processing in agent workflows), and MITRE ATLAS (adversarial threat techniques).
Each finding includes specific article references and remediation guidance mapped to the applicable framework. For teams preparing for EU AI Act enforcement in August 2026, this means automated evidence generation — not manual control mapping.
MCP Server Auditing
The MCP ecosystem is growing fast. Developers are connecting agents to third-party servers for database access, file operations, API integrations, and more. Each server exposes tools that your agent trusts implicitly.
Inkog is the first tool that audits MCP server configurations at the description level — not just checking if the server runs, but analyzing whether the tools it exposes are safe for your agent to use.
If you maintain MCP servers, Inkog can audit your tool descriptions before you publish. If you consume MCP servers, Inkog can vet them before you connect.
Try It
The static engine is live and free. Deep is in limited beta.
Book a walkthrough — We'll run both engines on your agent code live and show you what the full analysis looks like. Book a demo.
Already scanning with Inkog? — Request Deep beta access from your dashboard. Existing users get priority.
Beta participants get direct feedback channels with the team and preferential pricing when Deep moves to general availability.
Inkog combines deterministic static analysis with semantic security intelligence. See pricing or book a demo to run it on your agent code.