<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Inkog Labs — AI Agent Security Research</title>
    <link>https://inkog.io/labs</link>
    <atom:link href="https://inkog.io/labs/rss.xml" rel="self" type="application/rss+xml"/>
    <description>Security research, technical deep-dives, and insights on AI agent security from the Inkog team.</description>
    <language>en-us</language>
    <lastBuildDate>Tue, 05 May 2026 09:07:29 GMT</lastBuildDate>
    <pubDate>Tue, 21 Apr 2026 00:00:00 GMT</pubDate>
    <item>
      <title>The Vercel Breach: Anatomy of an AI Tool Supply Chain Attack</title>
      <link>https://inkog.io/labs/vercel-breach-ai-supply-chain</link>
      <guid isPermaLink="true">https://inkog.io/labs/vercel-breach-ai-supply-chain</guid>
      <pubDate>Tue, 21 Apr 2026 00:00:00 GMT</pubDate>
      <description>A compromised AI tool called Context.ai gave attackers access to Vercel customer credentials through a single OAuth grant. Full attack chain, timeline, and response checklists for developers, security teams, and CISOs.</description>
      <author>hello@inkog.io (Ben)</author>
      <category>security</category>
    </item>
    <item>
      <title>What 561 Repositories Taught Us About AI Agent Security</title>
      <link>https://inkog.io/labs/561-repos-ai-agent-security</link>
      <guid isPermaLink="true">https://inkog.io/labs/561-repos-ai-agent-security</guid>
      <pubDate>Fri, 10 Apr 2026 00:00:00 GMT</pubDate>
      <description>We scanned 561 open-source AI agent repositories, drove the false-positive rate to zero, and opened a disclosure pipeline. Here is what we learned — methodology, top patterns, and the raw numbers.</description>
      <author>hello@inkog.io (Inkog Team)</author>
      <category>research</category>
    </item>
    <item>
      <title>Building Secure AI Agents with Claude Code and the Inkog MCP</title>
      <link>https://inkog.io/labs/building-secure-agents-with-claude-code</link>
      <guid isPermaLink="true">https://inkog.io/labs/building-secure-agents-with-claude-code</guid>
      <pubDate>Fri, 10 Apr 2026 00:00:00 GMT</pubDate>
      <description>A walkthrough of the dev-flow security loop: build an agent in Claude Code, scan it with the Inkog MCP, explain and fix findings in the same conversation.</description>
      <author>hello@inkog.io (Inkog Team)</author>
      <category>best-practices</category>
    </item>
    <item>
      <title>Why Multi-Agent Communication in CrewAI Needs Authentication</title>
      <link>https://inkog.io/labs/crewai-multi-agent-authentication</link>
      <guid isPermaLink="true">https://inkog.io/labs/crewai-multi-agent-authentication</guid>
      <pubDate>Wed, 08 Apr 2026 00:00:00 GMT</pubDate>
      <description>We analyzed CrewAI&apos;s delegation system and found unsigned agent-to-agent communication. Here&apos;s why multi-agent workflows need message authentication.</description>
      <author>hello@inkog.io (Inkog Team)</author>
      <category>vulnerabilities</category>
    </item>
    <item>
      <title>The AI Agent Security Gap: Findings from Scanning 500+ Open-Source AI Agent Projects</title>
      <link>https://inkog.io/labs/ai-agent-security-gap-2026</link>
      <guid isPermaLink="true">https://inkog.io/labs/ai-agent-security-gap-2026</guid>
      <pubDate>Fri, 03 Apr 2026 00:00:00 GMT</pubDate>
      <description>We scanned 500+ open-source AI agent repositories for security vulnerabilities. The results reveal a systemic gap between AI agent adoption and AI agent security.</description>
      <author>hello@inkog.io (Inkog Team)</author>
      <category>research</category>
    </item>
    <item>
      <title>Promptfoo Alternative: What the OpenAI Acquisition Means for AI Agent Security</title>
      <link>https://inkog.io/labs/promptfoo-alternative-for-ai-agent-security</link>
      <guid isPermaLink="true">https://inkog.io/labs/promptfoo-alternative-for-ai-agent-security</guid>
      <pubDate>Tue, 10 Mar 2026 00:00:00 GMT</pubDate>
      <description>OpenAI acquired Promptfoo. What changes for teams that need vendor-independent AI agent security, and how static analysis fills the gaps eval frameworks leave open.</description>
      <author>hello@inkog.io (Inkog Team)</author>
      <category>security</category>
    </item>
    <item>
      <title>Introducing Inkog Deep: Semantic Security Analysis for AI Agents</title>
      <link>https://inkog.io/labs/introducing-inkog-deep</link>
      <guid isPermaLink="true">https://inkog.io/labs/introducing-inkog-deep</guid>
      <pubDate>Sun, 08 Mar 2026 00:00:00 GMT</pubDate>
      <description>Inkog Deep goes beyond pattern matching — it understands your agent&apos;s purpose, maps its architecture, and explains why findings matter.</description>
      <author>hello@inkog.io (Inkog Team)</author>
      <category>security</category>
    </item>
    <item>
      <title>EU AI Act Compliance Checklist for AI Agent Developers</title>
      <link>https://inkog.io/labs/eu-ai-act-agent-compliance-checklist</link>
      <guid isPermaLink="true">https://inkog.io/labs/eu-ai-act-agent-compliance-checklist</guid>
      <pubDate>Sat, 28 Feb 2026 00:00:00 GMT</pubDate>
      <description>Enforcement of the EU AI Act begins August 2, 2026. A practical checklist covering Article 14 (Human Oversight), Article 15 (Robustness), risk classification, and automated compliance monitoring with GitHub Actions.</description>
      <author>hello@inkog.io (Ben)</author>
      <category>compliance</category>
    </item>
    <item>
      <title>Why AI Code Review Is Not Security Scanning</title>
      <link>https://inkog.io/labs/why-ai-code-review-is-not-security-scanning</link>
      <guid isPermaLink="true">https://inkog.io/labs/why-ai-code-review-is-not-security-scanning</guid>
      <pubDate>Sat, 28 Feb 2026 00:00:00 GMT</pubDate>
      <description>Claude, Copilot, and Cursor are great at reviewing code — but they are not security scanners. Six gaps between AI code review and automated security scanning for AI agents.</description>
      <author>hello@inkog.io (Inkog Team)</author>
      <category>security</category>
    </item>
    <item>
      <title>NIST Launches AI Agent Standards Initiative: What Security Teams Need to Know</title>
      <link>https://inkog.io/labs/nist-ai-agent-standards-initiative</link>
      <guid isPermaLink="true">https://inkog.io/labs/nist-ai-agent-standards-initiative</guid>
      <pubDate>Sun, 22 Feb 2026 00:00:00 GMT</pubDate>
      <description>NIST announced the AI Agent Standards Initiative on February 18, 2026 — the first US federal effort targeting AI agent governance. Key deadlines, pillars, and how to prepare.</description>
      <author>hello@inkog.io (Ben)</author>
      <category>compliance</category>
    </item>
    <item>
      <title>Prompt Injection Defense Patterns for Production AI Agents</title>
      <link>https://inkog.io/labs/prompt-injection-defense-patterns</link>
      <guid isPermaLink="true">https://inkog.io/labs/prompt-injection-defense-patterns</guid>
      <pubDate>Sat, 07 Dec 2024 00:00:00 GMT</pubDate>
      <description>A practical guide to detecting and preventing prompt injection attacks in LLM-powered applications, with code examples and YAML rules.</description>
      <author>hello@inkog.io (Inkog Team)</author>
      <category>security</category>
    </item>
  </channel>
</rss>