EU AI Act Compliance Checklist for AI Agent Developers
Enforcement of the EU AI Act begins August 2, 2026. A practical checklist covering Article 14 (Human Oversight), Article 15 (Robustness), risk classification, and automated compliance monitoring with GitHub Actions.

Enforcement of the EU AI Act begins August 2, 2026 — roughly 155 days from now. If you build AI agents that operate in the EU market, this applies to you. Fines reach 35 million EUR or 7% of global annual turnover, whichever is higher.
This checklist covers what AI agent developers need to know and do before enforcement.
Does the EU AI Act Apply to Your AI Agent?
Not every AI application falls under the same rules. Here's a quick decision tree:
- Does your AI agent make or influence decisions that affect people? (hiring, lending, medical, legal, safety-critical) → You likely have a high-risk system under Annex III.
- Does your AI agent interact with people who may not know they're talking to AI? → You have transparency obligations regardless of risk level.
- Is your AI agent a general-purpose AI model? (e.g., a foundation model deployed as-is) → GPAI rules apply (Chapter V, Articles 51-55).
- None of the above? → You still have basic obligations (no prohibited practices, basic transparency).
Most production AI agents with tool-calling, autonomous decision-making, or multi-step reasoning fall into the high-risk category if they operate in regulated domains.
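The decision tree above can be sketched as a first-pass triage helper. This is illustrative code, not legal advice; the question order and category labels are simplifying assumptions:

```python
# Illustrative first-pass triage following the decision tree above.
# Not legal advice: real classification requires reading Annex III and Chapter V.
def classify_agent(affects_people: bool, user_facing: bool, is_gpai: bool) -> str:
    """Map the decision-tree questions to a likely EU AI Act risk category."""
    if affects_people:  # hiring, lending, medical, legal, safety-critical
        return "high-risk (Annex III)"
    if is_gpai:  # foundation model deployed as-is
        return "GPAI (Chapter V)"
    if user_facing:  # users may not know they're talking to AI
        return "transparency obligations"
    return "minimal risk (basic obligations)"
```

Note that transparency obligations apply regardless of risk tier, so a high-risk, user-facing agent carries both sets of duties; a single return value is a simplification.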
Article 14: Human Oversight Checklist
Article 14 requires that high-risk AI systems are designed to be effectively overseen by humans. For AI agents, this means:
Required Controls
- [ ] Human-in-the-loop for high-stakes operations — Financial transactions, data deletion, external communications, and destructive actions require explicit human approval before execution.
```python
# Non-compliant: Agent executes financial actions autonomously
agent.run("Transfer $50,000 to account ending in 4521")

# Compliant: Human approval gate before execution
@require_human_approval(operations=["financial", "destructive"])
def execute_tool(tool_name, args):
    if is_high_stakes(tool_name):
        approval = request_human_review(tool_name, args)
        if not approval.granted:
            return "Action requires human approval"
    return tool.execute(args)
```
- [ ] Ability to interrupt the agent — Operators must be able to stop the agent mid-execution. This means implementing iteration limits, kill switches, and timeout mechanisms.
```python
# Non-compliant: Unbounded agent loop
while not task_complete:
    result = agent.step()

# Compliant: Bounded with operator override
agent = AgentExecutor(
    agent=react_agent,
    tools=tools,
    max_iterations=25,
    max_execution_time=300,  # 5 minute timeout
    early_stopping_method="force"
)
```
- [ ] Audit trail of agent decisions — Every tool call, LLM interaction, and decision point must be logged with timestamps, inputs, and outputs.
- [ ] Transparency to end users — Users interacting with your agent must know they are interacting with AI, what the agent can do, and what its limitations are.
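The audit-trail item can be sketched as a thin wrapper around tool execution. The wrapper below is a hypothetical illustration, not a specific framework's API; the logger name and record fields are assumptions:

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical audit-trail wrapper: logs every tool call as an
# append-only JSON line with timestamp, inputs, and outputs.
audit_log = logging.getLogger("agent.audit")

def audited_call(tool_name, tool_fn, args):
    """Run a tool and record a timestamped audit entry for its inputs and outputs."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool_name,
        "inputs": args,
    }
    try:
        entry["output"] = tool_fn(**args)
        entry["status"] = "ok"
    except Exception as exc:
        entry["status"] = "error"
        entry["error"] = str(exc)
        raise  # the failure itself is also part of the audit record
    finally:
        audit_log.info(json.dumps(entry, default=str))
    return entry["output"]
```

In production you would point the logger at durable, tamper-evident storage rather than stdout, so the trail survives for auditors.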
How to Verify
Run Inkog with the governance policy to check for missing oversight:
```shell
npx @inkog-io/cli scan . --policy governance
```
Inkog maps findings directly to Article 14 requirements and flags agents that lack human approval gates, iteration bounds, or audit logging.
Article 15: Robustness Checklist
Article 15 requires high-risk AI systems to be resilient, accurate, and secure. For AI agents:
Required Controls
- [ ] Input validation — All user inputs to the agent must be validated before reaching the LLM or tools. This prevents prompt injection and data exfiltration.
```python
# Non-compliant: Raw user input passed to LLM
response = llm.invoke(user_input)

# Compliant: Input sanitized before LLM call
sanitized = sanitize_input(user_input)
response = llm.invoke(sanitized)
```
- [ ] Output validation — Agent outputs must be validated before being returned to users or used in downstream operations. This prevents hallucinated data from propagating.
- [ ] Error handling and graceful degradation — The agent must handle LLM failures, tool errors, and unexpected states without crashing or entering infinite loops.
- [ ] Resource consumption limits — Token budgets, API call limits, and execution timeouts must be enforced to prevent token bombing and resource exhaustion.
```python
# Non-compliant: No token limits
agent.run(task, callbacks=[])

# Compliant: Token budget enforced
agent.run(
    task,
    max_tokens=100000,  # Total token budget
    callbacks=[TokenBudgetCallback(limit=100000)]
)
```
- [ ] Adversarial robustness — The agent must be tested against prompt injection, jailbreaking, and tool misuse attacks.
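For the output-validation item above, a minimal sketch might check an agent's structured output against an allow-list and policy bounds before it reaches downstream systems. The schema, action names, and limits below are illustrative assumptions, not a prescribed format:

```python
# Hypothetical output gate: reject malformed or hallucinated agent
# outputs instead of propagating them downstream.
ALLOWED_ACTIONS = {"refund", "escalate", "close"}

def validate_agent_output(output: dict) -> dict:
    """Validate an agent's structured output against an allow-list and policy bounds."""
    if not isinstance(output, dict):
        raise ValueError("agent output must be a JSON object")
    action = output.get("action")
    if action not in ALLOWED_ACTIONS:
        raise ValueError(f"unknown action: {action!r}")
    amount = output.get("amount", 0)
    if not isinstance(amount, (int, float)) or not (0 <= amount <= 1000):
        raise ValueError(f"amount out of policy bounds: {amount!r}")
    return output
```

A rejected output should route to the error-handling path (retry, fallback, or human review) rather than crash the agent, which ties this control to the graceful-degradation item above.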
How to Verify
```shell
npx @inkog-io/cli scan . --policy eu-ai-act
```
This runs the full EU AI Act compliance profile, checking for both Article 14 and Article 15 requirements with findings mapped to specific articles.
Automated Compliance Monitoring
Don't wait for an audit. Run compliance checks on every pull request:
```yaml
# .github/workflows/eu-ai-act-compliance.yml
name: EU AI Act Compliance Check
on: [pull_request]
jobs:
  compliance:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run Inkog EU AI Act Scan
        run: npx @inkog-io/cli scan . --policy eu-ai-act --output sarif > results.sarif
        env:
          INKOG_API_KEY: ${{ secrets.INKOG_API_KEY }}
      - name: Upload SARIF to GitHub Security
        uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: results.sarif
```
This creates a continuous compliance record — every PR is checked, findings appear in GitHub's Security tab, and you have an audit trail of when issues were introduced and fixed.
Risk Classification for AI Agents
The EU AI Act classifies AI systems into risk tiers. Here's how common AI agent patterns map:
| Agent Pattern | Likely Risk Level | Key Articles |
|---|---|---|
| Customer support chatbot | Limited risk | Article 50 (transparency) |
| Autonomous code execution agent | High risk | Articles 14, 15 |
| Financial trading agent | High risk | Articles 14, 15, Annex III |
| Medical triage agent | High risk | Articles 14, 15, Annex III |
| Internal RAG assistant | Limited risk | Article 50 |
| Multi-agent orchestrator with tool access | High risk | Articles 14, 15 |
| Content generation (no decisions) | Minimal risk | Basic obligations only |
Penalties
The EU AI Act penalties are substantial:
- Prohibited practices (Article 5): Up to 35M EUR or 7% of global annual turnover
- High-risk non-compliance (Articles 14, 15): Up to 15M EUR or 3% of global annual turnover
- Incorrect information to authorities: Up to 7.5M EUR or 1% of global annual turnover
For SMEs and startups, fines are capped at the lower of the percentage or the fixed amount.
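The tier logic above (the higher of the two caps, or the lower for SMEs) can be made concrete with a quick calculation; the turnover figures below are hypothetical:

```python
def max_fine(turnover_eur: float, fixed_cap_eur: float, pct: float, sme: bool = False) -> float:
    """Upper bound on a fine: the higher of the fixed cap and the
    percentage of global annual turnover, or the lower of the two for
    SMEs and startups."""
    pct_based = turnover_eur * pct
    return min(fixed_cap_eur, pct_based) if sme else max(fixed_cap_eur, pct_based)

# Prohibited-practices tier (35M EUR or 7%), hypothetical turnovers:
large_firm = max_fine(1_000_000_000, 35_000_000, 0.07)        # 7% of 1B = 70M, higher than 35M
small_firm = max_fine(10_000_000, 35_000_000, 0.07, sme=True) # 7% of 10M = 700k, lower cap applies
```

The same function covers the 15M/3% and 7.5M/1% tiers by swapping in the corresponding cap and percentage.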
What to Do Now
- Classify your AI agents — Determine which risk tier each agent falls into
- Run a baseline scan — run `npx @inkog-io/cli scan . --policy eu-ai-act` on your current codebase
- Add CI/CD checks — Set up the GitHub Actions workflow above to catch regressions
- Implement missing controls — Add human oversight gates, iteration limits, and audit logging where Inkog flags gaps
- Document your compliance — Keep scan results and remediation records for auditors
The EU AI Act is not optional if you serve EU users. Start now — August 2, 2026 is closer than it looks.
Inkog maps every finding to EU AI Act articles, NIST AI RMF functions, and OWASP LLM Top 10 categories. Run your first scan in 30 seconds:
```shell
npx @inkog-io/cli scan . --policy eu-ai-act
```