CrewAI Pre-Flight Check

CrewAI Agent Readiness

The pre-flight check for CrewAI applications. Detects delegation loops, stuck agents, and token consumption patterns in Crew, Agent, and Task workflows.

Common CrewAI Logic Flaws

Patterns that static analysis tools like linters don't catch.

Delegation Loops

Agents with allow_delegation=True can delegate tasks in circles, creating infinite loops

Missing max_iter

Agents without an explicit max_iter bound can keep retrying failed tasks far longer than intended

Token Bombing

Multi-agent crews without cost limits accumulate API charges across all agents
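The delegation-loop hazard above is simply a cycle in the graph of who may delegate to whom. A framework-independent sketch of the cycle check a scanner performs (the agent names and graph shape are illustrative, not Inkog's actual implementation):

```python
def find_delegation_cycle(delegates_to):
    """Return a cycle of agent names if one exists, else None.

    delegates_to maps each agent to the agents it may delegate to.
    Uses a depth-first search with white/gray/black coloring.
    """
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {agent: WHITE for agent in delegates_to}
    stack = []

    def dfs(node):
        color[node] = GRAY
        stack.append(node)
        for nxt in delegates_to.get(node, []):
            if color.get(nxt, WHITE) == GRAY:
                # Found a back-edge: slice out the cycle.
                return stack[stack.index(nxt):] + [nxt]
            if color.get(nxt, WHITE) == WHITE:
                cycle = dfs(nxt)
                if cycle:
                    return cycle
        stack.pop()
        color[node] = BLACK
        return None

    for agent in delegates_to:
        if color[agent] == WHITE:
            cycle = dfs(agent)
            if cycle:
                return cycle
    return None

# Researcher and Writer can each delegate to the other: a loop.
graph = {"researcher": ["writer"], "writer": ["researcher"]}
print(find_delegation_cycle(graph))  # ['researcher', 'writer', 'researcher']

# Breaking the chain (writer cannot delegate) removes the cycle.
safe = {"researcher": ["writer"], "writer": []}
print(find_delegation_cycle(safe))  # None
```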

Detection Patterns

CrewAI-specific detection patterns with code examples.

Delegation Loop - Circular Agent Delegation

CRITICAL

Multiple agents with allow_delegation=True can delegate to each other indefinitely.

Vulnerable

```python
# Circular delegation between agents
researcher = Agent(
    role="Researcher",
    goal="Research the topic",
    backstory="Senior researcher",
    allow_delegation=True  # Can delegate to writer
)
writer = Agent(
    role="Writer",
    goal="Write the report",
    backstory="Technical writer",
    allow_delegation=True  # Can delegate back!
)
crew = Crew(agents=[researcher, writer], tasks=tasks)
```
Secure

```python
# Unidirectional delegation chain
researcher = Agent(
    role="Researcher",
    goal="Research the topic",
    backstory="Senior researcher",
    allow_delegation=True,
    max_iter=5  # Bound the reason-act loop
)
writer = Agent(
    role="Writer",
    goal="Write the report",
    backstory="Technical writer",
    allow_delegation=False  # End of chain
)
crew = Crew(agents=[researcher, writer], tasks=tasks)
```

Missing Iteration Bounds

HIGH

An agent without an explicit max_iter can keep retrying a failing task, consuming tokens on every attempt.

Vulnerable

```python
agent = Agent(
    role="Data Analyst",
    goal="Analyze the dataset",
    backstory="Expert data analyst",
    # No explicit max_iter - retry behavior is left to the framework default
)
```
Secure

```python
agent = Agent(
    role="Data Analyst",
    goal="Analyze the dataset",
    backstory="Expert data analyst",
    max_iter=5,  # Stop after 5 attempts
    max_rpm=10,  # Rate-limit requests per minute
)
```
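Conceptually, max_iter caps the agent's internal retry loop. A framework-independent sketch of why the bound matters (the step callable and its failure behavior are hypothetical stand-ins, not CrewAI internals):

```python
def run_with_iteration_cap(step, max_iter=5):
    """Call step(attempt) until it returns a result, giving up after max_iter tries."""
    for attempt in range(1, max_iter + 1):
        result = step(attempt)
        if result is not None:
            return result
    # Without this cap, a step that never succeeds would loop (and bill) forever.
    raise RuntimeError(f"gave up after {max_iter} attempts")

# A step that only succeeds on the third try completes within the cap.
flaky = lambda attempt: "done" if attempt >= 3 else None
print(run_with_iteration_cap(flaky))  # done

# A step that never succeeds hits the cap instead of spinning forever.
try:
    run_with_iteration_cap(lambda attempt: None, max_iter=2)
except RuntimeError as exc:
    print(exc)  # gave up after 2 attempts
```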

Unvalidated Tool Output

HIGH

Tool results passed to next agent without validation.

Vulnerable

```python
# Tool output goes directly to the next agent
@tool
def query_database(sql: str) -> str:
    """Run SQL query and return results"""
    return db.execute(sql)  # No validation!

analyst = Agent(tools=[query_database])
```

LLM-generated SQL can carry injection payloads that then flow unchecked from agent to agent.
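This section shows only the vulnerable pattern; one mitigation is to gate the query before execution. The check below is a deliberately coarse sketch (naive substring matching will also reject harmless identifiers such as created_at), not a complete SQL-injection defense:

```python
FORBIDDEN = ("insert", "update", "delete", "drop", "alter", "create", ";", "--")

def validate_readonly_sql(sql: str) -> str:
    """Reject anything that is not a single, plain SELECT statement."""
    normalized = sql.strip().lower()
    if not normalized.startswith("select"):
        raise ValueError("only SELECT statements are allowed")
    if any(token in normalized for token in FORBIDDEN):
        raise ValueError("disallowed keyword or statement separator in query")
    return sql

print(validate_readonly_sql("SELECT name FROM users WHERE id = 1"))
try:
    validate_readonly_sql("SELECT 1; DROP TABLE users")
except ValueError as exc:
    print(exc)  # disallowed keyword or statement separator in query
```

Inside the tool, the guard would run before the query: `return db.execute(validate_readonly_sql(sql))`.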

Getting Started

Run Inkog against your CrewAI codebase.

1. Run the scanner

```bash
npx -y @inkog-io/cli scan ./my-crewai-app
```

2. Review findings

Inkog traces data flow through your CrewAI code and reports issues with severity levels and line numbers.

3. Address issues

Apply the suggested fixes based on severity and re-scan to verify.

CrewAI Compliance Reports

Automated mapping to global AI governance frameworks.

EU AI Act: Articles 12, 14, 15

NIST AI RMF: MAP / MEASURE / MANAGE

OWASP LLM: Top 10 coverage

ISO 42001: AI management

CrewAI Readiness FAQ

Does Inkog support CrewAI?

Yes. Inkog has a dedicated CrewAI adapter that understands Agent, Task, Crew, and delegation patterns. It detects delegation loops, missing iteration bounds, and tool security issues.

How do I fix a CrewAI delegation loop?

Set allow_delegation=False on at least one agent in each chain. Use max_iter to bound retries. Inkog identifies which agents form cycles.

Does Inkog scan CrewAI tools?

Yes. Inkog analyzes @tool decorated functions for security issues including code execution, SQL injection, and file system access patterns.
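The kind of check involved can be sketched with Python's ast module; the pattern list and helper below are a toy illustration of flagging risky calls in tool source, not Inkog's actual analysis:

```python
import ast

DANGEROUS_CALLS = {"eval", "exec", "os.system", "subprocess.run", "subprocess.Popen"}

def flag_dangerous_calls(source: str):
    """Return the names of risky calls found in a tool's source code."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            func = node.func
            if isinstance(func, ast.Name):
                name = func.id
            elif isinstance(func, ast.Attribute) and isinstance(func.value, ast.Name):
                name = f"{func.value.id}.{func.attr}"
            else:
                continue
            if name in DANGEROUS_CALLS:
                findings.append(name)
    return findings

tool_src = '''
def run_code(snippet: str) -> str:
    """Tool that executes arbitrary Python - a classic code-execution risk."""
    return str(eval(snippet))
'''
print(flag_dangerous_calls(tool_src))  # ['eval']
```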

Scan Your CrewAI Application

Free tier available. No credit card required.