Hardcoded Credentials in AI Applications
Hardcoded credentials in AI applications are API keys, tokens, passwords, or other secrets embedded directly in source code rather than loaded from environment variables or a secret manager. In AI agent code, this most commonly means OpenAI API keys, database passwords, and service tokens.
Exposed API Key
from langchain.llms import OpenAI
# API key hardcoded in source code
llm = OpenAI(api_key="sk-proj-abc123def456...")
agent = create_react_agent(llm, tools)

Fixed: Key Loaded from Environment
import os
from langchain.llms import OpenAI
# Loaded from environment variable
llm = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
agent = create_react_agent(llm, tools)

Frequently Asked Questions
Why are hardcoded credentials especially dangerous in AI agent code?
AI agent code often includes high-value API keys (OpenAI, Anthropic, etc.) that can cost thousands of dollars if leaked. Additionally, AI repos are frequently shared as examples or demos, increasing the chance of credential exposure through GitHub.
What types of credentials does Inkog detect?
Inkog detects hardcoded OpenAI API keys, Anthropic keys, AWS credentials, database passwords, JWT secrets, PEM private keys, and 30+ other credential patterns. Detection runs locally — credentials never leave your machine.
How do I fix hardcoded credentials in my AI agent?
Replace hardcoded values with environment variables: os.environ["OPENAI_API_KEY"]. Use .env files with python-dotenv for local development and secret managers (AWS Secrets Manager, Vault) for production.
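For local development, the advice above can be sketched as a small stdlib-only helper. The function names (`load_env_file`, `require_env`) are illustrative; in a real project you would use python-dotenv's `load_dotenv()` instead of hand-parsing the file.

```python
import os


def load_env_file(path=".env"):
    """Minimal .env parser: reads KEY=VALUE lines, skips blanks and
    '#' comments. (Stand-in for python-dotenv's load_dotenv().)"""
    if not os.path.exists(path):
        return
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # Real environment variables take precedence over the file.
            os.environ.setdefault(key.strip(), value.strip())


def require_env(name):
    """Fail fast with a clear error instead of passing None downstream."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"{name} is not set")
    return value
```

Remember to add `.env` to `.gitignore` so the file itself never reaches version control.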
How Inkog Detects This
Inkog runs local pattern-matching on your source code to detect hardcoded API keys, tokens, and passwords. Detection happens client-side — your credentials are redacted before any code is sent to the analysis server. Supports 30+ credential patterns.
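The detect-then-redact flow described above can be sketched with a few regexes. The pattern names and expressions below are assumptions for illustration, not Inkog's actual rule set.

```python
import re

# Illustrative credential patterns (assumed, not Inkog's real rules).
SECRET_PATTERNS = {
    "openai_key": re.compile(r"sk-[A-Za-z0-9_-]{20,}"),
    "aws_access_key_id": re.compile(r"AKIA[0-9A-Z]{16}"),
    "pem_private_key": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
}


def redact_secrets(source: str) -> str:
    """Replace any matched credential with a labeled placeholder so the
    raw value never leaves the machine."""
    for name, pattern in SECRET_PATTERNS.items():
        source = pattern.sub(f"<REDACTED:{name}>", source)
    return source
```

Running redaction client-side, before any code is uploaded, is what allows the analysis server to see where a secret was without ever seeing its value.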
npx -y @inkog-io/cli scan .

Scan for Exposed Secrets
Scan your AI agents for vulnerabilities. Free for developers.