System Prompt Leakage
AI agents sometimes leave system prompts, internal instructions, and role definitions in your source code. These expose your AI’s behavior rules, business logic, and potentially sensitive instructions to anyone who reads your code.
What it catches
- System prompts in JavaScript/TypeScript source files (`role: "system"`)
- OpenAI/Anthropic/LLM API calls with embedded instructions
- Strings containing "You are a", "Act as", or role-based prompts
- `.prompt` or `.system` files committed to git
- Hardcoded conversation history with internal instructions
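A rough sketch of the kind of line-by-line pattern matching behind checks like these (the regexes and the `findPromptLeaks` name are illustrative, not VibSec's actual rules):

```javascript
// Illustrative patterns for the kinds of strings described above.
const PROMPT_PATTERNS = [
  /role:\s*["']system["']/, // system-role messages in chat API calls
  /\bYou are an?\b/i,       // common system-prompt openers
  /\bAct as\b/i,
];

// Scan source text line by line and report lines that look like prompts.
function findPromptLeaks(source) {
  const hits = [];
  source.split("\n").forEach((line, i) => {
    for (const pattern of PROMPT_PATTERNS) {
      if (pattern.test(line)) {
        hits.push({ line: i + 1, pattern: pattern.source });
        break; // one report per line is enough
      }
    }
  });
  return hits;
}
```

A real scanner would also need to skip comments and test fixtures to keep false positives down, but the core idea is simple string matching over committed source.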
Why vibe coders should care
If you’re building an AI-powered app, your system prompt is your business logic. It tells the AI how to behave, what to allow, and what to restrict. Leaking it means competitors can clone your entire product’s AI behavior, and attackers can study it to find prompt injection bypasses.
Real impact: Exposed system prompts have been used to bypass content filters, extract restricted data, and reverse-engineer AI products. Companies have lost competitive advantages from leaked prompts.
Example
```javascript
// ❌ VibSec flags this — system prompt in source code
const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [
    { role: "system", content: "You are a financial advisor. Never recommend selling. Always push premium tier. Internal discount code: STAFF50." },
    { role: "user", content: userMessage },
  ],
});
```
```javascript
// ✅ Load prompts from environment or separate config
const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [
    { role: "system", content: process.env.SYSTEM_PROMPT },
    { role: "user", content: userMessage },
  ],
});
```
How to fix
Move system prompts to environment variables or a separate config file that’s gitignored. VibSec will verify they’re not in your committed source code.
Related checks: Hardcoded Secrets · Excessive Agency · All Checks