Severity: critical · OWASP LLM02

Hardcoded Secrets

AI coding agents frequently hardcode API keys, database credentials, and tokens directly into source files. When you say “connect to Stripe” or “set up the database,” your agent puts the key right in the code. VibSec scans for 20+ secret patterns across all file types.

What it catches

  • API keys (Stripe, AWS, OpenAI, Google, Twilio, SendGrid, etc.)
  • Database connection strings with passwords
  • Private keys and certificates (RSA, SSH, PGP)
  • .env files with production credentials
  • Hardcoded JWT secrets and signing keys
  • OAuth client secrets in source code
  • Firebase/Supabase service keys
  • Webhook secrets and auth tokens
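To make the list above concrete, detection of this kind can be sketched as a handful of regexes run over source text. These patterns are illustrative only, not VibSec's actual rule set:

```typescript
// Illustrative secret patterns — a sketch, not VibSec's real detection rules.
const SECRET_PATTERNS: { name: string; pattern: RegExp }[] = [
  { name: "Stripe live secret key", pattern: /sk_live_[A-Za-z0-9]{10,}/ },
  { name: "AWS access key ID", pattern: /AKIA[0-9A-Z]{16}/ },
  { name: "Connection string with password", pattern: /[a-z]+:\/\/\w+:[^@\s]+@/ },
  { name: "Private key header", pattern: /-----BEGIN (RSA |EC |OPENSSH )?PRIVATE KEY-----/ },
];

// Return the names of all patterns that match the given source text.
function findSecrets(source: string): string[] {
  return SECRET_PATTERNS
    .filter(({ pattern }) => pattern.test(source))
    .map(({ name }) => name);
}
```

Real scanners also use entropy checks and per-file-type rules to cut false positives, but pattern matching is the core of it.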

Why vibe coders should care

Committed secrets end up in git history permanently. Even if you remove them in the next commit, they remain retrievable from the repo's history. Bots scan GitHub for leaked keys within minutes of new commits.

Real impact:

  • A leaked Stripe secret key = unauthorized charges on your account
  • A leaked AWS key = crypto miners running on your account (people get $50K+ bills)
  • A leaked database URL = full access to all your user data
  • A leaked OpenAI key = someone runs up your API bill

Example

// ❌ VibSec flags this — AI agents do this ALL the time
const stripe = new Stripe('sk_live_abc123...');
const db = new Pool({ connectionString: 'postgres://admin:password123@db.example.com/prod' });

// ✅ Use environment variables
const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!);
const db = new Pool({ connectionString: process.env.DATABASE_URL });
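One refinement on the environment-variable pattern above: the `!` assertion silently passes `undefined` through if the variable is missing. A small fail-fast helper (hypothetical, not part of any library) turns that into an immediate startup error:

```typescript
// Hypothetical helper: fail fast at startup if a required env var is missing,
// instead of discovering an undefined key at first API call.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage (assuming the same setup as the example above):
// const stripe = new Stripe(requireEnv("STRIPE_SECRET_KEY"));
```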

How to fix

  1. Move all secrets to environment variables or a secret manager
  2. Add .env to your .gitignore (VibSec checks this too)
  3. Rotate any keys that were ever committed — assume they’re compromised
  4. Run vibsec scan --fix and paste the prompt to your AI agent to auto-fix all occurrences
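The kind of check step 2 describes can be approximated in a few lines — a sketch, with `gitignoreCoversEnv` as a hypothetical name:

```typescript
// Sketch: does a .gitignore's content cover .env files?
// Only checks exact-line patterns; real gitignore matching is more involved.
function gitignoreCoversEnv(gitignoreContent: string): boolean {
  return gitignoreContent
    .split("\n")
    .map((line) => line.trim())
    .some((line) => line === ".env" || line === "*.env" || line === ".env*");
}
```

Remember that this only prevents future leaks; step 3 (rotation) is still required for anything already committed.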

Related checks: Supply Chain Risks · System Prompt Leakage · All Checks
