Severity: high · OWASP LLM05

Unsafe Code Patterns

AI agents frequently generate code with eval(), exec(), unsanitized template literals, and dynamic code execution — dangerous patterns that open your app to remote code execution and injection attacks.

What it catches

  • eval() with variables or user input
  • new Function() constructors with dynamic strings
  • child_process.exec() with unsanitized arguments
  • setTimeout/setInterval with string arguments
  • Python exec() and compile() with dynamic input
  • subprocess.shell=True with user-controlled data
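The JavaScript patterns in this list can look harmless in isolation. A minimal sketch (the input string and variable names are illustrative) of how two flagged forms behave identically at runtime:

```javascript
// Hypothetical user-supplied string — in a real app this would arrive
// via a request body, query parameter, or config file.
const userInput = "2 + 2";

// ❌ eval() with a variable: compiles and runs arbitrary text as code
const viaEval = eval(userInput);

// ❌ new Function() with a dynamic string: same risk, different spelling
const viaFunction = new Function(`return ${userInput}`)();

// ❌ In browsers, setTimeout("...", 0) compiles its string argument the
// same way (Node.js rejects string callbacks, so it is not run here).

console.log(viaEval, viaFunction); // both evaluated the input as code
```

Both forms produce `4` here only because the input happened to be arithmetic; any other JavaScript in that string would run just as readily.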

Why vibe coders should care

When you ask your AI agent to “parse this data” or “run this command,” it often reaches for eval/exec as the simplest solution. These patterns let attackers run any code they want on your server. One eval(userInput) is all it takes for someone to read your database or environment variables, or wipe your server.

Real impact: An eval() in a Node.js API endpoint means anyone who sends a crafted request can execute arbitrary JavaScript on your server — read files, exfiltrate data, or install crypto miners.
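That impact is easy to demonstrate. A sketch of what a single crafted request body can do once it reaches eval() (the payload, secret, and variable names are all illustrative):

```javascript
// Stand-in for a secret already present in server state.
process.env.DB_PASSWORD = "hunter2";

// Simulate a request body an attacker controls. The endpoint intended
// to evaluate "2 + 2"-style math expressions...
const reqBody = { expression: "process.env.DB_PASSWORD" };

// ❌ ...but eval() happily dereferences server-side objects instead.
const leaked = eval(reqBody.expression);

console.log(leaked); // the attacker now has the secret
```

Nothing about the request looks malformed; the server volunteered the secret because eval() cannot distinguish arithmetic from exfiltration.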

Example

// ❌ VibSec flags this — AI agents generate this all the time
const result = eval(req.body.expression);

// ❌ Also dangerous
exec(`git clone ${userInput}`);

// ✅ Use safe alternatives
const result = JSON.parse(req.body.data);

// ✅ Use execFile with explicit args
execFile('git', ['clone', repoUrl]);

How to fix

Run vibsec scan --fix and paste the output into your AI agent. The agent can then replace eval() with JSON.parse(), swap exec() for execFile(), and sanitize the remaining dynamic inputs for you.

Related checks: Injection Prevention · Deserialization · All Checks
