Deserialization & Data Handling
AI agents call pickle.load(), yaml.load(), unserialize(), and JSON.parse() on untrusted data without a second thought. Unsafe deserialization is one of the most dangerous vulnerability classes: a single malicious payload can be enough for remote code execution.
What it catches
- Python: pickle.load() / pickle.loads() on untrusted data; yaml.load() without SafeLoader (YAML deserialization attacks); marshal.load() and shelve.open() with external data
- PHP: unserialize() with user input
- Java: ObjectInputStream.readObject() patterns
- JavaScript: eval(JSON.parse(...)) chains
Why vibe coders should care
When your AI agent writes pickle.loads(request.data), anyone who can send a crafted pickle payload can execute arbitrary code on your server. This isn’t a theoretical attack: ready-made pickle exploitation payloads are widely available and trivial to use. The same goes for YAML deserialization attacks (yaml.load can instantiate arbitrary Python objects) and PHP object injection via unserialize().
Real impact: A single pickle.load() on untrusted data = full server compromise. An attacker sends a crafted binary blob and gets a reverse shell.
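The attack takes only a few lines. A minimal, harmless sketch of why: pickle asks an object’s __reduce__ method how to rebuild it, and an attacker can make that method return any callable plus arguments, which the unpickler invokes blindly during loading (here eval("2+2") stands in for os.system or a reverse-shell payload):

```python
import pickle

class Evil:
    # pickle calls __reduce__ to learn how to reconstruct the object;
    # whatever callable it returns is executed at load time.
    def __reduce__(self):
        return (eval, ("2+2",))  # stand-in for os.system("...")

payload = pickle.dumps(Evil())  # what an attacker would send over the wire
result = pickle.loads(payload)  # the callable runs here, before you see any object
print(result)                   # 4 — proof that code executed during unpickling
```

Nothing about the payload looks unusual to the receiving code; the execution happens inside pickle.loads() itself, before any validation you might do on the result.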
Example
# ❌ VibSec flags this — Remote Code Execution via pickle
import pickle
data = pickle.loads(request.body)
# ✅ Use JSON for data exchange
import json
data = json.loads(request.body)
# ❌ YAML bomb / code execution
import yaml
config = yaml.load(user_input)
# ✅ Use SafeLoader
config = yaml.safe_load(user_input)
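If an application genuinely must exchange pickle between trusted services, authenticating the payload before deserializing limits the blast radius. A minimal sketch using only the standard library (the shared SECRET is an assumption, to be provisioned out-of-band, not hardcoded):

```python
import hashlib
import hmac
import pickle

SECRET = b"rotate-me"  # assumption: shared secret, provisioned out-of-band

def sign(obj) -> bytes:
    # Prepend an HMAC-SHA256 tag (32 bytes) to the pickled body.
    body = pickle.dumps(obj)
    return hmac.new(SECRET, body, hashlib.sha256).digest() + body

def verify_load(blob: bytes):
    # Refuse to unpickle anything whose tag does not verify.
    mac, body = blob[:32], blob[32:]
    expected = hmac.new(SECRET, body, hashlib.sha256).digest()
    if not hmac.compare_digest(mac, expected):
        raise ValueError("payload failed authentication; refusing to unpickle")
    return pickle.loads(body)

print(verify_load(sign({"a": 1})))  # {'a': 1}
```

Note this only proves the payload came from a key holder; it does nothing if the attacker ever obtains the key, so JSON remains the safer default.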
How to fix
Replace pickle/marshal with JSON for data exchange. Use yaml.safe_load() instead of yaml.load(). VibSec’s --fix mode generates the exact replacements for your AI agent to apply.
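Where legacy pickle data cannot be avoided, the Python documentation suggests subclassing pickle.Unpickler and overriding find_class to allow-list which globals a payload may reference. A sketch (the ALLOWED set is an illustrative assumption; tailor it to the types you actually store):

```python
import io
import pickle

# Assumption: only these (module, name) globals may be resolved.
ALLOWED = {("builtins", "list"), ("builtins", "dict"), ("builtins", "set")}

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        # Every global reference in the payload (os.system, eval, ...)
        # passes through here; reject anything not on the allow-list.
        if (module, name) in ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"forbidden global: {module}.{name}")

def restricted_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()

print(restricted_loads(pickle.dumps([1, 2, 3])))  # [1, 2, 3]

try:
    restricted_loads(pickle.dumps(eval))  # payload referencing builtins.eval
except pickle.UnpicklingError as e:
    print("blocked:", e)
```

This narrows the attack surface rather than eliminating it, so treat it as a migration aid while moving the data to JSON, not as a permanent fix.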
Related checks: Unsafe Code Patterns · Injection Prevention · All Checks