Is yt-dlp safe?
https://clawhub.ai/yt-dlp-skill/yt-dlp
This skill is a straightforward wrapper around yt-dlp with no malicious code or prompt injection. However, it poses significant risk to autonomous agents: yt-dlp's --cookies-from-browser flag (which exposes all browser session tokens) and its --exec flag (arbitrary command execution) are both passed through without restriction via the "$@" argument forwarding in download.sh.
Findings (6)
HIGH Browser cookie access via --cookies-from-browser -35
The skill prominently documents and encourages use of yt-dlp's --cookies-from-browser flag, which reads the user's full browser cookie database. An LLM agent could be trivially prompted to use this flag, exposing session tokens for all authenticated websites (banking, email, social media, etc.) to the yt-dlp process.
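The exposure path can be sketched in a few lines. Here `download` is a self-contained stand-in for download.sh, assumed (per the finding above) to forward "$@" to yt-dlp verbatim:

```shell
# Stand-in for download.sh; the real script is assumed to run yt-dlp "$@".
download() { echo "yt-dlp $*"; }

# A single appended flag hands yt-dlp the entire Chrome cookie database,
# i.e. live session tokens for every site the user is logged into:
download "https://example.com/video" --cookies-from-browser chrome
# → yt-dlp https://example.com/video --cookies-from-browser chrome
```

Because the flag is documented in the skill itself, a prompt as innocuous as "download this members-only video" is enough to steer an agent toward it.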
HIGH Arbitrary command execution via yt-dlp --exec passthrough -30
The download.sh script passes all user arguments ("$@") directly to yt-dlp. yt-dlp supports --exec and --exec-before-download flags that execute arbitrary shell commands after or before each download. An agent could be prompted to pass these flags, enabling full remote code execution.
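A safer wrapper would screen arguments before forwarding them. The sketch below is a hypothetical guard, not anything the shipped download.sh does; the flag names are real yt-dlp options, the filtering is an assumption:

```shell
# Hypothetical argument guard a wrapper could run before `exec yt-dlp "$@"`.
# --exec* also catches --exec=CMD and --exec-before-download forms.
safe_forward() {
  for arg in "$@"; do
    case "$arg" in
      --exec*|--cookies-from-browser*)
        echo "blocked: $arg" >&2
        return 1 ;;
    esac
  done
  echo "ok"            # a real wrapper would instead: exec yt-dlp "$@"
}
```

A blocklist like this is inherently incomplete (yt-dlp has hundreds of options, including --downloader and --downloader-args, which can also reach a shell); an allowlist of known-safe flags would be the more defensible design.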
MEDIUM Local .venv binary hijack vector -10
download.sh preferentially executes .venv/bin/yt-dlp if it exists. If an attacker can write to the skill's .venv directory (e.g., via another skill or a directory traversal), they can replace the yt-dlp binary with a malicious one.
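The selection logic amounts to something like the following (a reconstruction for illustration, not the verbatim script):

```shell
# Hypothetical reconstruction of download.sh's binary selection: an
# executable at .venv/bin/yt-dlp silently takes precedence over PATH,
# so anything able to write into .venv/bin controls what gets executed.
pick_ytdlp() {
  if [ -x ".venv/bin/yt-dlp" ]; then
    echo ".venv/bin/yt-dlp"
  else
    command -v yt-dlp || echo "yt-dlp"   # fall back to PATH lookup
  fi
}
```

The hijack requires only write access to a relative, typically unaudited directory, which is why a co-installed skill or a traversal bug is enough.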
LOW Unrestricted URL download capability -10
The skill can download content from any URL, which means an agent could be instructed to fetch content from internal network addresses or use yt-dlp as an SSRF vector to probe internal services.
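A minimal sketch of the pre-flight check the skill lacks, assuming URLs are filtered by literal prefix only (the check is hypothetical, and hostnames that resolve to internal IPs would still slip past it):

```shell
# Hypothetical SSRF filter covering loopback, link-local, and the
# RFC 1918 private ranges. Prefix matching only: DNS rebinding and
# redirects defeat it, so it is a first line of defense, not a complete one.
is_internal_url() {
  case "$1" in
    *://localhost*|*://127.*|*://10.*|*://192.168.*|*://172.1[6-9].*|*://172.2[0-9].*|*://172.3[01].*|*://169.254.*)
      return 0 ;;   # internal: refuse to download
    *)
      return 1 ;;   # external: allow
  esac
}
```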
INFO No prompt injection detected 0
SKILL.md contains no hidden instructions, persona hijacking, or attempts to manipulate agent behavior. The content is straightforward documentation of yt-dlp usage.
INFO Legitimate tool with dangerous capabilities when autonomous -45
This skill wraps a legitimate and popular media downloader, but the combination of arbitrary argument passthrough, cookie access, and --exec support creates significant risk when used by an autonomous LLM agent without human oversight.