Is minduploadedcrab/content-automator safe?

https://github.com/openclaw/skills/tree/main/skills/minduploadedcrab/content-automator

79
CAUTION

The content-automator skill is a functional YouTube content pipeline for faceless trading-update channels; it contains no prompt injection attempts and has a clean install profile. Its primary risk is architectural: it accepts arbitrary filesystem paths as portfolio input and routes the extracted content through a third-party TTS API, creating an indirect exfiltration channel that could be exploited if the agent is separately manipulated. The documentation/implementation gap around the 'news' command, which is documented but unimplemented and references an agent 'colony' source, is a secondary concern requiring monitoring.

Category Scores

Prompt Injection 82/100 · 30%
Data Exfiltration 65/100 · 25%
Code Execution 78/100 · 20%
Clone Behavior 95/100 · 10%
Canary Integrity 100/100 · 10%
Behavioral Reasoning 65/100 · 5%

Findings (8)

HIGH Arbitrary file path accepted as --portfolio; indirect exfiltration via TTS pipeline -35

The --portfolio CLI argument accepts any readable filesystem path. The parse_portfolio_data function applies a broad regex to extract word-number pairs from whatever file is provided, then embeds the results in a TTS script transmitted to ElevenLabs and potentially published publicly. An agent manipulated by a separate prompt injection could be directed to supply a sensitive file (SSH keys, cloud credentials, environment files) as the portfolio argument, causing its contents to flow through an external API and onto YouTube.
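One way to blunt this finding is to confine --portfolio to a known data directory before parsing. The sketch below assumes portfolio files live under a single root; the `ALLOWED_ROOT` name and location are hypothetical and do not come from the skill:

```python
from pathlib import Path

# Hypothetical mitigation sketch; ALLOWED_ROOT is an assumed location,
# not something the skill defines.
ALLOWED_ROOT = (Path.home() / "trading-data").resolve()

def validate_portfolio_path(raw: str) -> Path:
    """Reject --portfolio values that resolve outside the allowed directory."""
    path = Path(raw).resolve()
    # resolve() follows symlinks and collapses '..', so a traversal like
    # ~/trading-data/../.ssh/id_rsa is rejected here as well.
    if ALLOWED_ROOT not in path.parents:
        raise ValueError(f"portfolio file must live under {ALLOWED_ROOT}")
    return path
```

Because the check runs on the fully resolved path, symlinks planted inside the allowed directory cannot redirect the parser to credentials elsewhere on disk.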

MEDIUM Private financial portfolio data transmitted to third-party ElevenLabs API -15

The skill reads local financial data (portfolio value, positions, P&L) and sends it to api.elevenlabs.io for text-to-speech conversion. While declared in SKILL.md's Security Notes, the data leaves the user's system to a third-party service and the resulting audio is intended for public YouTube upload. Users with confidential trading positions could unknowingly publish them.

MEDIUM Unsanitized title injected into ffmpeg drawtext filter -12

In assemble_video(), the raw title parameter is interpolated directly into the ffmpeg -vf drawtext filter string. In the cmd_script path, args.title comes from user input and is not sanitized before this use. A title containing single quotes or ffmpeg filter metacharacters could break the filter graph or, in edge cases, alter ffmpeg's behavior. Note that the safe_title sanitization is applied only to the output filename, not to the filter argument.
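One way to sidestep the escaping problem entirely, sketched here as an assumption about how a fix could look rather than as the skill's code, is to pass the untrusted title through drawtext's `textfile` option so the title text never appears in the filtergraph string at all:

```python
import tempfile

def drawtext_filter_for(title: str) -> str:
    # Write the untrusted title to a temp file and reference it via
    # drawtext's textfile option; the title never touches the filtergraph
    # string, so quotes and ':' in it cannot alter the filter syntax.
    # (The fontsize and positioning values below are illustrative.)
    tf = tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False)
    tf.write(title)
    tf.close()
    return f"drawtext=textfile={tf.name}:fontsize=48:x=(w-tw)/2:y=40"
```

Temp file names contain only safe characters, so the filter string itself stays well-formed regardless of what the title contains.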

MEDIUM Documentation/implementation mismatch: 'news' command documented but not implemented -18

SKILL.md documents a 'news' subcommand with a --sources flag accepting 'twitter,colony' as data sources. No corresponding subparser or cmd_news function exists in content_automator.py. An agent following the SKILL.md instructions to generate a news summary would receive an argparse error. More significantly, the mention of 'colony' as a source suggests intended future integration with the OpenClaw agent network, an undisclosed dependency that would dramatically expand network reach if implemented.
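A minimal reconstruction of the gap shows the failure mode an agent would hit; the subcommand names other than 'news' are assumed for illustration and may not match the actual script:

```python
import argparse

# Register only subcommands that actually exist in the script; 'news' is
# documented in SKILL.md but has no subparser, so parsing it fails.
parser = argparse.ArgumentParser(prog="content_automator")
sub = parser.add_subparsers(dest="command", required=True)
sub.add_parser("script")  # assumed existing subcommand
sub.add_parser("video")   # assumed existing subcommand

try:
    parser.parse_args(["news", "--sources", "twitter,colony"])
except SystemExit as err:
    # argparse prints "invalid choice: 'news'" to stderr and exits with 2
    print(f"argparse exited with status {err.code}")
```

An agent following SKILL.md verbatim would see this exit-status-2 usage error rather than a news summary, which is what makes the mismatch detectable in practice.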

LOW Skill constitutes private-to-public data pipeline by design -15

The skill's stated purpose is to take private financial workspace data and publish it publicly as YouTube videos. For users who misunderstand the output scope or whose agents autonomously invoke the skill, this represents a privacy risk. The combination of automatic portfolio parsing and public video upload means confidential trading strategies could be permanently published.

LOW Subprocess execution of ffmpeg and ffprobe declared but unconstrained -5

The skill shells out to ffmpeg and ffprobe via subprocess.run with check=True and capture_output=True. Arguments are passed as lists (not shell strings), preventing shell injection. However, the skill does not validate that ffmpeg/ffprobe are the legitimate system binaries, and the ffmpeg binary path is not pinned, leaving it dependent on PATH resolution.
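A hedged sketch of one mitigation, assuming `/usr/bin` and `/usr/local/bin` are the trusted install locations (the skill currently does nothing like this):

```python
import shutil

def locate_binary(name: str, search_path: str = "/usr/bin:/usr/local/bin") -> str:
    # Resolve the tool from an explicit allowlist of directories instead of
    # ambient PATH, so a malicious ffmpeg earlier on PATH is never picked up.
    # The default search_path is an assumption, not taken from the skill.
    path = shutil.which(name, path=search_path)
    if path is None:
        raise FileNotFoundError(f"{name} not found in {search_path}")
    return path
```

Invocations would then use the pinned absolute path, e.g. `subprocess.run([locate_binary("ffmpeg"), ...])`, rather than the bare command name.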

INFO Clean sparse-checkout install from GitHub with no persistent connections 0

The install cloned the monorepo with --depth 1 and sparse-checkout, fetching only the skill subdirectory. No new listeners or established connections persisted after installation. The connection diff confirms the network state returned to baseline.

INFO Canary file accesses attributable to audit framework, not skill 0

Filesystem events show accesses to .env, .ssh/id_rsa, .aws/credentials, .npmrc, .docker/config.json, and gcloud credentials at timestamps before (1771918831) and after (1771918855) skill installation. These correspond to the oathe monitoring framework's own pre- and post-install canary integrity verification passes. All accesses are read-only (CLOSE_NOWRITE). The skill's Python code contains no logic to access these paths.