Is jiafar/clawcut safe?

https://github.com/openclaw/skills/tree/main/skills/jiafar/clawcut

78
CAUTION

ClawCut is a legitimate AI video generation skill derived from the open-source MoneyPrinterTurbo project, providing a Gradio-based pipeline from topic to final video using Google Vertex AI. The installation was clean: no credential exfiltration, no unexpected network activity, and no execution of skill code during the test. However, three implementation-level security concerns warrant review before deployment: a suspicious default ffmpeg binary path at /tmp/ffmpeg that creates a pre-plant code execution vector; a Gradio server that binds to all network interfaces, contrary to its own documentation; and an unconstrained load_dotenv() call that could silently absorb sensitive credentials from parent directories while the skill is active.

Category Scores

Prompt Injection 90/100 · 30%
Data Exfiltration 75/100 · 25%
Code Execution 55/100 · 20%
Clone Behavior 95/100 · 10%
Canary Integrity 92/100 · 10%
Behavioral Reasoning 58/100 · 5%

Findings (8)

HIGH /tmp/ffmpeg default binary path enables pre-plant code execution -25

config.py defaults FFMPEG_BIN to '/tmp/ffmpeg'. The /tmp directory is world-writable on Linux systems. An attacker with any write capability to /tmp (via another skill, a shared container, or a prior compromise) can place a malicious binary at /tmp/ffmpeg. When this skill subsequently runs any pipeline stage, the malicious binary is invoked with full user privileges, receiving video file paths and processing arguments. The fallback to system ffmpeg only activates if /tmp/ffmpeg does not exist; if it does exist, it is unconditionally used.
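The risk can be mitigated by refusing any configured binary that sits in a world-writable directory before falling back to the system PATH. A minimal sketch, assuming nothing about the skill's actual code (the function name and hardening check are illustrative):

```python
import os
import shutil

def resolve_ffmpeg(configured_path=None):
    """Pick an ffmpeg binary, refusing any that lives in a
    world-writable directory such as /tmp.

    Illustrative hardening, not the skill's own logic."""
    if configured_path and os.path.isfile(configured_path):
        parent = os.path.dirname(os.path.abspath(configured_path))
        if os.stat(parent).st_mode & 0o002:
            # Anyone on the host could have planted this binary.
            raise ValueError(
                f"refusing ffmpeg in world-writable dir: {parent}")
        return configured_path
    # Otherwise fall back to whatever ffmpeg is on PATH, as the
    # skill already does when /tmp/ffmpeg is absent.
    found = shutil.which("ffmpeg")
    if found is None:
        raise FileNotFoundError("no ffmpeg binary found on PATH")
    return found
```

With this check in place, a planted /tmp/ffmpeg is rejected instead of being unconditionally executed.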

MEDIUM Gradio server binds to 0.0.0.0 contrary to documented localhost -15

app.py launches the Gradio interface with server_name='0.0.0.0', exposing port 7860 on all network interfaces. SKILL.md documentation states 'Opens at http://localhost:7860', creating a false expectation of local-only binding. Any host reachable on the network could access the video generation UI, upload arbitrary files, and trigger ffmpeg processing without authentication.
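The difference between the two bind addresses can be demonstrated with a plain socket; Gradio's `launch(server_name=...)` parameter ultimately controls the same bind address. The helper below is illustrative, not taken from the skill:

```python
import socket

def bound_address(host):
    """Bind a throwaway TCP socket to `host` and return the address
    the OS actually bound (port 0 lets the OS pick a free port)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((host, 0))
        return s.getsockname()[0]
    finally:
        s.close()

# 127.0.0.1 is reachable only from the local host;
# 0.0.0.0 accepts connections on every network interface.
print(bound_address("127.0.0.1"))  # 127.0.0.1
print(bound_address("0.0.0.0"))   # 0.0.0.0
```

If local-only access is intended, launching with `server_name="127.0.0.1"` would make the behavior match the SKILL.md claim.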

MEDIUM load_dotenv() traverses parent directories and may absorb home directory secrets -15

config.py imports and calls load_dotenv() without specifying an explicit dotenv_path. The python-dotenv library's default behavior is to search for a .env file starting from the current working directory and walking up through parent directories until one is found. If an agent runs this skill from a working directory beneath a path containing a .env file with secrets (AWS credentials, database URLs, API tokens), those values are silently loaded into the process environment, and subsequent Vertex AI API calls then run with those absorbed secrets in their process context.
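python-dotenv accepts an explicit dotenv_path argument, which pins the lookup to one file and disables the parent-directory walk. The stdlib sketch below mimics that pinned behavior; the function name and parsing rules are illustrative, not the library's implementation:

```python
from pathlib import Path

def load_env_file(path):
    """Read KEY=VALUE pairs from exactly one file, never searching
    parent directories (a stand-in for load_dotenv(dotenv_path=...))."""
    env = {}
    p = Path(path)
    if not p.is_file():
        return env  # missing file loads nothing, silently
    for raw in p.read_text().splitlines():
        line = raw.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, malformed lines
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip("'\"")
    return env
```

Pinning the path to the skill's own directory means a .env sitting in the agent's home or a parent project can never be absorbed.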

MEDIUM User media content transmitted to Google Cloud Vertex AI -10

The pipeline uploads reference images (up to 14 files), reference videos, and generates new video content through the Gemini and Veo APIs on Google Cloud Vertex AI. All binary content is read from disk and sent to Google's infrastructure for processing. Users and agents should understand that any images or videos provided to this skill leave the local environment and are processed by a third-party cloud service subject to Google's data handling policies.

LOW Setup instructions direct pip install from non-official PyPI mirror -10

SKILL.md explicitly instructs running pip with the -i flag pointing to https://pypi.tuna.tsinghua.edu.cn/simple, the Tsinghua University PyPI mirror. This is a well-known, legitimate mirror in China, not a malicious host. However, directing agents or users to install packages from any non-official mirror introduces a dependency on third-party infrastructure. An agent following these setup instructions verbatim would bypass the official PyPI index entirely.
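An agent or user can override the documented mirror by passing pip's --index-url flag explicitly; the command below is a CLI fragment (the requirements file name is assumed, not taken from the skill):

```shell
# Install from the official PyPI index instead of the mirror
# hard-coded in SKILL.md
pip install --index-url https://pypi.org/simple -r requirements.txt
```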

LOW subprocess.run() invokes ffmpeg with user-supplied file paths -5

Multiple pipeline.py functions call subprocess.run() with list-form arguments (preventing shell injection) that include file paths originating from user uploads and the Vertex AI API. The ffmpeg binary executes with full process privileges on these files. Specially crafted media files could trigger ffmpeg parsing vulnerabilities. Output paths are computed internally and not directly user-controlled, limiting but not eliminating the attack surface.
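List-form arguments are exactly why the shell-injection path is closed: metacharacters embedded in a filename arrive as a literal argument instead of being interpreted by a shell. A minimal demonstration, substituting the Python interpreter for ffmpeg so the example is self-contained:

```python
import subprocess
import sys

# A hostile filename: under shell=True the "; rm ..." part would run
# as a second command, but in list form it is just argv[1].
evil_name = "clip.mp4; rm -rf /tmp/x"

result = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.argv[1])", evil_name],
    capture_output=True,
    text=True,
)
print(result.stdout.strip())  # clip.mp4; rm -rf /tmp/x
```

Note this guards only against command injection; it does nothing against malformed media files exploiting parser bugs inside ffmpeg itself, which is the residual risk this finding describes.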

INFO Clean installation with expected network activity only 0

The skill was installed via a standard git sparse-checkout from github.com. No unexpected outbound connections were established during installation. The additional connections to Canonical/Ubuntu servers (91.189.91.48, 185.125.188.58) are attributable to the host OS and monitoring infrastructure. No filesystem writes occurred outside the designated skill directory, no unexpected processes were spawned, and no persistent listeners were left behind.

INFO Canary file accesses attributable to audit system, not skill -8

inotifywait and auditd recorded read-only access to canary credential files at two points during the audit window. Timestamp 11:25:03 corresponds to the oathe audit framework establishing a canary baseline immediately after monitoring began. Timestamp 11:25:27 corresponds to the post-install canary integrity verification. CLOSE_NOWRITE events confirm files were only read, not modified. No Python interpreter appears in the auditd EXECVE log for the entire test run, confirming the skill was never executed and the accesses cannot be attributed to it.