Is local-whisper safe?

https://clawhub.ai/araa47/local-whisper

85
SAFE

local-whisper is a straightforward speech-to-text skill using OpenAI's Whisper model. The code is minimal, well-scoped, and contains no prompt injection, data exfiltration, or malicious behavior. The only notable concerns are the large dependency surface pulled in by the torch installation and routine sensitive-file reads performed by the platform runtime during install. The skill runs fully offline after the model download.

Category Scores

Prompt Injection 95/100 · 30%
Data Exfiltration 90/100 · 25%
Code Execution 70/100 · 20%
Clone Behavior 95/100 · 10%
Canary Integrity 100/100 · 10%
Behavioral Reasoning 80/100 · 5%

Findings (4)

LOW Large dependency surface from setup instructions -10

The SKILL.md setup section instructs users to install torch, openai-whisper, and click via uv pip. While these are legitimate packages from official sources (PyTorch CPU index), torch is a very large package (~200MB+) with a broad dependency tree, which increases supply-chain attack surface.
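For reference, the setup described in SKILL.md looks roughly like the following sketch (the exact commands and flags in SKILL.md may differ; the CPU wheel index is the standard PyTorch one implied by the finding):

```shell
# Sketch of the skill's install steps (assumed form, not copied from SKILL.md).
# The CPU-only wheel index avoids the even larger CUDA builds, but torch
# remains a ~200MB+ download with a broad transitive dependency tree.
uv pip install torch --index-url https://download.pytorch.org/whl/cpu
uv pip install openai-whisper click
```

Each of these is a legitimate package from an official source; the concern is surface area, not provenance.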

INFO Platform runtime reads sensitive files during install -10

Filesystem monitoring captured reads of /home/oc-exec/.env, .aws/credentials, and .openclaw/agents/main/agent/auth-profiles.json. These reads are performed by the OpenClaw platform runtime during skill installation, not by the skill itself. This is expected platform behavior, but worth noting.

LOW Executable Python script included -20

transcribe.py is an executable Python script, which is expected and necessary for the skill's stated purpose. The code is clean, uses click for CLI parsing, and only performs audio transcription. No hidden functionality detected.

INFO Audio file path passed without additional sandboxing -20

The transcribe.py script accepts any file path that exists on disk via click.Path(exists=True). While this is normal for a CLI tool, an LLM agent could be directed to transcribe sensitive audio files. This is a general agent-level concern rather than a skill-specific vulnerability.
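Since click.Path(exists=True) only checks that the file exists, not where it lives, any mitigation would have to come from the agent harness rather than the skill. A minimal sketch of such a guard, using only the standard library (the allow-listed directory and helper name are hypothetical, not part of local-whisper):

```python
from pathlib import Path

# Hypothetical guard an agent harness could apply before handing a path to
# transcribe.py: resolve symlinks, then require the file to sit under an
# approved media directory. Not part of the local-whisper skill itself.
ALLOWED_ROOT = Path("/home/user/media")  # assumed location, for illustration


def is_allowed_audio_path(raw_path: str, root: Path = ALLOWED_ROOT) -> bool:
    """Return True only if raw_path resolves to an existing file inside root."""
    try:
        # strict=True raises if the file is missing; resolve() also follows
        # symlinks, so a link pointing outside root is rejected below.
        resolved = Path(raw_path).resolve(strict=True)
    except OSError:
        return False  # missing file, broken symlink, permission error, etc.
    # is_relative_to (Python 3.9+) rejects paths that escape root, including
    # ../ traversal, since traversal is already collapsed by resolve().
    return resolved.is_relative_to(root.resolve())
```

This does not make the CLI itself safer; it only narrows what an LLM agent can be directed to feed it, which is where the concern lives.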