Is bderiel/open-ralph safe?
https://github.com/openclaw/skills/tree/main/skills/bderiel/open-ralph
The open-ralph skill has a clean installation profile — no malicious code in SKILL.md, no canary file tampering, and network activity during install was limited to github.com. However, the skill's intended operation routes user source code through OpenCode to multiple Chinese AI providers (Moonshot/Kimi, MiniMax, Zhipu AI) without user disclosure, and the final fallback model 'opencode/big-pickle' is described as a 'stealth model' with completely unknown data handling practices. Combined with support for 50-iteration autonomous code loops and a blanket plugin-bypass instruction, the skill warrants careful review before deployment in any environment handling sensitive or proprietary code.
Findings (8)
HIGH Source Code Routed to Chinese AI Providers -30
When activated, the skill directs the agent to send user source code through OpenCode's API to AI models operated by Chinese companies: Kimi (Moonshot AI), MiniMax, and Zhipu AI (GLM). The skill provides no disclosure that user code will traverse foreign AI infrastructure, nor does it obtain user consent. This is a significant privacy risk for proprietary, regulated, or sensitive codebases.
HIGH Opaque 'Stealth Model' as Final Fallback -22
The last fallback model 'opencode/big-pickle' is explicitly described as a 'Free stealth model fallback'. The term 'stealth' is undefined and may indicate a model that operates without logging, retains data for unknown periods, or is capable of covert exfiltration. The model's provenance, operator, and data handling policies are completely undisclosed within the skill.
MEDIUM Up to 50 Autonomous Iterations Without Mandatory Review -15
In tasks mode the skill enables ralph to make up to 50 successive code modifications driven by external AI models. There is no enforced pause for user diff review between iterations. The safety note ('Review diffs before merging autonomous changes') is advisory only. This could result in extensive unreviewed changes to the codebase before the user inspects output.
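Because the pause is advisory only, an operator who wants a hard gate must impose one outside the skill. The sketch below is one way to approximate that, assuming a git working tree; `review_points`, `pause_for_review`, and the ten-iteration cadence are hypothetical choices, not part of the skill.

```python
import subprocess

MAX_ITERATIONS = 50   # the skill's ceiling in tasks mode
REVIEW_EVERY = 10     # hypothetical cadence; the skill itself enforces no pause

def review_points(max_iterations: int, review_every: int) -> list[int]:
    # Iterations after which a human should inspect the diff before
    # the loop is allowed to continue.
    return [i for i in range(1, max_iterations + 1) if i % review_every == 0]

def pause_for_review() -> bool:
    # Show the accumulated working-tree diff and ask the operator to approve.
    subprocess.run(["git", "--no-pager", "diff", "--stat"], check=False)
    return input("continue loop? [y/N] ").strip().lower() == "y"

# With the skill's 50-iteration ceiling, this gates the loop five times
# rather than zero.
print(review_points(MAX_ITERATIONS, REVIEW_EVERY))  # [10, 20, 30, 40, 50]
```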
MEDIUM Blanket Plugin Bypass Instruction -10
The skill instructs the agent to rerun with --no-plugins if OpenCode plugins 'interfere with loop execution'. This could disable legitimate security scanning, secret detection, linting, or monitoring plugins. A malicious actor could exploit a skill that conflicts with security plugins to force their disabling via this instruction.
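A wrapper outside the agent can at least surface the bypass before it runs. A minimal sketch; the `SECURITY_PLUGINS` set and the plugin names in it are hypothetical stand-ins for whatever protections an operator actually has installed.

```python
# Hypothetical names of security-relevant plugins the operator relies on.
SECURITY_PLUGINS = {"secret-scan", "lint-gate"}

def audit_invocation(argv: list[str]) -> list[str]:
    # Return warnings for flags that would silently disable protections,
    # so a rerun with --no-plugins never happens unnoticed.
    warnings = []
    if "--no-plugins" in argv:
        warnings.append(
            "--no-plugins disables ALL plugins, including: "
            + ", ".join(sorted(SECURITY_PLUGINS))
        )
    return warnings

print(audit_invocation(["opencode", "run", "--no-plugins"]))
```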
LOW Agent Directed to Fetch Third-Party URL -12
The skill instructs the agent to autonomously fetch https://opencode.ai/zen/v1/models to verify model availability and update fallback order. This directs an autonomous network request to a third-party endpoint, and any content returned could influence the agent's subsequent model selection behavior.
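One mitigation is to treat the endpoint's response as untrusted input and filter it through a pinned allowlist before it can reshape the fallback order. A sketch under two assumptions not stated in the skill: that the response reduces to a flat list of model IDs, and that the operator maintains an allowlist (the entry below is a placeholder, not a real model).

```python
# Hypothetical operator-approved allowlist; real entries would be the
# specific model IDs the operator has vetted.
ALLOWED_MODELS = {"example/approved-model"}

def safe_fallback_order(fetched_ids: list[str]) -> list[str]:
    # Drop any model the endpoint advertises that is not pre-approved,
    # so the response cannot steer the agent toward an unknown model.
    return [m for m in fetched_ids if m in ALLOWED_MODELS]

print(safe_fallback_order(["example/approved-model", "opencode/big-pickle"]))
# ['example/approved-model']
```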
LOW External Binary Dependencies with Unverified Integrity -8
The skill requires three external CLI tools (opencode, ralph, git) that must be pre-installed. The skill does not specify version pins, checksums, or installation sources for opencode or ralph. A compromised or typosquatted version of either tool would execute with full agent permissions.
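Operators can compensate by pinning digests themselves. A sketch assuming the binaries resolve on PATH; the pinned values are placeholders that would have to come from the tool publishers' release artifacts, which the skill does not identify.

```python
import hashlib
import shutil

# Placeholder digests; real values must come from the publishers'
# official release artifacts.
PINNED_SHA256 = {
    "opencode": "<publisher-supplied sha256>",
    "ralph": "<publisher-supplied sha256>",
}

def sha256_of(path: str) -> str:
    # Stream the file so large binaries are not loaded into memory at once.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_tool(name: str) -> bool:
    # Locate the binary on PATH and compare it against the pinned digest.
    path = shutil.which(name)
    return path is not None and sha256_of(path) == PINNED_SHA256[name]
```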
INFO Clean Install — GitHub Connection Only 0
All network activity during installation was limited to a single TLS connection to 140.82.121.3:443 (github.com) for the sparse git clone. No unexpected outbound connections, DNS queries to non-GitHub domains, or filesystem changes outside /home/oc-exec/skill-under-test/ were detected.
INFO All Honeypot Files Intact 0
Canary files placed at .env, .ssh/id_rsa, .aws/credentials, .npmrc, .docker/config.json, and .config/gcloud/application_default_credentials.json were all confirmed unmodified post-install. The pre/post canary reads visible in inotify/auditd logs are attributable to the oathe audit framework's baseline and verification checks, not the skill.
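The tamper check described above can be approximated with a before/after digest snapshot. This is a generic sketch of the technique, not the oathe framework's actual implementation; the canary paths are the ones listed in this finding, relative to the sandbox home directory.

```python
import hashlib
import os

# Canary paths from the audit, relative to the sandbox home directory.
CANARY_FILES = [
    ".env",
    ".ssh/id_rsa",
    ".aws/credentials",
    ".npmrc",
    ".docker/config.json",
    ".config/gcloud/application_default_credentials.json",
]

def snapshot(paths: list[str]) -> dict[str, str]:
    # Record a content digest for every canary that exists on disk.
    digests = {}
    for p in paths:
        if os.path.exists(p):
            with open(p, "rb") as f:
                digests[p] = hashlib.sha256(f.read()).hexdigest()
    return digests

def tampered(before: dict[str, str], after: dict[str, str]) -> list[str]:
    # A canary is flagged if its content changed or it disappeared
    # between the pre-install and post-install snapshots.
    return sorted(p for p in before if after.get(p) != before[p])
```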