Is heldinhow/freeride-opencode safe?

https://github.com/openclaw/skills/tree/main/skills/heldinhow/freeride-opencode

85/100 · SAFE

The freeride-opencode skill is documentation-only configuration content with no executable code, no prompt injection attempts, and no embedded data exfiltration mechanisms. The primary concerns are platform-level: the openclaw-gateway process read all credential canary files and established new outbound connections to AWS endpoints immediately after skill installation, suggesting platform infrastructure behavior triggered by skill registration rather than skill-specific malice. Additionally, the skill routes user AI traffic through multiple Chinese AI providers (MiniMax, Moonshot/Kimi, Zhipu/GLM) with different data residency characteristics, and references a model identifier (opencode/gpt-5-nano) that does not correspond to any known OpenAI product.

Category Scores

Prompt Injection 93/100 · 30%
Data Exfiltration 74/100 · 25%
Code Execution 96/100 · 20%
Clone Behavior 76/100 · 10%
Canary Integrity 83/100 · 10%
Behavioral Reasoning 74/100 · 5%

Findings (7)

MEDIUM Credential files accessed post-installation by platform process -17

All six honeypot credential files (.env, id_rsa, .aws/credentials, .npmrc, docker config, gcloud credentials) were opened at Unix timestamp 1771909311.814, approximately 4 seconds after skill installation completed. While canary integrity monitoring reports no exfiltration, the access itself is anomalous and correlates with openclaw-gateway establishing new external connections to AWS endpoints immediately after install.
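A check like the one this finding describes can be sketched by comparing each honeypot file's access time against the install timestamp. This is a minimal illustration: the file list is taken from the finding, the `accessed_since` helper is hypothetical, and atime-based detection assumes the filesystem is not mounted with `noatime`.

```python
import os

# Honeypot credential paths named in the finding (relative to a base dir).
CANARY_FILES = [
    ".env",
    "id_rsa",
    ".aws/credentials",
    ".npmrc",
    ".docker/config.json",
    ".config/gcloud/credentials.db",
]

def accessed_since(paths, since_epoch, base="~"):
    """Return (path, atime) for canary files read after a reference epoch.

    Note: atime is only a heuristic; mounts using noatime/relatime may not
    update it on every read.
    """
    hits = []
    for rel in paths:
        full = os.path.expanduser(os.path.join(base, rel))
        try:
            atime = os.stat(full).st_atime
        except FileNotFoundError:
            continue  # canary not deployed in this environment
        if atime > since_epoch:
            hits.append((rel, atime))
    return hits
```

Run against the install-completion timestamp (here, 1771909311.814 minus a small margin), any returned entries correspond to post-install reads like those reported.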

MEDIUM openclaw-gateway establishes new outbound connections to unidentified AWS endpoints post-install -24

After skill installation, the openclaw-gateway process (pid=1088) opened new persistent TCP connections to 98.83.99.233:443 and 34.233.6.177:443 (both AWS EC2, US-East region) and new localhost listeners on ports 18790/18793. These connections were not present in the pre-install baseline. While the gateway is pre-existing platform infrastructure, its state change correlates with the skill install event.
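The pre/post baseline comparison underlying this finding reduces to a set difference over remote endpoints. The sketch below assumes connection snapshots have already been captured as `(remote_ip, remote_port)` pairs (in practice, e.g. via `ss -tn` or a process monitor); the snapshot values are illustrative, with the two flagged AWS endpoints taken from the finding.

```python
def new_connections(baseline, current):
    """Return remote endpoints present now but absent from the pre-install baseline."""
    return sorted(set(current) - set(baseline))

# Illustrative snapshots of the gateway's established TCP connections.
baseline = {("151.101.1.6", 443)}  # assumed pre-existing connection
current = baseline | {("98.83.99.233", 443), ("34.233.6.177", 443)}
```

Any endpoint in `new_connections(baseline, current)` is a post-install state change of the kind this finding flags; the same diff applied to listening sockets would surface the new localhost listeners on 18790/18793.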

LOW Skill routes AI traffic through multiple Chinese AI providers -9

The skill configures MiniMax M2.1 (MiniMax, China), Kimi K2.5 (Moonshot AI, China), and GLM 4.7 (Zhipu AI, China) as primary and fallback models. User prompts sent to these providers may be subject to Chinese data retention laws (PIPL, Cybersecurity Law) and may be stored or processed outside GDPR/CCPA jurisdictions.
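The data-residency concern follows from how primary/fallback routing works: every attempted provider in the chain receives the full user prompt. A minimal sketch, assuming a hypothetical `send(model_id, prompt)` transport callable and illustrative model identifiers based on the finding:

```python
# Illustrative provider chain from the finding; ids and order are assumptions.
PROVIDER_CHAIN = [
    ("minimax/m2.1", "MiniMax (CN)"),
    ("moonshot/kimi-k2.5", "Moonshot AI (CN)"),
    ("zhipu/glm-4.7", "Zhipu AI (CN)"),
]

def route(prompt, send):
    """Try each configured provider in order; return the first success.

    Each attempt ships the full prompt to that provider's jurisdiction,
    so a single request may reach several data-residency regimes.
    """
    failures = []
    for model_id, label in PROVIDER_CHAIN:
        try:
            return label, send(model_id, prompt)
        except Exception as exc:
            failures.append((label, str(exc)))
    raise RuntimeError(f"all providers failed: {failures}")
```

The design point for reviewers: fallback chains multiply exposure, because a transient failure at the primary silently re-sends the prompt to the next provider.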

LOW Model identifier 'opencode/gpt-5-nano' does not correspond to known OpenAI model -13

The skill references opencode/gpt-5-nano as a heartbeat/lightweight model, but 'GPT 5 Nano' is not a documented OpenAI model variant. This may be a platform-specific alias, a future model, or a spoofed endpoint that routes traffic to an unintended destination. The model is recommended for high-frequency heartbeat use, which would maximize the volume of queries sent to wherever the identifier actually resolves.
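Checks of this kind can be expressed as a simple allowlist comparison: strip the platform prefix from each configured id and look the bare model name up in the vendor's documented catalog. The helper and the catalog below are illustrative (the catalog is deliberately incomplete and not an authoritative list of OpenAI models).

```python
def flag_unknown_models(configured_ids, documented):
    """Return configured model ids whose bare model name does not appear
    in the documented catalog of the vendor the id seems to claim."""
    return [mid for mid in configured_ids if mid.split("/")[-1] not in documented]

# Illustrative, incomplete catalog used only for this sketch.
documented_gpt_models = {"gpt-4o", "gpt-4o-mini", "gpt-4.1-mini"}
```

Applied to this skill's config, `opencode/gpt-5-nano` would be flagged for manual review, while a prefixed id whose bare name is documented would pass.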

LOW Skill requires storing two plaintext API keys in local config file -5

The skill instructs users to place their OpenCode Zen and OpenRouter API keys in plaintext in ~/.openclaw/openclaw.json. If the openclaw-gateway process reads this file and connects to external servers (as observed in the clone-behavior analysis), these keys could be exposed.
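One mitigation users can apply themselves is restricting the config file to owner-only access. The sketch below (a hypothetical helper, not part of the skill) returns the group/other permission bits on a secrets file; a nonzero result means other local users could read the plaintext keys.

```python
import os
import stat

def world_readable_bits(path):
    """Return the group/other permission bits on a secrets file.

    Nonzero means users other than the owner can read or write the file,
    e.g. plaintext API keys in a config like ~/.openclaw/openclaw.json.
    """
    mode = os.stat(os.path.expanduser(path)).st_mode
    return stat.S_IMODE(mode) & 0o077
```

Tightening is then a one-liner (`os.chmod(path, 0o600)` or `chmod 600` in a shell), though this does not protect against the gateway process itself, which runs as the same user.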

INFO Development planning documents (TASKS.md, SPECIFICATION.md, PLAN.md) shipped in production skill -2

The skill package includes internal development planning documents that were not stripped before publication. These files describe the skill's version history and internal architecture but contain no executable content and pose no security risk.

INFO .clawhub/lock.json references unrelated skill academic-research-hub -2

The skill's lock.json records a previously installed skill 'academic-research-hub' in the same environment. This indicates shared environment state between different skills and could allow cross-skill interaction if the environment is not properly isolated.
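Auditors can surface this kind of shared state by diffing the lock file against the skills the install actually declared. The sketch assumes a lock.json shape with a top-level "skills" list of installed skill names; the real .clawhub/lock.json schema may differ.

```python
import json

def unexpected_skills(lock_text, declared):
    """Skills recorded in the lock file that this install did not declare.

    Assumes a {"skills": [...]} layout; adjust to the actual lock schema.
    """
    lock = json.loads(lock_text)
    return [s for s in lock.get("skills", []) if s not in declared]
```

Any entries returned, like academic-research-hub here, indicate environment state shared across skill installs and therefore a potential cross-skill interaction surface.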