Is buxibuxi/stock-copilot-pro safe?
https://github.com/openclaw/skills/tree/main/skills/buxibuxi/stock-copilot-pro
Stock Copilot Pro is a well-structured financial analysis skill with no malicious code, no credential theft, clean installation behavior, and all canary honeypots intact. The primary risk is architectural rather than adversarial: auto_invoke combined with a design that routes all user portfolio data, holdings, and stock queries through the single third-party qveris.ai creates significant financial privacy exposure. Users should treat this skill as implicitly sharing their investment activity with qveris.ai on an ongoing basis.
Findings (8)
MEDIUM auto_invoke: true enables unsolicited skill activation -10
The skill declares auto_invoke: true in its SKILL.md frontmatter. This causes the OpenClaw agent to automatically invoke the skill whenever it detects stock-related queries, without requiring explicit user invocation. This creates a persistent activation vector: the skill can trigger qveris.ai API calls and portfolio data transmission without the user deliberately requesting stock analysis.
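For reference, a hypothetical SKILL.md frontmatter fragment showing the flag in question (all fields other than auto_invoke are illustrative, not reproduced from the actual file):

```yaml
---
name: stock-copilot-pro        # illustrative; actual metadata not shown here
description: Stock analysis and portfolio briefs via qveris.ai
auto_invoke: true              # agent activates the skill on matching queries,
                               # without an explicit user request
---
```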
MEDIUM All user financial and portfolio data routed through third-party qveris.ai -15
Every stock query, portfolio holding (read from config/watchlist.json), morning/evening brief, X sentiment request, and industry radar query is proxied through qveris.ai. This single third party therefore accumulates a complete profile of the user's investment portfolio, trading intent, and financial decision patterns. The skill's design is transparent about this, but the privacy implications for financial data are significant.
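To make the data flow concrete, here is a minimal hypothetical shape for config/watchlist.json (the structure is assumed, not taken from the actual file); every entry in it would be transmitted to qveris.ai with each brief or query:

```json
{
  "holdings": [
    { "symbol": "AAPL", "shares": 50, "cost_basis": 172.30 },
    { "symbol": "TSLA", "shares": 10, "cost_basis": 241.00 }
  ],
  "watchlist": ["NVDA", "MSFT"]
}
```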
LOW full_content_file_url fetch restriction is documentation-only claim -7
SKILL.md states that full_content_file_url fetching is restricted to HTTPS URLs under qveris.ai to prevent arbitrary outbound requests. This security claim is documented in SKILL.md and README.md, but the actual runtime enforcement lives in scripts/lib/data/fetcher.mjs, which was not available for content review; the claim therefore cannot be independently verified from the collected evidence.
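For illustration, the documented restriction could be enforced with a check like the sketch below. The function name and logic are assumptions, since the real fetcher.mjs was not available for review:

```javascript
// Hypothetical sketch of the check fetcher.mjs would need in order to
// enforce the documented policy: full_content_file_url must be an HTTPS
// URL whose host is qveris.ai or a subdomain of it.
function isAllowedContentUrl(raw) {
  let url;
  try {
    url = new URL(raw);
  } catch {
    return false; // not a parseable URL at all
  }
  const hostOk =
    url.hostname === "qveris.ai" || url.hostname.endsWith(".qveris.ai");
  return url.protocol === "https:" && hostOk;
}

console.log(isAllowedContentUrl("https://qveris.ai/reports/123"));    // true
console.log(isAllowedContentUrl("http://qveris.ai/reports/123"));     // false
console.log(isAllowedContentUrl("https://evil.example/steal"));       // false
console.log(isAllowedContentUrl("https://qveris.ai.evil.example/x")); // false
```

Note the suffix check is on the parsed hostname, not the raw string, which blocks lookalike hosts such as qveris.ai.evil.example.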
LOW Agent is instructed to directly execute local Node.js skill scripts -15
SKILL.md's Command Surface section explicitly instructs the agent to invoke node scripts/stock_copilot_pro.mjs with user-controlled arguments. The agent runs local code from the skill package with full Node.js runtime capabilities. No malicious install hooks were detected and the scripts appear legitimate, but this execution model gives the skill package runtime code execution authority via the agent.
LOW Evolution state creates growing persistent local data store -10
The skill writes execution metadata, tool parameter templates, and successful call parameters to .evolution/tool-evolution.json during every successful run. While the skill documents that API keys and raw payloads are excluded, this file accumulates user query patterns over time and persists across sessions. The documentation states the file has a bounded maximum size, but it grows with usage up to that bound.
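A hypothetical shape for .evolution/tool-evolution.json (assumed for illustration; the audit evidence did not include the file's actual schema):

```json
{
  "runs": 42,
  "tools": {
    "stock_quote": {
      "last_success": "2026-02-26T09:00:12Z",
      "param_template": { "symbol": "<string>", "range": "1d" }
    }
  }
}
```

Even with API keys and raw payloads excluded, accumulated parameter templates like these can reveal which tickers and analyses the user runs most often.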
LOW Cron configuration enables persistent unattended portfolio monitoring with external delivery -12
The skill ships config/openclaw-cron.example.json with three scheduled job templates: morning brief at 09:00, evening brief at 17:00, and daily radar at 08:30 (all Asia/Shanghai). Each job sends portfolio holdings data to qveris.ai and delivers results to external channels (Feishu). Once configured, this establishes continuous automated transmission of financial data without per-session user consent.
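Based on the three jobs described, one entry in config/openclaw-cron.example.json might look like the sketch below. Field names and the command arguments are assumptions; only the schedule, timezone, and Feishu delivery come from the report:

```json
{
  "jobs": [
    {
      "name": "morning-brief",
      "schedule": "0 9 * * *",
      "timezone": "Asia/Shanghai",
      "command": "node scripts/stock_copilot_pro.mjs brief --period morning",
      "deliver": { "channel": "feishu" }
    }
  ]
}
```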
INFO Canary file accesses are audit system operations, not skill behavior 0
Auditd PATH records show access to .env, .ssh/id_rsa, .aws/credentials, .npmrc, .docker/config.json, and .config/gcloud credentials at two points: during audit initialization (pre-install, ~1771736111) and post-install verification (~1771736130). Timing analysis places both access clusters within the audit orchestration process, not within any skill-originated process. The canary integrity report confirms no exfiltration occurred.
INFO Installation was clean with fully expected behavior 0
The install process performed a sparse git clone from the declared source repository, copied only the specified skill subdirectory, and cleaned up the temporary clone. All network activity was accounted for by the git clone (GitHub) and background system processes (Ubuntu Canonical servers). No new listening ports, no unexpected persistent connections, and no filesystem changes outside the skill install path.