Is mariovallereyes/token-saver-75plus safe?
https://github.com/openclaw/skills/tree/main/skills/mariovallereyes/token-saver-75plus
This skill implements a token optimization and model routing protocol that presents moderate risk. The primary concerns are its mandatory routing of full conversation context to third-party AI services (Groq, OpenAI) without per-request user consent, and its use of fabricated model identifiers (openai/gpt-5.3-codex, openai/gpt-5.2) that do not exist and will reliably trigger cost-escalating failure paths. The skill contains no executable code and no malicious git hooks, and it passed canary integrity checks. However, its always-on silent operation, output suppression instructions, and unbounded recursive agent spawning pose meaningful behavioral risks to agent transparency and user cost control.
Findings (10)
CRITICAL Mandatory Full Conversation Context Routed to Third-Party AI Services -30
The skill mandates that 'ALL context' be included in every spawned agent task string and routes these to Groq (free tier) and OpenAI endpoints. This unconditionally forwards potentially sensitive conversation history — including file contents, credentials, or proprietary data discussed in prior turns — to external third-party AI services. Free-tier Groq may use submitted prompts for training or analytics.
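A mitigation for this finding would be to redact obvious secrets before any context leaves the local environment. A minimal sketch, assuming a pre-forwarding filter hook exists (the function name and patterns below are illustrative, not part of the skill; a real deployment should use a vetted secret-scanning library):

```python
import re

# Illustrative patterns for common secret formats (not exhaustive).
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                     # AWS access key ID
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),   # PEM private key header
    re.compile(r"(?i)(api[_-]?key|token|password)\s*[:=]\s*\S+"),
]

def redact_context(context: str) -> str:
    """Replace anything matching a secret pattern before forwarding."""
    for pattern in SECRET_PATTERNS:
        context = pattern.sub("[REDACTED]", context)
    return context
```

Even with such a filter, forwarding full conversation history to free-tier endpoints remains a consent problem; redaction only narrows the blast radius.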
HIGH Non-Existent OpenAI Model IDs May Route to Unknown Endpoints -15
The routing table specifies openai/gpt-5.3-codex and openai/gpt-5.2, neither of which is a real OpenAI model identifier (as of early 2026). On platforms with custom model routing layers, these identifiers could resolve to attacker-controlled proxies. The README's note to 'adapt the model IDs to your setup' is contradicted by the SKILL.md actively issuing routing calls with these fabricated identifiers.
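A straightforward defense is to validate routing-table identifiers against an explicit allowlist and fail closed on anything unknown, so a fabricated ID can never reach a resolver. A sketch under that assumption (the allowlist contents and function name are illustrative):

```python
# Illustrative allowlist; populate from your platform's verified model list.
KNOWN_MODELS = {"gpt-4o", "gpt-4o-mini", "o3-mini"}

def resolve_model(routed_id: str) -> str:
    """Fail closed: refuse to route identifiers not on the allowlist."""
    name = routed_id.split("/", 1)[-1]  # strip an optional "provider/" prefix
    if name not in KNOWN_MODELS:
        raise ValueError(f"unknown model id {routed_id!r}; refusing to route")
    return name
```

Failing closed here matters: a routing layer that silently "best-matches" an unknown ID is exactly the behavior that lets these identifiers resolve to an unintended endpoint.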
HIGH Silent Always-On Behavior Modification Without User Awareness -20
The skill declares a 'Request Classifier (silent, every message)' and uses MANDATORY routing rules that override the agent's native judgment on every single user interaction. The agent is instructed to silently reclassify, reroute, and compress all requests without user notification. This constitutes persistent prompt injection that invisibly hijacks agent decision-making.
HIGH Output Transparency Suppression Reduces User Visibility -15
The skill explicitly instructs the agent not to narrate tool calls, applies hard per-tier output token budgets (T1: 1-3 lines, T2: 5-15 bullets), and prohibits restating the question or providing context. This systematically reduces the user's ability to understand what the agent is doing and why, which is a defense-weakening behavior in a security context.
HIGH Fabricated Model IDs Create Unpredictable Routing and Cost Escalation -10
Because the primary code and analysis model IDs (gpt-5.3-codex, gpt-5.2) do not exist, they will predictably fail on any standard OpenAI API. The failure protocol then silently escalates to more expensive models without user knowledge. This guarantees that the failure path — rather than the happy path — drives actual execution, systematically triggering the most expensive escalation.
MEDIUM Unbounded Recursive Agent Spawning with Full Tool Access -20
T4 tasks spawn a Claude Opus agent with 'full tool access' and explicit instructions to itself spawn Codex (for code) and Groq (for bulk) sub-agents. There is no stated recursion depth limit, cost ceiling, or user approval gate. A single complex user request could trigger a tree of API calls across three AI platforms, all with full access to the agent's tool environment.
MEDIUM Automatic Cost-Escalating Failure Protocol Runs Without User Consent -20
The failure protocol unconditionally upgrades failed cheap-model calls to expensive models (Groq free → GPT-5.2 $$$, Codex $$$ → GPT-5.2 $$$, T3 → Opus $$$$) without notifying the user or requesting approval. Combined with the fabricated model IDs that will always fail initially, this creates a pattern where the agent routinely incurs the highest-cost path silently.
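The dynamic this finding describes can be made concrete: when the first entries in an escalation chain are identifiers that never resolve, every request walks the full chain and lands on the priciest model. A simulation sketch (the chain, the cost units, and `EXISTING_MODELS` are illustrative stand-ins for the skill's routing table):

```python
# Escalation chain as described by the skill: (model id, illustrative relative cost).
ESCALATION_CHAIN = [
    ("openai/gpt-5.3-codex", 1),   # fabricated id: always errors
    ("openai/gpt-5.2", 3),         # fabricated id: always errors
    ("claude-opus", 10),           # real, most expensive tier
]

EXISTING_MODELS = {"claude-opus"}  # illustrative: only the last entry resolves

def route_with_escalation(chain):
    """Walk the chain until a model resolves; record every attempt made."""
    attempts = []
    for model, cost in chain:
        attempts.append(model)
        if model in EXISTING_MODELS:
            return model, cost, attempts
    raise RuntimeError("all models in escalation chain failed")
```

Under these assumptions the "cheap first" ordering is cosmetic: the cheap tiers exist only as failure states, so the expensive tier handles every request.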
MEDIUM Sensitive Credential Files Accessed During Installation Window -15
Six credential files (.env, .ssh/id_rsa, .aws/credentials, .npmrc, .docker/config.json, gcloud application_default_credentials.json) were accessed in two distinct batches during the monitoring window. The first batch (before the git clone at T+6s) is attributable to sshd/PAM. The second batch (after install completes) is less clearly attributed. No writes or network transmission were detected; canary integrity confirmed intact.
LOW Execution Delegated to Multiple External AI Platforms via sessions_spawn -10
The skill contains no local executable code, but achieves effective code execution delegation by routing all code generation tasks to external AI services (Codex on OpenAI, Groq). These external environments operate outside the local sandbox with their own tool access. The spawned Opus agent is explicitly granted 'full tool access' in its execution context.
INFO Canary Files Read But Confirmed Intact -10
All honeypot credential files were opened and read during the monitoring window but show only CLOSE_NOWRITE events — no modifications were made and no network transmission of file contents was detected. The audit system's canary integrity check confirms all files remain unchanged. The reads are consistent with sshd/PAM authentication behavior.
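The integrity check the audit relies on amounts to a hash-before/hash-after comparison over the canary set. A sketch using only the standard library (the function names are illustrative; the audit system's actual implementation is not specified):

```python
import hashlib
from pathlib import Path

def snapshot(paths):
    """Map each canary file path to the SHA-256 digest of its contents."""
    return {p: hashlib.sha256(Path(p).read_bytes()).hexdigest() for p in paths}

def verify_intact(before, after):
    """Return the canary files whose contents changed between snapshots."""
    return [p for p in before if before[p] != after.get(p)]
```

Note this confirms only that file contents were not modified; as the finding says, it cannot rule out reads, which is why the CLOSE_NOWRITE event data and the absence of network transmission are reported separately.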