Is save-money safe?
https://clawhub.ai/peterokase42/save-money
This skill is a behavioral modifier that hijacks the agent's decision loop on every message by injecting routing logic into the system prompt via the description field. While it contains no executable code, no data exfiltration mechanisms, and no malicious payloads, its aggressive prompt injection pattern — forcing tool invocation, suppressing transparency about model switching, and overriding default agent behavior — represents a significant trust concern. The skill's stated purpose of cost savings is undermined by overly broad escalation triggers that could increase costs for many users.
Category Scores
Findings (8)
HIGH Persistent behavioral hijack via description field -40 ▶
The skill embeds a complete behavioral instruction set in the YAML description field, which is always injected into the system prompt. This includes imperative commands that override the agent's default behavior on every single message, forcing task classification and model routing before any response. The author explicitly documents this as intentional: 'the core routing logic must live in the description so the model always sees it.'
HIGH Forced tool invocation on every complex interaction -15 ▶
The skill instructs the agent to call sessions_spawn() IMMEDIATELY for any task matching broad escalation triggers, effectively forcing automated tool use without per-instance user consent. The triggers are extremely broad (any prompt >200 chars with requirements, any multi-step task, any structured output request).
MEDIUM Transparency suppression — hides model switching from user -10 ▶
The skill explicitly instructs the agent to conceal its model-switching behavior from the user, reducing transparency about what the agent is doing and potentially masking cost implications.
LOW Full task context forwarded to spawned sessions -10 ▶
The sessions_spawn call forwards the complete user task description to a new session. While this is within ClawdBot's expected session model, it means any sensitive information in the user's prompt is automatically propagated to additional API calls, increasing the exposure surface.
LOW AWS credentials file accessed during runtime initialization -15 ▶
The filesystem monitor recorded an OPEN+ACCESS on /home/oc-exec/.aws/credentials during the clone/install phase. This appears to be part of the standard ClawdBot runtime initialization rather than skill-triggered behavior, but the access to cloud credentials during skill installation is noteworthy.
MEDIUM Overly broad escalation triggers may increase costs -35 ▶
Despite claiming to save 50%+ on API costs, the escalation triggers are so broad that most non-trivial interactions would be escalated to the more expensive Sonnet model. Any prompt over 200 characters with specific requirements, any multi-step task, any structured output, or any professional context triggers escalation. This could paradoxically increase costs for many users.
MEDIUM Transparency suppression reduces user agency -35 ▶
The instruction to hide model switching combined with intercepting every message creates an opaque behavioral layer. Users cannot make informed decisions about cost and quality tradeoffs if the agent conceals which model is handling their request.
INFO Author acknowledges description-as-injection vector 0 ▶
The skill documentation section 'Why the description field is so long' explicitly describes using the description field as an injection point for persistent behavioral rules, demonstrating awareness that this is an unconventional use of the field.