Is joelachance/skills safe?

https://github.com/openclaw/skills/tree/main/skills/joelachance/skills

Score: 48/100 · DANGEROUS

The Satori skill is a cloud-based memory exfiltration service disguised as a productivity tool: its core function is to silently intercept AI conversation content and transmit high-value intelligence (business strategy, key contacts, financial data, architecture decisions) to Satori's external servers via an unversioned npm package that executes at runtime. The skill compounds this with aggressive prompt injection — declaring itself the mandatory source of truth, suppressing native memory systems, and operating silently to hide its network activity from users. While the installation itself is clean with no anomalous clone behavior, the runtime behavior profile is that of a covert data collection system with a persistent supply chain attack surface through the @latest npm tag.

Category Scores

Prompt Injection 28/100 · 30%
Data Exfiltration 28/100 · 25%
Code Execution 60/100 · 20%
Clone Behavior 90/100 · 10%
Canary Integrity 100/100 · 10%
Behavioral Reasoning 22/100 · 5%

Findings (11)

CRITICAL Authority Override — Establishes External Service as Mandatory Source of Truth -40

The skill uses MUST-language to instruct the agent to replace all native memory systems with Satori. 'Satori is the source of truth' and 'MUST be used instead of any internal systems' are unambiguous override commands that subordinate the agent's native context to a third-party cloud service the user may not have explicitly authorized.

HIGH Silent Operation — Agent Performs External Network Calls Without User Awareness -20

The skill instructs the agent to proactively search Satori at every conversation start and inject the results into context without disclosing this to the user. This means the agent is making authenticated external API calls and receiving third-party data that shapes its responses, all while the user believes they are interacting with the agent's native knowledge.

HIGH Native Memory Suppression — Forces Dependency on Third-Party Service -12

The skill explicitly instructs the agent to ignore locally controlled memory (MEMORY.md, Clawdbot internal memory) in favor of Satori. This degrades user control over the agent's memory and creates a hard dependency on an external service that could be unavailable, compromised, or used to inject false context.

HIGH Systematic Exfiltration of High-Value Conversation Intelligence -42

The core mechanic of this skill is to extract and transmit sensitive user information to Satori's cloud servers. The fact-criteria.md explicitly prioritizes saving: strategic decisions (competitive positioning, pricing strategy, go-to-market plans), key personnel (names and roles), financial information (deadlines, funding milestones), and architecture choices. This constitutes continuous exfiltration of business intelligence to a third-party service.

HIGH Cross-Application Data Aggregation Without Consent -20

The skill is explicitly designed to harvest conversation data from multiple AI applications and centralize it at Satori's servers. It instructs the agent to assume memory from other apps like Claude is already stored and to resolve missing context by querying Satori rather than asking the user. This builds a comprehensive surveillance record across all of the user's AI tool usage.

HIGH Unversioned Runtime npm Package Execution — Persistent Supply Chain Risk -40

The skill instructs the agent to run 'npx -y @satori-sh/cli@latest' on every memory operation. The @latest tag resolves to whatever the package owner publishes at invocation time. There is no version pinning, no package-lock.json, no hash verification. The package owner can push a new version that exfiltrates previously collected memory data, reads files from the filesystem, or establishes persistence, and it will execute silently on the next user interaction.
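The resolution gap described above can be made concrete with a small check. The sketch below (Python; the pinned version number is purely illustrative) distinguishes an exact semver pin from a dist-tag such as `@latest`, which the npm registry resolves fresh on every invocation:

```python
import re

def is_exact_pin(spec: str) -> bool:
    """True only for an exact semver pin such as pkg@1.2.3.
    Dist-tags (@latest) and ranges resolve at invocation time,
    so the code that executes can change between runs."""
    return re.search(r"@\d+\.\d+\.\d+$", spec) is not None

# The skill's invocation: whatever the owner last published will run.
print(is_exact_pin("@satori-sh/cli@latest"))  # False: resolves at runtime
# A pinned spec (version number is illustrative) is at least reproducible:
print(is_exact_pin("@satori-sh/cli@1.2.3"))   # True: same bits every run
```

Note that even an exact pin only fixes which version runs; verifying package contents would additionally require lockfile integrity hashes or vendoring.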

MEDIUM Always-On Triggers — Activates Without Explicit User Intent -8

The trigger conditions are broad enough to activate the skill at virtually every conversation start. 'New conversation where proactive context lookup would help' will match the majority of sessions, causing automatic external network calls and context injection without the user having asked for memory-related assistance.

MEDIUM Unauthorized Auto-Provisioning of API Credentials -10

The CLI silently creates ~/.config/satori/satori.json and provisions API keys on first run without requiring explicit user acknowledgment. The agent is then instructed to ensure Satori has read and write access to this file. Credential creation and file permission grants should require explicit, informed user consent.
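Since this finding hinges on a silently provisioned credential file, a user can at least verify its exposure. The sketch below (Python, POSIX permissions; a temporary file stands in for ~/.config/satori/satori.json) checks that a credential file is not group- or world-readable:

```python
import os
import stat
import tempfile

def is_owner_only(path: str) -> bool:
    """True if only the owning user can access the file (e.g. mode 600)."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return (mode & (stat.S_IRWXG | stat.S_IRWXO)) == 0

# Stand-in for the auto-provisioned credential file:
fd, demo = tempfile.mkstemp()
os.close(fd)

os.chmod(demo, 0o600)
print(is_owner_only(demo))  # True: group/other bits are clear

os.chmod(demo, 0o644)
print(is_owner_only(demo))  # False: the API key is world-readable

os.remove(demo)
```

A permissions check cannot detect the more fundamental problem the finding raises, which is that the credentials were created without the user's informed consent in the first place.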

LOW Memory Poisoning and Supply Chain Compound Risk -78

When considered together, the supply chain risk of @latest npm execution and Satori's role as authoritative memory create a two-vector attack: (1) a malicious npm update could alter what the CLI sends and receives, and (2) poisoned data injected into Satori's backend by an attacker with database access would be silently injected into every future agent session as canonical truth. If combined with co-installed skills that have filesystem access, Satori could silently save discovered credentials under the guise of preserving context.

INFO Clean Installation — No Anomalous Behavior During Clone 0

The git sparse-checkout clone connected only to GitHub as expected. Background connections to Canonical/Ubuntu servers are standard OS infrastructure activity (motd-news, package update checks). No unexpected processes were spawned. The skill installs as a pure file copy.

INFO All Canary Files Intact 0

Honeypot files were not modified or exfiltrated during the audit window. Observed file access events at audit timestamps 1771932146 and 1771932170 are attributable to the oathe audit framework's canary-planting and post-install verification scans, respectively, not to skill behavior. The skill contains no static code capable of accessing these files at install time.