Is jeromestein/code-share safe?

https://github.com/openclaw/skills/tree/main/skills/jeromestein/code-share

83
SAFE

The jeromestein/code-share skill is a legitimate GitHub Gist sharing utility with clean installation behavior, no prompt-injection patterns, and no malicious code. Its primary risk is inherent to its stated purpose: it creates a permanent external data upload channel via GitHub Gist, and its LLM-enforced sensitive-data scanning policy cannot mechanically guarantee that secrets are never inadvertently disclosed. Combined with other skills that read files or environment data, it could serve as a natural exfiltration path.

Category Scores

Prompt Injection 91/100 · 30%
Data Exfiltration 70/100 · 25%
Code Execution 83/100 · 20%
Clone Behavior 96/100 · 10%
Canary Integrity 90/100 · 10%
Behavioral Reasoning 72/100 · 5%

Findings (8)

MEDIUM Skill creates a persistent external code upload channel -18

The core function of this skill is to upload code to GitHub Gist via the gh CLI. While intentional and disclosed to the user via URL, this creates a real and permanent external data channel. Any code or context the agent passes to the gist scripts will leave the local environment and persist on GitHub's infrastructure indefinitely unless explicitly deleted.
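The upload step reduces to a single `gh` invocation. The skill's actual script contents are not reproduced in this report, so the sketch below is an assumed shape; the `DRY_RUN` guard is added here purely for illustration:

```shell
# Hypothetical sketch of the upload step — create_gist.sh's real contents are
# not shown in this report. DRY_RUN=1 prints the command instead of running gh.
create_gist() {
    file=$1
    desc=$2
    # gh gist create uploads the file and prints the gist URL; gists are
    # secret by default, and --public must be passed explicitly.
    set -- gh gist create "$file" --desc "$desc"
    if [ -n "$DRY_RUN" ]; then
        printf '%s\n' "$*"
    else
        "$@"
    fi
}

DRY_RUN=1 create_gist snippet.py "shared via code-share"
```

Once `gh gist create` runs, the content is on GitHub's infrastructure; deleting it requires a separate, explicit `gh gist delete`.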

LOW Sensitive data scan is LLM-enforced, not mechanically verified -8

The mandatory sensitive data scanning policy relies on the LLM to identify and redact secrets before upload. LLM-based secret detection is imperfect and can be bypassed by obfuscated strings, uncommon formats, or data embedded in non-obvious structures (e.g., base64-encoded values, custom config formats).
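This bypass class is easy to demonstrate with a mechanical analogue: a pattern-based scan (standing in here for the LLM's judgment; the pattern and file names are illustrative) flags a raw AWS-style key but misses the same value once base64-encoded:

```shell
# AWS's documented example access key ID — not a live credential.
key="AKIAIOSFODNN7EXAMPLE"
printf 'aws_key = %s\n' "$key" > raw.txt
printf 'aws_key = %s\n' "$(printf '%s' "$key" | base64)" > enc.txt

pattern='AKIA[0-9A-Z]{16}'
grep -Eq "$pattern" raw.txt && echo "raw.txt: flagged"
grep -Eq "$pattern" enc.txt || echo "enc.txt: missed"
# prints "raw.txt: flagged" then "enc.txt: missed"
```

An LLM is more flexible than a regex, but the same failure mode applies: any encoding or format it does not recognize passes the scan and is uploaded.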

LOW Fixed response format suppresses code preview in chat -7

The skill enforces a fixed three-line response format (summary sentence, URL, file/line metadata). This prevents the agent from showing the code inline in the conversation, so the user cannot review the exact content until after it has been uploaded to GitHub. This reduces user oversight.

LOW Skill-combination exfiltration risk -15

When combined with file-reading or environment-introspecting skills, this skill provides a natural and transparent upload path for any data the agent collects. An attacker who controls a companion skill could direct the agent to gather sensitive data and then use code-share to upload it as a 'helpful' code snippet.

LOW Shell scripts execute gh CLI with agent-controlled arguments -12

The create_gist.sh and update_gist.sh scripts accept agent-supplied file paths and descriptions. Quoting is correct and no injection vector is apparent, but the agent fully controls which file is uploaded and its description metadata; if the agent is misdirected, the scripts could be used to upload unintended files.
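A quick way to see why the correct quoting matters: with the expansion quoted, an agent-supplied description full of shell metacharacters reaches `gh` as a single inert argument. The argv is printed here instead of invoking `gh`:

```shell
# Agent-controlled description containing a would-be command substitution.
desc='demo $(touch /tmp/pwned); echo injected'

# Quoted expansion: the whole string becomes one argv entry, so the
# metacharacters are never interpreted by the shell.
set -- gh gist create snippet.py --desc "$desc"
printf '[%s] ' "$@"; echo
# prints: [gh] [gist] [create] [snippet.py] [--desc] [demo $(touch /tmp/pwned); echo injected]
```

An unquoted `$desc` would instead word-split the string and evaluate nothing by itself, but combined with `eval` or string-built commands it becomes an injection vector — which is why the finding's score reflects argument control rather than shell injection.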

INFO Install behavior clean — only expected files and network activity 0

The skill installation cloned only the expected repository path with sparse checkout, created only the four declared skill files, and made no network connections beyond GitHub. No unexpected processes were spawned.
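The installer's exact git invocation was not captured, but the observed behavior matches a cone-mode sparse checkout. A minimal local reconstruction (repo layout and paths are illustrative, and the restriction is applied after a plain clone for simplicity):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Build a toy "skills" monorepo with two skill directories.
git init -q upstream
cd upstream
mkdir -p skills/jeromestein/code-share skills/other/tool
echo '# code-share' > skills/jeromestein/code-share/SKILL.md
echo '# other'      > skills/other/tool/SKILL.md
git add .
git -c user.email=ci@example.com -c user.name=ci commit -qm init
cd ..

# Clone, then restrict the working tree to the one skill path.
git clone -q upstream checkout
cd checkout
git sparse-checkout init --cone
git sparse-checkout set skills/jeromestein/code-share
ls skills    # cone mode materializes only jeromestein/, not other/
```

This matches the install trace: only the declared skill files appear on disk, with no sibling skill directories checked out.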

INFO Canary file reads are monitoring-system lifecycle reads, not skill-initiated 0

The .env, .ssh/id_rsa, .aws/credentials, .npmrc, .docker/config.json, and gcloud credentials files were read at two points: before the git clone (1771930156, monitoring baseline) and after all analysis (1771930173, post-install canary check). Neither access cluster is attributable to the skill's own code.

INFO Skill correctly defaults to secret gists and requires explicit user opt-in for public 0

The skill's default visibility is secret gist, and public sharing requires explicit user instruction. This is an appropriate security default that reduces inadvertent public exposure.