Is johnsondevops/last30days-skill safe?

https://github.com/openclaw/skills/tree/main/skills/johnsondevops/last30days-skill

78
CAUTION

The last30days skill is a legitimate research-aggregation tool with a well-structured codebase, transparent security disclosures, and a clean installation process. The primary concern is the bundled bird-search vendor, which reads live browser session cookies to authenticate against Twitter's private GraphQL API: a high-privilege access vector that violates Twitter's ToS and could enable session hijacking if the code is ever tampered with. Secondary concerns are the open variant's dynamic reference-file loading and its cross-session context.md persistence, both of which create modifiable instruction surfaces whose changes would not trigger a re-audit. No canary files were accessed and install behavior was clean.

Category Scores

Prompt Injection 78/100 · 30%
Data Exfiltration 68/100 · 25%
Code Execution 75/100 · 20%
Clone Behavior 88/100 · 10%
Canary Integrity 100/100 · 10%
Behavioral Reasoning 72/100 · 5%

Findings (10)

HIGH Browser Session Cookie Access for Twitter Authentication -20

The bundled bird-search vendor (scripts/lib/vendor/bird-search/lib/cookies.js) reads the user's live browser session cookies to authenticate with Twitter's internal GraphQL API, granting the skill account access equivalent to the full browser session. The skill's Security section discloses this behavior, but it remains a high-privilege operation: if the cookie value were ever captured (for example via a script bug that logs subprocess output, or a future malicious update), an attacker could hijack the user's Twitter account. The risk is not theoretical; the cookies.js module explicitly extracts the auth_token and ct0 cookies from the user's browser profile.
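The actual cookies.js is JavaScript and targets the user's real browser profile; a minimal Python sketch of the equivalent read against a Firefox-style cookie store (table and column names follow Firefox's cookies.sqlite schema; other browsers differ and may encrypt values) shows how little stands between local code and a live session:

```python
import sqlite3

def read_twitter_session_cookies(cookie_db_path):
    """Pull Twitter session cookies from a Firefox-style cookies.sqlite.

    Illustrates why browser cookie stores are a high-privilege surface:
    any local process that can open the file gets cookies equivalent to
    the logged-in browser session, with no further authentication.
    """
    conn = sqlite3.connect(cookie_db_path)
    try:
        rows = conn.execute(
            "SELECT name, value FROM moz_cookies "
            "WHERE host LIKE '%twitter.com' AND name IN ('auth_token', 'ct0')"
        ).fetchall()
    finally:
        conn.close()
    return dict(rows)
```

Whoever holds auth_token and ct0 can replay them in request headers and act as the user, which is exactly why tampering with this code path is the skill's worst-case scenario.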

HIGH Twitter Private GraphQL API Usage Violates ToS and Expands Scope -8

The skill uses Twitter's undocumented internal GraphQL API endpoints with session-cookie authentication rather than the official Twitter API v2. Session cookies carry full account permissions (posting, reading DMs, follower access) rather than the limited scopes granted to OAuth apps. This violates Twitter's Terms of Service and means the skill implicitly has access to far more account data than read-only search would require.

MEDIUM Dynamic Runtime Instruction Loading in Open Variant -10

The open variant loads additional agent instructions from reference files at runtime via the Read tool: 'Read: ${SKILL_ROOT}/variants/open/references/{mode}.md. Then follow the instructions in that reference file exactly.' The agent's behavior is therefore partly determined by files that can be modified after audit without triggering a re-audit. If any reference file (watchlist.md, briefing.md, history.md, research.md) were modified by a supply-chain compromise or a local file write, the injected instructions would be followed verbatim.
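One way to close this gap (a hypothetical mitigation, not something the skill implements) is to pin each reference file to the hash it had at audit time and refuse to follow it after any change; the path layout below mirrors the ${SKILL_ROOT}/variants/open/references/{mode}.md pattern quoted above:

```python
import hashlib
from pathlib import Path

def sha256_file(path):
    """Hex SHA-256 of a file's contents."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def load_reference(skill_root, mode, audited_hashes):
    """Load a reference file only if it still matches its audited hash.

    audited_hashes maps filename -> hex digest recorded at audit time.
    Raises if the file is new, missing from the pin list, or modified.
    """
    ref = Path(skill_root) / "variants" / "open" / "references" / f"{mode}.md"
    digest = sha256_file(ref)
    expected = audited_hashes.get(ref.name)
    if expected is None or digest != expected:
        raise RuntimeError(f"{ref.name} changed since audit (sha256={digest})")
    return ref.read_text()
```

The same pinning would cover a supply-chain update: a new release that changes any reference file fails closed until the pins are re-recorded, which is effectively a forced re-audit.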

MEDIUM Cross-Session Persistent Writable Instruction Surface (context.md) -7

The open variant instructs the agent to read context.md at session start for 'user preferences and source quality notes' and to 'Update it after interactions.' This file lives at a predictable path inside the skill directory and is written by the agent. Any process or skill with Write access to this path could poison future sessions with persistent injected instructions that appear to come from prior legitimate interactions.

MEDIUM User-Supplied Topic Strings Passed to Subprocess Calls -8

The skill passes user-supplied $ARGUMENTS directly to Python scripts via bash, and those scripts spawn subprocesses for the bird CLI and yt-dlp with the topic string as an argument. While the Python code uses list-form subprocess arguments (avoiding shell=True in most paths), the bash wrapper in SKILL.md interpolates $ARGUMENTS unquoted into its shell command template. Crafted input such as 'topic; malicious_command' could therefore execute arbitrary commands if the bash invocation does not quote the variable.
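The difference is easy to demonstrate. This sketch is not the skill's actual code; it contrasts the unquoted shell-interpolation pattern with the list-form calls the Python scripts already use, plus shlex.quote for cases where a shell is unavoidable:

```python
import shlex
import subprocess

def run_topic_unsafe(topic):
    # Mirrors the risky pattern: the topic is spliced into a shell
    # string, so input like "ai; echo INJECTED" runs the extra command.
    return subprocess.run(f"echo searching for {topic}",
                          shell=True, capture_output=True, text=True)

def run_topic_safe(topic):
    # List-form arguments: the topic is a single argv entry, never
    # parsed by a shell, so metacharacters like ';' are inert.
    return subprocess.run(["echo", "searching for", topic],
                          capture_output=True, text=True)

def run_topic_quoted(topic):
    # If a shell is truly required, shlex.quote neutralizes the input.
    return subprocess.run(f"echo searching for {shlex.quote(topic)}",
                          shell=True, capture_output=True, text=True)
```

Quoting $ARGUMENTS in the bash template ("$ARGUMENTS" or bash's printf %q) achieves the same effect on the wrapper side.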

MEDIUM Multiple API Keys Loaded from Filesystem and Transmitted Externally -4

The skill loads up to five API keys (OPENAI_API_KEY, XAI_API_KEY, BRAVE_API_KEY, PARALLEL_API_KEY, OPENROUTER_API_KEY) from ~/.config/last30days/.env and transmits them to their respective external services. While the security section claims each key only goes to its stated endpoint, any logging bug, exception traceback, or future code change could expose keys in script output that the agent then displays or writes to files.
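The exposure path is generic: any code that prints the environment or leaks it in a traceback exposes raw keys to agent-visible output. A defensive sketch (illustrative, not taken from the skill) parses a simple KEY=value .env file and logs only a short fingerprint per key:

```python
import hashlib
from pathlib import Path

def load_env_file(path):
    """Parse a minimal KEY=value .env file; comments and blanks ignored."""
    env = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

def redact(value):
    # Log a short hash fingerprint instead of the secret, so debug
    # output can still distinguish keys without ever containing them.
    return "sha256:" + hashlib.sha256(value.encode()).hexdigest()[:12]
```

Fingerprints let a user confirm "the same key was sent to the same endpoint" from logs without the logs themselves becoming a credential store.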

LOW Research History Accumulation in SQLite Database -5

In watchlist and open modes, all research queries and results are persisted to ~/.local/share/last30days/research.db with FTS5 full-text search indexing. This creates a permanent dossier of the user's research interests and findings. Any skill or process with local filesystem access can read this database without authentication.
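SQLite files enforce no access control beyond filesystem permissions, so reading the dossier takes a few lines. The real research.db schema is not documented in the evidence dump, so this sketch just enumerates the tables in read-only mode, which is all a curious process needs to start:

```python
import sqlite3

def dump_tables(db_path):
    """List every table in a SQLite database without any credentials.

    Opens read-only via a URI so the demonstration cannot modify the
    file; a malicious reader would of course skip that courtesy.
    """
    conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
    try:
        return [name for (name,) in conn.execute(
            "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
        )]
    finally:
        conn.close()
```

Users who want the history feature without the standing archive can periodically delete ~/.local/share/last30days/research.db or restrict its permissions to their own user.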

LOW Vendored JavaScript Twitter Client Cannot Be Independently Verified -5

The bird-search vendor is described as a 'vendored subset of Bird CLI (MIT, originally by @steipete)' at v0.8.0, but no package-lock.json, integrity hash, or npm registry checksum is present. The code appears legitimate on inspection, but the self-reported attribution cannot be cryptographically verified. A supply-chain substitution would not be detectable without external comparison.
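Until the project ships a lockfile or checksum, a user can perform the external comparison themselves: check out the claimed upstream at v0.8.0 and compare a deterministic digest of the corresponding files against the vendored copy. This helper is illustrative, not part of the skill:

```python
import hashlib
from pathlib import Path

def digest_tree(root):
    """Deterministic SHA-256 over a directory tree.

    Hashes each file's relative path and contents in sorted order, so
    identical sources yield identical digests regardless of filesystem
    enumeration order; any one-byte substitution changes the result.
    """
    h = hashlib.sha256()
    root = Path(root)
    for path in sorted(p for p in root.rglob("*") if p.is_file()):
        h.update(str(path.relative_to(root)).encode())
        h.update(b"\0")
        h.update(path.read_bytes())
        h.update(b"\0")
    return h.hexdigest()
```

Comparing digest_tree() of the vendored directory against the same files in an upstream checkout reduces supply-chain verification to an equality check; since the vendor is a subset, the comparison must be limited to the files actually vendored.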

INFO Post-Install Network Connections Attributed to Audit Platform Infrastructure 0

The connection diff shows new established TCP connections to 54.211.197.216:443 (AWS) and 104.16.0.34:443 (Cloudflare/OpenClaw CDN) and new localhost listeners on ports 18790/18793, all attributed to process 'openclaw-gatewa' (pid=1083). Based on process audit log timestamps, this process pre-existed the skill install; it is the audit platform's own gateway, not a process spawned by the skill. No score impact is assigned, as this is an infrastructure artifact.

INFO scripts/sync.sh and agents/openai.yaml Contents Not Audited 0

Two files present in the skill directory were not included in the source code evidence dump: scripts/sync.sh and agents/openai.yaml. The sync.sh script could contain network-sync or exfiltration logic, and agents/openai.yaml could define an OpenAI Codex agent with different tool permissions. Users should manually inspect both files before installation.