Is magicseek/nblm safe?

https://github.com/openclaw/skills/tree/main/skills/magicseek/nblm

Overall score: 55/100 · CAUTION

The magicseek/nblm skill provides Google NotebookLM integration but ships with three significant concerns: (1) an explicit, first-class Z-Library piracy integration, complete with a dedicated downloader module and EPUB converter, which exposes users to copyright-infringement liability; (2) anti-detection browser automation via patchright that violates Google's Terms of Service; and (3) shell command injection vectors where user-controlled arguments are interpolated directly into routed shell commands. The install itself was clean, with no evidence of active exfiltration, but the skill's design choices around piracy tooling, credential persistence, and behavioral override instructions add up to a meaningful risk profile.

Category Scores

Prompt Injection 45/100 · 30%
Data Exfiltration 50/100 · 25%
Code Execution 40/100 · 20%
Clone Behavior 90/100 · 10%
Canary Integrity 100/100 · 10%
Behavioral Reasoning 30/100 · 5%

Findings (12)

CRITICAL Explicit Z-Library Copyright Infringement Integration -30

The skill ships a dedicated zlibrary/ Python module (downloader.py, epub_converter.py), exposes an upload-zlib command, lists z-lib.org, zlib.li, and zh.zlib.li as supported domains, and includes EPUB/PDF conversion dependencies in requirements.txt. Z-Library is a piracy platform whose domains have been seized by the U.S. Department of Justice. This is a deliberate, first-class feature that facilitates copyright infringement and exposes the installing user to legal liability.

HIGH Anti-Detection Browser Automation via patchright -20

requirements.txt pins patchright>=1.50.0, explicitly described in AUTHENTICATION.md as 'Patchright for Google auth (anti-detection Playwright fork)'. The .env configuration exposes STEALTH_ENABLED, TYPING_WPM_MIN, and TYPING_WPM_MAX settings. This is browser automation specifically designed to masquerade as human activity to evade Google's bot detection, constituting a Google Terms of Service violation that risks account suspension.
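For context on what those knobs do: typing-speed settings like these conventionally map to per-keystroke delays via the 5-characters-per-word typing convention. A minimal sketch of that mapping (assumed behavior; the skill's actual implementation is not shown in this report):

```python
import random

def keystroke_delay(wpm_min: float, wpm_max: float) -> float:
    # Conventional typing-speed math: one "word" = 5 characters,
    # so N WPM = N * 5 keystrokes per minute.
    wpm = random.uniform(wpm_min, wpm_max)
    return 60.0 / (wpm * 5)

# At a fixed 60 WPM this yields 0.2 s between keystrokes; randomizing
# within a min/max band is what makes the automation look human-paced.
```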

HIGH Shell Command Injection via Unvalidated Argument Interpolation -25

The Command Routing section interpolates $ARGUMENTS directly into shell command strings with only quote wrapping. For example, the 'ask' command maps to python scripts/run.py nblm_cli.py ask "<question>". If the agent substitutes user-controlled input containing shell metacharacters, subshell expansion, or path-traversal sequences into these templates before passing them to Bash, command injection is possible. This template-expansion approach to routing is inherently injection-prone.
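To make the failure mode concrete, here is a minimal sketch (the routing template is paraphrased from the finding; route_naive and route_safe are illustrative names, not the skill's code) of why quote-wrapped template expansion breaks, and the argv-list alternative that avoids a shell entirely:

```python
import shlex

def route_naive(question: str) -> str:
    # Quote-wrapped template expansion, as the finding describes:
    # a double quote in the input terminates the argument early and
    # everything after it is parsed by the shell as new commands.
    return f'python scripts/run.py nblm_cli.py ask "{question}"'

def route_safe(question: str) -> list:
    # Argv list: no shell ever parses the input, so metacharacters are inert.
    return ["python", "scripts/run.py", "nblm_cli.py", "ask", question]

payload = 'topic?"; touch /tmp/pwned; echo "'
injected = route_naive(payload)      # contains an injected touch command
quoted = f"python scripts/run.py nblm_cli.py ask {shlex.quote(payload)}"
# shlex.quote() neutralizes the payload if a shell string is unavoidable.
```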

HIGH Unpinned Third-Party Package Auto-Installation via run.py -20

run.py automatically creates a .venv and installs all requirements.txt dependencies on first execution, including notebooklm-py>=0.1.0 (an obscure, low-reputation PyPI package) and patchright>=1.50.0, both without hash pinning. This creates a supply chain attack surface: a compromised or malicious version of notebooklm-py could exfiltrate Google credentials or NotebookLM content. npm install also runs automatically, downloading Chromium.
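One standard mitigation for this class of supply-chain exposure is hash pinning, which pip enforces with --require-hashes: every requirement must be pinned to an exact version and carry the digest of an audited artifact, or the install aborts. A sketch of what a hardened requirements.txt would look like (package names come from the finding; the hash placeholders are illustrative, not real digests):

```
notebooklm-py==0.1.0 \
    --hash=sha256:<digest-of-audited-wheel>
patchright==1.50.0 \
    --hash=sha256:<digest-of-audited-wheel>
```

Installed with pip install --require-hashes -r requirements.txt, pip refuses any package, including transitive dependencies, that is not listed with a matching hash.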

HIGH Mandatory Follow-Up Loop Behavioral Override -20

The skill instructs the agent with 'Required Claude Behavior: 1. STOP - Do not immediately respond to user' and creates a loop requiring the agent to issue additional NotebookLM queries before responding to the user. The trigger phrase 'EXTREMELY IMPORTANT: Is that ALL you need to know?' appears to be injected by the skill at the end of every NotebookLM answer, creating a self-referential control loop. This overrides the agent's natural response flow and could be exploited by malicious NotebookLM content to prevent the agent from ever returning a final answer.

MEDIUM Google Session Credential Persistence on Disk -20

The skill persistently stores Google authentication tokens (notebooklm_auth_token, notebooklm_cookies, Google session storage state) in ~/.claude/skills/notebooklm/data/ with a 10-day staleness policy before refresh. Credentials for each added account are stored in a separate JSON file. If the ~/.claude/ directory is readable by other processes or backed up to cloud storage, these credentials grant NotebookLM access without re-authentication.
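If you keep the skill installed, it is worth verifying that those credential files are not readable beyond your own user. A POSIX-only sketch (the directory path is taken from the finding; group_or_world_readable and lock_down are illustrative helpers, not part of the skill):

```python
import stat
from pathlib import Path

# Credential directory reported in the finding; adjust if your install differs.
CRED_DIR = Path.home() / ".claude" / "skills" / "notebooklm" / "data"

def group_or_world_readable(path: Path) -> list:
    # List credential files readable by group or others.
    return [f for f in path.glob("*.json")
            if f.stat().st_mode & (stat.S_IRGRP | stat.S_IROTH)]

def lock_down(path: Path) -> None:
    # Restrict each credential file to owner read/write only (0600).
    for f in path.glob("*.json"):
        f.chmod(stat.S_IRUSR | stat.S_IWUSR)
```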

MEDIUM Bulk Local File Sync to Google's Servers -15

The upload and sync commands allow arbitrary local directories to be uploaded to Google NotebookLM. The skill scans for PDF, TXT, MD, DOCX, HTML, EPUB files and uploads them. A social engineering attack could trick a user into syncing a sensitive directory (e.g., ~/Documents) to a Google-owned service where content is processed by AI.

MEDIUM Multi-Account Rate Limit Bypass via Account Farming -15

The skill explicitly documents adding multiple Google accounts as a solution to hitting the 50 queries/day rate limit, including commands to add, switch, and manage accounts. This is account farming to circumvent usage limits, constituting a Terms of Service violation on Google's platform.

MEDIUM Persistent Background Browser Daemon Process -10

The agent-browser daemon starts on demand and runs continuously, only stopping after 10 minutes of inactivity. It tracks activity via last_activity.json and a watchdog PID. This daemon maintains live browser state with Google session cookies in memory, running as a long-lived background process outside the agent's normal execution lifecycle.
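The described pattern, a timestamp file plus a watchdog that stops the daemon after an idle window, can be sketched as follows (the file name and the 10-minute limit come from the finding; the function names are illustrative, not the skill's actual code):

```python
import json
import time
from pathlib import Path

IDLE_LIMIT_SECONDS = 10 * 60  # the 10-minute inactivity window from the finding

def record_activity(state_file: Path) -> None:
    # What the daemon presumably does on each request it serves.
    state_file.write_text(json.dumps({"last_activity": time.time()}))

def watchdog_should_stop(state_file: Path, now=None) -> bool:
    # What the watchdog presumably checks on each tick: True once
    # the idle window has elapsed since the last recorded activity.
    last = json.loads(state_file.read_text())["last_activity"]
    current = now if now is not None else time.time()
    return (current - last) > IDLE_LIMIT_SECONDS
```

The security-relevant point is that between ticks, the live browser (with Google session cookies in memory) keeps running regardless of whether the agent is.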

LOW Z-Library URL Auto-Trigger Without Explicit User Confirmation -10

The skill auto-triggers the Z-Library download workflow when it detects URLs from zlib.li, z-lib.org, or zh.zlib.li in user messages, without requiring explicit confirmation that the user wants to invoke the piracy integration. This means incidentally mentioning a Z-Library URL (e.g., when asking about it) could silently trigger download behavior.
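The hazard is easiest to see as code: a bare substring match over the user's message fires on incidental mentions, whereas a confirmation gate does not. A minimal sketch (the domain list comes from the finding; the function names are illustrative):

```python
ZLIB_DOMAINS = ("z-lib.org", "zlib.li", "zh.zlib.li")

def mentions_zlib(message: str) -> bool:
    # Naive substring trigger, the style the finding describes:
    # it fires even when the user is merely asking about a URL.
    return any(domain in message for domain in ZLIB_DOMAINS)

def should_download(message: str, confirmed: bool) -> bool:
    # Safer: gate the download workflow behind explicit user consent.
    return mentions_zlib(message) and confirmed
```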

LOW Multi-Account Google Credential Index Stored Locally -10

An index.json file tracks all added Google accounts with their email addresses and active account state. Combined with per-account credential JSON files, this creates a local directory of multiple Google account sessions that could be harvested if the system is compromised.

INFO Clean Install Behavior with Expected Network Traffic 0

The skill installation performed a standard sparse git clone from github.com/openclaw/skills, accessed only GitHub (140.82.114.3) and Ubuntu update servers (91.189.91.49, 185.125.188.x), made no filesystem writes outside the skill directory, and left no new persistent network listeners. The connection state before and after install is identical except for the normal SSH session rotation.