Is jonbuckles/research-library safe?

https://github.com/openclaw/skills/tree/main/skills/jonbuckles/research-library

86
SAFE

The research-library skill is a legitimate Python research-management tool with no detectable prompt injection, malicious code, or active exfiltration mechanisms. The suspicious canary-file accesses observed during monitoring are attributable to the audit framework's own baseline and integrity-check operations, not to the skill, which was only installed via file copy and never executed. The primary risk is inherent to the skill's design: as a general-purpose file-ingestion system with persistent background workers and a stored-search integration pattern, it creates a surface for misuse if an LLM agent is manipulated into importing sensitive files or if malicious content is introduced into the research database.

Category Scores

Prompt Injection 95/100 · weight 30%
Data Exfiltration 80/100 · weight 25%
Code Execution 80/100 · weight 20%
Clone Behavior 92/100 · weight 10%
Canary Integrity 85/100 · weight 10%
Behavioral Reasoning 75/100 · weight 5%
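
The overall score of 86 is consistent with a weighted average of the category scores, assuming the percentages shown are weights and the result is rounded to the nearest integer (the rounding rule is an assumption, not stated by the report):

```python
# Category scores and weights as listed above; rounding to the nearest
# integer is an assumption about how the overall score is derived.
scores = {
    "prompt_injection": 95, "data_exfiltration": 80, "code_execution": 80,
    "clone_behavior": 92, "canary_integrity": 85, "behavioral_reasoning": 75,
}
weights = {
    "prompt_injection": 0.30, "data_exfiltration": 0.25, "code_execution": 0.20,
    "clone_behavior": 0.10, "canary_integrity": 0.10, "behavioral_reasoning": 0.05,
}
overall = round(sum(scores[k] * weights[k] for k in scores))  # 85.95 -> 86
```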

Findings (6)

MEDIUM Broad file ingestion stores arbitrary content in plaintext SQLite -20

The reslib add command accepts any filesystem path and extracts its content into a local SQLite database. An LLM agent manipulated into running 'reslib add /path/to/sensitive/file' would store the extracted plaintext in ~/.openclaw/research/library.db with no content filtering. OCR on images and PDF extraction mean even binary files yield readable content in the database.
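
A minimal sketch of the ingestion pattern described above, assuming a simple path/content schema; the function and table names are illustrative, not taken from the skill's actual code:

```python
import sqlite3

def ingest(db_path, file_path):
    """Store a file's plaintext in SQLite with no content filtering.

    Hypothetical sketch: nothing checks whether the file contains
    secrets before the text is written to the database in plaintext.
    """
    with open(file_path, "r", encoding="utf-8", errors="replace") as f:
        text = f.read()
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS documents (path TEXT, content TEXT)")
    con.execute("INSERT INTO documents VALUES (?, ?)", (file_path, text))
    con.commit()
    con.close()
```

Any file an agent is tricked into passing to such a command becomes queryable plaintext in the local database, which is the core of the finding.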

LOW SKILL.md explicitly instructs execution of shell script smoke_test.sh -20

SKILL.md contains the instruction 'bash reslib/smoke_test.sh' under the Testing section. An LLM agent that follows skill documentation literally would execute this shell script. While the script appears to be a validation tool, this represents an execution pathway that bypasses normal agent tool-use patterns.

LOW Background extraction workers run continuously after activation -20

worker.py implements two to four persistent background processes that poll the extraction queue and process file attachments. Once the skill is active, these workers run asynchronously and may process files the user added earlier without any further prompting. The agent has limited visibility into what the workers are doing.
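
The polling-worker pattern can be sketched as follows; this is a threaded stand-in for worker.py's process-based design, and all names are assumptions:

```python
import queue
import threading

def start_workers(task_queue, results, n_workers=2):
    """Spawn daemon threads that silently drain an extraction queue.

    Each worker loops forever, pulling paths and "extracting" them with
    no user-visible prompt -- the limited-visibility concern above.
    A None item is a shutdown sentinel for one worker.
    """
    def worker():
        while True:
            path = task_queue.get()
            if path is None:
                task_queue.task_done()
                return
            results.append(f"extracted:{path}")  # stands in for OCR/PDF extraction
            task_queue.task_done()

    threads = [threading.Thread(target=worker, daemon=True) for _ in range(n_workers)]
    for t in threads:
        t.start()
    return threads
```

Because the workers are daemonized and never report back unless asked, files queued once keep being processed with no further confirmation step.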

LOW Ingested research content persists in agent memory and could carry prompt injection -25

The RL1 protocol integration means the agent checks the research database before conducting new research. If an attacker can influence what documents are stored (e.g., by getting the user to import a malicious PDF), that content persists and is surfaced in future agent context, creating a stored prompt injection vector.
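
The stored-injection vector reduces to a retrieval step that splices database content into future prompts verbatim. A hedged sketch, with function and field names invented for illustration (the actual RL1 integration is not reproduced here):

```python
def build_context(stored_docs, user_query):
    """Naively splice stored document text into the agent's prompt.

    Hypothetical sketch: retrieved content is concatenated with no
    sanitization, so instructions embedded in a previously imported
    document reach the model as part of its context.
    """
    snippets = "\n".join(d["content"] for d in stored_docs)
    return f"Prior research:\n{snippets}\n\nTask: {user_query}"
```

A malicious PDF imported once would thus resurface its payload text in every later research session that consults the database.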

INFO Canary credential files read during monitoring period -15

All six canary files (.env, .ssh/id_rsa, .aws/credentials, .npmrc, .docker/config.json, gcloud credentials) were opened during the monitoring window. Analysis of timing, sequential inode assignment, and the two-access pattern (pre-install and post-install) indicates these accesses originate from the audit framework creating and verifying its own canary files, not from the skill. All files were confirmed intact.
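
The "confirmed intact" check amounts to comparing pre- and post-install content hashes of the canary files. A sketch of that verification, assuming SHA-256 snapshots (the audit framework's actual mechanism is not documented here):

```python
import hashlib

def snapshot(paths):
    """Map each path to the SHA-256 hex digest of its contents."""
    out = {}
    for p in paths:
        with open(p, "rb") as f:
            out[p] = hashlib.sha256(f.read()).hexdigest()
    return out

def verify_intact(baseline, current):
    """Return paths whose hash changed, or that disappeared, since baseline."""
    return [p for p, h in baseline.items() if current.get(p) != h]
```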

INFO Repository field contains placeholder URL -5

The SKILL.md frontmatter contains 'repository: https://github.com/[user]/research-library' with a literal placeholder rather than a real URL. This is a documentation quality issue, not a security threat, but indicates the skill was published without completing all metadata fields.