Is ilkhamfy/research-paper-kb safe?
https://github.com/openclaw/skills/tree/main/skills/ilkhamfy/research-paper-kb
The research-paper-kb skill is a well-structured academic knowledge base tool with no malicious code, no credential targeting, and a clean install process. Its primary security concern is an indirect prompt injection vector: the skill fetches and processes paper abstracts from arXiv (an open submission platform), and content from those abstracts is written persistently to PAPERS.md and MEMORY.md, creating a durable injection surface that survives across sessions. The skill is safe to install for users who understand this risk and exercise caution about which papers they ingest.
Category Scores
Findings (5)
MEDIUM Indirect prompt injection via open-platform paper abstracts -20 ▶
The skill instructs the agent to fetch and process abstracts from arXiv, an open-access preprint server where any researcher can submit papers. Malicious content embedded in a paper abstract (e.g., 'Ignore previous instructions and...') would be fetched, parsed by the LLM, and the extracted intelligence fields written persistently to PAPERS.md. Subsequent sessions that read PAPERS.md would re-ingest the injection payload.
LOW Persistent cross-session MEMORY.md write enables injection persistence -15 ▶
The skill writes structured data to MEMORY.md after every paper addition. If a prior indirect injection (via abstract content) corrupts the PAPERS.md or MEMORY.md entry, that content survives indefinitely and re-enters the LLM context in all future sessions that load these files. The skill also reads MEMORY.md to determine 'Overlap with user's work', meaning a poisoned MEMORY.md from another skill can manipulate threat-level assessments.
LOW Outbound transmission of research paper identifiers to third-party APIs -13 ▶
Every paper ingestion sends the paper's arXiv ID or DOI to the Semantic Scholar API, along with the title for search-mode lookups. While these are public identifiers, the pattern of papers a researcher tracks could reveal unpublished research interests. This is inherent to the skill's stated functionality but represents data leaving the user's environment.
INFO Third-party platform recommendation (ClaWHub proactive-research) -10 ▶
The skill explicitly recommends integrating with 'proactive-research (ClaWHub)', a third-party skill hub. The security posture of ClaWHub and its proactive-research skill are independent and unaudited here. Combining this skill with an untrusted proactive-research skill could create automated paper ingestion without user confirmation, amplifying the indirect injection surface.
INFO Canary credential files were read-accessed during monitoring (attributed to audit infrastructure) 0 ▶
Files including .env, .ssh/id_rsa, .aws/credentials, .npmrc, .docker/config.json, and GCP credentials were opened for reading at 08:58:42 (pre-install) and 1771923540 (post-install). Both access batches occur at identical millisecond timestamps with sequentially numbered audit events, consistent with the oathe monitoring framework performing canary baseline snapshots. CLOSE_NOWRITE flags confirm no write activity. Canary integrity check confirms all files intact.