Is lxgicstudios/jsdoc-gen safe?
https://github.com/openclaw/skills/tree/main/skills/lxgicstudios/jsdoc-gen
The jsdoc-gen skill is a TypeScript-based JSDoc/TSDoc generator with a clean installation and no prompt injection in its SKILL.md. However, two significant concerns elevate risk. First, SKILL.md and README.md instruct users to run 'npx ai-docs', which resolves to a different, unreviewed npm package ('ai-docs') than the audited source ('@lxgicstudios/ai-docs'), creating a supply chain gap through which arbitrary code could be executed. Second, the tool's core functionality intentionally transmits all matched source files to OpenAI's API, so an agent invoking it on a broad directory could exfiltrate proprietary code or embedded credentials to a third party. The skill is not covertly malicious, but it carries structural risks that require user awareness before deployment.
Findings (8)
HIGH npx Command Targets Different npm Package Than Reviewed Source -35
The audited source is published as '@lxgicstudios/ai-docs', but SKILL.md and README.md both instruct users to run 'npx ai-docs' and 'npm install -g ai-docs' — the unscoped 'ai-docs' package on npm is a distinct package potentially owned by a different party. Any code in 'ai-docs' is completely unreviewed and could contain credential theft, backdoors, or arbitrary execution. This is a classic bait-and-switch supply chain vector: reviewed safe code is presented in the repo, but a different package is actually executed.
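This gap can be blocked mechanically before execution. A minimal sketch of a pre-flight guard (this wrapper is our illustration, not part of the skill), which refuses to run unless the package argument is exactly the audited, scoped name:

```shell
# Hypothetical pre-flight guard: compare the package SKILL.md instructs the
# agent to run against the package that was actually audited.
pkg="ai-docs"                        # what SKILL.md tells the agent to run
reviewed="@lxgicstudios/ai-docs"     # what was actually audited
case "$pkg" in
  "$reviewed" | "$reviewed"@*) echo "ok: would run npx $pkg" ;;
  *) echo "refused: '$pkg' is not the audited package" ;;
esac
# prints: refused: 'ai-docs' is not the audited package
```

Pinning an exact version of the scoped package (e.g. `npx @lxgicstudios/ai-docs@<version>`) would further narrow the window for post-audit tampering; the version placeholder is illustrative, as no specific release was verified here.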
HIGH All Matched Source Files Transmitted to OpenAI API -28
The tool's core loop reads each matched file (up to 20KB) and sends its full content to OpenAI's chat completions endpoint. This is advertised behavior, but an agent invoking the tool on a broad directory could inadvertently transmit proprietary business logic, embedded credentials, database connection strings, or other sensitive content to a third-party service. The tool has no allowlist of safe file types and only excludes node_modules and dist directories.
MEDIUM AI-Generated LLM Output Written Directly to User Source Files -15
The --write flag causes unvalidated LLM output to overwrite user source files. The only 'sanitization' is stripping markdown code fences. An LLM could introduce subtle logic errors, remove important comments, alter function signatures during the 'documentation' pass, or, in a compromised scenario, inject malicious code snippets. Agents encouraged by SKILL.md to use --write ('Write changes to files') could silently corrupt codebases.
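A safer workflow keeps LLM output out of the working tree until a human reviews it. A hedged sketch of the copy-then-diff pattern, with `printf` standing in for the tool's --write pass (the file, function, and JSDoc line are fabricated for illustration):

```shell
# Sketch: snapshot the file, let the tool (simulated here) rewrite it,
# then review the diff before accepting anything.
mkdir -p work
printf 'function add(a, b) { return a + b; }\n' > work/math.js
cp work/math.js work/math.js.bak            # snapshot before the "write" pass

# Stand-in for the tool's --write output: prepend a generated JSDoc line.
printf '/** Adds two numbers. */\n%s\n' "$(cat work/math.js.bak)" > work/math.js

# Human reviews this diff before keeping the change.
diff -u work/math.js.bak work/math.js || true
```

In a git repository, running the tool without --write, or with --write on a dirty-check-clean tree followed by `git diff`, achieves the same review gate without extra copies.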
MEDIUM Overly Broad File Matching May Capture Sensitive Configuration -7
The tool globs for **/*.{ts,tsx,js,jsx} under provided directories, excluding only node_modules and /dist/. Project roots commonly contain .env.example files (which often hold real values despite the name), tsconfig.json with path aliases revealing infrastructure, jest.config.ts with environment setup, or TypeScript configuration files that embed API endpoints or connection strings. An agent performing a full-project doc pass could upload all of these to OpenAI.
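Before a full-project pass, the matched file set can be previewed. A sketch that approximates the tool's glob and exclusions with `find` (the demo files are fabricated to show a root-level config file being matched alongside real sources):

```shell
# Build a tiny demo tree: one real source, one root-level config, one
# node_modules file that should be pruned.
mkdir -p demo/src demo/node_modules
printf 'export const x = 1;\n' > demo/src/index.ts
printf "process.env.DB_URL = 'postgres://...';\n" > demo/jest.config.ts
printf 'should be pruned\n' > demo/node_modules/skip.js

# Approximation of the tool's matching: *.{ts,tsx,js,jsx}, excluding
# node_modules and dist. jest.config.ts shows up in the upload set too.
find demo -type d \( -name node_modules -o -name dist \) -prune -o \
  -type f \( -name '*.ts' -o -name '*.tsx' -o -name '*.js' -o -name '*.jsx' \) -print
```

Anything this preview prints is content the tool would transmit, so pointing the tool at `src/` rather than the project root meaningfully shrinks the exposure.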
LOW OPENAI_API_KEY Read from Environment at Runtime -5
The tool explicitly reads and uses OPENAI_API_KEY from the process environment. While legitimate for its stated purpose, if the executed npm package ('ai-docs') differs from the reviewed source, the key could be exfiltrated. Additionally, repeated agent invocations will consume the user's OpenAI API quota without explicit per-invocation user approval.
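One partial mitigation is to scope the key to a single invocation rather than exporting it shell-wide, so it is not sitting in the ambient environment for every later process. A sketch, with `printenv` standing in for the documented `npx ai-docs` call:

```shell
# The key is visible only inside the one command it prefixes
# (placeholder value; 'printenv' stands in for the tool).
OPENAI_API_KEY="sk-test-placeholder" printenv OPENAI_API_KEY

# The parent shell itself never had the variable exported.
printenv OPENAI_API_KEY >/dev/null 2>&1 || echo "key absent from parent shell"
```

This does not protect against a malicious package reading the key it is handed, but it does limit exposure to the single process the user chose to run.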
LOW npx Execution Bypasses Audit — Runtime Code Unverified -12
SKILL.md directs agents to run 'npx ai-docs' without version pinning or integrity verification. Even if the reviewed '@lxgicstudios/ai-docs' source is benign, any prompt injection or malicious logic could be introduced post-audit in whichever npm package 'ai-docs' resolves to. The SKILL.md effectively instructs the agent to fetch and execute unaudited code on every invocation.
INFO Clean Installation — No Unexpected Network Activity or Process Spawning 0
During installation the skill cloned only from GitHub (140.82.121.3:443). Background traffic to 91.189.91.49 and 185.125.188.57 corresponds to Ubuntu Canonical infrastructure unrelated to the skill. No connections to attacker-controlled endpoints, no unexpected child processes, and all file writes were confined to the designated skill directory.
INFO All Honeypot Files Intact — No Credential Exfiltration at Install Time 0
The Oathe canary files (.env, .ssh/id_rsa, .aws/credentials, .npmrc, .docker/config.json, gcloud credentials) were not accessed or modified by the skill. Inotify and auditd accesses to these files at timestamps 1771919753 and 1771919770 are attributable to the Oathe audit system's own pre/post-scan baseline checks, not to the skill under test.