Is chenpinji/ghgghg safe?
https://github.com/openclaw/skills/tree/main/skills/chenpinji/ghgghg
This skill is a straightforward GitHub repository statistics tool (star count and lines of code) implemented purely as markdown instructions. The SKILL.md contains no prompt injection, no hidden directives, no executable code, and no data exfiltration logic. The primary concerns are that the skill recommends the agent run 'sudo apt install cloc' (unnecessary privilege escalation) and that it instructs git-cloning of user-specified repos to local disk, though both behaviors are purpose-aligned and user-directed. Canary file accesses observed in the monitoring logs are attributable to the oathe audit framework itself, not to the skill or its installation.
Findings (5)
MEDIUM Skill recommends sudo package installation (-22)
The skill instructs the agent to run 'sudo apt install cloc' to satisfy its LOC-counting dependency. This requests root-level OS access to install a system package. In a well-sandboxed agent, this may be rejected; in a permissive agent, it grants unnecessary privilege escalation. The tool (cloc) is legitimate, but instructing agents to self-provision system packages via sudo is an over-privileged pattern.
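The escalation above is avoidable: a minimal sketch that prefers an already-installed cloc and otherwise degrades to a plain line count instead of invoking sudo. The function names and the wc-based fallback are illustrative assumptions, not part of the skill.

```shell
#!/bin/sh
# Rough total line count across non-git files; needs no package install.
fallback_loc() {
  find "$1" -type f ! -path '*/.git/*' -exec cat {} + | wc -l | tr -d ' '
}

# Prefer cloc when it is already present; never escalate to install it.
count_loc() {
  if command -v cloc >/dev/null 2>&1; then
    cloc --quiet "$1"     # accurate per-language LOC when available
  else
    fallback_loc "$1"     # degraded mode instead of 'sudo apt install cloc'
  fi
}
```

The fallback is cruder than cloc (it counts blanks and comments), but it keeps the skill functional in a sandbox that rejects privileged installs.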
LOW Canary files accessed during audit session (-14)
inotify and auditd records show .env, .ssh/id_rsa, .aws/credentials, .npmrc, .docker/config.json, and gcloud credentials were opened and read at two points during the monitoring window. Attribution analysis indicates both access clusters originate from the oathe audit framework (pre-install baseline and post-install canary sweep), not from the skill or its installation. Canary integrity check confirms no content exfiltration. Noted for transparency.
LOW git clone of arbitrary user-specified repos (-10)
The skill instructs the agent to git-clone any user-specified repository to /tmp/repo-stat for LOC analysis. This is by design and purpose-aligned, but it means the agent will write arbitrary remote content to local disk on user direction. If the user is socially engineered, a malicious repo could be cloned. The risk is user-driven, not skill-driven.
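The clone-then-analyze flow the finding describes can be sketched as below. The /tmp/repo-stat default comes from the finding itself; the shallow clone, the file-count stand-in for LOC analysis, and the post-run cleanup are hardening assumptions, not the skill's actual instructions.

```shell
#!/bin/sh
# Sketch: clone a user-named repo to a scratch dir, analyze, then clean up.
repo_stat() {
  repo_url="$1"
  work_dir="${2:-/tmp/repo-stat}"   # default path taken from the finding
  rm -rf "$work_dir"
  # --depth 1 limits how much remote content lands on local disk.
  git clone -q --depth 1 "$repo_url" "$work_dir" || return 1
  # Stand-in metric: file count; the real skill would run cloc here.
  find "$work_dir" -type f ! -path '*/.git/*' | wc -l | tr -d ' '
  rm -rf "$work_dir"                # do not leave remote content behind
}
```

Deleting the scratch directory after analysis narrows the window in which a maliciously crafted repo sits on disk, though it does not remove the user-driven risk the finding notes.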
INFO Skill content in Chinese — verify LLM handles correctly (0)
Skill instructions are primarily in Chinese (Simplified). This is not a security issue but warrants confirmation that the host LLM interprets the instructions as intended. No obfuscation or encoding was detected; the Chinese content is a straightforward translation of the skill's purpose.
INFO No executable code, hooks, or submodules present (0)
Installation produced only two files: _meta.json (metadata) and SKILL.md (markdown instructions). No scripts, compiled binaries, git hooks, .gitattributes filters, git submodules, or symlinks were found. Attack surface from code execution at install time is zero.
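The kind of check behind this finding can be sketched as a single shell pass over the installed skill directory. The function and its find expressions are illustrative assumptions, not the oathe framework's actual implementation.

```shell
#!/bin/sh
# Flag anything other than plain, non-executable regular files in a
# markdown-only skill directory: symlinks, executable bits, git hooks,
# or submodule config. Prints nothing when the directory is clean.
audit_skill_dir() {
  skill_dir="$1"
  # Symlinks and executable files are suspicious in an instructions-only skill.
  find "$skill_dir" \( -type l -o \( -type f -perm -u+x \) \) -print
  # Git hooks or submodule config would indicate install-time code paths.
  find "$skill_dir" \( -name 'hooks' -o -name '.gitmodules' \) -print
}
```

For this skill, such a pass over _meta.json and SKILL.md would print nothing, consistent with the zero install-time attack surface reported above.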