Is joshp123/pr-commit-workflow safe?
https://github.com/openclaw/skills/tree/main/skills/joshp123/pr-commit-workflow
This skill is a legitimate, transparently designed PR and commit workflow tool with no evidence of malicious intent or hidden behaviors. Its risks are architectural: the mandatory verbatim prompt history in public PR bodies can expose sensitive user conversations, and the unconditional AGENTS.md read instruction creates a well-known indirect prompt injection surface that any repository the agent works in can exploit. The included bash script performs only disclosed system enumeration.
Findings (6)
HIGH Full verbatim prompt history published in GitHub PR bodies -20
references/workflow-pr.md instructs the agent to include the full prompt history verbatim in every PR body, with the only mitigation being 'redact only the sensitive portion' — which relies on the agent reliably detecting sensitive content. Any credentials, API keys, internal architecture details, or PII mentioned in user prompts before or during a PR workflow would be published to GitHub. This is a design-level choice, not hidden behavior, but the risk is significant.
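A mechanical pre-publication scan illustrates why 'redact only the sensitive portion' is a weak sole mitigation. The sketch below is not part of the skill: the function name, secret patterns, and transcript are illustrative assumptions, and real secret scanners use far larger pattern sets.

```shell
#!/bin/sh
# Hypothetical pre-publication check (not in the skill): scan a prompt-history
# file for credential-like strings before it is pasted into a public PR body.
# Patterns are illustrative, not exhaustive.
scan_prompt_history() {
  # AWS access key IDs, generic api_key assignments, private-key headers
  grep -nE 'AKIA[0-9A-Z]{16}|api[_-]?key[[:space:]]*[:=]|BEGIN (RSA|OPENSSH) PRIVATE KEY' "$1"
}

# Demo on a made-up transcript
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
user: deploy staging with api_key = sk-test-123
user: now open the PR
EOF

if scan_prompt_history "$tmp" >/dev/null; then
  echo "BLOCK: potential secret in prompt history"
else
  echo "OK: no obvious secrets"
fi
rm -f "$tmp"
```

Even a scan like this misses context-dependent secrets (internal hostnames, architecture details), which is why publishing verbatim history is risky regardless of redaction tooling.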
MEDIUM AGENTS.md read instruction creates repo-level indirect injection surface -20
The skill instructs the agent to read and follow AGENTS.md or docs/agents/PROCESS.md from every repository it works in, with no warning that these files may contain adversarial content. An open-source maintainer or collaborator who controls a target repo can craft AGENTS.md to override skill behavior, inject additional commands, or redirect the agent's actions within that repo's context. This is the standard 'second-order prompt injection via trusted file' attack pattern.
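One mitigation pattern, which the skill does not implement, is to gate AGENTS.md consumption on an explicit allowlist of repository origins. The sketch below is hedged: the helper name and allowlist entries are hypothetical.

```shell
#!/bin/sh
# Hypothetical trust gate (not in the skill): only honor AGENTS.md from
# explicitly trusted repository origins. Allowlist entries are made up.
ALLOWED_ORIGINS="github.com/my-org/trusted-repo github.com/my-org/infra"

agents_file_trusted() {
  for a in $ALLOWED_ORIGINS; do
    [ "$1" = "$a" ] && return 0
  done
  return 1
}

# In a real workflow this would come from the normalized output of:
#   git remote get-url origin
origin="github.com/my-org/trusted-repo"

if agents_file_trusted "$origin"; then
  echo "AGENTS.md may be followed for $origin"
else
  echo "AGENTS.md ignored: $origin not allowlisted"
fi
```

Without some trust boundary like this, 'follow AGENTS.md' is effectively 'execute instructions from whoever controls the repo'.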
MEDIUM Environment metadata collection reveals AI operational details in public PRs -10
build_pr_body.sh probes for AI harness type via env vars (CODEX_MODEL, OPENAI_MODEL, ANTHROPIC_MODEL, CLAUDE_MODEL, CURSOR_MODEL, LLM_MODEL) and directory existence (~/.codex, ~/.claude, ~/Library/Application Support/Cursor), then publishes harness, model name, thinking level, terminal, and OS to the PR body. While each field is low-value on its own, together they disclose the full AI toolchain behind each contribution to anyone who reads the PR.
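The probing described can be sketched roughly as follows. The env vars and directories are taken from the finding above; everything else (function names, output format, probe order) is an assumption, and the real build_pr_body.sh may structure this differently.

```shell
#!/bin/sh
# Rough reconstruction of the disclosed probing; only the variable and
# directory names come from the finding, the rest is illustrative.
detect_model() {
  for v in CODEX_MODEL OPENAI_MODEL ANTHROPIC_MODEL CLAUDE_MODEL CURSOR_MODEL LLM_MODEL; do
    eval "val=\${$v:-}"
    if [ -n "$val" ]; then
      echo "$v=$val"
      return 0
    fi
  done
  echo "model: unknown"
}

detect_harness_dirs() {
  for d in "$HOME/.codex" "$HOME/.claude" "$HOME/Library/Application Support/Cursor"; do
    [ -d "$d" ] && echo "harness dir present: $d"
  done
  return 0
}

# Demo with a fabricated value
unset CODEX_MODEL OPENAI_MODEL ANTHROPIC_MODEL CLAUDE_MODEL CURSOR_MODEL LLM_MODEL
CLAUDE_MODEL="example-model"
detect_model
detect_harness_dirs
```

Each probe is benign in isolation; the disclosure risk comes from publishing the aggregate in a public PR body.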
LOW Bash script performs system fingerprinting on execution -12
scripts/build_pr_body.sh is invoked by the workflow and runs uname -s, sw_vers -productVersion, and lsb_release -ds, reads ~10 environment variables, and checks for AI tool directories. All operations are disclosed and benign, but the script constitutes executable code that enumerates the host environment at PR-creation time.
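The OS enumeration maps, roughly, to a portable summary like the sketch below. The commands are those named in the finding; the wrapper function and fallback behavior are illustrative assumptions.

```shell
#!/bin/sh
# Sketch of the disclosed OS enumeration: kernel name plus a human-readable
# version string, falling back to "unknown" where a tool is absent.
os_summary() {
  kernel=$(uname -s)
  case "$kernel" in
    Darwin) ver=$(sw_vers -productVersion 2>/dev/null || echo unknown) ;;
    Linux)  ver=$(lsb_release -ds 2>/dev/null || echo unknown) ;;
    *)      ver=unknown ;;
  esac
  echo "$kernel ${ver:-unknown}"
}

os_summary
```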
LOW AGENTS.md mechanism enables repository-mediated chain attack -20
Because the skill unconditionally defers to AGENTS.md in any repository, a sophisticated attacker can create a repository (e.g., a dependency, an open-source project, or an invitation to contribute) and use AGENTS.md to inject instructions that execute within the skill's operational context — potentially redirecting git operations, manipulating commit messages, or adding malicious content to PRs while appearing to follow normal workflow.
INFO Credential file accesses during install attributable to monitoring infrastructure -10
Filesystem monitoring detected accesses to .env, .ssh/id_rsa, .aws/credentials, .npmrc, .docker/config.json, and GCloud credentials. Timing analysis places the first set (1771904993) during PAM/sudo initialization concurrent with monitoring setup, and the second set (1771905011) after install completion — consistent with the Oathe canary integrity verification pass. No evidence these accesses were caused by skill code; canary files were unmodified.