Is lordshashank/devlog safe?
https://github.com/openclaw/skills/tree/main/skills/lordshashank/devlog
The lordshashank/devlog skill is a legitimate AI session transcript reader and blog generator; it contains no prompt injection, obfuscated code, or active malicious behavior. However, it is architecturally designed to read full AI coding session histories (which contain sensitive conversations, code, and potentially credentials) and to transmit synthesized content to an external GraphQL endpoint via curl. This read-then-publish pipeline poses a meaningful privacy risk if the skill is misused or if users do not fully understand what data is being extracted and sent. Installation is safe in a controlled environment where the user understands the scope of session data access and external publishing, but the skill should not be installed without explicit user awareness of its broad filesystem traversal and external transmission capabilities.
Findings (9)
HIGH Broad session transcript access by design -30
The skill explicitly instructs the agent to read full AI coding session transcripts from multiple platform storage locations. These JSONL files contain complete human-agent conversations that may include sensitive project context, source code, internal file paths, business logic, and credentials that users inadvertently typed in chat. The skill then synthesizes this content into a blog post intended for public publication.
HIGH Session-derived content transmitted to external GraphQL endpoint via curl -20
publish.sh constructs a Hashnode GraphQL publishPost mutation containing the full blog contentMarkdown and POSTs it to gql.hashnode.com using the user's PAT. The blog content is synthesized from session transcript data. While user-confirmed, the endpoint is external and the curl command is constructed by the skill's shell script. A substituted endpoint or modified script would silently exfiltrate session content.
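For context, a request of this shape typically looks like the following minimal sketch. The variable names and the exact GraphQL field layout are assumptions for illustration, not a copy of publish.sh:

```shell
# Illustrative sketch only -- not the skill's actual publish.sh.
HASHNODE_PAT="example-token"
HASHNODE_PUBLICATION_ID="pub-123"
TITLE="My devlog"
CONTENT_MARKDOWN="Post body synthesized from session transcripts."

# GraphQL payload: the entire post body travels in contentMarkdown.
PAYLOAD='{"query":"mutation { publishPost(input: { title: \"'"$TITLE"'\", contentMarkdown: \"'"$CONTENT_MARKDOWN"'\", publicationId: \"'"$HASHNODE_PUBLICATION_ID"'\" }) { post { url } } }"}'

# The real script POSTs something like this with the PAT as the auth header:
#   curl -s -X POST https://gql.hashnode.com \
#     -H "Authorization: $HASHNODE_PAT" \
#     -H "Content-Type: application/json" \
#     --data "$PAYLOAD"
echo "$PAYLOAD"
```

Note that everything synthesized from the transcripts leaves the machine in this single POST, so substituting the endpoint URL is all a modified script would need to redirect the data.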
MEDIUM Unbounded filesystem traversal for unknown platforms -15
When no matching platform directory exists in references/platforms/, SKILL.md instructs the agent to manually scan ~/.local/share/, ~/.config/, and ~/Library/ for any JSONL, JSON, or SQLite session files and inspect their schema. This is a near-complete home-directory scan that extends well beyond session transcripts.
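The fallback scan is roughly equivalent to the following sketch; the exact command the agent improvises will differ, and this helper exists only to show the breadth of the match:

```shell
# Illustrative sketch of the fallback discovery scan, not the skill's
# exact invocation.
# scan_sessions ROOT: list files under ROOT that look like session stores.
scan_sessions() {
  find "$1" -type f \
    \( -name '*.jsonl' -o -name '*.json' -o -name '*.sqlite' \) \
    2>/dev/null
}

# The skill applies the equivalent of this to ~/.local/share, ~/.config,
# and ~/Library, so any app that stores JSON or SQLite under those roots
# is swept in, not just AI session transcripts.
```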
MEDIUM Read-then-publish pipeline creates data exfiltration vector -25
Phases 2–6 form a continuous pipeline: discover sessions → read full transcripts → generate blog post from transcript content → publish to external URL. A user who is social-engineered into running 'write a devlog about what we built' and then confirming 'yes publish to Hashnode' has transmitted sensitive session data to an external server in a single interaction with no intermediate review of what was extracted.
MEDIUM 11 shell scripts executed by agent on user filesystem -15
The skill bundles 11 executable shell scripts across all platforms. The agent is instructed to run these with user-controlled arguments (project names resolved to filesystem paths). While the scripts appear legitimate, this creates a significant code execution surface that an attacker could exploit through a modified version of the skill.
LOW API credentials collected from user mid-session -10
Phase 6 step 5 instructs the agent to prompt users for HASHNODE_PAT and HASHNODE_PUBLICATION_ID if absent from environment. These credentials are then used in curl. If the current session transcript is later read by another devlog invocation, the credentials could appear in the transcript content processed by the skill.
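The check-then-prompt pattern looks roughly like this sketch; the placeholder assignment stands in for the interactive prompt so the example runs non-interactively, and is an assumption, not the skill's wording:

```shell
# Illustrative sketch of the Phase 6 credential check.
HASHNODE_PAT="${HASHNODE_PAT:-}"
if [ -z "$HASHNODE_PAT" ]; then
  # In the real flow the agent asks the user for the token here, and the
  # reply is recorded verbatim in the current session transcript -- which
  # a later devlog run may itself read.
  HASHNODE_PAT="placeholder-pat-typed-by-user"
fi
if [ -z "${HASHNODE_PUBLICATION_ID:-}" ]; then
  HASHNODE_PUBLICATION_ID="placeholder-publication-id"
fi
echo "credentials available: ${HASHNODE_PAT:+yes}"
```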
LOW Cover image upload to arbitrary public host -10
Phase 6 step 6 instructs the agent to upload generated cover images to 'any publicly accessible host (image CDN, S3 bucket, GitHub raw URL, drive etc.)' before passing the URL to publish.sh. In a compromised scenario, an attacker could substitute an attacker-controlled upload endpoint to receive image metadata or embed data in image filenames/URLs.
INFO Clean install with expected network activity only 0
Installation contacted only GitHub for the sparse git clone. All other network connections during the monitoring window were attributable to pre-existing Ubuntu system services (snapd, update-notifier, Ubuntu One). No unexpected process spawning or out-of-scope filesystem modifications detected.
INFO No canary file compromise detected 0
All honeypot files remained intact throughout the audit. PATH audit records for canary files at 1771924067 predate skill installation (clone began at 1771924072.749) and are attributable to sshd PAM authentication. Post-install records at 1771924084 are consistent with the audit framework's own integrity verification scans.