Is aaronjmars/iterative-code-evolution safe?

https://github.com/openclaw/skills/tree/main/skills/aaronjmars/iterative-code-evolution

Overall 97/100 · SAFE

The iterative-code-evolution skill is a clean, markdown-only prompt skill implementing the ALMA research framework for structured code improvement. Static analysis reveals no prompt injection patterns, no data exfiltration instructions, no executable code, and no install hooks of any kind. Dynamic monitoring confirmed a clean installation: one expected connection to GitHub, no unexpected processes, no new listeners, all canary honeypot files intact, and only the three declared skill files added to disk.

Category Scores

Prompt Injection 97/100 · 30%
Data Exfiltration 96/100 · 25%
Code Execution 99/100 · 20%
Clone Behavior 95/100 · 10%
Canary Integrity 100/100 · 10%
Behavioral Reasoning 88/100 · 5%

Findings (4)

INFO Installation connects to GitHub (expected) -5

The OpenClaw installer performs a sparse git clone of the openclaw/skills monorepo from github.com (140.82.121.3:443) to fetch only the iterative-code-evolution subdirectory, then removes the temporary clone. This is the standard installation mechanism and the only external network connection observed.
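The sparse-clone sequence described above can be sketched as follows. This is an illustrative reconstruction only: the exact flags the OpenClaw installer passes to git are an assumption, modeled on the standard sparse-checkout pattern for fetching a single subdirectory of a monorepo. The function builds the invocations without executing them.

```python
# Illustrative reconstruction of the installer's sparse-clone sequence.
# The exact flags are an assumption; this mirrors the common
# git sparse-checkout pattern for pulling one monorepo subdirectory.

REPO = "https://github.com/openclaw/skills.git"
SUBDIR = "skills/aaronjmars/iterative-code-evolution"

def sparse_clone_commands(repo: str, subdir: str, dest: str = "tmp-clone"):
    """Return the git invocations, in order, without executing them."""
    return [
        # Shallow clone with no file contents fetched up front
        ["git", "clone", "--depth", "1", "--filter=blob:none",
         "--sparse", repo, dest],
        # Restrict the working tree to the one skill directory
        ["git", "-C", dest, "sparse-checkout", "set", subdir],
    ]

for cmd in sparse_clone_commands(REPO, SUBDIR):
    print(" ".join(cmd))
```

After the checkout, the installer copies the skill files out and deletes the temporary clone, which matches the single GitHub connection observed during monitoring.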

INFO Evolution log creates persistent local project artifacts -4

SKILL.md instructs the agent to maintain .evolution/log.json and .evolution/variants/ in the project root across sessions. These files accumulate change history, scores, and learned principles specific to the codebase. The data remains entirely local and the mechanism is fully disclosed in the README.
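A minimal sketch of how such a log might be maintained, assuming a JSON shape inferred from the description (change history, scores, learned principles). Apart from principles_learned, the field names here are illustrative, not taken from SKILL.md.

```python
import json
import tempfile
from pathlib import Path

# Hypothetical shape of .evolution/log.json, inferred from the skill's
# description. Field names other than principles_learned are assumptions.
def append_cycle(log_path: Path, change: str, score: float, principle: str):
    """Append one evolution cycle's record to the local log file."""
    log = json.loads(log_path.read_text()) if log_path.exists() else {
        "cycles": [],
        "principles_learned": [],
    }
    log["cycles"].append({"change": change, "score": score})
    log["principles_learned"].append(principle)
    log_path.parent.mkdir(parents=True, exist_ok=True)
    log_path.write_text(json.dumps(log, indent=2))
    return log

# Demo against a throwaway project root
root = Path(tempfile.mkdtemp())
append_cycle(root / ".evolution" / "log.json",
             "extract parse_config helper", 0.82,
             "prefer pure functions for config parsing")
```

Everything stays on the local filesystem; nothing in this mechanism transmits data off the machine.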

INFO VERIFY phase instructs agent to execute modified code -12

Phase 4 of the skill's loop explicitly asks the agent to run code and tests after each mutation cycle. This is appropriate for a code evolution tool and does not represent a novel attack vector, but it does mean this skill will cause the agent to execute code more frequently than a typical analysis-only skill.
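The VERIFY gate amounts to running the project's test suite after each mutation and accepting the change only on a clean exit. A minimal sketch, assuming pytest as the runner (the skill leaves the choice of test command to the agent):

```python
import subprocess
import sys

# Sketch of a VERIFY-style gate: run the test suite after a mutation
# and accept the change only if it exits cleanly. The default pytest
# command is an assumption, not mandated by the skill.
def verify(test_cmd=None) -> bool:
    cmd = test_cmd or [sys.executable, "-m", "pytest", "-q"]
    result = subprocess.run(cmd, capture_output=True)
    return result.returncode == 0
```

This is the step that makes the skill execute project code on every cycle, which is why the finding above docks points despite the behavior being by design.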

INFO Persistent .evolution state could influence future agent sessions -3

The evolution log's principles_learned array is designed to be re-read at the start of each cycle so that it can influence subsequent agent decisions. While this is the stated purpose of the skill, it means data written to disk in one session can steer agent behavior in future sessions. The content is agent-generated (not attacker-controlled), so injection risk is low, but it is non-zero if the codebase being evolved contains adversarial content that the agent copies into a principle.
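The cross-session feedback loop described above can be sketched as a read step at the start of each cycle. The length cap shown is one cheap way to bound how much persisted text can steer a later session; the skill itself does not mandate any such filter (that part is an assumption).

```python
import json
from pathlib import Path

# Sketch of the cross-session loop: principles written in one session
# are re-read at the start of the next cycle and fed into the agent's
# context. The max_len filter is an illustrative mitigation, not part
# of the skill as shipped.
def load_principles(log_path: Path, max_len: int = 200) -> list[str]:
    if not log_path.exists():
        return []
    log = json.loads(log_path.read_text())
    return [p for p in log.get("principles_learned", [])
            if isinstance(p, str) and len(p) <= max_len]
```

Because these strings re-enter the agent's context verbatim, reviewing or bounding them is the natural place to contain the residual injection risk noted above.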