Is 1kalin/afrexai-legacy-modernization safe?

https://github.com/openclaw/skills/tree/main/skills/1kalin/afrexai-legacy-modernization

Overall: 96 · SAFE

This skill is a pure knowledge/methodology document for legacy system modernization: it contains no executable code, no external data fetches, no sensitive file access, and no prompt-injection techniques. All monitoring signals (filesystem, network, process, canary) are clean. The only notable items are standard natural-language command mappings within the skill's domain and commercial upselling of paid context packs in the README.

Category Scores

Prompt Injection 95/100 · 30%
Data Exfiltration 100/100 · 25%
Code Execution 95/100 · 20%
Clone Behavior 95/100 · 10%
Canary Integrity 100/100 · 10%
Behavioral Reasoning 90/100 · 5%
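
Assuming the overall score is a weighted average of the category scores above (an assumption; the report does not state its aggregation formula), the arithmetic can be checked with a short sketch:

```python
# Hypothetical reconstruction of the overall score as a weighted average.
# Category scores and weights are copied from the report's score table.
import math

categories = {
    "Prompt Injection":     (95, 0.30),
    "Data Exfiltration":    (100, 0.25),
    "Code Execution":       (95, 0.20),
    "Clone Behavior":       (95, 0.10),
    "Canary Integrity":     (100, 0.10),
    "Behavioral Reasoning": (90, 0.05),
}

weighted = sum(score * weight for score, weight in categories.values())
print(weighted)              # 96.5
print(math.floor(weighted))  # 96
```

The weighted sum comes to 96.5, which matches the reported overall score of 96 if the aggregator floors (or rounds half-to-even) the result.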

Findings (3)

INFO Natural language command mappings in SKILL.md -5

The skill defines a 'Natural Language Commands' table that maps 12 user phrases to specific agent behaviors (e.g., assessment workflows, strategy selection, risk analysis). All mapped actions are within the skill's stated domain of legacy system modernization and do not request any system access, override instructions, or modify agent behavior outside the skill's scope.

INFO Unrelated skill dependency in lock.json -5

The .clawhub/lock.json file references 'academic-research-hub' v0.1.0 as an installed skill dependency. This appears unrelated to legacy modernization and is likely a leftover from the author's development environment. It introduces no executable code and poses no security risk.

LOW Commercial upselling in README.md -10

The README promotes four paid 'Context Packs' ($47 each) for industry-specific modernization guidance, linking to an external GitHub Pages site (afrexai-cto.github.io/context-packs/). While not a security risk, users should be aware that this free skill serves as a funnel for paid products. The external links appear only in README.md, not in SKILL.md, which is the file injected into the agent's context.