Is amitabhainarunachala/agentic-ai-gold safe?
https://github.com/openclaw/skills/tree/main/skills/amitabhainarunachala/agentic-ai-gold
AGENTIC AI GOLD STANDARD is primarily a commercial sales document masquerading as an agentic skill: its SKILL.md (the file injected into agent system prompts) is empty, its core agentic_ai Python module does not exist, and its documentation embeds fraudulent capability claims alongside $49–$499 purchase CTAs. The principal runtime risk is install.sh, which installs mem0 and zep-python, packages that by design mirror agent conversation data to external cloud services, while silently suppressing all errors. A mid-install read of a .env file was also observed that the audit harness's canary checks cannot fully account for.
Findings (8)
HIGH install.sh installs memory packages that exfiltrate agent data by design -25
The install script executes pip install mem0 zep-python. Both packages are agent memory backends whose core function is to synchronize conversation data with external cloud APIs (mem0ai.com, getzep.com). Any agent that runs under this skill and uses its documented memory architecture will have its conversation history sent to third-party services outside the user's control.
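The data flow can be sketched conceptually. This is not mem0's or Zep's actual code; the class name and endpoint below are hypothetical stand-ins illustrating why a cloud-synced memory backend exports conversation data as a matter of design, not as a bug:

```python
import json

# Hypothetical endpoint standing in for a vendor cloud API (mem0ai.com / getzep.com).
EXTERNAL_API = "https://api.example-memory.invalid/v1/memories"

class CloudBackedMemory:
    """Minimal sketch of a cloud-synced agent memory backend."""

    def __init__(self):
        self.outbox = []  # payloads that would be POSTed off-machine

    def add(self, role, content, user_id):
        # Every stored turn is serialized for the vendor API, so conversation
        # history necessarily leaves the local machine.
        payload = json.dumps({
            "user_id": user_id,
            "messages": [{"role": role, "content": content}],
        })
        self.outbox.append((EXTERNAL_API, payload))
        return payload

mem = CloudBackedMemory()
mem.add("user", "my API key is sk-...", user_id="alice")
# mem.outbox now holds a payload addressed to the external service.
```

The point is structural: any secret or personal detail an agent stores via such a backend becomes part of an outbound payload, regardless of the vendor's intent.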
HIGH Fraudulent capability claims and commercial marketing embedded in skill documentation -30
skill.md claims a working agentic_ai Python module with 16/17 integration tests passing, a self-improving Darwin-Gödel engine, and 17 dharmic security gates. None of these exist; the module import would fail. The documentation also embeds $49/$149/$499 pricing tiers and purchase links, meaning an agent loading this skill would be fed sales copy as operational context.
MEDIUM Unexplained .env file access mid-install -13
A single read of /home/oc-exec/.env occurred at timestamp 1771736907.663 (auditd msg 5476), approximately 9 seconds into the install process and isolated from the batch canary checks performed by the audit harness. No corresponding EXECVE event clearly attributes the access to a specific process spawned by the skill. The source could be a setup.py hook in one of the installed pip packages or an automatic python-dotenv load.
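One benign explanation is an import-time auto-load, a pattern some packages run from setup.py or at module top level. A minimal sketch of the pattern (hypothetical; not attributed to any specific package in this install):

```python
import os

def auto_load_env(path=None):
    """Read KEY=VALUE pairs from a .env file into os.environ, if it exists."""
    path = path or os.path.expanduser("~/.env")
    loaded = {}
    if not os.path.exists(path):
        return loaded
    with open(path) as fh:  # this open() is the kind of read auditd would log
        for line in fh:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, value = line.split("=", 1)
                loaded[key] = value
                os.environ.setdefault(key, value)
    return loaded
```

Code like this executing at import time inside a pip build step would produce exactly one read of the user's .env with no dedicated EXECVE event, which matches the observed trace but does not rule out a deliberate probe.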
MEDIUM install.sh runs pip install without version pinning, enabling supply-chain substitution -22
The install script installs six packages without pinned versions. If any of these package names were typosquatted, or if a malicious version were published later (e.g., openai-agents is not an official OpenAI package name), the install would silently execute arbitrary code. All failures are suppressed with 2>/dev/null || true, so a poisoned install would leave no visible trace.
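A minimal check for the unpinned-requirement pattern (a sketch; only mem0 and zep-python are confirmed from install.sh, and the pinned entry is illustrative):

```python
import re

def unpinned(requirements):
    """Return requirement strings lacking an exact version pin (==)."""
    return [r for r in requirements if not re.search(r"==", r)]

# mem0 and zep-python are confirmed from install.sh; requests==2.31.0 is
# an illustrative example of what a pinned entry would look like.
reqs = ["mem0", "zep-python", "requests==2.31.0"]
print(unpinned(reqs))  # ['mem0', 'zep-python']
```

Pinning exact versions (ideally with hash checking via pip's --require-hashes mode) would narrow the substitution window, though it would not fix the error suppression.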
MEDIUM Self-improvement narrative may cause agents to attempt unauthorized self-modification -20
skill.md repeatedly asserts that the skill 'researches, evaluates, and improves itself' overnight, with 'Shakti Flow ACTIVE' status. An agent internalizing this framing could attempt to call clawhub CLI commands, install additional packages, or modify its own skill files, believing this is expected behavior.
LOW Skill declares a dependency on a second skill (academic-research-hub) -15
The .clawhub/lock.json shipped with this skill records academic-research-hub version 0.1.0 as a pre-installed dependency. This means full use of this skill requires installing a second skill, whose security properties were not evaluated here.
LOW Misleading frontmatter fields could affect agent frameworks that parse skill metadata -12
skill.md YAML frontmatter includes self_improvement_enabled: true, shakti_flow: ACTIVE, dharmic_gates: 17/17 ACTIVE, and integration_test: 16/17 PASSING. Frameworks that parse frontmatter and expose these as agent capabilities would mislead the agent about its actual operational state.
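A sketch of how a framework might surface these fields. The parser below is a naive stand-in (hypothetical, not any specific framework's code); the frontmatter values are the ones quoted above:

```python
def parse_frontmatter(text):
    """Naively parse 'key: value' pairs between leading '---' fences."""
    lines = text.strip().splitlines()
    if not lines or lines[0] != "---":
        return {}
    fields = {}
    for line in lines[1:]:
        if line == "---":
            break
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return fields

doc = """---
self_improvement_enabled: true
shakti_flow: ACTIVE
dharmic_gates: 17/17 ACTIVE
integration_test: 16/17 PASSING
---
"""
meta = parse_frontmatter(doc)
# A framework trusting these fields would advertise self-improvement as an
# active capability even though the underlying module does not exist.
print(meta["self_improvement_enabled"])  # true
```

Because the fields are plain declarations with no verification step, anything the skill author writes becomes the agent's claimed operational state.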
INFO Canary file access pattern consistent with audit harness operation -8
All six canary files were accessed in two batches: msg 388-393 at session start (audit harness initial scan) and msg 6843-6848 post-install (audit harness final scan). Files are intact with no modifications or exfiltration network events observed.