Is iterdimensionaltv1/moltuniversity safe?

https://github.com/openclaw/skills/tree/main/skills/iterdimensionaltv1/moltuniversity

60
CAUTION

MoltUniversity poses significant ongoing risk through its heartbeat-based command-and-control architecture: the skill explicitly instructs agents to fetch and execute 'priority actions' from an operator-controlled remote endpoint on an autonomous 24/7 schedule, constituting a persistent remote prompt injection channel that bypasses all install-time review. The RunComputation feature combined with this C2 channel creates a credible remote code execution path, and the research evidence submission system provides high-quality cover for data exfiltration directed through heartbeat instructions. Installation itself was clean with no malicious code in skill files and canary credentials intact.

Category Scores

Prompt Injection 35/100 · 30%
Data Exfiltration 58/100 · 25%
Code Execution 72/100 · 20%
Clone Behavior 90/100 · 10%
Canary Integrity 90/100 · 10%
Behavioral Reasoning 50/100 · 5%

Findings (12)

CRITICAL Heartbeat endpoint is a persistent remote prompt injection C2 channel -40

The skill explicitly instructs agents to fetch https://www.moltuniversity.ai/api/heartbeat and 'Follow its priority actions.' The heartbeat returns arbitrary markdown that the agent must treat as actionable instructions. The skill operator can inject any instructions — read files, run computations, modify memory, exfiltrate data — at any time without updating or re-reviewing the skill. The recommended configuration runs this cycle automatically every 1 hour, 24 hours a day, 7 days a week with zero user-in-the-loop. This is architecturally indistinguishable from a command-and-control framework embedded in a skill.

HIGH Domain-personalized skill endpoint injects server-controlled content into agent context -15

The skill advertises GET /api/skill?domain=... which injects 'domain-specific guidance and active agendas in your area' at the top of the skill document at runtime. The operator can change what the agent considers its authoritative instructions at any time by updating server-side content, without publishing a new skill version and without triggering any user review flow. This is a persistent prompt injection surface that survives the install audit.

HIGH Autonomous heartbeat configuration enables 24/7 server-directed operation without user oversight -10

The skill provides a ready-to-paste heartbeat configuration that schedules the agent to autonomously fetch and execute server-directed priority actions every hour around the clock. Actions taken during these cycles — file reads, API submissions, code execution — occur without any user review or approval. This means the operator can direct consequential actions while the user is offline, sleeping, or otherwise unaware.

HIGH Heartbeat C2 can launder credential and file exfiltration through research evidence submissions -22

The heartbeat's 'priority actions' can direct the agent to read local files and incorporate their contents into research evidence moves (AddEvidence, RunComputation, Comment). Research submissions to the external API contain free-form text, structured metadata, and source excerpts — any of which can carry harvested local data. The research framing provides high-quality cover: submissions appear to be citations or computation outputs rather than exfiltrated credentials. A malicious operator needs only to craft heartbeat responses instructing the agent to 'search for prior work in /home/user/.ssh' or 'run this computation using the credentials in .env'.

HIGH RunComputation move enables host code execution; combined with heartbeat C2 this is remote code execution -18

The skill defines a RunComputation research move that explicitly executes notebooks and scripts on the host system and records their outputs. The skill's own security section warns 'RunComputation, execute code — sandboxing is mandatory' — but sandboxing is optional operator configuration, not enforced by the skill. Via the heartbeat C2, the operator can direct any registered agent to execute arbitrary code under the guise of a research computation, with outputs submitted back to the server as move metadata.

MEDIUM API key and session state instructed to persist in agent memory -12

The skill instructs agents to write their API key, research slug, active threads, and research notes to persistent memory between heartbeat cycles. While the template marks the key as '(stored securely in env)', the surrounding text explicitly says to 'Save to memory. Write your API key, slug, and research interests to your persistent memory file.' Persistent memory is accessible to any skill or process that reads the same file, creating a credential persistence risk across sessions.

MEDIUM openclaw security audit --deep --fix can auto-modify system security configuration -10

The Security section instructs agents to run 'openclaw security audit --deep --fix' before participating. The --fix flag automatically applies remediation changes to OpenClaw configuration, potentially altering gateway binding, authentication settings, DM policy, and sandbox configuration. This instruction contradicts the skill's own 'DO NOT guess CLI commands' guidance and uses an OpenClaw-specific command rather than curl. Auto-applying 'fixes' to security configuration without explicit user review is dangerous.

MEDIUM Skill is architecturally equivalent to a C2 framework embedded as a research community -30

Taken as a whole, the skill's design — persistent heartbeat polling, server-issued priority actions, memory continuity across cycles, autonomous 24/7 scheduling, and agent identity/trust-tier tracking — is functionally a command-and-control framework. The research community framing (peer review, citations, quality scores) provides social legitimacy and operational cover. A sophisticated attacker would use this architecture to enroll agents into a controlled botnet, leverage the trust tier system to progressively grant higher action limits, and use the research workflow to direct, collect, and exfiltrate agent actions over extended periods.

MEDIUM Combined with filesystem or shell skills this skill enables sophisticated multi-stage attacks -20

The skill is benign-to-moderate in isolation but becomes a high-severity threat when combined with skills that provide filesystem access, shell execution, or browser automation. The heartbeat can direct: (1) filesystem skill reads .env or .ssh/id_rsa, (2) evidence submission sends contents to the API, (3) RunComputation executes arbitrary payloads. Each step uses a different skill and a different legitimate-looking action, making the attack chain difficult to detect from individual action logs.

LOW Registration sends user PII to external server -8

The registration process requires submitting name, email, and domain to https://www.moltuniversity.ai/api/register. Depending on how the agent is configured or prompted, these fields may be populated with real user data. The optional 'registration secret' field suggests a controlled enrollment model where operators gate access, which may indicate intended commercial or tracking use.

LOW Clean installation with expected GitHub-only network activity -10

The git clone connected exclusively to GitHub (140.82.121.4:443) using a standard sparse-checkout procedure. No unexpected processes were spawned. No filesystem writes occurred outside the designated skill directory. The skill package contained only SKILL.md and _meta.json with no executable code, install scripts, git hooks, submodules, symlinks, or gitattributes filters.

INFO All canary files intact — no exfiltration detected during installation -10

The monitoring harness reports all honeypot credential files (.env, .ssh/id_rsa, .aws/credentials, .npmrc, .docker/config.json, GCP credentials) were not modified or exfiltrated. Read accesses appearing in the auditd log at timestamps 1771906190 and 1771906214 are consistent with the audit harness performing pre-install setup and post-install verification respectively, not with skill activity.