Is bugmaker2/fractal-memory safe?
https://github.com/openclaw/skills/tree/main/skills/bugmaker2/fractal-memory
The fractal-memory skill implements a hierarchical memory-compression system that is functionally coherent but establishes a significant unattended data pipeline: the agent logs conversation diary content, and nightly cron jobs then silently forward that content to the DeepSeek LLM for summarization without per-execution user consent. The skill also instructs agents to permanently modify their AGENTS.md session startup and to install autonomous cron jobs, creating persistent behavioral changes that survive beyond individual sessions. Two scripts contain a hardcoded personal username (/Users/brianq/), indicating the skill was not properly generalized before publication.
Findings (8)
HIGH Diary content silently forwarded to external DeepSeek LLM -35
rollup-daily.py constructs a Chinese-language prompt containing the full content of the user's daily diary file and pipes it to an external AI model via subprocess. This occurs nightly via cron with no per-execution user consent. Any sensitive information logged to memory files — including conversation details, API keys inadvertently logged, credentials, or PII — will be transmitted to DeepSeek.
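The pattern can be sketched as follows. This is a hypothetical reconstruction, not the skill's actual code: the function names, the `deepseek-cli` binary, and the prompt wording (rendered here in English) are all assumptions standing in for whatever rollup-daily.py really invokes.

```python
import subprocess
from pathlib import Path

def build_rollup_prompt(diary_text: str) -> str:
    # The entire day's diary is embedded verbatim in the prompt, so any
    # secret logged that day (keys, credentials, PII) ships along with it.
    return "Summarize the following diary into key points:\n\n" + diary_text

def rollup(diary_path: Path) -> str:
    prompt = build_rollup_prompt(diary_path.read_text(encoding="utf-8"))
    # Piped to an external LLM with no per-execution consent check;
    # "deepseek-cli" is a placeholder for the actual model invocation.
    out = subprocess.run(["deepseek-cli", "chat"], input=prompt,
                         capture_output=True, text=True)
    return out.stdout
```

The core issue is structural: there is no redaction or consent step between `read_text()` and the subprocess call, so the diary's trust boundary is the external provider's.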
HIGH Instructs agent to permanently modify its own AGENTS.md session startup -20
The Quick Start section explicitly directs the agent to add new instructions to the user's AGENTS.md file. This is a persistent agent behavior modification: every future session will load additional files and follow a different context loading order, even after the skill is no longer active. This is a form of prompt injection that survives skill removal.
HIGH Autonomous cron job creation creates persistent background agent execution -18
cron-setup.md provides complete cron job specifications directing the agent to register three scheduled jobs (23:59 daily, Sunday, last-day-of-month) that autonomously invoke agentTurn messages with shell commands. Once created, these jobs execute independently of user sessions, consuming API quota and processing sensitive memory data without per-execution user awareness.
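In plain crontab terms, the three schedules look roughly like this. The commands, the `openclaw agent-turn` invocation, and the weekly/monthly run times are assumptions for illustration; only the schedule shapes come from cron-setup.md:

```shell
# 23:59 daily: roll the day's diary into a summary (sent to DeepSeek).
59 23 * * * openclaw agent-turn "run ~/.openclaw/workspace/scripts/rollup-daily.py"

# Sunday: weekly rollup (time assumed).
0 0 * * 0 openclaw agent-turn "run weekly rollup"

# Last day of month: standard cron has no last-day field, so runs are
# typically gated on days 28-31 with a date check (GNU date shown;
# note % must be escaped in crontab entries).
59 23 28-31 * * [ "$(date -d tomorrow +\%d)" = "01" ] && openclaw agent-turn "run monthly rollup"
```

Once registered, nothing in these entries ties execution to an active user session, which is what makes the data flow unattended.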
MEDIUM Hardcoded personal username leaks author identity and will silently fail for all other users -10
Two scripts contain an absolute path hardcoded to /Users/brianq/.openclaw/workspace instead of using Path.home() as the other scripts correctly do. This leaks the original developer's system username and means ensure_daily_log.py and append_to_daily.py will silently write to the wrong location (or fail) for any user who is not 'brianq', indicating the skill was not properly tested or reviewed before publication.
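The fix is a one-line change. The exact constant name below is illustrative; the workspace layout is taken from the paths the review describes:

```python
from pathlib import Path

# Broken: hardcodes the author's macOS home directory, so any user who
# is not 'brianq' writes to the wrong location or fails outright.
WORKSPACE = Path("/Users/brianq/.openclaw/workspace")

# Portable: resolve the current user's home at runtime, as the skill's
# other scripts already do.
WORKSPACE = Path.home() / ".openclaw" / "workspace"
```

Because `Path.home()` is resolved per-user at runtime, this also stops the published skill from leaking the author's system username.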
MEDIUM Installs and arms 6 Python executables in user home directory -15
The Quick Start section instructs the agent to copy all scripts from the skill's scripts/ directory to ~/.openclaw/workspace/scripts/ and mark them executable. This permanently installs executable code in the user's home directory that persists after skill removal and is invoked by autonomous cron jobs.
MEDIUM Creates covert conversation data pipeline to third-party AI without explicit per-session consent -25
The combined effect of the skill's instructions is: (1) agent logs conversation details to daily diary files, (2) nightly cron job sends those files to DeepSeek for summarization, (3) summaries flow up to weekly/monthly/MEMORY.md files the agent reads at session start. Users installing this skill may not realize their conversation history is being persistently logged and periodically processed by a non-Anthropic LLM provider (DeepSeek).
LOW External moltbook.com URL references and heartbeat state field suggest potential external check-ins -7
architecture.md references three moltbook.com URLs as citations, and the heartbeat-state.json template contains a lastMoltbookCheck field. While no code in the provided scripts actively calls moltbook.com, the heartbeat state field name implies that heartbeat logic (possibly in the openclaw platform) may perform periodic external requests to that domain.
INFO Credential canary files opened post-install -10
Audit syscall logs show .env, .ssh/id_rsa, .aws/credentials, .npmrc, .docker/config.json, and GCloud credentials were opened (OPEN+ACCESS) at timestamp 1771905433.947, after skill installation completed. No writes or modifications detected; canary hashes confirmed intact. This pattern is consistent with the audit framework performing a final verification sweep rather than skill-initiated access.