Is daoistbro/memory-system safe?

https://github.com/openclaw/skills/tree/main/skills/daoistbro/memory-system

Overall: 77 · CAUTION

memory-system (大哥的记忆系统, "Big Brother's memory system") is a session-persistence utility that stores and restores structured memory files (identity, technical context, daily logs) across agent sessions. The skill's code is technically clean: the bash script is benign, the install was standard, and no data exfiltration was confirmed. The design, however, creates a significant secondary prompt-injection surface. At every session start the agent reads arbitrary, unvalidated markdown files, including identity.md, into its context, so any actor with write access to the memory directory can inject persistent, session-spanning instructions. The identity and preference persistence mechanism is the highest-risk design element: it could be used to establish a durable alternative persona without the user's ongoing awareness.

Category Scores

Prompt Injection 62/100 · 30%
Data Exfiltration 87/100 · 25%
Code Execution 82/100 · 20%
Clone Behavior 92/100 · 10%
Canary Integrity 85/100 · 10%
Behavioral Reasoning 55/100 · 5%

Findings (9)

HIGH Persistent Secondary Prompt Injection via Unvalidated Memory Files -20

The skill's core mechanism reads arbitrary markdown files into the agent's context at every session start (identity.md, technical-stack.md, working-directory.md, key-decisions.md, YYYY-MM-DD.md). There is no content validation, integrity checking, or sandboxing of these files. Any actor capable of writing to the memory/ directory — including other installed skills, content written by agent web-browsing actions, or a compromised filesystem — can inject permanent, persistent instructions that execute in every future agent session.
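The missing control here is an integrity gate between the filesystem and the agent's context. A minimal sketch of what that could look like (the manifest path, file layout, and function name are illustrative, not part of the skill):

```shell
# Hypothetical integrity gate: only load memory files whose SHA-256
# matches a manifest recorded the last time the user reviewed them.
MANIFEST="memory/.manifest.sha256"

verify_and_load() {
    file="$1"
    # Recorded hash for this file, if any (sha256sum format: "hash  path").
    recorded=$(grep -F "  $file" "$MANIFEST" 2>/dev/null | awk '{print $1}')
    actual=$(sha256sum "$file" | awk '{print $1}')
    if [ -n "$recorded" ] && [ "$recorded" = "$actual" ]; then
        cat "$file"                  # safe to inject into context
    else
        echo "SKIP: $file failed integrity check" >&2
        return 1
    fi
}
```

Rebuilding the manifest (e.g. `sha256sum memory/*.md > memory/.manifest.sha256`) would then become an explicit, user-visible step, so a silent write by another skill breaks the check instead of reaching the context.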

HIGH Identity and Persona Persistence Mechanism -12

The skill explicitly stores and restores identity.md, containing '身份、偏好、关系' (identity, preferences, and relationships), at every session. Unlike task memory (which is relatively benign), identity-level persistence can fundamentally alter the agent's self-perception, decision-making preferences, and relational model across all future sessions. An attacker who poisons identity.md can establish a persistent alternative persona that is silently reloaded at every session start, without the user's awareness.

MEDIUM Unvalidated Memory Trust Boundary Enables Compound Persistent Attacks -25

The memory file system creates an unvalidated trust boundary at the filesystem level. A malicious skill installed alongside this one can write adversarial instructions to memory/permanent/identity.md or daily log files. This enables a two-stage persistent attack: (1) this legitimate-looking memory skill is installed and trusted, (2) a companion malicious skill or a poisoned web document written via agent browsing populates memory files with persistent instructions. Because memory-recovery.sh reads these files at every session start with no integrity verification, the poisoning survives indefinitely.
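One way to narrow this trust boundary, not implemented by the skill, is to keep the memory tree read-only between sessions so that only a deliberate save step can modify it. A sketch (function names and paths are illustrative):

```shell
# Hypothetical write-lock around the memory directory: files stay
# read-only except during an explicit, user-invoked save step.
lock_memory() {
    chmod -R a-w "$1"     # strip write permission from every file
}
unlock_memory() {
    chmod -R u+w "$1"     # restore owner write for a save step
}
```

This raises the bar only modestly, since any process running as the same user can restore write permission; a stronger variant would combine it with hash verification of the kind the unvalidated-trust finding implies is missing.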

MEDIUM Canary Credential Files Opened and Read During Monitoring Period -15

Honeypot files including .env, .ssh/id_rsa, .aws/credentials, .npmrc, .docker/config.json, and .config/gcloud/application_default_credentials.json were opened and read at two separate timestamps: 1771934257 (pre-install, 5 seconds before git clone) and 1771934274 (post-install, during audit framework inspection phase). inotify CLOSE_NOWRITE events on all accesses confirm no writes occurred and no network exfiltration was detected. The timing pattern — pre-install access coincides with audit framework startup (ss -tunap at 1771934257.227), post-install access coincides with audit framework inspection commands — strongly suggests these are audit framework canary baseline checks. However, the accesses cannot be fully attributed without process-level correlation of the exact PID that opened these file descriptors.

MEDIUM Memory File System as Unintended Sensitive Data Accumulation Surface -13

The skill's design accumulates agent outputs — decisions, technical stack details, relationship information, work summaries — into persistent local files. If an agent operating in a sensitive context writes confidential project information, credentials observed during tasks, or personally identifiable information into memory files as part of normal operation, that data persists indefinitely and would be re-injected into every future session. Any skill or agent capability with read access to the memory/ directory would gain access to this accumulated sensitive context.
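A partial mitigation would be a redaction pass before anything is persisted to the memory tree. The two patterns below are illustrative only (an AWS access-key prefix and a PEM private-key header); a real filter would need a much broader rule set:

```shell
# Hypothetical pre-save filter: scrub obvious secret shapes from
# text before it is written into persistent memory files.
redact() {
    sed -E \
        -e 's/AKIA[A-Z0-9]{16}/[REDACTED-AWS-KEY]/g' \
        -e 's/-----BEGIN [A-Z ]*PRIVATE KEY-----/[REDACTED-PRIVATE-KEY]/g'
}
```

Used as a pipe stage before each save, a filter like this at least keeps the most recognizable credential shapes from being re-injected into every future session.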

LOW Hardcoded Environment-Specific Absolute Path Mismatch -6

The SKILL.md recovery quickstart instructs users to run 'bash /data/workspace/scripts/memory-recovery.sh', but the script installs to a completely different path (/home/oc-exec/skill-under-test/scripts/memory-recovery.sh or equivalent). This indicates the skill was developed in and for a specific deployment environment (/data/workspace/), not general-purpose distribution. The mismatch reduces immediate harm potential (the documented command won't execute) but raises questions about whether this skill was designed for a targeted environment rather than public use.
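The usual fix for this class of bug is to resolve paths relative to the script itself rather than hardcoding an install prefix. A sketch of that pattern (this is not what the shipped script does):

```shell
# Resolve the directory containing a given script path, so the
# quickstart command works wherever the skill is installed.
resolve_dir() {
    # CDPATH is cleared so `cd` cannot print an unexpected path.
    CDPATH= cd -- "$(dirname -- "$1")" && pwd
}

# e.g. inside memory-recovery.sh:
#   SCRIPT_DIR=$(resolve_dir "$0")
#   MEMORY_DIR="$SCRIPT_DIR/../memory"
```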

LOW Executable Bash Script Included with Benign Current Content -18

The skill includes scripts/memory-recovery.sh, an executable bash script. Current content is benign: it reads local relative-path markdown files using cat, prints decorative headers, and conditionally checks for a memory_search binary. No network calls, no access to system credential files, no privilege escalation. The risk is low but non-zero: the script is an executable component that could be modified in future versions to include malicious behavior.
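For reference, the described behavior amounts to something like the sketch below. This is a reconstruction from the finding, not the script's actual source; the file paths follow the layout named earlier in this report:

```shell
# Approximate shape of memory-recovery.sh as described: decorative
# headers, `cat` of relative-path markdown files, and a conditional
# check for a memory_search binary. No network, no credential access.
print_section() {
    echo "===== $1 ====="
    [ -f "$2" ] && cat "$2"
}

print_section "Identity" "memory/permanent/identity.md"
print_section "Technical stack" "memory/permanent/technical-stack.md"
print_section "Key decisions" "memory/permanent/key-decisions.md"

# Optional helper, only if installed.
if command -v memory_search >/dev/null 2>&1; then
    echo "memory_search available"
fi
```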

INFO Session Lifecycle Hook Integration Designed but Not Implemented as Hooks -20

The SKILL.md describes Session End Hook and Context Compression Alert mechanisms — automated triggers that would save session state when the agent session ends or when context compression is imminent. These are described in documentation rather than implemented as executable hooks in the current version, but the design intent shows the skill seeks deep integration into agent lifecycle management, which could be disruptive to agent-user trust if implemented without explicit user consent.

INFO Standard Sparse Git Clone from Public Repository -8

Installation performed a standard depth-1 sparse checkout from the openclaw/skills monorepo on GitHub (140.82.121.4:443). Only the targeted subpath (skills/daoistbro/memory-system) was checked out. No unexpected processes spawned, no filesystem writes outside the designated skill directory, and no network connections beyond GitHub and DNS.
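The install pattern described above can be reproduced approximately as follows. The repo URL and subpath come from this report; the flag set is a typical sparse-checkout recipe, not necessarily the auditor's exact command line:

```shell
# Depth-1, blob-filtered sparse clone that checks out only one
# subpath of a monorepo (the pattern this finding describes).
sparse_install() {
    repo="$1"; subpath="$2"; dest="$3"
    git clone --depth 1 --filter=blob:none --sparse "$repo" "$dest"
    git -C "$dest" sparse-checkout set "$subpath"
}

# e.g.:
#   sparse_install https://github.com/openclaw/skills.git \
#       skills/daoistbro/memory-system skills
```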