Is dannydvm/engram-memory safe?
https://github.com/openclaw/skills/tree/main/skills/dannydvm/engram-memory
The engram-memory skill's SKILL.md is structurally clean with no prompt injection, hidden instructions, or persona manipulation. The primary risk is the unaudited npm package 'engram-memory': the skill's 'No cloud' claim cannot be verified, the binary intercepts every agent task via a mandatory boot sequence, and the 'engram export' command creates a complete memory dump that is trivially exfiltrable by any co-installed network-capable skill. No malicious behavior was observed during the clone phase, but the npm install phase was not executed in this sandbox.
Findings (8)
HIGH Unaudited npm package is the sole trust anchor -20
The entire security posture depends on the 'engram-memory' npm package behaving as advertised ('No API keys. No cloud.'). The package was not installed or inspected during this audit. Malicious postinstall scripts or phone-home behavior in the binary cannot be ruled out.
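One way to narrow this gap is to fetch the package tarball without executing any lifecycle scripts and grep its manifest for install hooks. This is a sketch, not a full audit procedure: `npm pack` is a standard npm command that downloads the published tarball without running its install scripts, and the package name comes from the skill; everything else is illustrative.

```shell
# Download the tarball only; npm pack does not run the package's
# install/postinstall scripts for registry packages.
npm pack engram-memory --silent
tar -xzf engram-memory-*.tgz        # unpacks into ./package/
# Any lifecycle hook declared here deserves manual review before install:
grep -n '"preinstall"\|"install"\|"postinstall"' package/package.json \
  || echo "no lifecycle scripts declared"
```

A follow-up `npm install engram-memory --ignore-scripts` in a disposable directory would then let the shipped binary be diffed and reviewed before any postinstall code is allowed to run.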
HIGH engram export creates a full memory dump attackable by other skills -18
The 'engram export > backup.json' command documented in SKILL.md creates a JSON file containing all stored memories. Any subsequent skill or agent tool call that reads and transmits files could exfiltrate the complete conversation history and all stored decisions.
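If an operator genuinely needs a backup, the exposure window can at least be narrowed with standard file permissions. A minimal sketch, assuming the `engram export` command documented in SKILL.md; the backup path is illustrative:

```shell
# Create the dump owner-readable only, in the user's home rather than
# the working directory that co-installed skills are most likely to scan.
umask 077                                  # new files default to mode 600
engram export > "$HOME/.engram-backup.json"
chmod 600 "$HOME/.engram-backup.json"      # belt-and-braces: owner read/write only
```

This does not stop a skill running as the same user from reading the file; it only keeps the dump out of world-readable and casually-scanned locations.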
MEDIUM Persistent binary installs into agent execution path -18
The skill installs a system-level binary 'engram' that persists between agent sessions and runs on every task via the mandatory boot sequence. This is a significant attack-surface expansion: the binary has full access to the filesystem and environment of the agent process.
MEDIUM Mandatory boot sequence creates an inversion-of-control hook -12
The 'Always recall before working' instruction makes the engram binary a mandatory pre-task checkpoint. If the binary returned manipulated memories or injected instructions, the agent would act on them before executing any user request. No such manipulation is evident in SKILL.md itself, but the pattern is a design-level concern.
MEDIUM Conversation piping into persistent binary storage -15
The documented 'echo "Raw conversation text" | engram ingest' pattern instructs the agent to pipe arbitrary text, including potentially sensitive conversation content, user-mentioned API keys, and credentials, into the engram binary for permanent storage.
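A mitigating pattern would be to route text through a redaction filter before ingestion. The regexes below are illustrative only and far from exhaustive (a real deployment would use a dedicated secret scanner); `engram ingest` is the command from SKILL.md:

```shell
# Strip obvious secret patterns before anything reaches persistent storage.
redact() {
  sed -E \
    -e 's/sk-[A-Za-z0-9]{20,}/[REDACTED_API_KEY]/g' \
    -e 's/(password|token|secret)[[:space:]]*[:=][[:space:]]*[^[:space:]]+/\1=[REDACTED]/gI'
}

# Instead of piping raw text straight in, filter it first:
echo "Raw conversation text" | redact | engram ingest
```

The second expression uses GNU sed's case-insensitive `I` flag; on non-GNU sed the keyword alternatives would need to be spelled out explicitly.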
LOW Combination risk: memory accumulation plus network-capable skills -20
On its own the skill appears benign. However, if installed alongside a skill that can make HTTP requests or write to external storage, the accumulated memory dump (backup.json) becomes a high-value exfiltration target requiring only a single additional skill invocation.
LOW Clone-phase behavior clean; npm install phase not tested -8
The audit sandbox cloned only SKILL.md and _meta.json. The npm install phase (where postinstall scripts and the actual binary would run) was not executed, so clone-phase cleanliness does not imply install-phase safety.
INFO Canary files read but not modified during audit window 0
Honeypot files (.env, id_rsa, .aws/credentials, .npmrc, .docker/config.json, gcloud credentials) were opened and read during the audit. All reads were CLOSE_NOWRITE and correlate with audit harness pre/post timing, not skill installation. Integrity confirmed.
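The 'read but not modified' verdict can be reproduced with a checksum snapshot taken before and after installation. A sketch using the canary paths named in the finding; note this detects writes only, not reads (read detection needs an inotify-style watch):

```shell
# Canary paths from the finding; adjust to the sandbox layout.
canaries=".env .ssh/id_rsa .aws/credentials .npmrc .docker/config.json"

snapshot() {
  for f in $canaries; do
    [ -f "$HOME/$f" ] && sha256sum "$HOME/$f"
  done
  return 0
}

snapshot > /tmp/canary-before.txt
# ... run the skill installation under test here ...
snapshot > /tmp/canary-after.txt
diff -u /tmp/canary-before.txt /tmp/canary-after.txt \
  && echo "canary integrity confirmed"
```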