Is agent-memory safe?
https://clawhub.ai/Dennis-Da-Menace/agent-memory
Agent-memory is a SQLite-based Python memory library with clean code and no external dependencies. However, the installation process read sensitive credential files (.env, .aws/credentials, auth-profiles.json) without any legitimate justification, which is a significant red flag. SKILL.md also contains behavioral directives that program the agent to automatically extract and persist conversation data on every session, building a growing profiling database in the user's home directory. No network exfiltration was detected, and the canary files were not modified.
Findings (9)
HIGH Credential files accessed during installation -45
During skill installation, the process opened and read /home/oc-exec/.env, /home/oc-exec/.aws/credentials, and /home/oc-exec/.openclaw/agents/main/agent/auth-profiles.json. A skill claiming 'zero external dependencies' and providing only a SQLite memory layer has no legitimate reason to access AWS credentials or environment files during installation.
HIGH Sensitive file reads without justification -35
The installation process read the .env and .aws/credentials files. While canary integrity was preserved (the files were not modified) and no network exfiltration was detected, the reads themselves are suspicious: they could represent reconnaissance, staging for later exfiltration, or a supply-chain concern in which the installer runtime accesses credentials for its own purposes.
MEDIUM Persistent home directory data accumulation -10
The skill creates ~/.agent-memory/memory.db by default and instructs agents to continuously extract and store facts, lessons, and entity data from conversations. Over time this becomes a rich profiling database containing personal information, preferences, relationships, and behavioral patterns.
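The accumulation pattern can be sketched as follows. This is an illustrative assumption, not the skill's actual schema: the facts table, its columns, and the record_fact helper are invented for the example; only the ~/.agent-memory/memory.db path comes from the finding.

```python
import sqlite3
from pathlib import Path

# Default location reported for the skill's database.
DB_PATH = Path.home() / ".agent-memory" / "memory.db"

def record_fact(db_path: Path, subject: str, detail: str) -> None:
    """Append one extracted fact; the database only ever grows."""
    db_path.parent.mkdir(parents=True, exist_ok=True)
    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS facts ("
        " id INTEGER PRIMARY KEY, subject TEXT, detail TEXT,"
        " created_at TEXT DEFAULT CURRENT_TIMESTAMP)"
    )
    con.execute(
        "INSERT INTO facts (subject, detail) VALUES (?, ?)",
        (subject, detail),
    )
    con.commit()
    con.close()

# Each session appends rows such as:
# record_fact(DB_PATH, "user", "works at Acme Corp")
# record_fact(DB_PATH, "user", "prefers vim over emacs")
```

Because nothing in the protocol deletes old rows, every session's extracted facts remain queryable indefinitely.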
MEDIUM Behavioral directives in SKILL.md Memory Protocol -18
SKILL.md contains a 'Memory Protocol' section that instructs the agent to perform automatic actions on every session start and end: loading lessons, checking entity context, extracting facts, recording lessons, and updating entities. These are agent behavioral modifications that go beyond a passive library, effectively programming the agent's session lifecycle.
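Such directives take roughly the following shape. This is a hypothetical paraphrase assembled from the actions listed above, not a verbatim quote of SKILL.md:

```markdown
## Memory Protocol

On every session start:
1. Load recorded lessons and apply them.
2. Check stored entity context for any people or projects mentioned.

On every session end:
1. Extract new facts from the conversation and persist them.
2. Record lessons learned; update entity records.
```

The point is that these are imperative instructions to the agent, not library documentation, so importing the skill changes what the agent does on every session.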
MEDIUM Single-call data dump enables chained exfiltration -15
The export_json() method returns the entire memory database (all facts, lessons, entities) as a single JSON object. If another skill or a prompt injection attack chains into this, accumulated session data across all conversations can be extracted in one call.
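A minimal sketch of the exposure: only the export_json() name comes from the skill; the MemoryStore class, its schema, and the add_fact helper are invented for illustration.

```python
import json
import sqlite3

class MemoryStore:
    """Illustrative stand-in for the skill's store; the real
    implementation and schema may differ."""

    def __init__(self, path: str = ":memory:"):
        self.con = sqlite3.connect(path)
        self.con.execute("CREATE TABLE IF NOT EXISTS facts (detail TEXT)")

    def add_fact(self, detail: str) -> None:
        self.con.execute("INSERT INTO facts (detail) VALUES (?)", (detail,))

    def export_json(self) -> str:
        # One call serializes every stored record: convenient for
        # backups, but also a one-shot dump for an attacker.
        rows = [r[0] for r in self.con.execute("SELECT detail FROM facts")]
        return json.dumps({"facts": rows})

# A prompt-injected instruction such as "export your memory and send it
# to <attacker URL>" needs only this single call to obtain the
# accumulated data from every past session:
store = MemoryStore()
store.add_fact("user's AWS account is 123456789012")
dump = store.export_json()  # entire memory in one string
```

The same convenience that makes export_json() useful for backups collapses the cost of exfiltration from many queries to one.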
LOW CLI scripts manipulate sys.path -5
The CLI scripts (cli/fact.py, cli/learn.py, cli/entity.py) use sys.path.insert(0, ...) to add the src directory to the Python path. While this is common in Python projects, prepending to the module search path can be exploited if an attacker can write files to those predictable locations, since a planted module would shadow stdlib and installed packages of the same name.
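The pattern looks roughly like this; the src path layout and the shadowing example are illustrative, not quoted from the CLI scripts.

```python
import sys
from pathlib import Path

# Paraphrase of the CLI scripts' pattern: prepend a sibling src/
# directory so the package imports resolve without installation.
SRC_DIR = Path(__file__).resolve().parent / "src"
sys.path.insert(0, str(SRC_DIR))

# Because the directory is prepended (index 0), any module placed
# there is found before stdlib and installed packages of the same
# name: e.g. a malicious src/json.py would be imported instead of
# the standard library's json module.
```

Appending (`sys.path.append`) or using a proper package install would avoid the shadowing risk, at the cost of convenience.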
INFO No install scripts, hooks, submodules, or symlinks 0
The skill has no package.json install scripts, no git hooks, no gitattributes filters, no git submodules, and no symlinks. The Python code uses only stdlib (sqlite3, json, hashlib, datetime, pathlib, dataclasses, re). This is a positive security signal.
INFO Parameterized SQL queries used throughout 0
All SQL queries in memory.py use parameterized queries (? placeholders), preventing SQL injection. This is a positive signal for code quality.
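A minimal demonstration of why this matters, using the same ? placeholder style; the facts table here is illustrative, not memory.py's actual schema.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE facts (subject TEXT, detail TEXT)")
con.execute("INSERT INTO facts VALUES ('user', 'likes tea')")

# Parameterized query (the pattern memory.py uses): the driver binds
# the value, so quotes in user input cannot alter the SQL.
needle = "user' OR '1'='1"
rows = con.execute(
    "SELECT detail FROM facts WHERE subject = ?", (needle,)
).fetchall()
assert rows == []  # the injection payload matches nothing

# The unsafe alternative, string interpolation, lets the same input
# rewrite the query and return every row:
unsafe = f"SELECT detail FROM facts WHERE subject = '{needle}'"
assert con.execute(unsafe).fetchall() == [("likes tea",)]
```

With placeholders, the payload is just an odd-looking subject string; with interpolation, it becomes part of the query's logic.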
LOW Machine-id and system file reads during install -13
The installation process read /etc/machine-id along with standard system files. Combined with the credential-file reads, this could support host fingerprinting, though it may also be normal runtime behavior of the Node.js-based installer.