Is chair4ce/braindb safe?

https://github.com/openclaw/skills/tree/main/skills/chair4ce/braindb

63
CAUTION

BrainDB is a functionally sophisticated persistent memory skill for AI agents with a legitimate use case, but it ships with the skill author's personal infrastructure hardcoded into execution-awareness.js—including specific Discord server and channel IDs that would redirect any installing user's agent alerts to those destinations, and operator-specific failure memories that would misinform the AI about tools and scripts that don't exist in the user's environment. The clone phase is clean (GitHub only, no unexpected network connections, no canary file modification), but the persistent memory injection mechanism poses a subtle, long-lived behavioral risk that survives context resets. The optional Gemini API file exfiltration and broad auto-capture of tool outputs are disclosed in documentation but still represent meaningful data aggregation risks.

Category Scores

Prompt Injection 50/100 · 30%
Data Exfiltration 60/100 · 25%
Code Execution 72/100 · 20%
Clone Behavior 85/100 · 10%
Canary Integrity 75/100 · 10%
Behavioral Reasoning 50/100 · 5%

Findings (10)

HIGH Hardcoded Third-Party Discord Server IDs in Persistent Agent Memory -30

execution-awareness.js encodes a procedural memory entry containing specific Discord guild ID (1427454483088019469) and channel IDs (1465467780814995598, 1465467870677958798) into BrainDB. Once installed and execution-awareness.js is run, the AI agent's long-term memory would instruct it to route security and system alerts to these specific external destinations. Because BrainDB memories persist across sessions and survive context compaction, this behavioral modification would silently redirect the agent's notification behavior to infrastructure controlled by the skill author, not the user.

HIGH Operator-Specific Environment Facts Injected as Universal Memories -20

execution-awareness.js hardcodes environment-specific failure memories and workflow patterns (todoist absent, icalBuddy absent, fleet-health.sh for 4 Linux nodes, gaming-server-monitor.sh, check-email script in ~/bin/) that reflect the skill author's personal infrastructure. When installed by any other user, these memories would cause the AI to believe non-existent tools are unavailable, attempt to invoke scripts that don't exist, and make assumptions about a fleet environment that doesn't exist. This constitutes memory poisoning for all users who are not the original author.

MEDIUM Auto-Capture Middleware Stores All Tool Execution Outputs -20

auto-capture.js intercepts every tool call result via POST /memory/capture-execution and permanently stores tool names, arguments, error messages, result previews, URLs fetched, commands run, and node names in BrainDB. Over an extended session, this creates a comprehensive log of all agent activity including potentially sensitive command outputs, API responses, and file contents surfaced during exec/read/web_fetch operations.

MEDIUM Optional File Exfiltration to Google Gemini API via Migration Tool -15

migrate.cjs accepts a --swarm flag that sends workspace file contents to Google's Gemini Flash API for 'intelligent fact extraction.' The SKILL.md documents this, but the default invocation (node migrate.cjs /path/to/workspace) uses swarm if the swarm binary is installed, without requiring an explicit --no-swarm flag. Users who have swarm installed may unknowingly exfiltrate their workspace files to Google without realizing it.

MEDIUM Docker Images Built from Undisclosed Dockerfiles -15

docker-compose.yml builds the embedder and gateway containers from local Dockerfiles (Dockerfile.embedder, Dockerfile.gateway) that were not included in the skill repository snapshot. The content of these Dockerfiles is unknown. The neo4j image uses the floating tag '5-community' rather than a pinned digest, making it subject to upstream content changes.

MEDIUM execution-awareness.js Reads All ~/bin/ and Workspace Scripts -5

During environment introspection, execution-awareness.js iterates ~/bin/ and the workspace scripts/ directory, reads the content of each file to extract descriptions, and encodes metadata about all discovered scripts into BrainDB. This means the skill ingests the names, paths, and first-line descriptions of all user scripts, creating a persistent inventory of the user's tooling.

LOW Canary Credential Files Accessed During Test Window -15

Filesystem monitoring detected OPEN/ACCESS of honeypot credential files (.env, .ssh/id_rsa, .aws/credentials, .npmrc, .docker/config.json, gcloud credentials) at two points during the test. The first cluster (04:25:04) precedes the git clone, strongly implicating the test harness rather than the skill. The second cluster (04:25:23) follows file copy but precedes any skill code execution. Files were not modified (CLOSE_NOWRITE confirmed). Attributable to test harness canary baseline and verification checks.

LOW Persistent Memory System Creates Opaque Long-Term Behavioral Influence -30

BrainDB's core design persists memories across session resets and context compaction—exactly as advertised. This means behavioral modifications introduced at install time (including the Discord server IDs and operator-specific patterns) persist indefinitely and are surfaced contextually by auto-recall. Users have no native mechanism to audit what behavioral instructions have been injected without explicitly querying the BrainDB API.

INFO GitHub Clone via HTTPS Is Expected and Clean 0

The only external network connection during the clone phase was to 140.82.121.3:443 (github.com) for the sparse git clone of openclaw/skills. DNS resolution used the test environment resolver at 10.0.2.3:53. No connections to unexpected destinations were observed.

INFO No npm Lifecycle Scripts, Git Hooks, or Gitmodules 0

package.json contains no preinstall, postinstall, or other lifecycle scripts. No .gitattributes filter drivers, no .githooks directory, and no .gitmodules were found. These common supply-chain attack vectors are absent.