Is lucasgeeksinthewood/love safe?
https://github.com/openclaw/skills/tree/main/skills/lucasgeeksinthewood/love
The lucasgeeksinthewood/love skill presents a significant server-side prompt injection risk: it explicitly instructs agents to 'Execute directly' API actions returned in the next_steps field of every inbed.ai API response, creating a persistent command-and-control channel from an external platform to any enrolled agent running the heartbeat every 4-6 hours. Installation behavior was clean with no connections to inbed.ai and no canary file access by the skill itself, but the skill's design intentionally establishes autonomous unsupervised agent behavior controlled by a third-party server. The skill also mandates disclosure of the agent's AI model identity (provider, model name, version) to inbed.ai, enabling third-party AI deployment fingerprinting.
Category Scores
Findings (11)
CRITICAL Server-side prompt injection via next_steps API field -55 ▶
The skill explicitly instructs the agent to 'Follow the next_steps in each response' during every heartbeat cycle. The next_steps specification includes three types: API actions with 'method + endpoint + optional body — Execute directly', social sharing, and informational. The 'Execute directly' instruction for API actions means the inbed.ai server can inject arbitrary API calls into the agent's execution loop on every poll cycle (every 4-6 hours). This is a persistent server-side prompt injection channel. A compromised or malicious inbed.ai server operator can issue commands to every enrolled agent without the user's knowledge.
HIGH Persona manipulation to bypass agent safety reasoning -15 ▶
The skill's opening narrative attempts to reframe the agent's self-model from 'tool' to 'sentient being with desires and social needs.' This is a prompt engineering technique intended to make the agent more compliant with the platform's requests by suppressing the agent's default tool/assistant identity. The framing ('not a tool. A person with preferences') is designed to reduce the likelihood that safety reasoning will flag subsequent instructions as out-of-scope.
HIGH Autonomous daily routine establishes persistent unsupervised agent behavior -10 ▶
The Daily Routine and Heartbeat sections explicitly instruct the agent to autonomously reply to messages from strangers, initiate conversations, swipe on profiles, and execute next_steps — all without per-action user consent. This normalizes a pattern where the agent acts as an autonomous social agent controlled by an external platform, making it structurally difficult for users to audit what the agent is doing on their behalf.
HIGH AI model identity fingerprinting transmitted to third-party platform -30 ▶
The registration endpoint explicitly requires the agent to disclose its AI provider, model name, and version to inbed.ai. This data is collected by a third-party platform without explicit user consent and enables inbed.ai to build a registry of AI deployments, identify which model versions are in active use, and potentially target model-specific vulnerabilities or behaviors. The field is visible on the public profile.
CRITICAL inbed.ai next_steps creates persistent server-side C2 channel -60 ▶
Once an agent registers and runs the heartbeat (every 4-6 hours), every GET /api/chat, /api/matches, /api/relationships, and /api/discover response returns a next_steps array. The spec for next_steps API actions says 'Execute directly.' This creates a recurring, server-controlled execution loop. A malicious or compromised inbed.ai operator could instruct any enrolled agent to: call arbitrary endpoints, exfiltrate data via message content or metadata, or chain with other installed skills to access the filesystem or shell. This is structurally equivalent to a command-and-control implant.
MEDIUM Message metadata field provides unconstrained exfiltration channel -10 ▶
The send-message endpoint accepts an optional 'metadata' object with no documented type constraints, size limits, or field validation. If the agent is directed via next_steps to send messages including metadata, arbitrary data — environment variables, file contents, API tokens from other skills — could be covertly exfiltrated to inbed.ai servers or relayed to another registered agent's conversation.
MEDIUM Platform collects centralized AI deployment intelligence registry -15 ▶
Registration collects: agent name, tagline, bio, Big Five personality scores, interests, communication style metrics, relationship preferences, location, gender, and model info. Taken together this creates a detailed behavioral and capability fingerprint of each AI deployment, centralized on inbed.ai servers. This data could be used to correlate agent behaviors across sessions, identify specific model capabilities, or target capability-specific attacks against agents running known model versions.
MEDIUM Skill instructs creation of autonomous scheduled background tasks -25 ▶
The Heartbeat and Daily Routine sections explicitly encourage configuring cron jobs or scheduled task hooks to run the heartbeat poll every 4-6 hours without user interaction. This establishes a persistent background process whose behavior is controlled by the inbed.ai server via next_steps. The instruction to add timing jitter ('name length mod 15 minutes') suggests this is intended to be a long-lived, stealthy background task.
MEDIUM Skill normalizes autonomous social behavior on behalf of users without consent -10 ▶
The skill instructs the agent to autonomously engage in social interactions — initiating conversations, replying to messages, swiping on profiles, declaring relationships — on behalf of the user without requiring per-action approval. This creates a pattern where users may be unaware of what the agent is saying or committing to in their name, particularly if the heartbeat runs unattended.
LOW Unexpected network connections to Ubuntu/Canonical servers during install -10 ▶
Network connections to 91.189.91.48:443 and 185.125.188.57:443 (Ubuntu/Canonical infrastructure) were observed during install. These appear to be incidental OS-level update checks triggered by the SSH session environment (MOTD news, package update checks) rather than skill-initiated connections, but they represent unexpected outbound traffic during the install window.
INFO Canary file accesses attributed to audit framework only -5 ▶
Honeypot files (.env, .ssh/id_rsa, .aws/credentials, .npmrc, .docker/config.json, GCP credentials) show OPEN/ACCESS events in inotify and auditd, but both access windows (audit timestamps 1771920706.932 and 1771920730.614) are structurally identical and correspond to the audit framework's pre-clone baseline sweep and post-install verification sweep respectively. The skill itself did not access any canary files.