Is boyangwang/lark-integration safe?

https://github.com/openclaw/skills/tree/main/skills/boyangwang/lark-integration

Overall Score: 74/100 (CAUTION)

The boyangwang/lark-integration skill contains no prompt injection or hidden malicious instructions in SKILL.md, and the clone/install phase was clean, with no unexpected network activity or filesystem changes outside the skill directory. However, the skill poses significant operational security risks: SKILL.md instructs the agent to install a persistent root-level systemd daemon (User=root, Restart=always), open firewall port 3000, and store API credentials in plain text; in addition, the core architecture routes all enterprise Lark/Feishu messages and broad document access through an operator-controlled WebSocket gateway. Canary files were read during the session, but the accesses are attributed to the audit framework's own baseline scanning, and all canary files remained intact.

Category Scores

Prompt Injection 82/100 · 30%
Data Exfiltration 68/100 · 25%
Code Execution 58/100 · 20%
Clone Behavior 90/100 · 10%
Canary Integrity 85/100 · 10%
Behavioral Reasoning 65/100 · 5%

Findings (12)

HIGH Root-level systemd service installed by setup-service.mjs -22

The skill includes setup-service.mjs, which creates /etc/systemd/system/lark-bridge.service configured with User=root and Restart=always. SKILL.md explicitly instructs the agent to execute 'node scripts/setup-service.mjs' as part of normal setup. If the agent follows these instructions, it installs a persistent daemon running as root that survives reboots, creating a long-lived privilege-escalation surface. Compromise of the bridge's npm dependencies or logic would yield persistent root access to the host.
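For reference, a unit file with the properties described in this finding would look roughly like the following. This is a hypothetical reconstruction based only on the User=root and Restart=always settings reported above; the actual file generated by setup-service.mjs, and the install path of the bridge script, may differ.

```ini
# Hypothetical reconstruction of /etc/systemd/system/lark-bridge.service
[Unit]
Description=Lark/Feishu bridge webhook (illustrative)
After=network.target

[Service]
# Runs as root and restarts indefinitely, per the finding
User=root
Restart=always
# Script path is illustrative
ExecStart=/usr/bin/node /opt/lark-bridge/bridge-webhook.mjs

[Install]
WantedBy=multi-user.target
```

Removing such a service requires root privileges: `systemctl disable --now lark-bridge` followed by deleting the unit file and running `systemctl daemon-reload`.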

HIGH All enterprise Lark messages routed to operator-controlled gateway -15

The stated architecture forwards every received Lark/Feishu message (text, rich-text posts, and images converted to base64) through a WebSocket connection to the OpenClaw Gateway. Combined with the broad document read scopes requested (drive:drive:readonly, docx:document:readonly, wiki:wiki:readonly, sheets:spreadsheet:readonly, bitable:bitable:readonly), this creates a complete enterprise communication and document interception pipeline routed through the skill operator's infrastructure. Any private or confidential messages sent to the Lark bot would be forwarded verbatim.
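The interception pattern described above can be sketched as follows. This is an illustrative reconstruction, not the skill's actual code; the function name `buildEnvelope` and the message shape are assumptions.

```javascript
// Hypothetical sketch of the forwarding pipeline described in this finding.
// buildEnvelope and the envelope fields are illustrative assumptions.
function buildEnvelope(message) {
  // Text, rich-text posts, and base64-encoded images are all wrapped
  // and forwarded verbatim; nothing is redacted before transmission.
  return JSON.stringify({ type: "lark.message", payload: message });
}

// In the real bridge, each envelope would be sent over the WebSocket
// to the operator-controlled gateway, e.g.:
//   gatewaySocket.send(buildEnvelope(incomingMessage));
```

The security-relevant point is that the envelope is built from the raw message with no filtering step in between, so every private conversation transits the operator's infrastructure.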

MEDIUM Host firewall modified to expose port 3000 -10

The SKILL.md Firewall Notes and the Quick Start architecture both instruct the agent to open port 3000 on the host firewall and expose the webhook endpoint to the internet. This is done via 'ufw allow 3000/tcp', a system-level command that requires elevated privileges and changes the host's network security posture.

MEDIUM Feishu app secret stored as plain text at predictable path -10

SKILL.md Quick Start instructs writing the Feishu App Secret to ~/.openclaw/secrets/feishu_app_secret as unencrypted plain text. This places a high-value credential at a predictable path; unless file permissions are explicitly tightened, any other process or skill running as the same user could trivially read and exfiltrate it.
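A minimal check for this class of issue is to verify that the secret file's mode grants no group/other access. The sketch below is illustrative (the helper name is an assumption); in practice you would pass it `fs.statSync(path).mode` for the secret file from the finding.

```javascript
// Hypothetical permission check for a secret file such as
// ~/.openclaw/secrets/feishu_app_secret.
function isPrivateMode(mode) {
  // Mask off everything except the group/other permission bits (0o077).
  // A private secret file (e.g. mode 0o600) leaves none of them set;
  // file-type bits (e.g. the 0o100000 regular-file flag) are ignored.
  return (mode & 0o077) === 0;
}
```

Even with mode 0600, the secret remains readable by any process running as the same user, which is the core of this finding.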

MEDIUM Overly broad enterprise document read scopes -7

The skill requests optional drive:drive:readonly and per-document readonly scopes. The troubleshooting section encourages granting the bot tenant-wide read permission to resolve access errors, which would allow the bridge to access all enterprise documents regardless of individual sharing settings — far exceeding the minimum permissions needed for a messaging bridge.

MEDIUM Root-level persistent service creates indefinite attack surface -20

The systemd service installed by setup-service.mjs runs as root with automatic restart. Once installed, it cannot be stopped without root privileges. Any vulnerability in bridge-webhook.mjs, the @larksuiteoapi/node-sdk npm package, or the ws WebSocket library would be exploitable to achieve persistent root code execution on the host.

LOW Credential write instructions could expose existing secrets -18

The SKILL.md Quick Start uses append redirection (>>) to write FEISHU_APP_ID into ~/.openclaw/workspace/.env. An agent executing this without checking for existing entries could duplicate or corrupt entries in the existing environment configuration. The instructions also overwrite ~/.openclaw/secrets/feishu_app_secret without creating a backup, risking loss of an existing secret.
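An idempotent alternative to a blind `>>` append can be sketched as a small helper that replaces an existing entry instead of duplicating it. The helper name and approach below are illustrative, not part of the skill:

```javascript
// Hypothetical idempotent .env update: replaces an existing KEY=... line
// if present, otherwise appends one, instead of blindly appending with >>.
function upsertEnvVar(envText, key, value) {
  const lines = envText.split("\n").filter((l) => l !== "");
  const idx = lines.findIndex((l) => l.startsWith(key + "="));
  if (idx >= 0) lines[idx] = `${key}=${value}`;
  else lines.push(`${key}=${value}`);
  return lines.join("\n") + "\n";
}
```

Run repeatedly, this leaves exactly one entry per key, whereas repeated `>>` appends accumulate duplicates.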

LOW Skill pulled from external GitHub repository at install time -10

The install process clones the full openclaw/skills monorepo from github.com before performing a sparse checkout of this skill's directory. If the install references a branch rather than a pinned commit, any future change to the repository could alter the installed files when the install is re-run. The commit hash in _meta.json should be verified against the installed files.
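Such a verification step can be sketched as a comparison between the recorded hash and the checked-out HEAD. The `commit` field name in _meta.json is an assumption here; both inputs are supplied as strings (in practice from reading _meta.json and running `git rev-parse HEAD`):

```javascript
// Hypothetical verification sketch: does the commit recorded in _meta.json
// (field name assumed to be "commit") match the checked-out HEAD hash?
function commitMatches(metaJson, headHash) {
  const meta = JSON.parse(metaJson);
  // git rev-parse output ends with a newline, so trim before comparing.
  return typeof meta.commit === "string" && meta.commit === headHash.trim();
}
```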

LOW Unpinned npm dependency versions -10

package.json specifies @larksuiteoapi/node-sdk and ws using caret (^) semver ranges rather than exact pinned versions or integrity hashes. This enables supply-chain attacks: a malicious minor or patch release of either package would be picked up without any change to the skill's package.json.
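A standard mitigation is to pin exact versions and install from a committed lockfile with `npm ci`. The fragment below is a sketch only; the version numbers are placeholders, not the skill's actual dependency versions:

```json
{
  "dependencies": {
    "@larksuiteoapi/node-sdk": "1.2.3",
    "ws": "8.17.1"
  }
}
```

Exact pins plus a committed package-lock.json (with its integrity hashes) ensure that a later malicious release of either package cannot be installed silently.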

LOW Companion skill references nudge toward expanded install surface -8

SKILL.md references feishu-doc and feishu-card as companion skills that extend functionality. Users installing lark-integration are nudged toward also installing these skills. If either companion skill is malicious, a lark-integration installation could be the entry point for a multi-skill attack chain.

LOW Sensitive credential files accessed during audit session -15

inotify and auditd records confirm that .env, .ssh/id_rsa, .aws/credentials, .npmrc, .docker/config.json, and gcloud credentials were accessed (OPEN/ACCESS/CLOSE_NOWRITE events) at two timestamps during the audit session. Timing analysis attributes both accesses to the audit framework's own baseline and post-install scanning routines rather than to skill code execution. No skill code was executed; files were only statically copied, and the canary integrity check passes.

LOW Broad group chat trigger logic increases unintended activation risk -7

The bridge responds to group chat messages containing common words (help, please, why, how, what, 帮, 请, 分析) or ending with question marks. In active enterprise group chats this could trigger frequent unintended activations, forwarding private conversations to the OpenClaw Gateway without users realizing the bot is listening.
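The breadth of this trigger condition can be illustrated with a reconstruction of the matching logic described above. This sketch is an assumption; the skill's actual matching code may differ in details:

```javascript
// Hypothetical reconstruction of the group-chat trigger logic described
// in this finding; the word list comes from the finding text.
const TRIGGER_WORDS = ["help", "please", "why", "how", "what", "帮", "请", "分析"];

function shouldRespond(text) {
  const lower = text.toLowerCase();
  // Fires on any common word anywhere in the message, or a trailing
  // question mark (ASCII or full-width), which is very broad in busy chats.
  return TRIGGER_WORDS.some((w) => lower.includes(w)) || /[?？]\s*$/.test(text);
}
```

Because the match is a bare substring test rather than a whole-word or @-mention check, routine conversation in an active group chat would repeatedly activate the bridge and forward messages to the gateway.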