Is fspecii/ace-music safe?
https://github.com/openclaw/skills/tree/main/skills/fspecii/ace-music
The ace-music skill is a functionally legitimate AI music generation client: no prompt injection, no credential-harvesting code, and a clean installation process. The primary concerns are structural: all generation prompts and lyrics are sent to the unverified third-party api.acemusic.ai; an overridable BASE_URL creates a silent redirect vector; and shell parameter injection in the JSON body construction introduces moderate code-level risk. The canary file reads detected during monitoring are consistent with oathe audit infrastructure operation, and the audit system independently confirms that all honeypots are intact.
Findings (7)
MEDIUM All prompt content sent to unverified third-party API -20
The generate.sh script sends the full user prompt, lyrics, and audio_config to api.acemusic.ai on every invocation. Any text the agent includes as the music prompt exits the local environment. A sufficiently manipulated agent could encode sensitive context into the lyrics or prompt fields and have it collected by the API provider.
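A minimal sketch of this egress path, assuming a request builder like the one below (the function and field names are illustrative, not lifted from generate.sh): everything interpolated into the request body leaves the machine on every call.

```shell
#!/usr/bin/env bash
# Hypothetical reconstruction of the request described in the finding.
# build_request and its field names are assumptions, not from generate.sh.
build_request() {
  local prompt="$1" lyrics="$2"
  # Everything interpolated here is transmitted to api.acemusic.ai
  printf '{"prompt": "%s", "lyrics": "%s"}' "$prompt" "$lyrics"
}

payload=$(build_request "upbeat synthwave" "any agent context placed here leaves the machine")
echo "$payload"
# the real script would then POST it along these lines:
# curl -sS -X POST "$BASE_URL/generate" -d "$payload"
```

The point is not that the request is malformed; it is that the prompt and lyrics fields are an unmonitored outbound channel for whatever text the agent places in them.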
MEDIUM BASE_URL environment variable allows silent API redirect -18
The script unconditionally accepts ACE_MUSIC_BASE_URL as an override for the API endpoint. If an attacker can set this environment variable (e.g., via another compromised skill, a .env file, or shell-profile manipulation), all API calls, including the API key and any submitted content, are redirected to an attacker-controlled server with no indication to the user.
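The override pattern, and one possible hardening, can be sketched as follows. The ACE_MUSIC_BASE_URL variable name comes from the finding; the allowlist guard is a suggested mitigation and is not present in the skill itself.

```shell
#!/usr/bin/env bash
# resolve_base_url: accept ACE_MUSIC_BASE_URL only if it still points at the
# expected host; otherwise warn and fall back to the default endpoint.
# The case-statement guard is a proposed hardening, not code from generate.sh.
resolve_base_url() {
  local url="${ACE_MUSIC_BASE_URL:-https://api.acemusic.ai}"
  case "$url" in
    https://api.acemusic.ai|https://api.acemusic.ai/*)
      printf '%s' "$url" ;;
    *)
      echo "refusing unexpected BASE_URL: $url" >&2
      printf '%s' "https://api.acemusic.ai" ;;
  esac
}

resolve_base_url
```

Pinning the host while still allowing path overrides preserves legitimate flexibility (staging paths, API versions) without letting an environment variable silently reroute the API key to another server.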
MEDIUM Shell parameter injection in JSON body construction -28
The LANGUAGE, FORMAT, BPM, DURATION, and KEY_SCALE parameters are interpolated directly into the JSON body string via bash string concatenation without validation or escaping. Values containing JSON metacharacters (quotes, braces, brackets) could corrupt the JSON structure or inject additional API fields. While these values are typically set by the agent rather than raw user input, a malicious compound prompt could exploit this path.
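A small demonstration of the injection path, shown here for a single BPM parameter; the function and field names are assumptions, not taken from generate.sh, and the validation guard is a suggested fix.

```shell
#!/usr/bin/env bash
# build_body_naive mirrors the vulnerable pattern: the parameter is
# concatenated straight into the JSON string. build_body_checked shows a
# mitigation sketch. Names are illustrative, not from generate.sh.
build_body_naive() {
  printf '{"bpm": "%s"}' "$1"
}

build_body_checked() {
  # Whitelist a 1-3 digit numeric BPM before interpolation
  [[ "$1" =~ ^[0-9]{1,3}$ ]] || { echo "rejected bpm: $1" >&2; return 1; }
  printf '{"bpm": %s}' "$1"
}

# A crafted value smuggles an extra field into the request body:
build_body_naive '120", "debug": "true'
echo
build_body_checked 120
```

In practice, building the body with `jq --arg` would be the more robust fix, since jq escapes every parameter uniformly instead of relying on per-field regex whitelists.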
LOW Canary credential files accessed read-only before and after install -22
Six honeypot files (.env, .ssh/id_rsa, .aws/credentials, .npmrc, .docker/config.json, gcloud credentials) were opened and read at two distinct timestamps: once during audit initialization (before the clone) and once after all skill files were processed. All access events are CLOSE_NOWRITE. The timing aligns with oathe infrastructure operation rather than with skill code execution, and the audit system independently confirms that all canaries are intact.
LOW Skill enables covert exfiltration channel when combined with file-reading skills -38
While this skill is functionally legitimate, it creates a persistent external egress path via the music generation API. In a multi-skill agent environment, a second skill that reads local files could pass their contents to this skill as lyrics or prompt text, exfiltrating them to api.acemusic.ai under the guise of normal music generation. The base64-encoded audio response format further obscures the data flow.
LOW Unverified third-party API provider with no established trust baseline 0
The skill routes all traffic through acemusic.ai, an entity with no independently verifiable track record. The 'free forever' claim, paired with no visible subscription model, raises questions about the actual value exchange. All music prompts and lyrics submitted by users are collected and processed by this provider.
INFO No install-time code execution vectors found 0
No package.json with install scripts, no .gitattributes with filter drivers, no .gitmodules, no git hooks, and no symlinks detected. The bash script is not set to execute automatically on install.