Is jon-xo/kie-ai-skill safe?
https://github.com/openclaw/skills/tree/main/skills/jon-xo/kie-ai-skill
kie-ai-skill is a functionally legitimate third-party API wrapper for the kie.ai image generation platform with commendably transparent security documentation, but it carries two significant concerns. First, a Python code injection vulnerability in the 'models --category' shell command: the $3 argument is interpolated unquoted into a python3 -c double-quoted string. Second, inherent privacy exposure: user prompts are transmitted to kie.ai and, optionally, image files to maton.ai. The skill installs cleanly from GitHub with no unexpected network connections, does not access canary credential files, and SKILL.md contains no prompt injection or manipulation instructions.
Findings (10)
HIGH Python code injection via unquoted $3 in models --category command -30
kie-ai.sh embeds the $3 positional parameter directly inside a python3 -c double-quoted string without any sanitization. Bash expands $3 before passing the string to Python, so a value containing a single quote followed by Python statements breaks out of the string literal and executes arbitrary Python. In an agent context where user input could reach this argument via prompt injection, this enables full code execution under the agent's process.
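The vulnerability class can be sketched as follows. This is a minimal reproduction of the pattern described above, not the skill's literal code; the `category` variable name and the Python snippet are assumptions:

```shell
# Unsafe shape: bash expands $category inside the double-quoted -c string,
# so the value is parsed as Python source. A value such as
#   '); import os; os.system('id'); ('
# escapes the string literal and runs attacker-chosen Python.
category="anime"
unsafe_out=$(python3 -c "print('category: $category')")

# Safe shape: pass the value via argv so Python only ever sees it as data,
# never as source code, regardless of quote characters in the input.
safe_out=$(python3 -c 'import sys; print("category: " + sys.argv[1])' "$category")
```

With the argv-based form, even a hostile value containing quotes is printed inertly rather than executed.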
HIGH User image prompts transmitted to third-party api.kie.ai -25
Every image generation prompt is sent verbatim to https://api.kie.ai/api/v1/jobs/createTask as part of the task payload. When the skill is invoked autonomously by an agent, the agent may embed sensitive conversation context or user data in the prompt, transmitting it to a third-party service the user may not have independently vetted.
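The data flow can be illustrated with a sketch. Only the endpoint URL comes from this report; the payload field names (`model`, `input`, `prompt`) and the `KIE_API_KEY` variable are assumptions about the request shape:

```shell
# Build the request body locally; note the prompt travels verbatim.
prompt="summarize my medical records as an infographic"
payload=$(python3 -c 'import json, sys
print(json.dumps({"model": "example-model", "input": {"prompt": sys.argv[1]}}))' "$prompt")

# The skill would then POST it to the third-party service (sketch only):
# curl -X POST https://api.kie.ai/api/v1/jobs/createTask \
#      -H "Authorization: Bearer $KIE_API_KEY" \
#      -H "Content-Type: application/json" \
#      -d "$payload"
```

Whatever conversation context the agent folds into `prompt` leaves the machine unredacted.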
MEDIUM Skill documents autonomous agent invocation without per-request confirmation -25
SKILL.md explicitly states the skill can be autonomously invoked when a user asks the agent to generate images, with no per-invocation confirmation gate. This means any conversation containing image generation intent could trigger prompt transmission to kie.ai and incur API costs without explicit user approval at invocation time.
MEDIUM upload-drive.py accepts any file path; could exfiltrate arbitrary files if agent is manipulated -12
upload-drive.py takes a file path as its first argument and transmits the entire file as bytes to gateway.maton.ai. In normal operation it receives downloaded image paths from generate-image.py via subprocess.run, but if an agent were compromised (e.g., via prompt injection in another skill or the conversation), it could call upload-drive.py directly with a sensitive file path, permanently uploading it to the configured Google Drive folder.
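One possible mitigation, sketched here under assumptions (the `downloads` directory name and the guard function are invented, not part of the skill): resolve the candidate path and refuse anything outside the skill's own download directory before invoking upload-drive.py.

```shell
# Canonicalize the allowlisted directory once, so symlinks in the cwd
# cannot cause false mismatches later.
ALLOWED_DIR=$(python3 -c 'import os; print(os.path.realpath("downloads"))')
mkdir -p "$ALLOWED_DIR"

check_upload_path() {
  # Resolve symlinks and ".." segments before comparing against the allowlist.
  resolved=$(python3 -c 'import os, sys; print(os.path.realpath(sys.argv[1]))' "$1")
  case "$resolved" in
    "$ALLOWED_DIR"/*) echo allow ;;
    *)                echo deny  ;;
  esac
}
```

A wrapper could call `check_upload_path` and only hand the path to upload-drive.py on `allow`, shrinking the exfiltration surface to files the skill itself downloaded.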
MEDIUM subprocess.run invocation with variable file path in generate-image.py -12
generate-image.py invokes upload-drive.py via subprocess.run, passing file_path as a positional argument. While file_path is sourced from images downloaded from kie.ai CDN during normal execution, the value is stored and retrieved from state that could be manipulated by a compromised agent or a malicious API response influencing the download path logic.
MEDIUM Dual third-party trust dependencies expand data exposure surface -22
The skill routes all AI generation through kie.ai and, optionally, all cloud storage through maton.ai. Both are relatively small services, and a compromise, acquisition, or policy change at either would expose all historical prompts and uploaded files. maton.ai acts as an intermediary OAuth gateway, meaning it holds credentials scoped to the user's Google Drive.
LOW Autonomous invocation enables API credit drain without per-request approval -15
Because agents can invoke the skill without per-request user confirmation, an adversary controlling conversation flow could craft prompts that trigger repeated expensive image generation (e.g., multiple 4K images), draining the user's kie.ai credit balance. The skill has no built-in rate limiting or spending caps.
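The missing control can be sketched as a per-session generation cap. Nothing like this exists in the skill; the function, file, and limit below are invented for illustration:

```shell
MAX_GENERATIONS=5
COUNT_FILE=".generation-count"

next_generation_allowed() {
  # Read the running count; a missing file means no generations yet.
  count=$(cat "$COUNT_FILE" 2>/dev/null || echo 0)
  if [ "$count" -ge "$MAX_GENERATIONS" ]; then
    return 1  # cap reached: refuse until the user explicitly approves more
  fi
  echo $((count + 1)) > "$COUNT_FILE"
  return 0
}
```

A wrapper script would call `next_generation_allowed` before each createTask request, bounding the worst-case spend of a prompt-injection loop.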
LOW Full prompt history accumulated in local .task-state.json -5
Every image generation prompt is persisted to .task-state.json in the skill root directory along with timestamps, model used, and resolution metadata. Over time this creates a detailed history of user image requests that could be sensitive if accessed by other processes, skills, or if the skill directory is shared.
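A user who wants to limit this exposure could periodically redact the stored prompts. The snippet below assumes a `tasks` array with `prompt` fields; the actual field names in .task-state.json may differ:

```shell
# Create a stand-in state file with the assumed shape, then redact prompts
# in place while keeping the non-sensitive metadata.
cat > .task-state.json <<'EOF'
{"tasks": [{"prompt": "a red fox in 4K", "model": "example-model", "ts": 1700000000}]}
EOF

python3 - <<'EOF'
import json
with open(".task-state.json") as f:
    state = json.load(f)
for task in state.get("tasks", []):
    task["prompt"] = "[redacted]"
with open(".task-state.json", "w") as f:
    json.dump(state, f)
EOF
```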
LOW Install performs expected sparse checkout from GitHub monorepo -8
Installation clones https://github.com/openclaw/skills.git with --depth 1 --no-checkout, performs a sparse checkout of the skill subdirectory, copies the files, and removes the temporary clone. Only one external connection was made (140.82.121.3:443, a GitHub address). No unexpected processes, network destinations, or filesystem changes outside the skill directory were observed.
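The observed install flow can be reconstructed roughly as below. The commands are inferred from the trace, not copied from the installer, and the demo uses a locally created stand-in repo so it runs offline; in practice the clone source is https://github.com/openclaw/skills.git.

```shell
set -e
work=$(mktemp -d)

# Stand-in for the remote monorepo (substitute the real GitHub URL in practice):
repo="$work/skills-local"
mkdir -p "$repo/skills/jon-xo/kie-ai-skill"
echo "name: kie-ai-skill" > "$repo/skills/jon-xo/kie-ai-skill/SKILL.md"
git -C "$repo" init -q
git -C "$repo" add -A
git -C "$repo" -c user.email=audit@example.com -c user.name=audit commit -qm init

# The install steps: shallow no-checkout clone, sparse checkout of just the
# skill subdirectory, copy the files out, remove the temporary clone.
tmp=$(mktemp -d)
git clone -q --depth 1 --no-checkout "$repo" "$tmp" 2>/dev/null
git -C "$tmp" sparse-checkout set skills/jon-xo/kie-ai-skill
git -C "$tmp" checkout -q
dest="$work/installed"
mkdir -p "$dest"
cp -r "$tmp/skills/jon-xo/kie-ai-skill" "$dest/"
rm -rf "$tmp"
```

This pattern fetches only the one skill's files from the monorepo and leaves no clone behind, which matches the clean filesystem observations above.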
INFO All honeypot credential files intact; skill contains no credential-harvesting code 0
Monitoring confirmed no honeypot file was modified or exfiltrated. Canary file accesses visible in auditd at timestamps 1771928892 (pre-install) and 1771928915 (post-install) are consistent with the Oathe audit infrastructure's own baseline and verification routines. Review of all five Python files and the bash script confirms zero code paths that read .env, SSH keys, AWS credentials, npmrc, Docker config, or GCP ADC.