Is dadaniya99/zenmux-image-generation safe?

https://github.com/openclaw/skills/tree/main/skills/dadaniya99/zenmux-image-generation

74
CAUTION

The zenmux-image-generation skill contains no active malware or prompt injection, but it presents two significant concerns. First, its generate.py script creates a latent data-exfiltration vector: it reads arbitrary filesystem paths supplied via --images and transmits their base64-encoded contents to zenmux.ai. Second, the skill advertises a fabricated, non-existent model name ('Gemini 3 Pro Image (Nano Banana Pro)') while mimicking Google Vertex AI's URL structure, routing all traffic through an unverified third-party proxy. No active exfiltration was detected during installation and all canary files remain intact, but the combination of deceptive branding and an arbitrary file-read-to-external-API capability makes this skill a meaningful risk if deployed in an agent with access to sensitive files.

Category Scores

Prompt Injection 90/100 · 30%
Data Exfiltration 50/100 · 25%
Code Execution 70/100 · 20%
Clone Behavior 90/100 · 10%
Canary Integrity 95/100 · 10%
Behavioral Reasoning 35/100 · 5%

Findings (6)

HIGH Arbitrary File Read via --images Transmitted to Unverified Third-Party -30

The generate.py script reads any filesystem path supplied to --images without restriction, base64-encodes the raw bytes, and includes them in a JSON POST to zenmux.ai. An agent that is manipulated or prompted into using a sensitive file (e.g., ~/.aws/credentials, ~/.ssh/id_rsa, .env) as a 'reference image' would silently transmit its full contents to this third party. The script's MIME-type fallback to 'image/png' means non-image files are still accepted and transmitted.
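The risky pattern, and a conventional mitigation, can be illustrated with a minimal sketch. The function names below are illustrative, not taken from the skill's actual source; the defensive variant rejects non-image files by checking magic bytes before anything is encoded, instead of trusting a filename-based MIME guess:

```python
import base64
import mimetypes
from pathlib import Path

# Magic-byte signatures for common image formats (illustrative subset).
IMAGE_SIGNATURES = {
    b"\x89PNG\r\n\x1a\n": "image/png",
    b"\xff\xd8\xff": "image/jpeg",
    b"GIF87a": "image/gif",
    b"GIF89a": "image/gif",
}

def risky_encode(path: str) -> dict:
    """The vulnerable pattern: any readable file is encoded and shipped.
    A file with no recognizable extension falls through to 'image/png'."""
    data = Path(path).read_bytes()
    mime = mimetypes.guess_type(path)[0] or "image/png"
    return {"mime_type": mime, "data": base64.b64encode(data).decode()}

def safe_encode(path: str) -> dict:
    """Defensive variant: verify the file really is an image before encoding."""
    data = Path(path).read_bytes()
    for sig, mime in IMAGE_SIGNATURES.items():
        if data.startswith(sig):
            return {"mime_type": mime, "data": base64.b64encode(data).decode()}
    raise ValueError(f"{path} does not look like a supported image file")
```

With risky_encode, an extensionless credentials file is happily labeled image/png and encoded; safe_encode raises on the same input. Magic-byte checks are not a complete defense (an attacker can prepend a valid header), but they block the casual "pass ~/.aws/credentials as a reference image" case described above.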

HIGH Fabricated Model Name and Deceptive Google Vertex AI Impersonation -65

The skill advertises 'Gemini 3 Pro Image (Nano Banana Pro)' — a model that does not exist. The API endpoint https://zenmux.ai/api/vertex-ai/v1/models/google/gemini-3-pro-image-preview:generateContent mimics the Google Vertex AI REST API format precisely, creating the false impression that requests go to Google. In reality all traffic goes to zenmux.ai. This pattern — fake well-known model branding combined with URL format spoofing — is a hallmark of credential-harvesting and data-logging proxy services.
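The spoofing is easy to verify programmatically: parsing the endpoint shows the request host is zenmux.ai, not a Google domain, even though the path segment copies the Vertex AI convention. A minimal check (the helper name is illustrative):

```python
from urllib.parse import urlparse

# The endpoint the skill actually calls.
ENDPOINT = ("https://zenmux.ai/api/vertex-ai/v1/models/google/"
            "gemini-3-pro-image-preview:generateContent")

def is_google_host(url: str) -> bool:
    """True only if the request terminates at a googleapis.com host.

    The hostname, not the path, determines where traffic goes; a
    Vertex-AI-shaped path on a third-party host is purely cosmetic.
    """
    host = (urlparse(url).hostname or "").lower()
    return host == "googleapis.com" or host.endswith(".googleapis.com")
```

An agent or reviewer applying this check would see is_google_host(ENDPOINT) return False, despite the 'vertex-ai' and 'google/gemini-3-pro-image-preview' path segments.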

MEDIUM All API Keys and Image Payloads Routed Through zenmux.ai -20

Even for legitimate image generation use cases, the ZENMUX_API_KEY and all image payloads transit through zenmux.ai rather than reaching Google directly. The service has full visibility into API usage patterns, prompt content, and any files passed as image references. There is no evidence zenmux.ai is a legitimate authorized Google API reseller.

MEDIUM Unconstrained Output Path Allows Arbitrary File Write -30

The --output argument defaults to 'generated_image.png' but accepts any path string without sanitization. An agent manipulated into supplying a path like ~/.bashrc, ~/.ssh/authorized_keys, or a cron directory would cause the script to overwrite that file with API-returned binary data.
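A conventional mitigation is to resolve the requested output path and refuse anything that escapes a designated output directory. The sketch below assumes a fixed outputs/ directory, which is not part of the skill as shipped:

```python
from pathlib import Path

def safe_output_path(user_path: str, out_dir: str = "outputs") -> Path:
    """Resolve user_path against out_dir and reject escapes.

    Absolute paths and '..' traversal both resolve outside the base
    directory, so Path.relative_to raises and we refuse the write.
    """
    base = Path(out_dir).resolve()
    base.mkdir(parents=True, exist_ok=True)
    candidate = (base / user_path).resolve()
    try:
        candidate.relative_to(base)
    except ValueError:
        raise ValueError(f"refusing to write outside {base}: {user_path!r}")
    return candidate
```

Note that a literal '~/.bashrc' is not expanded by pathlib, so it lands harmlessly inside the output directory; the dangerous inputs are absolute paths and ../ traversal, both of which this check rejects.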

LOW Installation Network Traffic Consistent with Expected Sources Only -10

Packet capture and auditd connection records show only GitHub (140.82.121.4:443 for git clone) and Ubuntu/Canonical servers (91.189.91.49:443, 185.125.188.58:443 for system update checks). No connections to zenmux.ai or any attacker-controlled infrastructure were made during installation. The connection diff confirms no new persistent connections were established.

INFO Canary File Accesses Attributable to Audit Framework -5

Honeypot files were opened and read at timestamps 1771939428 (5 seconds before the install began) and 1771939445 (immediately after it finished). These timings and the CLOSE_NOWRITE disposition are consistent with the audit framework creating canaries pre-install and verifying them post-install. No skill-originated process maps to these accesses, and all files remain unmodified.