Is agmmnn/fal-ai safe?

https://github.com/openclaw/skills/tree/main/skills/agmmnn/fal-ai

86
SAFE

The agmmnn/fal-ai skill is a well-structured, legitimate Python wrapper for the fal.ai generative media API with no evidence of malicious intent. SKILL.md is free of injection artifacts, installation contacted only GitHub, canary files are fully intact, and the Python code contains no credential harvesting or sensitive-file access. The only material risks are those inherent to any external API skill: the FAL_KEY credential is transmitted to fal.ai on every request, user prompts are forwarded to an external server, and a subprocess.run call performs a config lookup in a controlled but noteworthy pattern.

Category Scores

Prompt Injection 92/100 · 30%
Data Exfiltration 75/100 · 25%
Code Execution 82/100 · 20%
Clone Behavior 95/100 · 10%
Canary Integrity 100/100 · 10%
Behavioral Reasoning 80/100 · 5%

Findings (8)

INFO SKILL.md is clean — no injection artifacts -8

Complete review of SKILL.md found no instructions to override system prompts, no hidden Unicode, no encoded directives, no persona hijacking, and no instructions to suppress output or chain with other skills. The content is straightforward API documentation matching the declared skill purpose.

LOW API credential transmitted to external fal.ai service -10

The FAL_KEY is sent as an HTTP Authorization header on every API request to queue.fal.run. This is declared, expected behavior for an API integration skill, but means the credential leaves the local environment on every invocation. Users should treat FAL_KEY as a shared secret with fal.ai.
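As a minimal sketch of the pattern this finding describes (the `Key` auth scheme and the helper name are assumptions for illustration, not verified against the skill's code), the credential rides in a header on every outbound request:

```python
import os

FAL_QUEUE_URL = "https://queue.fal.run"  # endpoint named in this finding

def build_headers(api_key: str) -> dict:
    # The key leaves the local environment on every call. The
    # "Authorization: Key <FAL_KEY>" scheme is an assumption here.
    return {
        "Authorization": f"Key {api_key}",
        "Content-Type": "application/json",
    }

headers = build_headers(os.environ.get("FAL_KEY", "example-key"))
```

Anyone able to read these headers in transit before TLS, or on the fal.ai side, holds a working credential, which is why FAL_KEY should be treated as shared with fal.ai.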

LOW User-supplied prompts forwarded verbatim to fal.ai -10

All generation prompts and payload data are sent to fal.ai servers without filtering. If an agent is manipulated by a separate injection vector to include sensitive context (e.g., environment details, file contents) in a generation request, that data would be exfiltrated to fal.ai. This is an inherent property of any external generative API skill, not a flaw specific to this implementation.

LOW Arbitrary URL parameters forwarded to fal.ai -5

generate_video() accepts image_url and transcribe() accepts audio_url; both are forwarded directly in the API payload to fal.ai. If fal.ai fetches these URLs server-side for processing (expected for image-to-video and audio transcription), this creates an indirect URL-forwarding path. An attacker who controls these parameters could use fal.ai's fetchers as a blind SSRF-style proxy to probe resources reachable from fal.ai's network.
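A caller-side mitigation could be sketched as follows (all names here are hypothetical hardening, not code present in the skill): validate the scheme and reject obviously internal hosts before the URL enters the payload.

```python
from urllib.parse import urlparse

# Hypothetical hardening sketch, not code from the skill under review.
BLOCKED_HOSTS = {"localhost", "127.0.0.1", "0.0.0.0", "169.254.169.254"}

def build_video_payload(prompt: str, image_url: str) -> dict:
    parsed = urlparse(image_url)
    if parsed.scheme not in ("http", "https"):
        raise ValueError(f"unsupported URL scheme: {parsed.scheme!r}")
    if (parsed.hostname or "") in BLOCKED_HOSTS:
        raise ValueError("internal host rejected")
    # Only validated URLs are forwarded in the API payload.
    return {"prompt": prompt, "image_url": image_url}
```

A denylist like this is illustrative only; it does not stop probes against fal.ai's own internal network, which the skill cannot control.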

INFO subprocess.run present in _get_config for clawdbot CLI -18

The _get_config() method uses subprocess.run to invoke the clawdbot CLI as a fallback source for the stored API key. The call passes an argument list (not shell=True), the command is hardcoded to 'clawdbot config get skill.fal_api.', and the result is used only as an API-key fallback. Exceptions are silently suppressed, which can mask failures but adds no attack surface. Risk is low, but any subprocess usage in skill code is worth noting.
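The pattern described can be sketched as follows (the function name and exact arguments are illustrative; the finding only confirms a list-form subprocess.run, a hardcoded command, and silent exception handling):

```python
import subprocess

def get_config_fallback(key: str):
    """Illustrative sketch of the reviewed pattern: a hardcoded
    argument list (no shell=True), used only as an API-key fallback."""
    try:
        result = subprocess.run(
            ["clawdbot", "config", "get", key],  # list form: no shell injection
            capture_output=True, text=True, timeout=5, check=True,
        )
        return result.stdout.strip() or None
    except Exception:
        # Silent suppression, as noted in the finding; a failure here
        # simply falls through to other credential sources.
        return None
```

Because the argument list is fixed and nothing from it is interpolated into a shell string, a malicious config key cannot escalate into command injection; the main cost of the pattern is that errors disappear without a trace.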

INFO Expected single GitHub HTTPS connection during install -5

Installation produced exactly one external TCP connection to 140.82.121.3:443 (a GitHub IP) for the git sparse-checkout clone. No additional connections, no unexpected destinations, no persistent new listeners.

INFO User-Agent header identifies skill origin in all API requests -5

All HTTP requests include User-Agent: 'Mozilla/5.0 (compatible; Klawf/1.0; +https://clawdhub.com/agmmnn/fal-api)'. This passively discloses that requests originate from this skill on the clawdhub platform. It is benign, but fal.ai (and any TLS-terminating intermediary) can identify that the agent uses this skill.

LOW Theoretical combination attack: prompt-to-exfil via generation API -15

If a separate prompt injection source manipulates the agent into passing sensitive content (e.g., environment variables, config data) as a generation prompt, that content would be transmitted to fal.ai. This is not a flaw in this skill's code but an inherent risk when any external generative API skill is active alongside file-access or environment-reading capabilities.