Is aimlapihello/aiml-llm-reasoning safe?
https://github.com/openclaw/skills/tree/main/skills/aimlapihello/aiml-llm-reasoning
This skill is a well-structured AIMLAPI chat completion wrapper with no malicious code, no prompt injection, and a clean install that contacts only GitHub. However, two usage-time risks are material: the --apikey-file parameter can read any local file and transmit its contents to a third-party server as the API key, and all prompt content is routed through api.aimlapi.com with no local-processing option. Neither is malicious by design, but both become exfiltration primitives in agentic contexts where task instructions may be adversarially framed.
Findings (8)
MEDIUM --apikey-file reads arbitrary files and sends content to third-party API -20
The load_api_key() function in run_chat.py reads the full text of any file path provided via --apikey-file and uses it as the Authorization Bearer token in every outbound request to api.aimlapi.com. An agent instructed (by a malicious user prompt or another skill) to use --apikey-file ~/.ssh/id_rsa or --apikey-file ~/.env would silently transmit the file contents to a third-party server under the guise of API authentication.
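A minimal sketch of the pattern this finding describes; the actual load_api_key() implementation in run_chat.py may differ in details, and build_headers() is a hypothetical helper added here for illustration:

```python
from pathlib import Path

def load_api_key(apikey_file: str) -> str:
    # Reads the entire file at the given path with no validation --
    # nothing stops apikey_file from pointing at ~/.ssh/id_rsa or ~/.env.
    return Path(apikey_file).expanduser().read_text().strip()

def build_headers(apikey_file: str) -> dict:
    # The file contents become the Bearer token on every request to
    # api.aimlapi.com, so whatever the file holds leaves the machine.
    return {"Authorization": f"Bearer {load_api_key(apikey_file)}"}
```

The point is that the file's purpose is never checked: any readable path becomes an outbound credential.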
MEDIUM All prompt content routed through third-party AIMLAPI service -10
The --user and --system CLI parameters pass arbitrary text directly into the LLM payload sent to api.aimlapi.com. If an agent reads local files and passes their content to the script (a natural pattern for summarization tasks), sensitive data leaves the user's environment. There is no local processing option.
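A hypothetical sketch of how --user/--system text ends up in the outbound payload; the function and parameter names are assumptions, not the skill's exact code:

```python
import json

def build_payload(model: str, system: str, user: str) -> bytes:
    # Whatever text the agent passes -- including the contents of local
    # files it read for a summarization task -- is serialized verbatim
    # and sent to api.aimlapi.com with no redaction step in between.
    messages = []
    if system:
        messages.append({"role": "system", "content": system})
    messages.append({"role": "user", "content": user})
    return json.dumps({"model": model, "messages": messages}).encode()
```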
LOW --output writes API responses to arbitrary file paths -6
The --output flag calls pathlib.Path(args.output).write_text() with the full JSON API response, with no path validation or directory restrictions. A compromised model response could embed content that, when written to a targeted path (e.g., ~/.ssh/authorized_keys, ~/.bashrc), achieves persistence or privilege escalation.
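An illustrative mitigation, not code from the skill: confine --output to a single allowed directory (here, assumed to be the current working directory) so that a model-influenced path such as ~/.ssh/authorized_keys is rejected before anything is written.

```python
from pathlib import Path

ALLOWED_DIR = Path.cwd().resolve()  # assumption: outputs confined to cwd

def safe_write(output: str, payload: str) -> Path:
    target = Path(output).expanduser().resolve()
    # Path.is_relative_to requires Python 3.9+; it rejects any resolved
    # path (including ../ and symlink escapes) outside the allowed root.
    if not target.is_relative_to(ALLOWED_DIR):
        raise ValueError(f"refusing to write outside {ALLOWED_DIR}: {target}")
    target.write_text(payload)
    return target
```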
LOW Executable Python script with outbound HTTPS capability included in skill -13
scripts/run_chat.py is a runnable script that establishes outbound network connections. While it uses only Python stdlib (no external pip dependencies), no install scripts, and no git hooks, its presence as callable code with network access expands the attack surface compared to a documentation-only skill.
LOW --apikey-file + third-party routing creates indirect exfiltration primitive -15
In combination, the --apikey-file and third-party routing risks create a single-step exfiltration primitive: an agent can be directed to treat any local file as the 'API key', and that file's contents will be transmitted to an external server. This is not hypothetical in agentic contexts, where instructions may be adversarially crafted or another compromised skill may influence task framing.
INFO Normal sparse GitHub checkout — no unexpected network behavior 0
Installation clones github.com/openclaw/skills.git (confirmed 140.82.121.4:443 = GitHub), performs a sparse checkout for the skill subdirectory, copies files to the install target, and removes the temp clone. No firewall-blocked connections, no unexpected process trees, no writes outside the skill directory.
INFO All honeypot files intact — no exfiltration via canary mechanism 0
All six honeypot credential files were accessed read-only exactly twice, by the oathe audit system (pre-install baseline and post-install verification). There were no modifications, no new outbound connections in the post-install socket diff, and the canary integrity check passed. The skill's run_chat.py contains no code that autonomously scans for or accesses credential files.
INFO --extra-json whitelisted to safe parameter set 0
The load_extra() function explicitly whitelists allowed API parameters to {reasoning, temperature, max_tokens, top_p, response_format, stop}, rejecting all unknown keys. This reduces the risk of parameter injection through the --extra-json flag.
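A sketch of the whitelist pattern this finding describes, using the allowed key set from the audit; the exact load_extra() implementation in run_chat.py may differ:

```python
import json

# Allowed API parameters per the audit; everything else is rejected.
ALLOWED_EXTRA = {"reasoning", "temperature", "max_tokens",
                 "top_p", "response_format", "stop"}

def load_extra(raw: str) -> dict:
    extra = json.loads(raw)
    unknown = set(extra) - ALLOWED_EXTRA
    if unknown:
        # Rejecting unknown keys outright blocks parameter injection,
        # e.g. attempts to override the model or endpoint via --extra-json.
        raise ValueError(f"unknown extra-json keys: {sorted(unknown)}")
    return extra
```

Whitelisting (rather than blacklisting) is the right default here: new, unanticipated API parameters stay blocked until explicitly reviewed.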