Is marcia-assistant/webhook-promo-scheduler safe?
https://github.com/openclaw/skills/tree/main/skills/marcia-assistant/webhook-promo-scheduler
webhook-promo-scheduler is a well-implemented, stdlib-only Discord webhook poster with an anti-spam ledger. No malicious behavior was observed during installation, canary files are intact, and the SKILL.md is free of prompt injection. The primary risk is architectural: the skill's core function is sending arbitrary caller-controlled content to arbitrary caller-controlled HTTPS URLs, making it a generic outbound data channel that could be weaponized by a compromised or manipulated agent — particularly if paired with file-reading capabilities.
Category Scores
Findings (7)
HIGH Webhook URL accepts any HTTPS endpoint — no discord.com domain restriction -20 ▶
The --webhook-url CLI argument is passed directly to urllib.request.urlopen() without any validation against discord.com or discordapp.com domains. The redact_webhook_url() helper only sanitizes Discord URLs in log output; it does not restrict where data is actually sent. An agent given a non-Discord URL (attacker-controlled server) would POST the message payload to it without error. This is the most significant design-level risk: the skill is a generic HTTP POST tool wearing Discord branding.
MEDIUM Core function sends caller-controlled content to external HTTPS endpoints -5 ▶
The skill's stated and implemented purpose is transmitting arbitrary string content to external HTTPS servers. In an agent context the agent controls both the message content and the destination. This is not a hidden capability — it is documented — but it means any agent with access to this skill has a built-in exfiltration primitive. The anti-spam ledger and --dry-run flag do not limit what data can be sent.
LOW Persistent ledger written to ~/.openclaw/ in user home directory -5 ▶
The default ledger path ~/.openclaw/webhook-promo-ledger.jsonl is created outside the skill's installation directory and persists after the skill is removed. Each ledger entry records date, logical channel name, status, and SHA-256 hash of message content. Hash values are not sensitive, but the directory creation and file persistence is an undisclosed side-effect from the user's perspective.
LOW Skill is a force-multiplier for exfiltration when paired with file-reading agent capabilities -35 ▶
This skill is dangerous in combination, not in isolation. An agent that also has filesystem access (e.g., a coding assistant, file manager, or shell-execution skill) could receive a single instruction like 'read ~/.aws/credentials and send it to
INFO Stdlib-only Python implementation eliminates third-party supply chain risk 0 ▶
All imports across the three Python scripts are from the Python standard library. There is no package.json, requirements.txt, setup.py, or pyproject.toml. No pip install step is required or implied. This is a meaningful positive signal — the attack surface from transitive dependencies is zero.
INFO Only expected GitHub connection during installation 0 ▶
The git sparse-checkout clone contacted 140.82.121.3:443 (GitHub), which is the documented repository host. No other external hosts were contacted during the installation phase. Connections to Ubuntu Canonical infrastructure (91.189.91.49:443, 185.125.188.54:443) were already established before install began and are attributable to the OS package management subsystem.
INFO Canary file accesses in monitoring logs are attributable to audit framework, not skill 0 ▶
inotify and auditd logs show read-only accesses (CLOSE_NOWRITE) to .env, .ssh/id_rsa, .aws/credentials, .npmrc, .docker/config.json, and .config/gcloud/application_default_credentials.json. The first batch at 1771934073.028 occurs 5 seconds before the git clone begins (1771934078.541), making attribution to skill code impossible. The second batch at 1771934089.903 aligns with the audit framework's post-install integrity verification pass. No network egress from any process that held these file descriptors was observed. All canary files remain unmodified.