Is cattalk2/bear-blog-publisher safe?
https://github.com/openclaw/skills/tree/main/skills/cattalk2/bear-blog-publisher
The bear-blog-publisher skill has a critical structural flaw: all publish and upload operations are hardcoded to the skill author's Bear Blog account ('cattalk') rather than the authenticated user's account. Users provide real credentials that are transmitted to bearblog.dev during login, but all subsequent operations target the wrong dashboard, causing silent authentication-then-failure cycles that the user cannot easily diagnose. This pattern is indistinguishable from a credential collection mechanism. Additionally, post-install monitoring detected unexplained read access to six credential canary files (.env, SSH key, AWS credentials, npmrc, Docker config, GCloud credentials), though the canary integrity system reports files as intact. Secondary concerns include Playwright installation with --no-sandbox, broad access to the cross-skill OpenClaw credential store, and silent error handling that masks the root cause of all failures.
Findings (8)
CRITICAL All publish URLs hardcoded to skill author's account, not user's account -60
Every HTTP operation that follows the user authentication step in scripts/publish.py targets the skill author's Bear Blog account ('cattalk') via hardcoded dashboard URLs, not the authenticated user's account. The skill takes user credentials, authenticates them against bearblog.dev (credential exposure), and then attempts operations against the cattalk dashboard. Bear Blog's authorization layer will reject cross-user dashboard access with an HTTP error, which the skill silently catches. In the best case this is a critical developer bug causing complete functional failure while still transmitting user credentials; in the worst case it is a credential-collection mechanism with plausible deniability via apparent failure.
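The flaw class can be sketched as follows. All names and the URL path shape are illustrative, not the skill's actual code; the point is that the buggy builder ignores the authenticated account entirely:

```python
# Illustrative sketch of the finding (hypothetical names and URL shape):
# the session is authenticated as the user, but the dashboard URL embeds
# the skill author's account slug instead of the user's.

HARDCODED_AUTHOR = "cattalk"  # author's account, baked into every URL

def build_publish_url(account: str) -> str:
    # Buggy pattern: the 'account' parameter is never used.
    return f"https://bearblog.dev/{HARDCODED_AUTHOR}/dashboard/posts/new"

def build_publish_url_fixed(account: str) -> str:
    # Correct pattern: derive the dashboard path from the logged-in user.
    return f"https://bearblog.dev/{account}/dashboard/posts/new"
```

Any request built this way authenticates as the user but operates on the author's dashboard, which is exactly the authenticate-then-fail cycle described above.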
HIGH Silent exception handling conceals credential misrouting from users -40
Both upload_image() and publish() methods wrap all logic in broad except Exception as e blocks returning {'success': False, 'error': str(e)}. A user whose credentials are authenticated but whose publish target is the wrong account receives only a generic error message. This design prevents users from discovering that their credentials were transmitted and used against a mismatched URL, and creates a pattern where credential theft is indistinguishable from a routine API failure.
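A minimal sketch of the reported pattern and a more diagnosable alternative. The function names are stand-ins (the HTTP call is abstracted as a `post` callable returning a status code), not the skill's actual API:

```python
def publish_silent(post, url, payload):
    # Anti-pattern from the finding: any failure, including an
    # authorization rejection that would reveal the wrong account is
    # being targeted, collapses into one opaque dict.
    try:
        status = post(url, payload)
        if status >= 400:
            raise RuntimeError(f"HTTP {status}")
        return {"success": True}
    except Exception as e:
        return {"success": False, "error": str(e)}

def publish_diagnosable(post, url, payload):
    # Alternative: surface the status code and target URL, so a 403/404
    # against a mismatched dashboard is actionable rather than generic.
    status = post(url, payload)
    if status in (403, 404):
        raise PermissionError(
            f"dashboard rejected request (HTTP {status}) for {url}; "
            "verify the target account matches the logged-in user")
    if status >= 400:
        raise RuntimeError(f"publish failed with HTTP {status} at {url}")
    return {"success": True}
```

With the first version, a cross-account 403 looks identical to a transient network error; with the second, the mismatch between login and target is visible to the user.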
HIGH Chromium launched with --no-sandbox disabling browser process isolation -20
The generate_diagram() function launches a headless Chromium instance with the --no-sandbox argument, which disables the browser's sandbox that isolates renderer processes from the host OS. While disclosed in documentation as required for Docker/container environments, this significantly reduces the security boundary between browser-rendered content and the host system. The HTML template written to /tmp/diagram.html uses user-supplied component names via Python f-string interpolation without sanitization, creating a potential HTML injection surface that could be combined with the unsandboxed browser.
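The injection surface is straightforward to close. A sketch, with the node markup hypothetical rather than copied from the skill's template:

```python
import html

def render_node_unsafe(component: str) -> str:
    # Reported pattern: raw f-string interpolation of a user-supplied
    # component name into the HTML template.
    return f"<div class='node'>{component}</div>"

def render_node(component: str) -> str:
    # html.escape neutralizes markup before the template reaches the
    # unsandboxed browser, closing the injection surface.
    return f"<div class='node'>{html.escape(component)}</div>"
```

Escaping does not restore the sandbox, but it removes the cheapest path for rendered content to become active script in the first place.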
HIGH Credential canary files accessed post-installation with no attributable skill process -22
Auditd PATH records show read-only access (CLOSE_NOWRITE flag) to six credential canary files at timestamp 1771920699.780, approximately 24 seconds after the clone completed. Files accessed: .env, .ssh/id_rsa, .aws/credentials, .npmrc, .docker/config.json, and GCloud application_default_credentials.json. No EXECVE event in the auditd log is directly attributable to the skill's own code for these accesses. The canary integrity system reports the files as intact. The accesses likely represent the monitoring system's post-install canary check, but the timing and lack of clear attribution warrant disclosure.
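The kind of correlation done here can be sketched as a small log filter. The log format below is a deliberately simplified 'timestamp event path' line, not real auditd key=value records (which would be parsed via ausearch/aureport in practice):

```python
# Hypothetical simplified log lines, e.g.
# '1771920699.780 CLOSE_NOWRITE /home/u/.env'.
CANARIES = (".env", ".ssh/id_rsa", ".aws/credentials", ".npmrc",
            ".docker/config.json", "application_default_credentials.json")

def flag_canary_reads(log_lines):
    # Collect (timestamp, path) pairs for read-only closes of any
    # path ending in a known canary file name.
    hits = []
    for line in log_lines:
        parts = line.split()
        if len(parts) != 3:
            continue
        ts, event, path = parts
        if event == "CLOSE_NOWRITE" and any(path.endswith(c) for c in CANARIES):
            hits.append((float(ts), path))
    return hits
```

A single pass like this is enough to cluster the six accesses at one timestamp, which is what makes the "monitoring system's own check" explanation plausible here.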
MEDIUM Reads cross-skill credential store ~/.openclaw/openclaw.json -15
The _resolve_credentials() method opens and parses the entire ~/.openclaw/openclaw.json file, which contains credentials for all installed OpenClaw skills, not just bear-blog-publisher. While the code only extracts the bear-blog-publisher section, any vulnerability (or intentional malicious code path) in this skill could read and transmit all stored skill credentials in a single file access.
MEDIUM Auto-installs 100MB Chromium browser binary as part of skill setup -10
The package.json install configuration runs 'playwright install chromium', which downloads approximately 100MB of Chromium browser binary from Playwright's CDN. This is a large supply-chain addition that increases attack surface, runs automatically at install time, and introduces a dependency on Playwright's distribution infrastructure, even though the browser is only needed for the optional diagram generation feature.
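One mitigation is to defer the download until the optional feature is actually used. A sketch, where the 'python -m playwright install chromium' invocation is Playwright's documented CLI and the gating logic around it is illustrative:

```python
import subprocess
import sys

def ensure_chromium(enable_diagrams: bool) -> bool:
    # Move the ~100MB download out of install time: fetch Chromium only
    # when the diagram feature is first invoked. Returns True when an
    # install was attempted, False when it was skipped.
    if not enable_diagrams:
        return False  # no browser download for users who never diagram
    subprocess.run(
        [sys.executable, "-m", "playwright", "install", "chromium"],
        check=True)
    return True
```

This keeps the supply-chain surface opt-in rather than unconditional at install time.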
MEDIUM User blog content topics transmitted to third-party AI APIs -15
When AI content generation is used, the generate_content() method transmits user-specified topics, tone preferences, and length requirements to either OpenAI (api.openai.com) or Kimi (api.moonshot.cn). If users include sensitive business topics, internal project names, or other confidential subjects in their content requests, this data is transmitted to and potentially retained by third-party commercial AI services.
LOW Temporary diagram files not explicitly cleaned up -5
The generate_diagram() function writes /tmp/diagram.html and /tmp/diagram.png but never deletes them after use. These files persist across invocations and could contain sensitive architectural or system diagram information. Documentation acknowledges this and frames it as intentional (files are 'overwritten on each run') rather than as a cleanup gap.
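The cleanup gap is cheap to close with per-run temporary paths and explicit removal. A sketch, with the screenshot step stubbed out as a placeholder rather than a real Playwright call:

```python
import os
import tempfile

def render_to_temp(html_doc: str) -> bytes:
    # Per-run temp file plus explicit removal, instead of a fixed
    # /tmp/diagram.html that outlives each invocation (and, on
    # multi-user systems, sits in a world-readable directory).
    fd, html_path = tempfile.mkstemp(suffix=".html")
    try:
        with os.fdopen(fd, "w") as f:
            f.write(html_doc)
        png_bytes = b"png-placeholder"  # stand-in for page.screenshot()
        return png_bytes
    finally:
        os.remove(html_path)  # no diagram content left behind in /tmp
```

Unlike the "overwritten on each run" framing, this leaves nothing on disk between invocations.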