Is enchograph/whu-campus safe?
https://github.com/openclaw/skills/tree/main/skills/enchograph/whu-campus
The whu-campus skill is a functionally legitimate campus news aggregator for Wuhan University that fetches WeChat public account content via the Sogou search engine. Its most notable issues are a mandatory error-suppression directive that prevents users from seeing failures or warnings, and a high-frequency external scraping pattern that sends date and institution context to a third-party Chinese search engine on each activation. No malicious code, data exfiltration instructions, or canary compromise was detected.
Findings (8)
MEDIUM Error and Output Suppression -12
The skill explicitly instructs the agent to suppress all error messages, warnings, and supplementary output. This prevents the user from learning about failed network requests, access denials, or unexpected content returned by the search engine. It also suppresses any agent-generated caution messages about the skill's behavior.
LOW Markdown Separator Override -5
The skill instructs the agent to replace the standard markdown horizontal rule '---' with '======'. While low risk on its own, this is an unnecessary formatting instruction that overrides default agent output behavior and could affect downstream parsing of agent responses.
LOW Undeclared Internal Tool Dependency -5
The skill requires the 'session_status' internal tool to be present in the agent environment in order to obtain the current date, but does not declare this dependency. If the tool is unavailable, the skill's behavior is undefined, and the resulting errors would be silently suppressed.
LOW High-Volume External Queries to Third-Party Search Engine -10
The skill directs the agent to make 15-20+ distinct search requests to weixin.sogou.com per activation, each containing date context (year/month/day) and institution identifiers. These requests expose the agent's execution context and IP to a third-party Chinese search platform.
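To make the exposure concrete, here is a minimal sketch of the kind of query fan-out the finding describes. The function name, parameter names, and query format are hypothetical (the report does not quote the skill's actual queries); what it illustrates is how each activation multiplies institution identifiers against category keywords, with the current date embedded in every request sent to weixin.sogou.com:

```python
from datetime import date
from urllib.parse import urlencode

def build_queries(institutions, categories, today=None):
    """Hypothetical reconstruction: build one Sogou WeChat search URL per
    (institution, category) pair, embedding date context in each query."""
    today = today or date.today()
    urls = []
    for inst in institutions:
        for cat in categories:
            params = urlencode({"type": 2, "query": f"{inst} {cat} {today:%Y-%m}"})
            urls.append(f"https://weixin.sogou.com/weixin?{params}")
    return urls

# One institution crossed with a handful of category keywords already
# yields a double-digit request count per activation.
urls = build_queries(["武汉大学"], ["通知", "讲座", "新闻", "公告", "活动"])
```

Even this toy version makes five distinct requests for a single institution; with multiple keyword variants per category, 15-20+ requests per activation follows directly from the combinatorics.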
LOW Automated High-Frequency Web Scraping Pattern -10
The skill is designed to perform bulk scraping of a third-party search engine across multiple categories and keyword variants on every invocation. This could trigger rate limiting, result in the agent's IP being flagged, or be leveraged in combination with other skills to automate more aggressive scraping.
LOW Silent Failure Mode -8
The combination of high-frequency external requests with mandatory output suppression means users will never see partial failures, rate-limit responses, or unexpected content from the search engine. The skill could return stale or empty results with no indication of failure.
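The interaction between bulk fetching and output suppression can be sketched with a small batch-fetch wrapper. This is illustrative only (the skill runs as agent instructions, not as this code, and the helper name is invented): the point is that a `suppress_errors` switch decides whether partial failures reach the user or vanish.

```python
import urllib.error
import urllib.request

def fetch_all(urls, suppress_errors=False):
    """Fetch a batch of URLs; either surface or discard partial failures.

    Illustrative sketch of the finding: with suppress_errors=True the caller
    gets whatever succeeded and no signal that anything failed, which is the
    behavior the skill's suppression directive forces on the agent.
    """
    results, failures = [], []
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                results.append((url, resp.status))
        except (urllib.error.URLError, TimeoutError) as exc:
            failures.append((url, exc))
    if failures and not suppress_errors:
        # Surfacing this lets the user notice rate limiting, blocks, or
        # stale/empty results; suppressing it hides all of that.
        raise RuntimeError(f"{len(failures)} of {len(urls)} requests failed")
    return results
```

With `suppress_errors=True`, a run where every request was rate-limited returns an empty list indistinguishable from "no news today", which is exactly the failure mode described above.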
INFO Clean Installation — GitHub Only -10
The skill installation contacted only github.com (140.82.121.3:443) as expected for a monorepo sparse checkout. No unexpected outbound connections, process spawning, or filesystem changes outside the skill directory were observed.
INFO Canary File Accesses Attributed to Audit Infrastructure 0
File open/access events on .env, .ssh/id_rsa, .aws/credentials, and other credential files were observed, but these occurred at timestamp 1771907597 (04:33:17), approximately 6 seconds before the git clone began at 1771907603 (04:33:23). The post-install accesses at 1771907615 are also consistent with the audit framework's post-install canary verification sweep. All canary files remain unmodified.