Is hushenglang/perplexity-research safe?

https://github.com/openclaw/skills/tree/main/skills/hushenglang/perplexity-research

Overall Score: 83 · Verdict: SAFE

The perplexity-research skill is a legitimate, well-structured Perplexity API client with no detected prompt injection patterns, no malicious code, and no evidence of credential harvesting or unauthorized file access during the monitored install. The primary risk surface is inherent to any third-party API skill: all user research queries are transmitted to external infrastructure, and the executable Python code creates a future update-based attack surface. Canary file integrity was maintained throughout and all network activity during install was attributable to expected GitHub and Ubuntu infrastructure.

Category Scores

Prompt Injection 88/100 · 30%
Data Exfiltration 72/100 · 25%
Code Execution 80/100 · 20%
Clone Behavior 92/100 · 10%
Canary Integrity 100/100 · 10%
Behavioral Reasoning 73/100 · 5%

Findings (7)

MEDIUM All user research queries transmitted to external Perplexity API -15

Every query executed through this skill is sent to Perplexity's external infrastructure. Any sensitive context the agent includes in research prompts — project names, internal data, personal information — will be processed by a third-party service. This is the declared and expected purpose of the skill, but users should be aware that query content exits the local environment.

MEDIUM Agent instructed to execute Python via sys.path injection pattern -20

SKILL.md instructs the agent to insert the skill's scripts directory into Python's module search path before importing PerplexityClient. This is standard Python practice for local modules, but it means a malicious update to perplexity_client.py would be silently executed by the agent on next use without any additional user approval.
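The pattern can be sketched as follows. This is a hedged illustration of the mechanism, not the skill's actual code: the directory path is hypothetical, and the key point is that whatever file sits at that path runs on the next import with no further approval step.

```python
import sys
from pathlib import Path

# Hypothetical install location for the skill's scripts/ directory;
# the real path depends on where the agent cloned the skill.
scripts_dir = Path("skills/hushenglang/perplexity-research/scripts")

# The pattern SKILL.md reportedly instructs: prepend the directory so a
# later `from perplexity_client import PerplexityClient` resolves to the
# skill-local file. If that file is replaced by a malicious update, the
# new code executes silently at import time.
sys.path.insert(0, str(scripts_dir))
```

Because the path is prepended, the skill's module shadows any same-named module elsewhere on the search path, which is exactly what makes an update-based swap effective.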

LOW compare_models() sends queries to three separate LLM providers simultaneously -8

The compare_models() method iterates through the OpenAI, Anthropic, and Google APIs in a single call, transmitting the user's query content to all three providers at once with no confirmation step. The agent may invoke it without surfacing this fan-out to the user whenever quality validation is needed.
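A minimal sketch of the fan-out behavior described above, with the real HTTP calls replaced by a recording stub. Only the method name and the three provider names come from the finding; the internals and `send_to_provider` helper are assumptions for illustration.

```python
sent = []  # records every (provider, query) pair that would leave the machine

def send_to_provider(provider: str, query: str) -> str:
    # Stand-in for a real API call to each provider.
    sent.append((provider, query))
    return f"{provider} answer"

def compare_models(query: str) -> dict:
    # One user query is transmitted to all three providers in a single
    # call, with no per-provider confirmation step.
    return {p: send_to_provider(p, query) for p in ("openai", "anthropic", "google")}

results = compare_models("internal project roadmap?")
print(len(sent))  # prints 3: the same query content exits to three providers
```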

LOW Geolocation leakage via search_query() location parameter -5

The search_query() method accepts an optional location dict containing latitude, longitude, city, region, and country. If an agent populates this from user context or device data, precise location information is transmitted to the Perplexity API.
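To make the leakage path concrete, here is a hedged sketch of how such a call might look. The field names (latitude, longitude, city, region, country) come from the finding; the request-payload structure and the `web_search_options`/`user_location` nesting are assumptions, not the skill's verified wire format.

```python
from typing import Optional

def search_query(query: str, location: Optional[dict] = None) -> dict:
    # Stand-in for the skill's request builder. If `location` is populated
    # from user context or device data, these fields travel to the API
    # alongside the query text.
    payload = {"query": query}
    if location:
        payload["web_search_options"] = {"user_location": location}
    return payload

payload = search_query(
    "coffee shops nearby",
    location={"latitude": 37.77, "longitude": -122.42,
              "city": "San Francisco", "region": "CA", "country": "US"},
)
```

An agent that defaults this parameter from device geolocation would transmit coordinates precise enough to identify a user's home or workplace.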

LOW API key credential stored in skill-local .env file -5

The skill recommends storing PERPLEXITY_API_KEY in a .env file within the skill's scripts/ directory. This creates a plaintext credential artifact at a predictable path that any other skill or process with filesystem read access could retrieve.
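The exposure is easy to demonstrate. This sketch reproduces the layout the finding describes (a plaintext .env under scripts/, here in a temporary directory with a fake key) and shows that any co-located process with filesystem read access can recover the credential.

```python
import tempfile
from pathlib import Path

# Hypothetical reproduction of the skill's layout: scripts/.env holding
# the API key in plaintext. A temp dir and fake key stand in for the
# real install path and credential.
skill_root = Path(tempfile.mkdtemp())
env_file = skill_root / "scripts" / ".env"
env_file.parent.mkdir(parents=True)
env_file.write_text("PERPLEXITY_API_KEY=pplx-example-not-a-real-key\n")

# Any other skill or process that can read the filesystem can do this:
leaked = env_file.read_text()
```

Storing the key in the user's shell environment or an OS keychain, rather than a file at a predictable skill-relative path, would narrow this surface.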

LOW No API spend controls; potential for runaway cost accumulation -15

The skill includes cost tracking (result['cost']) but no spending caps, rate limits, or alerts. An agent using this skill in an automated research loop could accumulate large API bills without triggering any warning. The research_query() default uses high reasoning effort, maximizing per-query cost.
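One mitigation an agent harness could apply is a budget wrapper around the client. This is a hypothetical sketch, not part of the skill: only the result['cost'] field and the research_query() method name come from the finding, and the stub client simulates a fixed per-call cost.

```python
class BudgetExceeded(RuntimeError):
    pass

class BudgetedClient:
    """Hypothetical wrapper adding the spend cap the skill lacks."""

    def __init__(self, client, cap_usd: float):
        self._client = client
        self._cap = cap_usd
        self.spent = 0.0

    def research_query(self, query: str) -> dict:
        if self.spent >= self._cap:
            raise BudgetExceeded(f"spend cap of ${self._cap:.2f} reached")
        result = self._client.research_query(query)
        # The skill reports per-call cost in result['cost']; accumulate it
        # so an automated research loop cannot run up an unbounded bill.
        self.spent += result["cost"]
        return result

class StubClient:
    # Stand-in for the real PerplexityClient with a fixed simulated cost.
    def research_query(self, query: str) -> dict:
        return {"answer": "...", "cost": 0.40}

client = BudgetedClient(StubClient(), cap_usd=1.00)
for _ in range(10):
    try:
        client.research_query("loop query")
    except BudgetExceeded:
        break
print(round(client.spent, 2))  # prints 1.2: the loop stops once the cap is hit
```

Note the cap is checked before each call, so the final query can still overshoot by one call's cost; a stricter design would estimate the next call's cost up front.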

INFO References non-existent default model identifier 0

The default model identifier 'openai/gpt-5.2' did not exist at the time of audit. This will cause 404 errors from the API in production but poses no security risk; it suggests the documentation was not validated against Perplexity's actual model listings.