Technology Radar
Assess

7,000+ AI-evaluated skills with quality ratings — a large catalog if you trust the scoring.

Why It Matters

skillhub.club offers a broad catalog of skills for Claude, Codex, Gemini, and OpenCode with community-driven quality ratings. The twist: quality scores are AI-evaluated. That's efficient for scale but introduces a "who watches the watchers" problem — the ratings are only as good as the evaluation methodology behind them.
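
It is worth being concrete about what an AI-driven quality score typically involves, because the rating rests entirely on choices like the rubric, the prompt, and how per-dimension scores collapse into one number. The sketch below is purely hypothetical: skillhub.club has not published its methodology, so every rubric dimension, prompt phrasing, and aggregation rule shown here is an assumption used only to illustrate where such a pipeline depends on judgment calls.

```python
# Hypothetical sketch of a rubric-based LLM evaluation pipeline.
# Nothing here reflects skillhub.club's actual (unpublished) methodology;
# the rubric, prompt wording, and aggregation are illustrative assumptions.
import json

RUBRIC = {
    "clarity": "Are the instructions unambiguous and scoped to one task?",
    "safety": "Does the skill avoid secrets, destructive commands, or exfiltration?",
    "completeness": "Does it specify inputs, outputs, and failure handling?",
}

def build_eval_prompt(skill_text: str) -> str:
    """Assemble a single grading prompt covering every rubric dimension."""
    questions = "\n".join(f"- {name}: {question}" for name, question in RUBRIC.items())
    return (
        "Score the following agent skill from 1-5 on each dimension and "
        'reply as JSON, e.g. {"clarity": 4, ...}.\n\n'
        f"Dimensions:\n{questions}\n\nSkill:\n{skill_text}"
    )

def aggregate(scores: dict[str, int]) -> float:
    """Collapse per-dimension scores into the single number a catalog displays."""
    return round(sum(scores.values()) / len(scores), 1)

if __name__ == "__main__":
    prompt = build_eval_prompt("Summarise a GitHub issue thread into action items.")
    # A real pipeline would send `prompt` to an LLM; a canned reply keeps this runnable.
    reply = '{"clarity": 4, "safety": 5, "completeness": 3}'
    print(aggregate(json.loads(reply)))  # 4.0
```

Each of those choices — which dimensions exist, how the prompt is phrased, how scores are averaged — shifts the final rating, which is why an opaque methodology is the central concern here.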

Strengths

  • Large catalog (7,000+ skills) spanning multiple agent platforms
  • Quality rating system helps surface better skills from the noise
  • Community-driven contributions keep the catalog growing

Limitations

  • AI-evaluated quality scores lack transparency — methodology is a black box
  • No security scanning mentioned; quality ratings are not security audits (a minimal local spot-check is sketched after this list)
  • Assess rating: promising scale, but evaluate the evaluation before relying on it
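
Because the catalog's ratings say nothing about security, a sensible habit is to spot-check a skill locally before installing it. The sketch below assumes a single markdown skill file; the SKILL.md filename and the pattern list are assumptions rather than any platform's tooling, and it only flags obvious red flags — it is not a substitute for a real audit.

```python
# Minimal local spot-check for a downloaded skill file.
# The SKILL.md convention and the pattern list are illustrative assumptions;
# adjust both for your agent platform and threat model. Not a security audit.
import re
import sys
from pathlib import Path

SUSPICIOUS = [
    (r"curl\s+[^|]*\|\s*(ba)?sh", "pipes a remote script straight into a shell"),
    (r"rm\s+-rf\s+/", "contains a destructive filesystem command"),
    (r"(?i)(api[_-]?key|access[_-]?token|password)\s*[:=]", "embeds what looks like a credential"),
    (r"(?i)ignore (all )?previous instructions", "classic prompt-injection phrasing"),
]

def spot_check(path: Path) -> list[str]:
    """Return human-readable findings for one skill file."""
    text = path.read_text(encoding="utf-8", errors="replace")
    return [reason for pattern, reason in SUSPICIOUS if re.search(pattern, text)]

if __name__ == "__main__":
    target = Path(sys.argv[1] if len(sys.argv) > 1 else "SKILL.md")
    findings = spot_check(target)
    if findings:
        print(f"{target}: review before installing")
        for finding in findings:
            print(f"  - {finding}")
    else:
        print(f"{target}: no obvious red flags (still not an audit)")
```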

Risks

  • AI evaluating AI-generated skills is circular: LLMs rating LLM-written instructions share the same well-documented blind spots
  • Quality scores without transparent methodology can mislead developers into trusting dangerous skills
  • 7,000+ skills sounds large, but many are scraped duplicates with minor variations
  • No organizational backing or funding model visible — longevity is a real question