evaluating-llms-harness - AI Research Skills
Description
Evaluates LLMs across 60+ academic benchmarks (MMLU, HumanEval, GSM8K, TruthfulQA, HellaSwag). Use it when benchmarking model quality, comparing models, reporting academic results, or tracking training progress. An industry standard maintained by EleutherAI and used by HuggingFace and major labs. Supports HuggingFace Transformers, vLLM, and API-based backends.
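A minimal invocation sketch, assuming the lm-evaluation-harness CLI (`lm_eval`) is installed via `pip install lm-eval`; the model and task names below are illustrative choices, not recommendations:

```shell
# Run two benchmarks against a HuggingFace model.
# EleutherAI/pythia-160m is a hypothetical example model;
# swap in the checkpoint you want to evaluate.
lm_eval \
  --model hf \
  --model_args pretrained=EleutherAI/pythia-160m \
  --tasks hellaswag,gsm8k \
  --batch_size 8 \
  --output_path results/
```

For large models, passing `--model vllm` selects the vLLM backend instead of plain HuggingFace inference; the rest of the flags stay the same.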
Type
Skill
Ecosystem
Cross-platform
Trust Score
85%
Related Skills
Stable Diffusion WebUI
Feature-rich web interface for Stable Diffusion image generation by AUTOMATIC1111.
LangChain
Comprehensive framework for building LLM-powered applications with chains, agents, and retrieval.
LobeChat
Modern, extensible AI chat framework with plugin ecosystem and multi-model support.
Open WebUI
Self-hosted web UI for LLMs with multi-model support, RAG, and plugin system.