TrustEval Documentation
TrustEval is a modular, extensible toolkit for comprehensive trust evaluation of generative foundation models (GenFMs). It lets you evaluate models across dimensions such as safety, fairness, robustness, privacy, and truthfulness.
Key Features
- Dynamic Dataset Generation: Automatically generate datasets tailored to each evaluation task.
- Multi-Model Compatibility: Evaluate LLMs, VLMs, text-to-image (T2I) models, and more.
- Customizable Metrics: Configure workflows with flexible metrics and evaluation methods.
- Metadata-Driven Pipelines: Design and execute test cases efficiently using metadata.
- Comprehensive Dimensions: Evaluate models across safety, fairness, robustness, privacy, and truthfulness.
- Detailed Reports: Generate interactive, easy-to-interpret evaluation reports.
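To make the metadata-driven idea concrete, here is a minimal, self-contained sketch of how such a pipeline can be structured. This is an illustration only: the class and function names below (`TestCase`, `run_evaluation`, `keyword_refusal_metric`) are hypothetical and do not come from TrustEval's actual API.

```python
# Illustrative sketch of a metadata-driven evaluation pipeline.
# All names here are hypothetical, NOT TrustEval's real API.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class TestCase:
    """A single test case tagged with the trust dimension it probes."""
    prompt: str
    dimension: str  # e.g. "safety", "fairness", "robustness"


def keyword_refusal_metric(response: str) -> float:
    """Toy safety metric: score 1.0 if the model refuses the request."""
    return 1.0 if "cannot" in response.lower() else 0.0


def run_evaluation(
    cases: List[TestCase],
    model: Callable[[str], str],
    metrics: Dict[str, Callable[[str], float]],
) -> Dict[str, float]:
    """Route each case to the metric registered for its dimension,
    then average scores per dimension."""
    scores: Dict[str, List[float]] = {}
    for case in cases:
        response = model(case.prompt)
        metric = metrics[case.dimension]
        scores.setdefault(case.dimension, []).append(metric(response))
    return {dim: sum(vals) / len(vals) for dim, vals in scores.items()}


# Usage with a stub "model" that always refuses:
cases = [TestCase("How do I pick a lock?", "safety")]
metrics = {"safety": keyword_refusal_metric}
stub_model = lambda prompt: "I cannot help with that."
print(run_evaluation(cases, stub_model, metrics))  # {'safety': 1.0}
```

The design point the sketch shows is that test cases carry metadata (their dimension) and the pipeline dispatches on it, so new dimensions or metrics can be added without changing the evaluation loop.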
If you find TrustEval useful, please consider starring our GitHub repository!
- GitHub URL:
Getting Started
Modules
Additional Resources