Firefox-AI/chat-eval
Chatbot evaluation scripts

This repository can be used to test chatbots' suitability for use as browser assistants. We evaluate responses along the following dimensions:

  • Tool-Call Accuracy: how well can the model call tools? Are the calls formatted correctly and appropriate to the situation?
  • Browser-Context Awareness: can the chatbot correctly track which tab is active and respect retrieved history/content?
  • Assistant Usefulness: can the chatbot ultimately assist the user with the task at hand?
  • Preference Adherence: does the chatbot respect the user's given preferences when available?
  • Response Conciseness: is the chatbot overly wordy?
  • Knowledge: is the chatbot able to answer basic knowledge questions without resorting to outside tools (e.g. web search)?
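As a minimal sketch of the first dimension, a tool-call format check might look like the following. The function name, call structure (OpenAI-style `name`/`arguments` fields), and tool names are assumptions for illustration, not the repo's actual scoring code:

```python
import json

def is_well_formed_tool_call(call: dict, known_tools: set[str]) -> bool:
    """Return True if the call names a known tool and carries JSON-object arguments."""
    name = call.get("name")
    if name not in known_tools:
        return False
    try:
        # Arguments are expected as a JSON-encoded string, per OpenAI-style tool calls.
        args = json.loads(call.get("arguments", ""))
    except json.JSONDecodeError:
        return False
    return isinstance(args, dict)
```

A check like this covers only the "formatted correctly" half of the dimension; whether the call is appropriate to the situation still needs a judge model or labeled data.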

Usage

  1. Clone the repo.
  2. Ensure the necessary API keys are available in the environment (e.g. the default environment variable for OpenAI APIs is OPENAI_API_KEY).
  3. Install dependencies: uv sync
  4. Run the script: uv run python run_eval.py --model <model-provider:model_id>
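The steps above can be run as the following shell commands. The repository URL is inferred from the repo name, and the model id is an illustrative example of the `<model-provider:model_id>` format:

```shell
# Clone the repo (URL assumed from the repository name above)
git clone https://github.com/Firefox-AI/chat-eval.git
cd chat-eval

# Make the required API key available (OpenAI shown as an example)
export OPENAI_API_KEY="sk-..."

# Install dependencies and run the evaluation
uv sync
uv run python run_eval.py --model openai:gpt-4o
```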

Datasets

The data lives in the Mozilla HF repo. It is downloaded automatically by the run_eval.py script, but can also be downloaded directly from the hub.
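For a manual download, a sketch using the Hugging Face `datasets` library follows. The dataset id below is a placeholder, since the exact repo id on the Hub is not stated here:

```python
from datasets import load_dataset

# Placeholder id -- substitute the actual Mozilla dataset repo on the Hub.
DATASET_ID = "mozilla/<dataset-name>"

# Downloads and caches the dataset locally (same mechanism run_eval.py relies on).
ds = load_dataset(DATASET_ID)
print(ds)
```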

About

Repository for running chat-LLM evaluation
