Add the zrc2019 benchmark #23

@nhamilakis

Description

The zrc2019 benchmark needs to be adapted, as the original benchmark relied on human judges for the evaluations.

The idea is to replace the human evaluations with an ASR system. The proposed model is Whisper, since it is open source, can be added as a Python dependency, and supports Indonesian, the surprise language used in the 2019 benchmark.
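As a rough illustration, the Whisper-based replacement for human evaluation could look like the sketch below. It assumes the openai-whisper package; the function names and the character-error-rate scoring are illustrative assumptions, not a settled design.

```python
# Hypothetical sketch: scoring synthesized audio with Whisper instead of human judges.
# Assumes the openai-whisper package; function and file names are illustrative only.
import whisper


def character_error_rate(reference: str, hypothesis: str) -> float:
    """Character-level Levenshtein distance, normalised by reference length."""
    ref, hyp = reference.replace(" ", ""), hypothesis.replace(" ", "")
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        curr = [i]
        for j, h in enumerate(hyp, start=1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (r != h)))  # substitution
        prev = curr
    return prev[-1] / max(len(ref), 1)


def evaluate_intelligibility(wav_path: str, gold_transcript: str,
                             model_name: str = "base") -> float:
    """Transcribe a synthesized utterance and compare it with the gold transcript."""
    model = whisper.load_model(model_name)
    # language="id" forces Indonesian decoding for the surprise-language subset
    result = model.transcribe(wav_path, language="id")
    return character_error_rate(gold_transcript.lower(), result["text"].lower())
```

A lower character error rate would indicate more intelligible synthesis, which is the role the human transcription judgments played in the original 2019 evaluation.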

Tasks to do:

  • Gather all the metrics that were used.
  • Create a Python package named tts019-benchmark
    • Determine which module and .item files to use for the ABX metric
    • Integrate a module that uses Whisper for the audio quality evaluation
  • Make the necessary integrations (a possible class layout is sketched after this list)
    • Benchmark class (with subtasks)
    • Parameters class
    • Submission / Validation class
    • ScoreDir / Leaderboard class
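
A minimal sketch of how the integration classes listed above could fit together, assuming a dataclass-style layout; the concrete names (TTS019Parameters, TTS019Submission, TTS019Benchmark) and their fields are assumptions for illustration, not the final API.

```python
# Hypothetical layout for the integration classes listed above; the class names,
# fields, and run() internals are illustrative, not the final API.
from dataclasses import dataclass, field
from pathlib import Path


@dataclass
class TTS019Parameters:
    """User-tunable settings for a benchmark run."""
    whisper_model: str = "base"
    language: str = "id"  # Indonesian, the 2019 surprise language


@dataclass
class TTS019Submission:
    """Wraps a participant's synthesized audio and checks that it is well-formed."""
    location: Path

    def validate(self) -> bool:
        # e.g. check that the submission directory exists and contains audio files
        return self.location.exists() and any(self.location.glob("**/*.wav"))


@dataclass
class TTS019Benchmark:
    """Top-level benchmark: runs the ABX and Whisper-based subtasks."""
    parameters: TTS019Parameters = field(default_factory=TTS019Parameters)

    def run(self, submission: TTS019Submission, output_dir: Path) -> None:
        if not submission.validate():
            raise ValueError(f"Invalid submission at {submission.location}")
        output_dir.mkdir(parents=True, exist_ok=True)
        # Subtask 1: ABX discriminability on the unit encodings
        # Subtask 2: Whisper transcription + character error rate on the audio
        # ... write per-subtask scores into output_dir (ScoreDir / Leaderboard step)
```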

Labels
enhancement (New feature or request)
