Spaces:
Running
Running
File size: 1,827 Bytes
605b3ec |
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 |
"""
This file contains the text content for the leaderboard client.
"""
HEADER_MARKDOWN = """
# EMMA JSALT25 Benchmark – Multi-Talker ASR Evaluation
Welcome to the official leaderboard for benchmarking **multi-talker ASR systems**, hosted by the **EMMA JSALT25 team**. This platform enables model submissions, comparisons, and evaluation on challenging multi-speaker scenarios.
"""
LEADERBOARD_TAB_TITLE_MARKDOWN = """
## Leaderboard
Below you’ll find the latest results submitted to the benchmark. Models are evaluated using **`meeteval`** with **TCP-WER (collar=5s)**.
"""
SUBMISSION_TAB_TITLE_MARKDOWN = """
## Submit Your Model
To submit your MT-ASR hypothesis to the benchmark, complete the form below:
- **Submitted by**: Your name or team identifier.
- **Model ID**: A unique identifier for your submission (used to track models on the leaderboard).
- **Hypothesis File**: Upload a **SegLST `.json` file** that includes **all segments across datasets** in a single list.
- **Task**: Choose the evaluation task (e.g., single-channel ground-truth diarization).
- **Datasets**: Select one or more datasets you wish to evaluate on.
📩 To enable submission, please [email the EMMA team](mailto:[email protected]) to receive a **submission token**.
After clicking **Submit**, your model will be evaluated and results displayed in the leaderboard.
"""
RANKING_AFTER_SUBMISSION_MARKDOWN = """
📊 Below is how your model compares after evaluation:
"""
SUBMISSION_DETAILS_MARKDOWN = """
⚠️ Are you sure you want to finalize your submission? This action is **irreversible** once submitted.
"""
MORE_DETAILS_MARKDOWN = """
## Model Metadata:
Detailed information about the selected submission.
"""
MODAL_SUBMIT_MARKDOWN = """
✅ Confirm Submission
Are you ready to submit your model for evaluation?
"""
|