---
dataset_info:
  features:
  - name: task_id
    dtype: string
  - name: query
    dtype: string
  - name: question
    dtype: string
  - name: answer
    dtype: string
  splits:
  - name: test
    num_bytes: 5184523
    num_examples: 76
  download_size: 1660815
  dataset_size: 5184523
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
license: apache-2.0
language:
- en
- zh
- ja
- es
- el
tags:
- finance
- multilingual
pretty_name: PolyFiQA-Expert
size_categories:
- n<1K
task_categories:
- question-answering
---
# Dataset Card for PolyFiQA-Expert
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://huggingface.co/collections/TheFinAI/multifinben-6826f6fc4bc13d8af4fab223
- **Repository:** https://huggingface.co/datasets/TheFinAI/polyfiqa-expert
- **Paper:** MultiFinBen: A Multilingual, Multimodal, and Difficulty-Aware Benchmark for Financial LLM Evaluation
- **Leaderboard:** https://huggingface.co/spaces/TheFinAI/Open-FinLLM-Leaderboard
### Dataset Summary
**PolyFiQA-Expert** is a multilingual financial question-answering dataset designed to evaluate expert-level financial reasoning in low-resource and multilingual settings. Each instance consists of a task identifier, a query prompt, an associated financial question, and the correct answer. The Expert split emphasizes complex, high-level financial understanding, requiring deeper domain knowledge and nuanced reasoning.
### Supported Tasks and Leaderboards
- **Tasks:**
- question-answering
- **Evaluation Metrics:**
- ROUGE-1
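In practice, ROUGE-1 is usually computed with an off-the-shelf package such as `rouge-score`; the following is only a minimal pure-Python sketch of the unigram-F1 variant, to make the metric concrete:

```python
from collections import Counter

def rouge1_f1(reference: str, candidate: str) -> float:
    """Minimal ROUGE-1 F1: unigram overlap between reference and candidate.

    Illustrative only; it whitespace-tokenizes and lowercases, whereas
    production scorers apply their own tokenization and stemming.
    """
    ref_counts = Counter(reference.lower().split())
    cand_counts = Counter(candidate.lower().split())
    # Overlap = clipped unigram matches (min count per shared token).
    overlap = sum((ref_counts & cand_counts).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand_counts.values())
    recall = overlap / sum(ref_counts.values())
    return 2 * precision * recall / (precision + recall)

print(rouge1_f1("the net income rose", "net income rose sharply"))  # 0.75
```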
### Languages
- English (en)
- Chinese (zh)
- Japanese (ja)
- Spanish (es)
- Greek (el)
## Dataset Structure
### Data Instances
Each instance in the dataset contains:
- `task_id`: A unique identifier for the query-task pair.
- `query`: A brief query statement from the financial domain.
- `question`: The full question posed based on the query context.
- `answer`: The correct answer string.
### Data Fields
| Field | Type | Description |
|-----------|--------|----------------------------------------------|
| task_id | string | Unique ID per task |
| query | string | Financial query (short form) |
| question | string | Full natural-language financial question |
| answer | string | Ground-truth answer to the question |
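Each record is a flat mapping over these four string fields. The sketch below validates a record against that schema; the field values shown are illustrative placeholders, not real examples from the dataset:

```python
# Real usage (requires network access and the `datasets` library):
#   from datasets import load_dataset
#   ds = load_dataset("TheFinAI/polyfiqa-expert", split="test")
#   example = ds[0]
#
# The placeholder record below mirrors the schema documented in this card.
SCHEMA = ("task_id", "query", "question", "answer")

example = {
    "task_id": "expert-0001",          # hypothetical id format
    "query": "Summarize the filing.",  # short financial prompt
    "question": "Given the excerpt, what drove the margin change?",
    "answer": "Lower input costs drove the margin improvement.",
}

assert tuple(example) == SCHEMA
assert all(isinstance(v, str) for v in example.values())
print("schema ok")
```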
### Data Splits
| Split | # Examples | Size (bytes) |
|-------|------------|--------------|
| test | 76 | 5,184,523 |
## Dataset Creation
### Curation Rationale
PolyFiQA-Expert was curated to probe the financial reasoning capabilities of large language models under expert-level scenarios.
### Source Data
#### Initial Data Collection
The source data was derived from a diverse collection of English financial reports. Questions were grounded in real-world financial scenarios and manually adapted to a concise QA format.
#### Source Producers
Data was created by researchers and annotators with backgrounds in finance, NLP, and data curation.
### Annotations
#### Annotation Process
Questions and answers were carefully authored and validated through a multi-round expert annotation process to ensure fidelity and depth.
#### Annotators
A team of finance researchers and data scientists.
### Personal and Sensitive Information
The dataset contains no personal or sensitive information. All content is synthetic or anonymized for safe usage.
## Considerations for Using the Data
### Social Impact of Dataset
PolyFiQA-Expert supports research in multilingual financial QA, with applications in risk analysis, regulatory auditing, and financial advising tools.
### Discussion of Biases
- May over-represent English financial contexts.
- Questions emphasize clarity and answerability over real-world ambiguity.
### Other Known Limitations
- Limited size (76 examples).
- Focused on expert-level questions; results may not reflect performance on simpler, everyday financial queries.
## Additional Information
### Dataset Curators
- The FinAI Team
### Licensing Information
- **License:** Apache License 2.0
### Citation Information
If you use this dataset, please cite:
```bibtex
@misc{peng2025multifinbenmultilingualmultimodaldifficultyaware,
title={MultiFinBen: A Multilingual, Multimodal, and Difficulty-Aware Benchmark for Financial LLM Evaluation},
author={Xueqing Peng and Lingfei Qian and Yan Wang and Ruoyu Xiang and Yueru He and Yang Ren and Mingyang Jiang and Jeff Zhao and Huan He and Yi Han and Yun Feng and Yuechen Jiang and Yupeng Cao and Haohang Li and Yangyang Yu and Xiaoyu Wang and Penglei Gao and Shengyuan Lin and Keyi Wang and Shanshan Yang and Yilun Zhao and Zhiwei Liu and Peng Lu and Jerry Huang and Suyuchen Wang and Triantafillos Papadopoulos and Polydoros Giannouris and Efstathia Soufleri and Nuo Chen and Guojun Xiong and Zhiyang Deng and Yijia Zhao and Mingquan Lin and Meikang Qiu and Kaleb E Smith and Arman Cohan and Xiao-Yang Liu and Jimin Huang and Alejandro Lopez-Lira and Xi Chen and Junichi Tsujii and Jian-Yun Nie and Sophia Ananiadou and Qianqian Xie},
year={2025},
eprint={2506.14028},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2506.14028},
}
```