---
license: cc-by-4.0
language:
- en
- zh
viewer: true
configs:
- config_name: default
  data_files:
  - split: val
    path: data/**/*.jsonl
---
# ScienceMetaBench

[English](README.md) | [中文](README_ZH.md)

🤗 [HuggingFace Dataset](https://huggingface.co/datasets/opendatalab/ScienceMetaBench) | 💻 [GitHub Repository](https://github.com/DataEval/ScienceMetaBench)

**Acknowledgements**: 🔍 [Dingo](https://github.com/MigoXLab/dingo)

ScienceMetaBench is a benchmark dataset for evaluating the accuracy of metadata extraction from scientific literature PDF files. The dataset covers three major categories: academic papers, textbooks, and ebooks, and can be used to assess the performance of Large Language Models (LLMs) or other information extraction systems.

## 📊 Dataset Overview

### Data Types

This benchmark includes three types of scientific literature:

1. **Papers**
   - Mainly from academic journals and conferences
   - Contains academic metadata such as DOI, keywords, etc.

2. **Textbooks**
   - Formally published textbooks
   - Includes ISBN, publisher, and other publication information

3. **Ebooks**
   - Digitized historical documents and books
   - Covers multiple languages and disciplines

### Data Batches

This benchmark has been expanded in two rounds, each adding new samples:

```
data/
├── 20250806/          # First batch (August 6, 2025)
│   ├── ebook_0806.jsonl
│   ├── paper_0806.jsonl
│   └── textbook_0806.jsonl
└── 20251022/          # Second batch (October 22, 2025)
    ├── ebook_1022.jsonl
    ├── paper_1022.jsonl
    └── textbook_1022.jsonl
```

**Note**: The two batches of data complement each other to form a complete benchmark dataset. You can choose to use a single batch or merge them as needed.
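For illustration, here is a minimal loading sketch (assuming Python's standard `json` and `pathlib`; the batch directory names follow the layout above) that reads a single batch or merges both:

```python
import json
from pathlib import Path

def load_jsonl(path):
    """Read one JSONL file into a list of dicts (one JSON object per line)."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

def load_batches(data_dir="data", batches=("20250806", "20251022")):
    """Merge every *.jsonl file from the selected batch directories."""
    records = []
    for batch in batches:
        for jsonl_file in sorted(Path(data_dir, batch).glob("*.jsonl")):
            records.extend(load_jsonl(jsonl_file))
    return records

all_records = load_batches()                       # both batches merged
first_batch = load_batches(batches=("20250806",))  # first batch only
print(len(all_records), "records loaded")
```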

### PDF Files

The `pdf/` directory contains the original PDF files corresponding to the benchmark data, with a directory structure consistent with the `data/` directory.

**File Naming Convention**: All PDF files are named using their SHA256 hash values, in the format `{sha256}.pdf`. This naming scheme ensures file uniqueness and traceability, making it easy to locate the corresponding source file using the `sha256` field in the JSONL data.
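As a hedged example, the snippet below (standard library only) locates a record's PDF by searching for `{sha256}.pdf` under `pdf/` and re-checks the hash; the exact sub-directory layout is an assumption here, which is why the search is recursive:

```python
import hashlib
from pathlib import Path

def find_pdf(sha256, pdf_root="pdf"):
    """Locate {sha256}.pdf anywhere under pdf/ (recursive, since only the naming scheme is fixed)."""
    matches = list(Path(pdf_root).rglob(f"{sha256}.pdf"))
    return matches[0] if matches else None

def verify_pdf(record, pdf_root="pdf"):
    """Recompute the file's SHA256 and compare it with the record's sha256 field."""
    path = find_pdf(record["sha256"], pdf_root)
    return path is not None and hashlib.sha256(path.read_bytes()).hexdigest() == record["sha256"]
```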

## 📝 Data Format

All data files are in JSONL format (one JSON object per line).

### Academic Paper Fields

```json
{
  "sha256": "SHA256 hash of the file",
  "doi": "Digital Object Identifier",
  "title": "Paper title",
  "author": "Author name",
  "keyword": "Keywords (comma-separated)",
  "abstract": "Abstract content",
  "pub_time": "Publication year"
}
```

### Textbook/Ebook Fields

```json
{
  "sha256": "SHA256 hash of the file",
  "isbn": "International Standard Book Number",
  "title": "Book title",
  "author": "Author name",
  "abstract": "Introduction/abstract",
  "category": "Classification number (e.g., Chinese Library Classification)",
  "pub_time": "Publication year",
  "publisher": "Publisher"
}
```
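For convenience, the two record shapes can also be written down as Python `TypedDict`s; this is a sketch based on the field lists above, with every value assumed to be a plain string:

```python
from typing import TypedDict

class PaperRecord(TypedDict):
    sha256: str
    doi: str
    title: str
    author: str
    keyword: str   # comma-separated keywords
    abstract: str
    pub_time: str  # publication year

class BookRecord(TypedDict):
    # shared shape for textbook and ebook records
    sha256: str
    isbn: str
    title: str
    author: str
    abstract: str
    category: str  # classification number
    pub_time: str
    publisher: str
```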

## 📖 Data Examples

### Academic Paper Example

The following image shows an example of metadata fields extracted from an academic paper PDF:

![Academic Paper Example](images/paper_example.png)

As shown in the image, the following key information needs to be extracted from the paper's first page:
- **DOI**: Digital Object Identifier (e.g., `10.1186/s41038-017-0090-z`)
- **Title**: Paper title
- **Author**: Author name
- **Keyword**: List of keywords
- **Abstract**: Paper abstract
- **pub_time**: Publication time (usually the year)

### Textbook/Ebook Example

The following image shows an example of metadata fields extracted from the copyright page of a Chinese ebook PDF:

![Textbook Example](images/ebook_example.png)

As shown in the image, the following key information needs to be extracted from the book's copyright page:
- **ISBN**: International Standard Book Number (e.g., `978-7-5385-8594-0`)
- **Title**: Book title
- **Author**: Author/editor name
- **Publisher**: Publisher name
- **pub_time**: Publication time (year)
- **Category**: Book classification number
- **Abstract**: Content introduction (if available)

These examples illustrate the core task of the benchmark: accurately extracting structured metadata from PDF documents in varied formats and languages.

## 📊 Evaluation Metrics

### Core Evaluation Metrics

This benchmark uses a string-similarity-based evaluation method and reports two core metrics, described below.

### Similarity Calculation Rules

This benchmark uses a string similarity algorithm based on `SequenceMatcher`, with the following specific rules:

1. **Empty Value Handling**: One is empty and the other is not → similarity is 0
2. **Complete Match**: Both are identical (including both being empty) → similarity is 1
3. **Case Insensitive**: Convert to lowercase before comparison
4. **Sequence Matching**: Compute the similarity with `SequenceMatcher` (Ratcliff/Obershelp-style matching of contiguous blocks), giving a score in the range 0-1

**Similarity Score Interpretation**:
- `1.0`: Perfect match
- `0.8-0.99`: Highly similar (may have minor formatting differences)
- `0.5-0.79`: Partial match (extracted main information but incomplete)
- `0.0-0.49`: Low similarity (extraction result differs significantly from ground truth)
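Putting these rules together, a minimal sketch of the per-field similarity (using Python's `difflib.SequenceMatcher`, which the benchmark names above; this mirrors the stated rules rather than the exact code in `compare.py`):

```python
from difflib import SequenceMatcher

def field_similarity(pred, truth):
    """Score two field values in [0, 1] following the rules above."""
    pred, truth = pred or "", truth or ""
    if pred == truth:              # complete match, including both empty
        return 1.0
    if not pred or not truth:      # one value empty, the other not
        return 0.0
    # Case-insensitive sequence matching
    return SequenceMatcher(None, pred.lower(), truth.lower()).ratio()

print(field_similarity("Burns & Trauma", "burns & trauma"))  # 1.0 (case difference only)
print(field_similarity("", "10.1186/s41038-017-0090-z"))     # 0.0 (one side empty)
```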

#### 1. Field-level Accuracy

**Definition**: The average similarity score for each metadata field.

**Calculation Method**:
```
Field-level Accuracy = Σ(similarity of that field across all samples) / total number of samples
```

**Example**: When evaluating the `title` field over 100 samples, the field-level accuracy is the sum of the per-sample title similarities divided by 100.

**Use Cases**:
- Identify which fields the model performs well or poorly on
- Optimize extraction capabilities for specific fields
- For example, if `doi` accuracy is 0.95 but `abstract` accuracy is 0.75, abstract extraction needs improvement
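A compact sketch of this per-field aggregation (self-contained; `field_similarity` is re-stated from the similarity rules above, and samples are matched via the `sha256` identifier):

```python
from difflib import SequenceMatcher

def field_similarity(pred, truth):
    pred, truth = pred or "", truth or ""
    if pred == truth:
        return 1.0
    if not pred or not truth:
        return 0.0
    return SequenceMatcher(None, pred.lower(), truth.lower()).ratio()

def field_accuracy(llm_records, bench_records, field):
    """Average per-sample similarity of one metadata field (e.g. 'title')."""
    bench_by_id = {r["sha256"]: r for r in bench_records}
    sims = [field_similarity(r.get(field, ""), bench_by_id[r["sha256"]].get(field, ""))
            for r in llm_records if r["sha256"] in bench_by_id]
    return sum(sims) / len(sims) if sims else 0.0
```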

#### 2. Overall Accuracy

**Definition**: The average of all evaluated field accuracies, reflecting the model's overall performance.

**Calculation Method**:
```
Overall Accuracy = Σ(field-level accuracies) / total number of fields
```

**Example**: When evaluating 7 fields (isbn, title, author, abstract, category, pub_time, publisher), sum the 7 field-level accuracies and divide by 7.

**Use Cases**:
- Provide a single quantitative metric for overall model performance
- Facilitate horizontal comparison between different models or methods
- Serve as an overall objective for model optimization
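And the corresponding aggregation for the overall score, shown here with hypothetical field-level numbers:

```python
def overall_accuracy(field_accuracies):
    """Average the per-field accuracies into a single overall score."""
    return sum(field_accuracies.values()) / len(field_accuracies)

# Hypothetical field-level accuracies for the 7 book fields (illustration only)
key_accuracy = {"isbn": 0.98, "title": 0.93, "author": 0.90, "abstract": 0.78,
                "category": 0.85, "pub_time": 0.96, "publisher": 0.91}
print(round(overall_accuracy(key_accuracy), 3))  # 0.901
```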

### Using the Evaluation Script

`compare.py` provides a convenient evaluation interface:

```python
from compare import main, write_similarity_data_to_excel

# Define file paths and fields to compare
file_llm = 'data/llm-label_textbook.jsonl'      # LLM extraction results
file_bench = 'data/benchmark_textbook.jsonl'     # Benchmark data

# For textbooks/ebooks
key_list = ['isbn', 'title', 'author', 'abstract', 'category', 'pub_time', 'publisher']

# For academic papers
# key_list = ['doi', 'title', 'author', 'keyword', 'abstract', 'pub_time']

# Run evaluation and get metrics
accuracy, key_accuracy, detail_data = main(file_llm, file_bench, key_list)

# Output results to Excel (optional)
write_similarity_data_to_excel(key_list, detail_data, "similarity_analysis.xlsx")

# View evaluation metrics
print("Field-level Accuracy:", key_accuracy)
print("Overall Accuracy:", accuracy)
```

### Output Files

The script generates an Excel file containing detailed sample-by-sample analysis:

- `sha256`: File identifier
- For each field (e.g., `title`):
  - `llm_title`: LLM extraction result
  - `benchmark_title`: Benchmark data
  - `similarity_title`: Similarity score (0-1)
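For instance, a small follow-up sketch (assuming `pandas` with `openpyxl`, the listed dependencies, and the column naming above) that loads the generated Excel file and flags low-similarity titles:

```python
import pandas as pd

# Load the per-sample analysis written by write_similarity_data_to_excel
df = pd.read_excel("similarity_analysis.xlsx")

# Samples whose extracted title diverges strongly from the benchmark
low_title = df[df["similarity_title"] < 0.5]
print(low_title[["sha256", "llm_title", "benchmark_title", "similarity_title"]])
```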

## 📈 Statistics

### Data Scale

**First Batch (20250806)**:
- **Ebooks**: 70 records
- **Academic Papers**: 70 records
- **Textbooks**: 71 records
- **Subtotal**: 211 records

**Second Batch (20251022)**:
- **Ebooks**: 354 records
- **Academic Papers**: 399 records
- **Textbooks**: 46 records
- **Subtotal**: 799 records

**Total**: 1,010 benchmark records

The data covers multiple languages (English, Chinese, German, Greek, etc.) and multiple disciplines, with both batches together providing a rich and diverse set of test samples.

## 🎯 Application Scenarios

1. **LLM Performance Evaluation**: Assess the ability of large language models to extract metadata from PDFs
2. **Information Extraction System Testing**: Test the accuracy of OCR, document parsing, and other systems
3. **Model Fine-tuning**: Use as training or fine-tuning data to improve model information extraction capabilities
4. **Cross-lingual Capability Evaluation**: Evaluate the model's ability to process multilingual literature

## 🔬 Data Characteristics

- ✅ **Real Data**: Real metadata extracted from actual PDF files
- ✅ **Diversity**: Covers literature from different eras, languages, and disciplines
- ✅ **Challenging**: Includes ancient texts, non-English literature, complex layouts, and other difficult cases
- ✅ **Traceable**: Each record includes SHA256 hash and original path

## 📋 Dependencies

```text
pandas>=1.3.0
openpyxl>=3.0.0
```

Install dependencies:

```bash
pip install pandas openpyxl
```

## 🤝 Contributing

If you would like to:
- Report data errors
- Add new evaluation dimensions
- Expand the dataset

Please submit an Issue or Pull Request.

## 📧 Contact

If you have questions or suggestions, please contact us through Issues.

---

**Last Updated**: December 26, 2025