Tasks: Token Classification · Sub-tasks: named-entity-recognition · Modalities: Text · Formats: parquet · Languages: English · Size: < 1K
Update README.md
README.md CHANGED
@@ -23,3 +23,72 @@ task_ids:
- multi-label-classification
- open-domain-qa
---

# CaseReportBench: Clinical Dense Extraction Benchmark

**CaseReportBench** is a curated benchmark dataset designed to evaluate how well large language models (LLMs) can perform **dense information extraction** from **clinical case reports**, with a focus on **rare disease diagnosis**.

It supports fine-grained, system-level phenotype extraction and structured diagnostic reasoning, enabling model evaluation in real-world medical decision-making contexts.

---

## 🔔 Note

This dataset accompanies our upcoming publication:

> **Zhang et al. CaseReportBench: An LLM Benchmark Dataset for Dense Information Extraction in Clinical Case Reports.**
> *To appear in the Proceedings of the Conference on Health, Inference, and Learning (CHIL 2025), PMLR.*

The official PMLR citation and link will be added upon publication.

---

## 🧾 Key Features

- **Expert-annotated**, system-wise phenotypic labels mimicking clinical assessments
- Based on real-world **PubMed Central-indexed clinical case reports**
- Format: JSON with structured head-to-toe organ-system outputs
- Designed for: biomedical NLP, information extraction (IE), rare disease reasoning, and LLM benchmarking
- Metrics include: Token Selection Rate, Levenshtein Similarity, and Exact Match (see the sketch after this list)
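
As a rough illustration of these metrics, the sketch below implements plausible versions in plain Python. The official evaluation protocol (tokenization, normalization, and the precise definition of Token Selection Rate) is not spelled out on this card, so every definition here is an assumption for illustration, not the official scorer.

```python
# Plausible, assumed implementations of the three metrics named above.

def exact_match(pred: str, gold: str) -> float:
    """1.0 if the strings match after trivial normalization, else 0.0."""
    return float(pred.strip().lower() == gold.strip().lower())


def levenshtein_similarity(pred: str, gold: str) -> float:
    """1 - (edit distance / length of the longer string), in [0, 1]."""
    m, n = len(pred), len(gold)
    if max(m, n) == 0:
        return 1.0
    # Classic dynamic-programming edit distance, kept to one row at a time.
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if pred[i - 1] == gold[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return 1.0 - prev[n] / max(m, n)


def token_selection_rate(pred: str, gold: str) -> float:
    """Assumed definition: fraction of gold tokens recovered by the prediction."""
    gold_tokens = set(gold.lower().split())
    pred_tokens = set(pred.lower().split())
    return len(gold_tokens & pred_tokens) / max(len(gold_tokens), 1)


print(exact_match("Seizures", "seizures"))                                     # 1.0
print(round(levenshtein_similarity("hepatomegaly", "hepatosplenomegaly"), 2))  # 0.67
print(token_selection_rate("seizures and ataxia", "ataxia seizures"))          # 1.0
```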

---

## Dataset Structure

Each record includes the following fields (an illustrative record follows the list):

- `id`: Unique document ID
- `text`: Full raw case report
- `extracted_labels`: System-organized dense annotations (e.g., neuro, heme, derm)
- `diagnosis`: Final confirmed diagnosis (an Inborn Error of Metabolism)
- `source`: PubMed ID or citation
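
For orientation, a record might look roughly like the following. All values are invented for illustration; consult the data itself for the exact schema, especially the nesting of `extracted_labels`.

```python
# Hypothetical record, sketching the fields listed above (values invented).
record = {
    "id": "PMC0000000",
    "text": "A 3-year-old boy presented with recurrent vomiting and lethargy...",
    "extracted_labels": {
        "neuro": ["lethargy", "hypotonia"],
        "heme": [],
        "derm": ["eczematous rash"],
    },
    "diagnosis": "Methylmalonic acidemia",
    "source": "PMID: 00000000",
}
```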

---

## Usage

```python
from datasets import load_dataset

# Download the benchmark from the Hugging Face Hub.
ds = load_dataset("cxyzhang/caseReportBench_ClinicalDenseExtraction_Benchmark")

# Inspect the first record.
print(ds["train"][0])
```
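
Building on the snippet above, a quick sanity check might tally which organ systems carry annotations. This is a sketch only: it assumes `extracted_labels` is a mapping from organ system to findings, as described under Dataset Structure, and guards for the case where the field is stored as a JSON string.

```python
import json
from collections import Counter

from datasets import load_dataset

ds = load_dataset("cxyzhang/caseReportBench_ClinicalDenseExtraction_Benchmark")

# Count how many records have at least one finding per organ system.
system_counts = Counter()
for record in ds["train"]:
    labels = record["extracted_labels"]
    if isinstance(labels, str):  # field may be serialized as a JSON string
        labels = json.loads(labels)
    for system, findings in labels.items():
        if findings:
            system_counts[system] += 1

print(system_counts.most_common())
```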

## Citation

```bibtex
@inproceedings{zhang2025casereportbench,
  title     = {CaseReportBench: An LLM Benchmark Dataset for Dense Information Extraction in Clinical Case Reports},
  author    = {Zhang, Cindy and Others},
  booktitle = {Proceedings of the Conference on Health, Inference, and Learning (CHIL)},
  series    = {Proceedings of Machine Learning Research},
  volume    = {vX}, % Update when available
  year      = {2025},
  publisher = {PMLR},
  note      = {To appear}
}
```