---
license: mit
language:
- en
tags:
- multimodal
- image-text
---

# OPEN-PMC

<div align="center">
<img src="https://github.com/VectorInstitute/pmc-data-extraction/blob/0a969136344a07267bb558d01f3fe76b36b93e1a/media/open-pmc-pipeline.png?raw=true"
  alt="Open-PMC Pipeline"
  width="1000" />
</div>
<p align="center">
  <strong>arXiv:</strong> <a href="https://arxiv.org/abs/2506.02738" target="_blank">arXiv</a>
  |
  <strong>Code:</strong> <a href="https://github.com/VectorInstitute/pmc-data-extraction" target="_blank">Open-PMC GitHub</a>
  |
  <strong>Model Checkpoint:</strong> <a href="https://huggingface.co/vector-institute/open-pmc-18m-clip" target="_blank">Hugging Face</a>
</p>

## Dataset Summary

This dataset consists of image-text pairs extracted from medical papers available on PubMed Central. It has been curated to support research in medical image understanding, particularly natural language processing (NLP) and computer vision tasks involving medical imagery. The dataset includes:

- Images extracted from research articles.
- The captions and sub-captions associated with each image.
- Only medical image-text pairs, with non-medical pairs filtered out.
- Compound figures decomposed into sub-figures, each paired with its respective sub-caption.
- Summarized in-text references to the corresponding images, included to support model training.

A minimal loading example is sketched below.
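
As a quick-start illustration, the sketch below loads the dataset with the Hugging Face `datasets` library and inspects a single record. The repository id used here is a placeholder assumption, and the exact column names may differ from the fields described under Data Fields, so treat this as a template rather than an official loader.

```python
# Minimal loading sketch (not an official loader). The repo id below is a
# placeholder; replace it with the actual Hugging Face dataset id for this card.
from datasets import load_dataset

ds = load_dataset("vector-institute/open-pmc", split="train")  # hypothetical id

print(ds)              # number of rows and column names
record = ds[0]
print(record.keys())   # inspect the available fields (image, captions, ...)
```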

## Supported Tasks and Benchmarks

This dataset is designed for research in:

- **Medical Image Captioning**: Training and evaluating models that generate descriptions for medical images.
- **Multimodal Learning**: Studying the relationship between medical images and their textual descriptions.
- **Image-Text Retrieval**: Enabling models to retrieve relevant images based on textual queries and vice versa (see the retrieval sketch after this list).
- **Medical Language Understanding**: Helping models interpret the complex medical terminology used to describe medical images.
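
To make the retrieval task concrete, here is a hedged sketch that scores candidate captions against a single image with a CLIP-style model from the `transformers` library. The generic `openai/clip-vit-base-patch32` checkpoint is only a stand-in; the Open-PMC checkpoint linked above may require its own loading code, and the image path and captions are invented examples.

```python
# Image-text retrieval scoring sketch (illustrative only, not this project's
# evaluation code). The checkpoint is a generic CLIP stand-in.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example_subfigure.png")  # hypothetical sub-figure
captions = [
    "Axial CT scan of the chest showing a pulmonary nodule.",
    "Histopathology slide stained with hematoxylin and eosin.",
    "Bar chart comparing model accuracy across datasets.",
]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)

# Higher logits mean a better image-caption match; softmax gives a ranking
# over the candidate captions for this image.
scores = outputs.logits_per_image.softmax(dim=-1)
print("Best-matching caption:", captions[scores.argmax(dim=-1).item()])
```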

## Languages

The dataset primarily contains text in **English**.

## Dataset Structure

### Data Fields

Each record in the dataset contains:

- **Image**: Filename of each extracted and decomposed image.
- **Full caption**: Original figure caption from the research paper.

### Data Splits

The dataset does not contain predefined splits. Users can create their own training, validation, and test splits as needed, as sketched below.
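
For example, assuming the data has been loaded as a single `datasets.Dataset` (as in the earlier loading sketch), `train_test_split` can carve out held-out portions; the 80/10/10 ratio and seed below are arbitrary illustration values, not a recommended protocol.

```python
# Create train/validation/test splits from a dataset with no predefined splits.
# Repo id, ratios, and seed are placeholder values.
from datasets import load_dataset

ds = load_dataset("vector-institute/open-pmc", split="train")  # hypothetical id

holdout = ds.train_test_split(test_size=0.2, seed=42)                 # 80% / 20%
val_test = holdout["test"].train_test_split(test_size=0.5, seed=42)   # 10% / 10%

splits = {
    "train": holdout["train"],
    "validation": val_test["train"],
    "test": val_test["test"],
}
print({name: len(split) for name, split in splits.items()})
```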

## Dataset Creation

### Source Data

#### Initial Data Collection and Processing

1. **Data Collection**: We used the raw data provided in <a href="https://huggingface.co/papers/2501.07171" target="_blank">BIOMEDICA</a>.
2. **Filtering**: Non-medical image-text pairs were removed to ensure a focused dataset.
3. **Compound Figure Decomposition**: Multi-panel figures were split into individual sub-figures; a cropping sketch follows this list.
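
To illustrate what the decomposition step produces, the sketch below crops sub-figures out of a compound figure given panel bounding boxes. The boxes are hard-coded placeholders; in the actual pipeline they would come from a panel-detection model, so this only illustrates the cropping, not the project's decomposition code.

```python
# Illustration only: crop sub-figures from a compound figure given panel boxes.
# The bounding boxes are made-up placeholders standing in for detector output.
from PIL import Image

panel_boxes = [  # (left, upper, right, lower) in pixels
    (0, 0, 512, 512),
    (512, 0, 1024, 512),
]

compound = Image.open("compound_figure.jpg")  # hypothetical multi-panel figure
for i, box in enumerate(panel_boxes):
    compound.crop(box).save(f"subfigure_{i}.png")
```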

### Annotations

The dataset does not contain additional manual annotations.

## Uses

This dataset is designed for **research purposes only** and should **not** be used for:

- Clinical diagnosis.
- Medical decision-making.
- Any real-world patient care applications.

## Ethical Considerations

- This dataset is derived from open-access publications available on PubMed Central.
- Researchers should comply with ethical guidelines and ensure that their work does not imply clinical utility.
- The dataset is intended strictly for **academic and research purposes**.

## Citation

If you find this dataset useful for your research, please consider citing:

```bibtex
@article{baghbanzadeh2025open,
  title={Open-PMC-18M: A High-Fidelity Large Scale Medical Dataset for Multimodal Representation Learning},
  author={Baghbanzadeh, Negin and Ashkezari, Sajad and Dolatabadi, Elham and Afkanpour, Arash},
  journal={arXiv preprint arXiv:2506.02738},
  year={2025}
}
```

## License

This dataset is released under **CC-BY-4.0** and may be used for research purposes with appropriate attribution.