---
license: cc-by-nc-4.0
---

# ResponseNet

**ResponseNet** is a large-scale dyadic video dataset designed for **Online Multimodal Conversational Response Generation (OMCRG)**. It fills the gap left by existing datasets by providing high-resolution, split-screen recordings of both speaker and listener, separate audio channels, and word‑level textual annotations for both participants.

## Paper

ResponseNet is introduced in the following paper. If you use this dataset, please cite:

> **OmniResponse: Online Multimodal Conversational Response Generation in Dyadic Interactions**
> *Cheng Luo, Jianghui Wang, Bing Li, Siyang Song, Bernard Ghanem*

[GitHub](https://github.com/awakening-ai/OmniResponse)

[Project](https://omniresponse.github.io/)

## Features

- **696** temporally synchronized dyadic video pairs (over **14 hours** total).
- **High-resolution** (1024×1024) frontal‑face streams for both speaker and listener.
- **Separate audio channels** for fine‑grained verbal and nonverbal analysis.
- **Word‑level textual annotations** for both participants.
- **Longer clips** (average **73.39 s**) than REACT2024 (30 s) and Vico (9 s), capturing richer conversational exchanges.
- **Diverse topics**: professional discussions, emotionally driven interactions, educational settings, and interdisciplinary expert talks.
- **Balanced splits**: training, validation, and test sets with balanced distributions of topics, speaker identities, and recording conditions.

At 696 pairs and an average clip length of 73.39 s, the corpus amounts to roughly 14.2 hours of dyadic footage, consistent with the total stated above.

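As a quick sanity check on these headline numbers (nothing here touches the dataset files themselves), the implied total duration can be recomputed directly:

```python
# Recompute the total duration implied by the statistics listed above:
# 696 video pairs with an average clip length of 73.39 s.
n_pairs = 696
avg_clip_seconds = 73.39

total_hours = n_pairs * avg_clip_seconds / 3600
print(f"{total_hours:.1f} hours")  # ~14.2 hours, matching "over 14 hours total"
```
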
## Data Fields

Each example in the dataset is a dictionary with the following fields:

- `video/speaker`: Path to the speaker’s video stream (1024×1024, frontal view).
- `video/listener`: Path to the listener’s video stream (1024×1024, frontal view).
- `audio/speaker`: Path to the speaker’s separated audio channel.
- `audio/listener`: Path to the listener’s separated audio channel.
- `transcript/speaker`: Word‑level transcription for the speaker (timestamps included).
- `transcript/listener`: Word‑level transcription for the listener (timestamps included).
- `vector/speaker`: Path to the speaker’s facial attributes.
- `vector/listener`: Path to the listener’s facial attributes.

Within each example, the speaker and listener streams belong to the same temporally synchronized interaction.

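Below is a minimal sketch of iterating over these fields, assuming a local copy of the dataset and an index file that lists one such dictionary per example. The index file name (`metadata.json`) and directory layout are illustrative assumptions, not part of the official release.

```python
# Minimal sketch: walk the per-example field dictionaries described above.
# Assumes a hypothetical index file "metadata.json" containing a list of
# examples, each a dict keyed by the field names in this section.
import json
from pathlib import Path

root = Path("ResponseNet")  # local dataset root (illustrative)

with open(root / "metadata.json") as f:  # hypothetical index file
    examples = json.load(f)

for ex in examples[:3]:
    # Paired, temporally synchronized streams for speaker and listener.
    print("speaker video  :", root / ex["video/speaker"])
    print("listener video :", root / ex["video/listener"])
    print("speaker audio  :", root / ex["audio/speaker"])
    print("listener audio :", root / ex["audio/listener"])
    print("speaker words  :", ex["transcript/speaker"])
    print("listener words :", ex["transcript/listener"])
```
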
## Dataset Splits

We follow a standard **6:2:2** (train : valid : test) split ratio, ensuring balanced distributions of topics, identities, and recording conditions:

| Split     | # Video Pairs | Proportion (%) |
|-----------|---------------|----------------|
| **Train** | 417           | 59.9           |
| **Valid** | 139           | 20.0           |
| **Test**  | 140           | 20.1           |
| **Total** | 696           | 100.0          |

The proportions deviate slightly from an exact 60/20/20 split because 696 pairs cannot be divided into whole pairs at that ratio (20 % of 696 is 139.2).

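The percentages in the table can be re-derived directly from the pair counts:

```python
# Re-derive the split proportions from the pair counts in the table above.
splits = {"train": 417, "valid": 139, "test": 140}
total = sum(splits.values())
assert total == 696  # matches the "Total" row

for name, n_pairs in splits.items():
    print(f"{name:5s}: {n_pairs} pairs ({100 * n_pairs / total:.1f} %)")
# train: 417 pairs (59.9 %)
# valid: 139 pairs (20.0 %)
# test : 140 pairs (20.1 %)
```
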
## Visualization

You can visualize word‑cloud statistics, clip‑duration distributions, and topic breakdowns using standard Python plotting tools.

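For example, a clip-duration histogram can be drawn with matplotlib. This is only a sketch: the per-clip durations are assumed to have been collected separately (e.g. by probing your local video files), since the card does not prescribe a specific tool.

```python
# Sketch of a clip-duration histogram with matplotlib.
# `durations_s` is assumed to hold one duration (in seconds) per video pair,
# gathered however you index your local copy of the videos.
import matplotlib.pyplot as plt


def plot_clip_durations(durations_s, bins=30):
    """Plot the distribution of clip lengths in seconds."""
    plt.hist(durations_s, bins=bins)
    plt.xlabel("Clip duration (s)")
    plt.ylabel("Number of video pairs")
    plt.title("ResponseNet clip-duration distribution")
    plt.tight_layout()
    plt.show()


# Example call with durations you have extracted yourself:
# plot_clip_durations([73.4, 60.2, 91.0, ...])
```
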
## Citation

```bibtex
@article{luo2025omniresponse,
  title   = {OmniResponse: Online Multimodal Conversational Response Generation in Dyadic Interactions},
  author  = {Luo, Cheng and Wang, Jianghui and Li, Bing and Song, Siyang and Ghanem, Bernard},
  journal = {arXiv preprint arXiv:2505.21724},
  year    = {2025}
}
```

## License

This dataset is released under the **CC BY-NC 4.0** license.