---
license: cc-by-nc-4.0
---

# ResponseNet

**ResponseNet** is a large-scale dyadic video dataset designed for **Online Multimodal Conversational Response Generation (OMCRG)**. It fills the gap left by existing datasets by providing high-resolution, split-screen recordings of both speaker and listener, separate audio channels, and word‑level textual annotations for both participants.

## Paper

ResponseNet is introduced in the following paper (see the Citation section below for the BibTeX entry):

> **OmniResponse: Online Multimodal Conversational Response Generation in Dyadic Interactions**  
> *Cheng Luo, Jianghui Wang, Bing Li, Siyang Song, Bernard Ghanem*

[GitHub](https://github.com/awakening-ai/OmniResponse) · [Project page](https://omniresponse.github.io/)


## Features

- **696** temporally synchronized dyadic video pairs (over **14 hours** total).
- **High-resolution** (1024×1024) frontal‑face streams for both speaker and listener.
- **Separate audio channels** for fine‑grained verbal and nonverbal analysis.
- **Word‑level textual annotations** for both participants.
- **Longer clips** (average **73.39 s**) than REACT2024 (30 s) and ViCo (9 s), capturing richer conversational exchanges (a quick consistency check follows this list).
- **Diverse topics**: professional discussions, emotionally driven interactions, educational settings, interdisciplinary expert talks.
- **Balanced splits**: training, validation, and test sets with equal distributions of topics, speaker identities, and recording conditions.
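
As a quick check, the clip count and average clip length above are consistent with the stated total duration:

```python
# Consistency check: 696 pairs at an average of 73.39 s per clip
# should match the reported total of "over 14 hours".
num_pairs = 696
avg_seconds = 73.39

total_hours = num_pairs * avg_seconds / 3600
print(f"{total_hours:.2f} h")  # ~14.19 h
```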

## Data Fields

Each example in the dataset is a dictionary with the following fields (a minimal loading sketch follows the list):

- `video/speaker`: Path to the speaker’s video stream (1024×1024, frontal view).
- `video/listener`: Path to the listener’s video stream (1024×1024, frontal view).
- `audio/speaker`: Path to the speaker’s separated audio channel.
- `audio/listener`: Path to the listener’s separated audio channel.
- `transcript/speaker`: Word‑level transcription for the speaker (timestamps included).
- `transcript/listener`: Word‑level transcription for the listener (timestamps included).
- `vector/speaker`: Path to the speaker’s facial attributes.
- `vector/listener`: Path to the listener’s facial attributes.
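
A minimal loading sketch using the Hugging Face `datasets` library. The repository id below is a hypothetical placeholder, and the field accessors simply mirror the list above:

```python
from datasets import load_dataset

# Hypothetical repository id; replace with this card's actual id.
ds = load_dataset("awakening-ai/ResponseNet", split="train")

example = ds[0]
speaker_video = example["video/speaker"]            # path; 1024x1024 frontal view
listener_audio = example["audio/listener"]          # path; separated audio channel
speaker_transcript = example["transcript/speaker"]  # word-level, with timestamps
```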

## Dataset Splits

We follow a standard **6:2:2** split ratio, ensuring balanced distributions of topics, identities, and recording conditions (see the verification sketch after the table):

| Split      | # Video Pairs | Proportion (%) |
|------------|---------------|----------------|
| **Train**  | 417           | 59.9           |
| **Valid**  | 139           | 20.0           |
| **Test**   | 140           | 20.1           |
| **Total**  | 696           | 100.0          |
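
Assuming the splits are published under the usual Hub names, the counts in the table can be verified directly (again a sketch; the repository id and split names are assumptions):

```python
from datasets import load_dataset

# Hypothetical repository id; split names ("validation" vs. "valid")
# should be checked against the Hub page.
ds = load_dataset("awakening-ai/ResponseNet")

total = sum(ds.num_rows.values())
for split, n in ds.num_rows.items():
    print(f"{split}: {n} pairs ({100 * n / total:.1f}%)")
# Expected from the table: 417 (59.9%), 139 (20.0%), 140 (20.1%); total 696.
```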



## Visualization

You can visualize word‑cloud statistics, clip‑duration distributions, and topic breakdowns using standard Python plotting tools.
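
For instance, a clip-duration histogram takes only a few lines of matplotlib. The `durations` list below is placeholder data; real values would be derived per clip, e.g. from the word-level transcript timestamps:

```python
import matplotlib.pyplot as plt

# Placeholder values; substitute one real duration (in seconds) per video
# pair, e.g. the last word timestamp in each speaker transcript.
durations = [73.39] * 696

plt.hist(durations, bins=30)
plt.xlabel("Clip duration (s)")
plt.ylabel("Number of video pairs")
plt.title("ResponseNet clip-duration distribution")
plt.tight_layout()
plt.show()
```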

## Citation

```bibtex
@article{luo2025omniresponse,
  title={OmniResponse: Online Multimodal Conversational Response Generation in Dyadic Interactions},
  author={Luo, Cheng and Wang, Jianghui and Li, Bing and Song, Siyang and Ghanem, Bernard},
  journal={arXiv preprint arXiv:2505.21724},
  year={2025}
}
```

## License

This dataset is released under the **CC BY-NC 4.0** license.