---
license: cc-by-nc-4.0
---

# ResponseNet

**ResponseNet** is a large-scale dyadic video dataset designed for **Online Multimodal Conversational Response Generation (OMCRG)**. It fills the gap left by existing datasets by providing high-resolution, split-screen recordings of both speaker and listener, separate audio channels, and word-level textual annotations for both participants.
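
To get a local copy, here is a minimal download sketch using `huggingface_hub`. The repo id `awakening-ai/ResponseNet` is an assumption based on the committing account, not a confirmed identifier; substitute the actual dataset id.

```python
# Minimal download sketch using huggingface_hub.
# NOTE: the repo id below is an ASSUMPTION; replace it with the actual dataset id.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="awakening-ai/ResponseNet",  # assumed repo id
    repo_type="dataset",
)
print(f"Dataset downloaded to: {local_dir}")
```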
## Paper

If you use this dataset, please cite:

> **OmniResponse: Online Multimodal Conversational Response Generation in Dyadic Interactions**
> *Cheng Luo, Jianghui Wang, Bing Li, Siyang Song, Bernard Ghanem*
> arXiv:2505.21724

## Features

- **696** temporally synchronized dyadic video pairs (over **14 hours** in total).
- **High-resolution** (1024×1024) frontal-face video streams for both speaker and listener.
- **Separate audio channels** for fine-grained verbal and nonverbal analysis.
- **Word-level textual annotations** for both participants.
- **Longer clips** (**73.39 s** on average) than REACT2024 (30 s) and ViCo (9 s), capturing richer conversational exchanges.
- **Diverse topics**: professional discussions, emotionally driven interactions, educational settings, and interdisciplinary expert talks.
- **Balanced splits**: training, validation, and test sets with matched distributions of topics, speaker identities, and recording conditions.
## Data Fields

Each example in the dataset is a dictionary with the following fields (a field-access sketch follows the list):

- `video/speaker`: Path to the speaker's video stream (1024×1024, frontal view).
- `video/listener`: Path to the listener's video stream (1024×1024, frontal view).
- `audio/speaker`: Path to the speaker's separated audio channel.
- `audio/listener`: Path to the listener's separated audio channel.
- `transcript/speaker`: Word-level transcription for the speaker (timestamps included).
- `transcript/listener`: Word-level transcription for the listener (timestamps included).
- `vector/speaker`: Path to the speaker's facial attributes.
- `vector/listener`: Path to the listener's facial attributes.
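
Since the card does not specify a loader, here is a field-access sketch over a hypothetical example record. Every path below is a placeholder, and the transcript JSON shape in the final comment is an assumption for illustration only.

```python
# Hypothetical example record mirroring the fields above; all paths are placeholders.
example = {
    "video/speaker": "clips/0001/speaker.mp4",
    "video/listener": "clips/0001/listener.mp4",
    "audio/speaker": "clips/0001/speaker.wav",
    "audio/listener": "clips/0001/listener.wav",
    "transcript/speaker": "clips/0001/speaker_words.json",
    "transcript/listener": "clips/0001/listener_words.json",
    "vector/speaker": "clips/0001/speaker_attrs.npy",
    "vector/listener": "clips/0001/listener_attrs.npy",
}

# Walk the record and pair each modality with its file path.
for field, path in example.items():
    print(f"{field:20s} -> {path}")

# Word-level transcripts carry timestamps; an assumed (unconfirmed) JSON shape:
# [{"word": "hello", "start": 0.12, "end": 0.37}, ...]
```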
## Dataset Splits

We follow a standard **6:2:2** split ratio, ensuring balanced distributions of topics, identities, and recording conditions (a quick numeric check follows the table):

| Split     | # Video Pairs | Proportion (%) |
|-----------|---------------|----------------|
| **Train** | 417           | 59.9           |
| **Valid** | 139           | 20.0           |
| **Test**  | 140           | 20.1           |
| **Total** | 696           | 100.0          |
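
The counts above reproduce the stated 6:2:2 ratio, which you can verify directly:

```python
# Verify that the published split sizes match the stated 6:2:2 ratio.
splits = {"train": 417, "valid": 139, "test": 140}
total = sum(splits.values())  # 696

for name, count in splits.items():
    print(f"{name}: {count} pairs ({100 * count / total:.1f}%)")
# train: 417 pairs (59.9%)
# valid: 139 pairs (20.0%)
# test: 140 pairs (20.1%)
```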
## Visualization

You can visualize word-cloud statistics, clip-duration distributions, and topic breakdowns using standard Python plotting tools; a minimal sketch is shown below.
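
For example, a clip-duration histogram with `matplotlib`. The durations below are synthetic placeholders drawn around the card's 73.39 s average; in practice you would probe real durations from the video files themselves.

```python
# Sketch: clip-duration histogram. Durations are SYNTHETIC placeholders;
# replace them with values read from the actual video files.
import random
import matplotlib.pyplot as plt

random.seed(0)
durations = [max(1.0, random.gauss(73.39, 20.0)) for _ in range(696)]  # placeholders

plt.hist(durations, bins=30)
plt.xlabel("Clip duration (s)")
plt.ylabel("Number of video pairs")
plt.title("Clip-duration distribution (placeholder data)")
plt.show()
```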
## Citation

```bibtex
@article{luo2025omniresponse,
  title={OmniResponse: Online Multimodal Conversational Response Generation in Dyadic Interactions},
  author={Luo, Cheng and Wang, Jianghui and Li, Bing and Song, Siyang and Ghanem, Bernard},
  journal={arXiv preprint arXiv:2505.21724},
  year={2025}
}
```

## License

This dataset is released under the **CC BY-NC 4.0** license.