---
dataset_info:
  features:
  - name: dialog_id
    dtype: string
  - name: turns
    list:
    - name: bigram_overlap_prev
      dtype: float64
    - name: context_embedding
      list: float64
    - name: intent_label
      dtype: string
    - name: is_user
      dtype: int64
    - name: length_bucket
      dtype: string
    - name: nb_response_candidates
      list: string
    - name: readability
      dtype: float64
    - name: readability_score
      dtype: float64
    - name: role_embedding
      list: int64
    - name: sentiment_polarity
      dtype: float64
    - name: speaker
      dtype: string
    - name: text
      dtype: string
  splits:
  - name: train
    num_bytes: 515339977
    num_examples: 13215
  download_size: 458215847
  dataset_size: 515339977
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

# Taskmaster-1 Enriched Dialog Dataset (Combined)

## Overview

This dataset is a combined, enriched version of the `self_dialog` and `woz_dialog` splits from the Taskmaster-1 dataset. It consists of multi-turn, human-human and human-simulated conversations with systematic enhancements for machine learning workflows—especially dialog modeling, generation, and fine-grained evaluation.

All conversations follow a consistent JSON schema and include added semantic, linguistic, and behavioral annotations.

## Enrichments Included
### 1. Role Embedding
Each turn includes a binary role embedding:

- `[1, 0]` for USER
- `[0, 1]` for ASSISTANT

This makes it easier for sequence models to learn speaker turns without relying on string labels.

Use case: Improves model performance in transformer-based dialog agents by allowing role-aware generation and classification.
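
As a minimal sketch, the mapping from `speaker` to `role_embedding` might look like this (the helper name is illustrative, not part of the dataset):

```python
def role_embedding(speaker: str) -> list:
    # One-hot pair over the two speaker roles, as described above.
    return [1, 0] if speaker.upper() == "USER" else [0, 1]

turn = {"speaker": "USER", "text": "I'd like to book a table for 2"}
turn["role_embedding"] = role_embedding(turn["speaker"])
```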


### 2. Response Candidates
Each user turn is enriched with `nb_response_candidates` — two to four assistant responses sampled from the dataset. They are not ground truth, only plausible continuations.

Use case: Ideal for retrieval-based dialog training or negative sampling in response ranking tasks.
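
For response ranking, the candidates can be paired with the true reply as labeled examples. A sketch, assuming a hypothetical `gold_response` variable holding the actual next assistant turn:

```python
def ranking_pairs(user_text, gold_response, candidates):
    # Build (context, candidate, label) triples for a response ranker:
    # the true reply is the positive, the sampled candidates are negatives.
    pairs = [(user_text, gold_response, 1)]
    pairs += [(user_text, c, 0) for c in candidates if c != gold_response]
    return pairs

pairs = ranking_pairs(
    "I'd like to book a table for 2",
    "Sure, what time works for you?",
    ["We are closed on Mondays.", "Your cab is on its way."],
)
```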

### 3. Readability Score
Computed with the Flesch Reading Ease formula (the standard 0–100 readability scale). Stored as `readability` (higher = easier to read).

Use case: Enables analysis of language complexity and training adaptive LLMs for education, accessibility, or voice interfaces.
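
The card does not name the exact implementation; a rough, dependency-free approximation of the Flesch Reading Ease formula (a library such as textstat would be more accurate) might look like:

```python
import re

def count_syllables(word: str) -> int:
    # Very rough heuristic: count groups of consecutive vowels.
    # A production pipeline would use a dedicated library instead.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    # Flesch Reading Ease:
    #   206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words)
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    n_syll = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (n_syll / n_words)
```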

### 4. Readability Grade Score
Stored as `readability_score`, a U.S. grade level in the Flesch-Kincaid sense (lower = easier to read). Especially relevant for UX tuning.

Use case: Allows controlling reading level in generation tasks or selecting user-appropriate training samples.

### 5. Context Embedding
Each turn is augmented with a `context_embedding` vector (384 dimensions, Sentence-BERT) representing the semantic context of the turn.

Use case: Enables plug-and-play use with FAISS-based semantic search, response reranking, and memory-augmented generation.
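
For example, a small dependency-free retrieval sketch over `context_embedding` vectors (toy 3-dimensional vectors stand in for the real 384-dimensional ones; a FAISS index would replace the linear scan at scale):

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def nearest_turn(query_emb, turns):
    # Return the turn whose context_embedding is most similar to the query.
    return max(turns, key=lambda t: cosine(query_emb, t["context_embedding"]))

turns = [
    {"text": "Book me a table", "context_embedding": [0.9, 0.1, 0.0]},
    {"text": "Play some jazz",  "context_embedding": [0.0, 0.2, 0.9]},
]
```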

### 6. Speaker Role Flags
An `is_user` flag is included for each turn (1 = user, 0 = assistant).

Use case: Simplifies filtering, evaluation, or role-specific metric computation.
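
A minimal filtering sketch using the flag (the dialog shown is a toy example, not a real record):

```python
def user_turns(dialog):
    # Keep only user utterances via the is_user flag (1 = user).
    return [t for t in dialog["turns"] if t["is_user"] == 1]

dialog = {
    "dialog_id": "toy_1",
    "turns": [
        {"text": "Book a table for 2", "is_user": 1},
        {"text": "For what time?", "is_user": 0},
    ],
}
```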

### 7. Utterance Length Bucketing
Each turn is labeled as:

- `short` (≤ 5 tokens)
- `medium` (6–15 tokens)
- `long` (> 15 tokens)

Use case: Enables sampling, curriculum learning, or model analysis across turn complexity.
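
The bucketing rule can be sketched as follows, assuming whitespace tokenization (the card does not state which tokenizer is used):

```python
def length_bucket(text: str) -> str:
    # Bucket a turn by token count: <=5 short, 6-15 medium, >15 long.
    n = len(text.split())
    if n <= 5:
        return "short"
    if n <= 15:
        return "medium"
    return "long"
```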

### 8. Bigram Overlap with Previous Turn
Computed as `bigram_overlap_prev`, a float between 0 and 1 measuring lexical (bigram) repetition with respect to the preceding utterance.

Use case: Useful for:

- Dialogue coherence metrics
- Detecting stagnation or repetition in generated responses
- Analyzing repair-based utterances
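
A sketch of one plausible formula; the card does not specify the exact normalization, so this example divides the bigram intersection by the current turn's bigram count:

```python
def bigram_overlap(prev: str, curr: str) -> float:
    # Overlap of word bigrams between consecutive turns,
    # normalized by the current turn's bigram count.
    def bigrams(s):
        toks = s.lower().split()
        return set(zip(toks, toks[1:]))
    b_prev, b_curr = bigrams(prev), bigrams(curr)
    if not b_curr:
        return 0.0
    return len(b_prev & b_curr) / len(b_curr)
```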

### 9. Sentiment Polarity
Computed using a sentiment analyzer and stored as `sentiment_polarity`, ranging from –1 (strongly negative) to +1 (strongly positive).

Use case: Enables emotion-aware generation, tone control, or training sentiment-conditioned agents.
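
The card does not name the analyzer used; as a toy stand-in, a tiny lexicon-based score clipped to [–1, 1] (the word lists are hypothetical, for illustration only):

```python
# Hypothetical word lists, for illustration only.
POSITIVE = {"great", "good", "love", "thanks", "perfect"}
NEGATIVE = {"bad", "terrible", "hate", "wrong", "awful"}

def sentiment_polarity(text: str) -> float:
    # Toy lexicon score, clipped to the [-1, 1] range used by the dataset.
    toks = text.lower().split()
    raw = sum(t in POSITIVE for t in toks) - sum(t in NEGATIVE for t in toks)
    return max(-1.0, min(1.0, raw / max(1, len(toks))))
```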

### 10. Format Summary
Each conversation has:

- `dialog_id`: unique identifier
- `turns`: list of enriched utterances

Each turn includes:

```json
{
  "speaker": "USER",
  "text": "I’d like to book a table for 2",
  "role_embedding": [1, 0],
  "intent_label": "request",
  "nb_response_candidates": [...],
  "readability_score": 4.5,
  "context_embedding": [...],
  "readability": 85.6,
  "is_user": 1,
  "length_bucket": "medium",
  "bigram_overlap_prev": 0.2,
  "sentiment_polarity": 0.1
}
```

## Suggested Use Cases

- Fine-tuning LLMs for goal-oriented dialog
- Training dialog state trackers and response rankers
- Evaluating model outputs with context-aware metrics
- Curriculum learning based on length or readability
- Emotion- and intent-conditioned dialog modeling
- Semantic retrieval and reranking systems

## Citation

```bibtex
@inproceedings{48484,
  title  = {Taskmaster-1: Toward a Realistic and Diverse Dialog Dataset},
  author = {Bill Byrne and Karthik Krishnamoorthi and Chinnadhurai Sankar and Arvind Neelakantan and Daniel Duckworth and Semih Yavuz and Ben Goodrich and Amit Dubey and Kyu-Young Kim and Andy Cedilnik},
  year   = {2019}
}
```

## Attribution

- Base dataset: Taskmaster-1 (Google Research Datasets); original contributor: @patil-suraj
- Enrichments and combined version: GenAIDevTOProd (Adithya)
- License: same as Taskmaster-1 (where a public-domain or open license applies)