dataset_info:
  features:
    - name: dialog_id
      dtype: string
    - name: turns
      list:
        - name: bigram_overlap_prev
          dtype: float64
        - name: context_embedding
          list: float64
        - name: intent_label
          dtype: string
        - name: is_user
          dtype: int64
        - name: length_bucket
          dtype: string
        - name: nb_response_candidates
          list: string
        - name: readability
          dtype: float64
        - name: readability_score
          dtype: float64
        - name: role_embedding
          list: int64
        - name: sentiment_polarity
          dtype: float64
        - name: speaker
          dtype: string
        - name: text
          dtype: string
  splits:
    - name: train
      num_bytes: 515339977
      num_examples: 13215
  download_size: 458215847
  dataset_size: 515339977
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*

Taskmaster-1 Enriched Dialog Dataset (Combined)

Overview

This dataset is a combined, enriched version of the self_dialog and woz_dialog splits from the Taskmaster-1 dataset. It consists of multi-turn, goal-oriented conversations (two-person Wizard-of-Oz dialogs and single-author self-dialogs) with systematic enhancements for machine learning workflows, especially dialog modeling, generation, and fine-grained evaluation.

All conversations share a consistent JSON schema and include added semantic, linguistic, and behavioral annotations.

Enrichments Included

  1. Role Embedding

Each turn includes a binary role embedding:

[1, 0] for USER

[0, 1] for ASSISTANT

This makes it easier for sequence models to learn speaker turns without relying on string labels.

Use case: Improves model performance in transformer-based dialog agents by allowing role-aware generation and classification.
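
For illustration, a minimal sketch of how the role embedding corresponds to the speaker label (the helper function below is hypothetical, not part of the dataset's tooling):

```python
# Minimal sketch: mapping the speaker label to the binary role embedding
# described above. The helper name is illustrative only.
def role_embedding(speaker: str) -> list[int]:
    """Return [1, 0] for user turns and [0, 1] for assistant turns."""
    return [1, 0] if speaker.upper() == "USER" else [0, 1]

print(role_embedding("USER"))       # [1, 0]
print(role_embedding("ASSISTANT"))  # [0, 1]
```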

  2. Response Candidates

Each user turn is enriched with nb_response_candidates: 2 to 4 plausible assistant responses sampled from the dataset. These are not ground-truth replies, only plausible continuations.

Use case: Ideal for retrieval-based dialog training or negative sampling in response ranking tasks.
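
As a sketch of negative sampling for response ranking (assuming the turn that follows each user turn holds the ground-truth assistant reply, and treating the candidates as negatives):

```python
# Build (context, positive, negative) ranking triples from one dialog's turns.
# Assumes `turns` is the enriched turn list and the turn after each user turn
# is the true assistant reply; candidates act as negatives.
def build_ranking_triples(turns):
    triples = []
    for i, turn in enumerate(turns):
        if turn["is_user"] == 1 and i + 1 < len(turns):
            positive = turns[i + 1]["text"]
            for negative in turn.get("nb_response_candidates", []):
                triples.append({
                    "context": turn["text"],
                    "positive": positive,
                    "negative": negative,
                })
    return triples
```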

  3. Readability Score

Computed using the Flesch Reading Ease formula and related NLP readability metrics. Stored as readability (0–100 scale, higher = easier to read).

Use case: Enables analysis of language complexity and training adaptive LLMs for education, accessibility, or voice interfaces.
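
The exact library used for the enrichment is not stated; a comparable 0–100 score can be reproduced with, for example, textstat's Flesch Reading Ease implementation:

```python
import textstat  # pip install textstat

text = "I'd like to book a table for two people tomorrow evening."
score = textstat.flesch_reading_ease(text)  # 0-100, higher = easier to read
print(score)
```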

  4. Readability Grade Score

Stored as readability_score, expressed as a U.S. school grade level (lower = easier to read). Especially relevant for UX tuning.

Use case: Allows controlling reading level in generation tasks or selecting user-appropriate training samples.
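
For example, a hypothetical curriculum filter that keeps only easy samples (the grade threshold is arbitrary and `turns` stands for one dialog's enriched turn list):

```python
# Keep only turns readable at roughly a 6th-grade level or below,
# e.g. for the "easy" stage of a curriculum schedule.
easy_turns = [t for t in turns if t["readability_score"] <= 6.0]
```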

  5. Context Embedding

Each turn is augmented with a context_embedding vector (384-dimensional, Sentence-BERT) representing the semantic context of the turn.

Use case: Enables plug-and-play use with FAISS-based semantic search, response reranking, and memory-augmented generation.
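
A minimal FAISS sketch using the precomputed embeddings (the specific Sentence-BERT model is not stated; any 384-dimensional encoder such as all-MiniLM-L6-v2 matches the stored dimensionality, but that is an assumption):

```python
import numpy as np
import faiss  # pip install faiss-cpu

# Stack the precomputed 384-dim context embeddings; row order maps back to turns.
embeddings = np.array([t["context_embedding"] for t in turns], dtype="float32")
faiss.normalize_L2(embeddings)                  # cosine similarity via inner product
index = faiss.IndexFlatIP(embeddings.shape[1])
index.add(embeddings)

# Query with the embedding of any turn (or a new sentence encoded with the
# same 384-dim model).
scores, ids = index.search(embeddings[:1], 5)
print(ids[0])  # indices of the 5 most similar turns
```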

  6. Speaker Role Flags

An is_user flag is included for each turn (1 = user, 0 = assistant).

Use case: Simplifies filtering, evaluation, or role-specific metric computation.
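
For instance, role-specific metrics can be computed directly from the flag (a small sketch over one dialog's `turns` list):

```python
# Average sentiment per role, split using the is_user flag.
user_turns = [t for t in turns if t["is_user"] == 1]
assistant_turns = [t for t in turns if t["is_user"] == 0]

avg_user = sum(t["sentiment_polarity"] for t in user_turns) / len(user_turns)
avg_assistant = sum(t["sentiment_polarity"] for t in assistant_turns) / len(assistant_turns)
```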

  7. Utterance Length Bucketing

Each turn is labeled as:

short (<= 5 tokens)

medium (6–15 tokens)

long (> 15 tokens)

Use case: Enables sampling, curriculum learning, or model analysis across turn complexity.
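
A sketch of the bucketing rule (the tokenizer used for the enrichment is not specified; whitespace splitting is used here as a stand-in):

```python
def length_bucket(text: str) -> str:
    """Bucket an utterance by token count: short (<=5), medium (6-15), long (>15)."""
    n_tokens = len(text.split())  # whitespace tokenization as an approximation
    if n_tokens <= 5:
        return "short"
    if n_tokens <= 15:
        return "medium"
    return "long"
```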

  8. Bigram Overlap with Previous Turn

Computed as bigram_overlap_prev (float between 0 and 1). Measures lexical repetition with the preceding utterance.

Use cases:

Dialogue coherence metrics

Detecting stagnation or repetition in generated responses

Analyzing repair-based utterances
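
A computation sketch (the exact normalization used for the enrichment is not stated; Jaccard overlap of bigram sets is shown here as one reasonable reading of a 0–1 score):

```python
def bigrams(text: str) -> set[tuple[str, str]]:
    tokens = text.lower().split()
    return set(zip(tokens, tokens[1:]))

def bigram_overlap(prev_text: str, text: str) -> float:
    """Jaccard overlap between the bigram sets of consecutive utterances."""
    a, b = bigrams(prev_text), bigrams(text)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)
```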

  9. Sentiment Polarity

Computed using a sentiment analyzer. Stored as sentiment_polarity:

Ranges from –1 (strongly negative) to +1 (strongly positive)

Use case: Enables emotion-aware generation, tone control, or training sentiment-conditioned agents.
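
The specific analyzer used is not stated; a comparable polarity in [-1, 1] can be obtained with, for example, TextBlob:

```python
from textblob import TextBlob  # pip install textblob

text = "That sounds great, thank you so much!"
polarity = TextBlob(text).sentiment.polarity  # float in [-1, 1]
print(polarity)
```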

  10. Format Summary

Each conversation has:

dialog_id: Unique identifier

turns: List of enriched utterances

Each turn includes:

{ "speaker": "USER", "text": "I’d like to book a table for 2", "role_embedding": [1, 0], "intent_label": "request", "nb_response_candidates": [...], "readability_score": 4.5, "context_embedding": [...], "readability": 85.6, "is_user": 1, "length_bucket": "medium", "bigram_overlap_prev": 0.2, "sentiment_polarity": 0.1 }

Suggested Use Cases

Fine-tuning LLMs for goal-oriented dialog

Training dialog state trackers and response rankers

Evaluating model outputs with context-aware metrics

Curriculum learning based on length or readability

Emotion- and intent-conditioned dialog modeling

Semantic retrieval and reranking systems

Citation

@inproceedings{48484,
  title  = {Taskmaster-1: Toward a Realistic and Diverse Dialog Dataset},
  author = {Bill Byrne and Karthik Krishnamoorthi and Chinnadhurai Sankar and Arvind Neelakantan and Daniel Duckworth and Semih Yavuz and Ben Goodrich and Amit Dubey and Kyu-Young Kim and Andy Cedilnik},
  year   = {2019}
}

Taskmaster-1: Toward a Realistic and Diverse Dialog Dataset (google-research-datasets)

Original base dataset contributed by: @patil-suraj

Enrichments and combined version by: GenAIDevTOProd (Adithya)

License: Same as the original Taskmaster-1 dataset.