---
dataset_info:
- config_name: default
  features:
  - name: utterance
    dtype: string
  - name: label
    dtype: int64
  splits:
  - name: train
    num_bytes: 763742
    num_examples: 13084
  - name: test
    num_bytes: 83070
    num_examples: 1400
  download_size: 409335
  dataset_size: 846812
- config_name: intents
  features:
  - name: id
    dtype: int64
  - name: name
    dtype: string
  - name: tags
    sequence: 'null'
  - name: regexp_full_match
    sequence: 'null'
  - name: regexp_partial_match
    sequence: 'null'
  - name: description
    dtype: 'null'
  splits:
  - name: intents
    num_bytes: 260
    num_examples: 7
  download_size: 3112
  dataset_size: 260
- config_name: intentsqwen3-32b
  features:
  - name: id
    dtype: int64
  - name: name
    dtype: string
  - name: tags
    sequence: 'null'
  - name: regex_full_match
    sequence: 'null'
  - name: regex_partial_match
    sequence: 'null'
  - name: description
    dtype: string
  splits:
  - name: intents
    num_bytes: 719
    num_examples: 7
  download_size: 3649
  dataset_size: 719
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
- config_name: intents
  data_files:
  - split: intents
    path: intents/intents-*
- config_name: intentsqwen3-32b
  data_files:
  - split: intents
    path: intentsqwen3-32b/intents-*
task_categories:
- text-classification
language:
- en
---
# snips
This is a text classification dataset intended for machine-learning research and experimentation. It was obtained by reformatting another publicly available dataset to be compatible with our AutoIntent Library.
## Usage

It is intended to be used with our AutoIntent Library:

```python
from autointent import Dataset

snips = Dataset.from_hub("AutoIntent/snips")
```
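The integer labels in the `default` config index into the `intents` config, which maps each id to an intent name. A stdlib-only sketch of decoding a label back to its name; the two intent names below are invented for illustration (the real card has 7 intents):

```python
# Hypothetical id -> name table in the shape of the `intents` config;
# the names here are made up, not the dataset's actual intents.
intents = [
    {"id": 0, "name": "play_music"},
    {"id": 1, "name": "get_weather"},
]
id_to_name = {intent["id"]: intent["name"] for intent in intents}

# A record shaped like the `default` config (utterance: string, label: int64).
record = {"utterance": "will it snow tomorrow", "label": 1}
print(id_to_name[record["label"]])  # get_weather
```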
## Source

This dataset is taken from `benayas/snips` and formatted with our AutoIntent Library:
"""Convert snips dataset to autointent internal format and scheme.""" # noqa: INP001
from datasets import Dataset as HFDataset
from datasets import load_dataset
from autointent import Dataset
from autointent.schemas import Intent, Sample
def _extract_intents_data(split: HFDataset) -> tuple[dict[str, int], list[Intent]]:
intent_names = sorted(split.unique("category"))
name_to_id = dict(zip(intent_names, range(len(intent_names)), strict=False))
return name_to_id, [Intent(id=i, name=name) for i, name in enumerate(intent_names)]
def convert_snips(split: HFDataset, name_to_id: dict[str, int]) -> list[Sample]:
"""Convert one split into desired format."""
n_classes = len(name_to_id)
classwise_samples = [[] for _ in range(n_classes)]
for batch in split.iter(batch_size=16, drop_last_batch=False):
for txt, name in zip(batch["text"], batch["category"], strict=False):
intent_id = name_to_id[name]
target_list = classwise_samples[intent_id]
target_list.append({"utterance": txt, "label": intent_id})
return [Sample(**sample) for samples_from_one_class in classwise_samples for sample in samples_from_one_class]
if __name__ == "__main__":
snips = load_dataset("benayas/snips")
name_to_id, intents_data = _extract_intents_data(snips["train"])
train_samples = convert_snips(snips["train"], name_to_id)
test_samples = convert_snips(snips["test"], name_to_id)
dataset = Dataset.from_dict({"train": train_samples, "test": test_samples, "intents": intents_data})
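The two helpers above boil down to: sort the category names, assign each an integer id, then bucket the samples class by class. A dependency-free sketch of that logic with made-up categories (the function names `extract_intents` and `bucket_samples` are simplifications for illustration, not part of the script):

```python
def extract_intents(categories: list[str]) -> dict[str, int]:
    """Sorted unique category names get ids 0..n-1, as in _extract_intents_data."""
    names = sorted(set(categories))
    return {name: i for i, name in enumerate(names)}


def bucket_samples(rows: list[tuple[str, str]], name_to_id: dict[str, int]) -> list[dict]:
    """Group (text, category) rows class by class, then flatten, as in convert_snips."""
    buckets: list[list[dict]] = [[] for _ in name_to_id]
    for text, category in rows:
        label = name_to_id[category]
        buckets[label].append({"utterance": text, "label": label})
    return [sample for bucket in buckets for sample in bucket]


name_to_id = extract_intents(["weather", "music", "weather"])
samples = bucket_samples(
    [("play a song", "music"), ("rain today?", "weather"), ("is it sunny", "weather")],
    name_to_id,
)
print(name_to_id)   # {'music': 0, 'weather': 1}
print(samples[0])   # {'utterance': 'play a song', 'label': 0}
```

The class-wise buckets mean the output list is ordered by label, which is why the script flattens `classwise_samples` at the end rather than appending in input order.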