---
dataset_info:
- config_name: default
features:
- name: utterance
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 406785
num_examples: 8954
- name: test
num_bytes: 49545
num_examples: 1076
download_size: 199496
dataset_size: 456330
- config_name: intents
features:
- name: id
dtype: int64
- name: name
dtype: string
- name: tags
sequence: 'null'
- name: regexp_full_match
sequence: 'null'
- name: regexp_partial_match
sequence: 'null'
- name: description
dtype: 'null'
splits:
- name: intents
num_bytes: 2422
num_examples: 64
download_size: 4037
dataset_size: 2422
- config_name: intentsqwen3-32b
features:
- name: id
dtype: int64
- name: name
dtype: string
- name: tags
sequence: 'null'
- name: regex_full_match
sequence: 'null'
- name: regex_partial_match
sequence: 'null'
- name: description
dtype: string
splits:
- name: intents
num_bytes: 6360
num_examples: 64
download_size: 6559
dataset_size: 6360
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- config_name: intents
data_files:
- split: intents
path: intents/intents-*
- config_name: intentsqwen3-32b
data_files:
- split: intents
path: intentsqwen3-32b/intents-*
task_categories:
- text-classification
language:
- en
---
# hwu64
This is a text classification dataset. It is intended for machine learning research and experimentation.
This dataset was obtained by reformatting another publicly available dataset to make it compatible with our [AutoIntent Library](https://deeppavlov.github.io/AutoIntent/index.html).
## Usage
It is intended to be used with our [AutoIntent Library](https://deeppavlov.github.io/AutoIntent/index.html):
```python
from autointent import Dataset
hwu64 = Dataset.from_hub("AutoIntent/hwu64")
```
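The repository also exposes the `default`, `intents`, and `intentsqwen3-32b` configs listed in the metadata above, so the raw splits can be loaded with the plain 🤗 `datasets` library as well. A minimal sketch (this bypasses AutoIntent and is not required for the library workflow):
```python
from datasets import load_dataset

# "default" config: utterance/label pairs in train and test splits
hwu64_raw = load_dataset("AutoIntent/hwu64")
print(hwu64_raw["train"][0])  # e.g. {"utterance": "...", "label": ...}

# "intents" config: id/name metadata for the 64 intent classes
intents = load_dataset("AutoIntent/hwu64", "intents")
print(intents["intents"][0])
```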
## Source
This dataset is taken from the original work's GitHub repository [`jianguoz/Few-Shot-Intent-Detection`](https://github.com/jianguoz/Few-Shot-Intent-Detection) and formatted with our [AutoIntent Library](https://deeppavlov.github.io/AutoIntent/index.html):
```python
# define utils
import requests
from autointent import Dataset
def load_text_from_url(github_file: str):
return requests.get(github_file).text
# group utterances by intent id and return {"intents": ..., <split>: ...}
def convert_hwu64(hwu_utterances, hwu_labels, split):
intent_names = sorted(set(hwu_labels))
name_to_id = dict(zip(intent_names, range(len(intent_names)), strict=False))
n_classes = len(intent_names)
assert len(hwu_utterances) == len(hwu_labels)
classwise_utterance_records = [[] for _ in range(n_classes)]
intents = [
{
"id": i,
"name": name,
}
for i, name in enumerate(intent_names)
]
for txt, name in zip(hwu_utterances, hwu_labels, strict=False):
intent_id = name_to_id[name]
target_list = classwise_utterance_records[intent_id]
target_list.append({"utterance": txt, "label": intent_id})
utterances = [rec for lst in classwise_utterance_records for rec in lst]
return {"intents": intents, split: utterances}
# load the train split
file_url = "https://raw.githubusercontent.com/jianguoz/Few-Shot-Intent-Detection/refs/heads/main/Datasets/HWU64/train/label"
labels = load_text_from_url(file_url).split("\n")[:-1]
file_url = "https://raw.githubusercontent.com/jianguoz/Few-Shot-Intent-Detection/refs/heads/main/Datasets/HWU64/train/seq.in"
utterances = load_text_from_url(file_url).split("\n")[:-1]

# convert the train split
hwu64_train = convert_hwu64(utterances, labels, "train")

# load the test split
file_url = "https://raw.githubusercontent.com/jianguoz/Few-Shot-Intent-Detection/refs/heads/main/Datasets/HWU64/test/label"
labels = load_text_from_url(file_url).split("\n")[:-1]
file_url = "https://raw.githubusercontent.com/jianguoz/Few-Shot-Intent-Detection/refs/heads/main/Datasets/HWU64/test/seq.in"
utterances = load_text_from_url(file_url).split("\n")[:-1]

# convert the test split and merge it into the train dict
hwu64_test = convert_hwu64(utterances, labels, "test")
hwu64_train["test"] = hwu64_test["test"]

# build an AutoIntent Dataset from the combined dict
dataset = Dataset.from_dict(hwu64_train)
```
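As an optional sanity check (not part of the original conversion script), the resulting dict can be compared against the counts reported in the metadata above: 64 intents, 8954 train and 1076 test utterances.
```python
# optional sanity checks on the converted dict (plain Python, no extra dependencies)
n_classes = len(hwu64_train["intents"])
assert n_classes == 64

# every label must be a valid intent id
assert {rec["label"] for rec in hwu64_train["train"]} <= set(range(n_classes))
assert {rec["label"] for rec in hwu64_train["test"]} <= set(range(n_classes))

print(len(hwu64_train["train"]), "train utterances")  # 8954 per the card metadata
print(len(hwu64_train["test"]), "test utterances")    # 1076 per the card metadata
```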