sha (stringlengths 40-40) | text (stringlengths 1-13.4M) | id (stringlengths 2-117) | tags (listlengths 1-7.91k) | created_at (stringlengths 25-25) | metadata (stringlengths 2-875k) | last_modified (stringlengths 25-25) | arxiv (listlengths 0-25) | languages (listlengths 0-7.91k) | tags_str (stringlengths 17-159k) | text_str (stringlengths 1-447k) | text_lists (listlengths 0-352) | processed_texts (listlengths 1-353) | tokens_length (listlengths 1-353) | input_texts (listlengths 1-40) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
7e5605e0d0eb1ea91fe1771e2e0ba29a96693674
|
# Dataset Card for "autotree_pmlb_100000_clean2_sgosdt_l256_dim10_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
yzhuang/autotree_pmlb_100000_clean2_sgosdt_l256_dim10_d3_sd0
|
[
"region:us"
] |
2023-09-08T00:57:40+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "input_x", "sequence": {"sequence": "float32"}}, {"name": "input_y", "sequence": {"sequence": "float32"}}, {"name": "input_y_clean", "sequence": {"sequence": "float32"}}, {"name": "rtg", "sequence": "float64"}, {"name": "status", "sequence": {"sequence": "float32"}}, {"name": "split_threshold", "sequence": {"sequence": "float32"}}, {"name": "split_dimension", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 2364400000, "num_examples": 100000}, {"name": "validation", "num_bytes": 236440000, "num_examples": 10000}], "download_size": 622819971, "dataset_size": 2600840000}}
|
2023-09-08T00:58:08+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "autotree_pmlb_100000_clean2_sgosdt_l256_dim10_d3_sd0"
More Information needed
|
[
"# Dataset Card for \"autotree_pmlb_100000_clean2_sgosdt_l256_dim10_d3_sd0\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"autotree_pmlb_100000_clean2_sgosdt_l256_dim10_d3_sd0\"\n\nMore Information needed"
] |
[
6,
38
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"autotree_pmlb_100000_clean2_sgosdt_l256_dim10_d3_sd0\"\n\nMore Information needed"
] |
269560fc05dfb25c2702f59075f564590dbd6fa6
|
# Dataset Card for "autotree_automl_100000_covertype_sgosdt_l256_dim10_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
yzhuang/autotree_automl_100000_covertype_sgosdt_l256_dim10_d3_sd0
|
[
"region:us"
] |
2023-09-08T01:06:02+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "input_x", "sequence": {"sequence": "float32"}}, {"name": "input_y", "sequence": {"sequence": "float32"}}, {"name": "input_y_clean", "sequence": {"sequence": "float32"}}, {"name": "rtg", "sequence": "float64"}, {"name": "status", "sequence": {"sequence": "float32"}}, {"name": "split_threshold", "sequence": {"sequence": "float32"}}, {"name": "split_dimension", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 2364400000, "num_examples": 100000}, {"name": "validation", "num_bytes": 236440000, "num_examples": 10000}], "download_size": 832579062, "dataset_size": 2600840000}}
|
2023-09-08T01:06:34+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "autotree_automl_100000_covertype_sgosdt_l256_dim10_d3_sd0"
More Information needed
|
[
"# Dataset Card for \"autotree_automl_100000_covertype_sgosdt_l256_dim10_d3_sd0\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"autotree_automl_100000_covertype_sgosdt_l256_dim10_d3_sd0\"\n\nMore Information needed"
] |
[
6,
37
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"autotree_automl_100000_covertype_sgosdt_l256_dim10_d3_sd0\"\n\nMore Information needed"
] |
3d4b1a58342259081f3f6f2c7fbcfb6a8275d529
|
# Dataset Card for "autotree_pmlb_10000_letter_sgosdt_l256_dim10_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
yzhuang/autotree_pmlb_10000_letter_sgosdt_l256_dim10_d3_sd0
|
[
"region:us"
] |
2023-09-08T01:06:56+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "input_x", "sequence": {"sequence": "float32"}}, {"name": "input_y", "sequence": {"sequence": "float32"}}, {"name": "input_y_clean", "sequence": {"sequence": "float32"}}, {"name": "rtg", "sequence": "float64"}, {"name": "status", "sequence": {"sequence": "float32"}}, {"name": "split_threshold", "sequence": {"sequence": "float32"}}, {"name": "split_dimension", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 706482624, "num_examples": 10000}, {"name": "validation", "num_bytes": 708636096, "num_examples": 10000}], "download_size": 54846762, "dataset_size": 1415118720}}
|
2023-09-08T01:07:06+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "autotree_pmlb_10000_letter_sgosdt_l256_dim10_d3_sd0"
More Information needed
|
[
"# Dataset Card for \"autotree_pmlb_10000_letter_sgosdt_l256_dim10_d3_sd0\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"autotree_pmlb_10000_letter_sgosdt_l256_dim10_d3_sd0\"\n\nMore Information needed"
] |
[
6,
36
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"autotree_pmlb_10000_letter_sgosdt_l256_dim10_d3_sd0\"\n\nMore Information needed"
] |
9a089eea001d0f11b6bdc31044e9f9a8ec7e9f1b
|
# Dataset Card for "some_corpus"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Bingsu/some_corpus
|
[
"region:us"
] |
2023-09-08T01:13:46+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 11307919646, "num_examples": 12932421}], "download_size": 1089008611, "dataset_size": 11307919646}}
|
2023-09-08T01:23:36+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "some_corpus"
More Information needed
|
[
"# Dataset Card for \"some_corpus\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"some_corpus\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"some_corpus\"\n\nMore Information needed"
] |
b609381fc7cabd4d978a4b0d8e7d37043674c353
|
# Dataset Card
Dataset in [ImagenHub](https://arxiv.org/abs/2310.01596).
# Citation
Please kindly cite our paper if you use our code, data, models or results:
```
@article{ku2023imagenhub,
title={ImagenHub: Standardizing the evaluation of conditional image generation models},
author={Max Ku and Tianle Li and Kai Zhang and Yujie Lu and Xingyu Fu and Wenwen Zhuang and Wenhu Chen},
journal={arXiv preprint arXiv:2310.01596},
year={2023}
}
```
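A minimal loading sketch (ours, not part of the original card), assuming the Hugging Face `datasets` library; the `eval` and `extra` split names come from this repository's configuration:
```python
# Sketch: load the evaluation split of ImagenHub/Subject_Driven_Image_Editing.
# Assumes the `datasets` library; per the repo config, each row carries a `uid`,
# an `image`, and a `subject` string.
from datasets import load_dataset

eval_split = load_dataset("ImagenHub/Subject_Driven_Image_Editing", split="eval")

sample = eval_split[0]
print(sample["uid"], sample["subject"], sample["image"].size)  # `image` decodes to a PIL image
```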
|
ImagenHub/Subject_Driven_Image_Editing
|
[
"arxiv:2310.01596",
"region:us"
] |
2023-09-08T01:24:49+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "eval", "path": "data/eval-*"}, {"split": "extra", "path": "data/extra-*"}]}], "dataset_info": {"features": [{"name": "uid", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "subject", "dtype": "string"}], "splits": [{"name": "eval", "num_bytes": 4414578.0, "num_examples": 154}, {"name": "extra", "num_bytes": 1779741.0, "num_examples": 66}], "download_size": 6179822, "dataset_size": 6194319.0}}
|
2023-11-27T09:26:54+00:00
|
[
"2310.01596"
] |
[] |
TAGS
#arxiv-2310.01596 #region-us
|
# Dataset Card
Dataset in ImagenHub.
Please kindly cite our paper if you use our code, data, models or results:
|
[
"# Dataset Card\n\n\nDataset in ImagenHub. \n\n\nPlease kindly cite our paper if you use our code, data, models or results:"
] |
[
"TAGS\n#arxiv-2310.01596 #region-us \n",
"# Dataset Card\n\n\nDataset in ImagenHub. \n\n\nPlease kindly cite our paper if you use our code, data, models or results:"
] |
[
15,
29
] |
[
"passage: TAGS\n#arxiv-2310.01596 #region-us \n# Dataset Card\n\n\nDataset in ImagenHub. \n\n\nPlease kindly cite our paper if you use our code, data, models or results:"
] |
5f43961abfde5115c39c5128e1f2359893549df8
|
# Chat-Instruct-Mixer Dataset
This dataset is focused on improving LLM logical reasoning and conversation skills. It comprises the following datasets:
| Dataset Name | Train Mixing Percentage/Samples | Test Mixing Percentage/Samples |
|--------------------------------------------------------------|--------------|------------------|
| [timdettmers/openassistant-guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco) | 100% | 300 samples |
| [GAIR/lima](https://huggingface.co/datasets/GAIR/lima) | 100% | 518 samples |
| [garage-bAInd/Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus) | 100% minus the samples set aside for the test split | 2500 samples |
| [Open-Orca/OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca) | 10000 samples from GPT-4 split | 5000 samples |
| [ehartford/dolphin](https://huggingface.co/datasets/ehartford/dolphin) | 10000 samples from GPT-4 split | 5000 samples |
| [stingning/ultrachat](https://huggingface.co/datasets/stingning/ultrachat) | 10000 samples | 5000 samples |
| [jondurbin/airoboros-2.2](https://huggingface.co/datasets/jondurbin/airoboros-2.2) | 10000 samples, filtering out samples with `skip_prompt_formatting==True` | 5000 samples |
Code for Creating this dataset: [ToDo]()
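Until that script is published, the sketch below shows one way such a mixture could be assembled with the `datasets` library. It is an illustration only: the flattening step and column names are assumptions, and just two of the seven sources are shown.
```python
# Hypothetical sketch of the mixing recipe in the table above (not the released build script).
from datasets import load_dataset

def sample_rows(ds, n, seed=42):
    # Shuffle and keep at most n rows.
    return ds.shuffle(seed=seed).select(range(min(n, len(ds))))

guanaco = load_dataset("timdettmers/openassistant-guanaco", split="train")  # kept at 100%
airoboros = sample_rows(
    load_dataset("jondurbin/airoboros-2.2", split="train")
    .filter(lambda ex: not ex["skip_prompt_formatting"]),  # filter noted in the table (assumes a boolean field)
    10_000,
)

# Each source would then be flattened to this dataset's single `content` text column
# (the flattening is source-specific and omitted here) before being combined with
# datasets.concatenate_datasets, e.g.:
# train_mix = concatenate_datasets([flatten(guanaco), flatten(airoboros), ...])
```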
|
smangrul/chat-instruct-mixer
|
[
"region:us"
] |
2023-09-08T02:03:01+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "content", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 169947792.7111158, "num_examples": 73302}, {"name": "test", "num_bytes": 48395025.62775446, "num_examples": 23318}], "download_size": 123606462, "dataset_size": 218342818.33887026}}
|
2023-09-08T04:44:19+00:00
|
[] |
[] |
TAGS
#region-us
|
Chat-Instruct-Mixer Dataset
===========================
This dataset is focused on improving LLM logical reasoning and conversation skills. It comprises the following datasets:
Dataset Name: timdettmers/openassistant-guanaco, Train Mixing Percentage/Samples: 100%, Test Mixing Percentage/Samples: 300 samples
Dataset Name: GAIR/lima, Train Mixing Percentage/Samples: 100%, Test Mixing Percentage/Samples: 518 samples
Dataset Name: garage-bAInd/Open-Platypus, Train Mixing Percentage/Samples: 100% minus the samples set aside for test split, Test Mixing Percentage/Samples: 2500 samples
Dataset Name: Open-Orca/OpenOrca, Train Mixing Percentage/Samples: 10000 samples from GPT-4 split, Test Mixing Percentage/Samples: 5000 samples
Dataset Name: ehartford/dolphin, Train Mixing Percentage/Samples: 10000 samples from GPT-4 split, Test Mixing Percentage/Samples: 5000 samples
Dataset Name: stingning/ultrachat, Train Mixing Percentage/Samples: 10000 samples, Test Mixing Percentage/Samples: 5000 samples
Dataset Name: jondurbin/airoboros-2.2, Train Mixing Percentage/Samples: 10000 Samples while filtering out samples with 'skip\_prompt\_formatting==True', Test Mixing Percentage/Samples: 5000 samples
Code for Creating this dataset: ToDo
|
[] |
[
"TAGS\n#region-us \n"
] |
[
6
] |
[
"passage: TAGS\n#region-us \n"
] |
b7d20221ca07784d98dd7c3a2eb8c0b20eca43ef
|
# Dataset Card for "autotree_automl_100000_eye_movements_sgosdt_l256_dim10_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
yzhuang/autotree_automl_100000_eye_movements_sgosdt_l256_dim10_d3_sd0
|
[
"region:us"
] |
2023-09-08T02:05:03+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "input_x", "sequence": {"sequence": "float32"}}, {"name": "input_y", "sequence": {"sequence": "float32"}}, {"name": "input_y_clean", "sequence": {"sequence": "float32"}}, {"name": "rtg", "sequence": "float64"}, {"name": "status", "sequence": {"sequence": "float32"}}, {"name": "split_threshold", "sequence": {"sequence": "float32"}}, {"name": "split_dimension", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 2364400000, "num_examples": 100000}, {"name": "validation", "num_bytes": 236440000, "num_examples": 10000}], "download_size": 873878506, "dataset_size": 2600840000}}
|
2023-09-08T02:05:45+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "autotree_automl_100000_eye_movements_sgosdt_l256_dim10_d3_sd0"
More Information needed
|
[
"# Dataset Card for \"autotree_automl_100000_eye_movements_sgosdt_l256_dim10_d3_sd0\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"autotree_automl_100000_eye_movements_sgosdt_l256_dim10_d3_sd0\"\n\nMore Information needed"
] |
[
6,
40
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"autotree_automl_100000_eye_movements_sgosdt_l256_dim10_d3_sd0\"\n\nMore Information needed"
] |
6e3e7ae81285d5722cf406e7c40f9a61ded55d4d
|
# Dataset Card for "bus_few4_40x"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
FanChen0116/bus_few4_40x
|
[
"region:us"
] |
2023-09-08T02:13:26+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "tokens", "sequence": "string"}, {"name": "labels", "sequence": {"class_label": {"names": {"0": "O", "1": "I-from_location", "2": "B-from_location", "3": "B-leaving_date", "4": "I-leaving_date", "5": "I-to_location", "6": "B-to_location"}}}}, {"name": "request_slot", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 546058, "num_examples": 2800}, {"name": "validation", "num_bytes": 6900, "num_examples": 35}, {"name": "test", "num_bytes": 70618, "num_examples": 377}], "download_size": 0, "dataset_size": 623576}}
|
2023-09-26T16:23:08+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "bus_few4_40x"
More Information needed
|
[
"# Dataset Card for \"bus_few4_40x\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"bus_few4_40x\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"bus_few4_40x\"\n\nMore Information needed"
] |
e3447d0ed9a10eea743070b89a119acbd56088c0
|
# Dataset Card for "bus_few4_40x_empty"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
FanChen0116/bus_few4_40x_empty
|
[
"region:us"
] |
2023-09-08T02:13:41+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "tokens", "sequence": "string"}, {"name": "labels", "sequence": {"class_label": {"names": {"0": "O", "1": "I-from_location", "2": "B-from_location", "3": "B-leaving_date", "4": "I-leaving_date", "5": "I-to_location", "6": "B-to_location"}}}}, {"name": "request_slot", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 485547, "num_examples": 2800}, {"name": "validation", "num_bytes": 6128, "num_examples": 35}, {"name": "test", "num_bytes": 70618, "num_examples": 377}], "download_size": 0, "dataset_size": 562293}}
|
2023-09-26T16:23:18+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "bus_few4_40x_empty"
More Information needed
|
[
"# Dataset Card for \"bus_few4_40x_empty\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"bus_few4_40x_empty\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"bus_few4_40x_empty\"\n\nMore Information needed"
] |
274305d3bb8ccc869df40bce7edca6853da6860a
|
# Dataset Card for "autotree_automl_100000_california_sgosdt_l256_dim8_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
yzhuang/autotree_automl_100000_california_sgosdt_l256_dim8_d3_sd0
|
[
"region:us"
] |
2023-09-08T02:17:52+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "input_x", "sequence": {"sequence": "float32"}}, {"name": "input_y", "sequence": {"sequence": "float32"}}, {"name": "input_y_clean", "sequence": {"sequence": "float32"}}, {"name": "rtg", "sequence": "float64"}, {"name": "status", "sequence": {"sequence": "float32"}}, {"name": "split_threshold", "sequence": {"sequence": "float32"}}, {"name": "split_dimension", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 2159600000, "num_examples": 100000}, {"name": "validation", "num_bytes": 215960000, "num_examples": 10000}], "download_size": 848115506, "dataset_size": 2375560000}}
|
2023-09-08T02:18:22+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "autotree_automl_100000_california_sgosdt_l256_dim8_d3_sd0"
More Information needed
|
[
"# Dataset Card for \"autotree_automl_100000_california_sgosdt_l256_dim8_d3_sd0\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"autotree_automl_100000_california_sgosdt_l256_dim8_d3_sd0\"\n\nMore Information needed"
] |
[
6,
38
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"autotree_automl_100000_california_sgosdt_l256_dim8_d3_sd0\"\n\nMore Information needed"
] |
e6c09d9b373007c6655cf2742db6b814acd16365
|
# Dataset Card for "correct_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
hantech/correct_dataset
|
[
"region:us"
] |
2023-09-08T02:30:23+00:00
|
{"dataset_info": {"features": [{"name": "source_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 80541676, "num_examples": 626100}], "download_size": 11445024, "dataset_size": 80541676}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-08T06:06:27+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "correct_dataset"
More Information needed
|
[
"# Dataset Card for \"correct_dataset\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"correct_dataset\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"correct_dataset\"\n\nMore Information needed"
] |
00a85686a06a1b614c713cd11dd646b1f024c84a
|
# Dataset Card for "lima-unchained-v1-a"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
alpayariyak/lima-unchained-v1-a
|
[
"region:us"
] |
2023-09-08T03:25:29+00:00
|
{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "input", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1767427, "num_examples": 780}], "download_size": 1047258, "dataset_size": 1767427}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-08T03:25:31+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "lima-unchained-v1-a"
More Information needed
|
[
"# Dataset Card for \"lima-unchained-v1-a\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"lima-unchained-v1-a\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"lima-unchained-v1-a\"\n\nMore Information needed"
] |
0bdf5212a63090d8d71ab184e725cf0765d32414
|
# Dataset Card for "dolly-no_context"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
zhengxuanzenwu/dolly-no_context
|
[
"region:us"
] |
2023-09-08T03:40:04+00:00
|
{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "category", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8566403.9981347, "num_examples": 10544}], "download_size": 3228303, "dataset_size": 8566403.9981347}}
|
2023-09-08T04:54:19+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "dolly-no_context"
More Information needed
|
[
"# Dataset Card for \"dolly-no_context\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"dolly-no_context\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"dolly-no_context\"\n\nMore Information needed"
] |
f32c9dcd9b5a4c420a1fb4a5abeba6920d3a8e7c
|
# Dataset Card for "autotree_automl_100000_default-of-credit-card-clients_sgosdt_l256_dim10_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
yzhuang/autotree_automl_100000_default-of-credit-card-clients_sgosdt_l256_dim10_d3_sd0
|
[
"region:us"
] |
2023-09-08T03:55:18+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "input_x", "sequence": {"sequence": "float32"}}, {"name": "input_y", "sequence": {"sequence": "float32"}}, {"name": "input_y_clean", "sequence": {"sequence": "float32"}}, {"name": "rtg", "sequence": "float64"}, {"name": "status", "sequence": {"sequence": "float32"}}, {"name": "split_threshold", "sequence": {"sequence": "float32"}}, {"name": "split_dimension", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 2364400000, "num_examples": 100000}, {"name": "validation", "num_bytes": 236440000, "num_examples": 10000}], "download_size": 667958200, "dataset_size": 2600840000}}
|
2023-09-08T03:55:50+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "autotree_automl_100000_default-of-credit-card-clients_sgosdt_l256_dim10_d3_sd0"
More Information needed
|
[
"# Dataset Card for \"autotree_automl_100000_default-of-credit-card-clients_sgosdt_l256_dim10_d3_sd0\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"autotree_automl_100000_default-of-credit-card-clients_sgosdt_l256_dim10_d3_sd0\"\n\nMore Information needed"
] |
[
6,
45
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"autotree_automl_100000_default-of-credit-card-clients_sgosdt_l256_dim10_d3_sd0\"\n\nMore Information needed"
] |
0660b1ffef9dec461bfa335a42e82a5761e170c5
|
# Dataset Card for "bus_few4_40x_pvi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
FanChen0116/bus_few4_40x_pvi
|
[
"region:us"
] |
2023-09-08T04:08:39+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "tokens", "sequence": "string"}, {"name": "labels", "sequence": {"class_label": {"names": {"0": "O", "1": "I-from_location", "2": "B-from_location", "3": "B-leaving_date", "4": "I-leaving_date", "5": "I-to_location", "6": "B-to_location"}}}}, {"name": "request_slot", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 345681, "num_examples": 1400}, {"name": "validation", "num_bytes": 6900, "num_examples": 35}, {"name": "test", "num_bytes": 70618, "num_examples": 377}], "download_size": 45026, "dataset_size": 423199}}
|
2023-09-26T18:53:15+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "bus_few4_40x_pvi"
More Information needed
|
[
"# Dataset Card for \"bus_few4_40x_pvi\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"bus_few4_40x_pvi\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"bus_few4_40x_pvi\"\n\nMore Information needed"
] |
f3eba82f8c9cb24a0180f2c4b4a10c9160d4d035
|
# Dataset Card for "low_quality_call_voice_preprocessed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
INo0121/low_quality_call_voice_preprocessed
|
[
"region:us"
] |
2023-09-08T04:10:37+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "valid", "path": "data/valid-*"}]}], "dataset_info": {"features": [{"name": "input_features", "sequence": {"sequence": "float32"}}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 64088254376, "num_examples": 66720}, {"name": "test", "num_bytes": 7476961712, "num_examples": 7784}, {"name": "valid", "num_bytes": 7476975416, "num_examples": 7784}], "download_size": 521083513, "dataset_size": 79042191504}}
|
2023-09-21T12:25:07+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "low_quality_call_voice_preprocessed"
More Information needed
|
[
"# Dataset Card for \"low_quality_call_voice_preprocessed\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"low_quality_call_voice_preprocessed\"\n\nMore Information needed"
] |
[
6,
22
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"low_quality_call_voice_preprocessed\"\n\nMore Information needed"
] |
d5ced32e95b1069a4a0bdfb2eefe9383826b6376
|
# Dataset Card for "SpeakerEmbedding"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
KyS/SpeakerEmbedding1
|
[
"region:us"
] |
2023-09-08T04:36:22+00:00
|
{"dataset_info": {"features": [{"name": "Speakers", "dtype": "string"}, {"name": "Audio", "struct": [{"name": "array", "sequence": "float64"}, {"name": "path", "dtype": "string"}, {"name": "sampling_rate", "dtype": "int64"}]}], "splits": [{"name": "train", "num_bytes": 9702090, "num_examples": 2}], "download_size": 2360485, "dataset_size": 9702090}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-12-07T16:25:30+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "SpeakerEmbedding"
More Information needed
|
[
"# Dataset Card for \"SpeakerEmbedding\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"SpeakerEmbedding\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"SpeakerEmbedding\"\n\nMore Information needed"
] |
905e192c61ac95ca95f2762230e3bf3ee3c0d1a0
|
# Dataset Card for "yjching"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
yjching/tokenized_ts_tracks
|
[
"region:us"
] |
2023-09-08T04:47:49+00:00
|
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 114719, "num_examples": 8}], "download_size": 46092, "dataset_size": 114719}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-08T04:47:52+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "yjching"
More Information needed
|
[
"# Dataset Card for \"yjching\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"yjching\"\n\nMore Information needed"
] |
[
6,
13
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"yjching\"\n\nMore Information needed"
] |
9a552d7d979e9ec2ae18a8fc846cadfd728cf671
|
# Dataset Card for "TinyStories2-ascii-bpe-32k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
cyrilzhang/TinyStories2-ascii-bpe-32k
|
[
"region:us"
] |
2023-09-08T04:59:30+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}], "splits": [{"name": "train", "num_bytes": 2116666000, "num_examples": 516260}, {"name": "validation", "num_bytes": 21369200, "num_examples": 5212}], "download_size": 881246333, "dataset_size": 2138035200}}
|
2023-09-08T05:00:38+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "TinyStories2-ascii-bpe-32k"
More Information needed
|
[
"# Dataset Card for \"TinyStories2-ascii-bpe-32k\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"TinyStories2-ascii-bpe-32k\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"TinyStories2-ascii-bpe-32k\"\n\nMore Information needed"
] |
c9bbee9e2d0716a48c2dda65c9c981fe82cd1da4
|
# Dataset Card for "salesforce-3-formatted"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
pssubitha/salesforce-3-formatted
|
[
"region:us"
] |
2023-09-08T05:14:47+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 13717, "num_examples": 34}], "download_size": 10282, "dataset_size": 13717}}
|
2023-09-08T05:14:52+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "salesforce-3-formatted"
More Information needed
|
[
"# Dataset Card for \"salesforce-3-formatted\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"salesforce-3-formatted\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"salesforce-3-formatted\"\n\nMore Information needed"
] |
f79aa3801411aef5b8913f26f9bf8e78e1b86ea1
|
# Dataset Card for "autotree_pmlb_100000_Hill_Valley_with_noise_sgosdt_l256_dim10_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
yzhuang/autotree_pmlb_100000_Hill_Valley_with_noise_sgosdt_l256_dim10_d3_sd0
|
[
"region:us"
] |
2023-09-08T05:27:06+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "input_x", "sequence": {"sequence": "float32"}}, {"name": "input_y", "sequence": {"sequence": "float32"}}, {"name": "input_y_clean", "sequence": {"sequence": "float32"}}, {"name": "rtg", "sequence": "float64"}, {"name": "status", "sequence": {"sequence": "float32"}}, {"name": "split_threshold", "sequence": {"sequence": "float32"}}, {"name": "split_dimension", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 2364400000, "num_examples": 100000}, {"name": "validation", "num_bytes": 236440000, "num_examples": 10000}], "download_size": 1008851704, "dataset_size": 2600840000}}
|
2023-09-08T05:27:42+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "autotree_pmlb_100000_Hill_Valley_with_noise_sgosdt_l256_dim10_d3_sd0"
More Information needed
|
[
"# Dataset Card for \"autotree_pmlb_100000_Hill_Valley_with_noise_sgosdt_l256_dim10_d3_sd0\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"autotree_pmlb_100000_Hill_Valley_with_noise_sgosdt_l256_dim10_d3_sd0\"\n\nMore Information needed"
] |
[
6,
45
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"autotree_pmlb_100000_Hill_Valley_with_noise_sgosdt_l256_dim10_d3_sd0\"\n\nMore Information needed"
] |
30f951ccc2f8a1e7ac7daa00788e85e86dc44539
|
{
  "Expense application rules": {
    "What counts as an expense": [
      "Items used for work",
      "Items used for skill development"
    ],
    "Tools to use": [
      "[Expense application spreadsheet](https://docs.google.com/spreadsheets/d/1cshM0OaMJ2YKSLZiFqro_BRk6vgCMaGOu60Nwq-rUH0/edit?usp=sharing)",
      "[Transportation expense application spreadsheet](https://docs.google.com/spreadsheets/d/1DVTcvXI1SYSJ6MFpC9d-xXilBLeTkyBnEKtn9vublyw/edit?usp=sharing)"
    ],
    "Application rules": {
      "After the fact": {
        "Timing": [
          "The 15th of every month"
        ],
        "Covered expenses": [
          "Transportation",
          "Books and PC peripherals (up to 50,000 JPY per person per month)",
          "Postage"
        ],
        "Application rules (transportation: commuter pass)": [
          "Starting point: when joining the company or moving house",
          "Look up the one-month commuter pass fare for the route",
          "Enter the route and the one-month pass fare in RecordWork"
        ],
        "Application rules (transportation)": [
          "Create a work item from the template in RecordWork",
          [
            "Admin: transportation expense application (other than commuter pass) {name} - y/m/15"
          ],
          "The applicant creates their own transportation expense application spreadsheet from the template",
          [
            "Paste the expense application spreadsheet URL into the RecordWork work item"
          ]
        ],
        "Application rules (other)": [
          "The applicant creates a work item from the template in RecordWork",
          [
            "Admin: expense application (other) {name} - y/m/15"
          ],
          "The applicant creates their own expense application spreadsheet from the template",
          [
            "Paste the expense application spreadsheet URL into the RecordWork work item"
          ],
          "Handling of receipts",
          [
            "Originals on paper (e.g. till receipts)",
            [
              "Mail by Letter Pack to 4-13-6 Kugayama, Suginami-ku, Tokyo 168-0082"
            ],
            "Originals as PDFs",
            [
              "In your own folder inside https://drive.google.com/drive/u/0/folders/1aT9_WT6lq3lYXyqRQUFdme_Q9hXkj7jT, create a folder named yyyy_mm/16-mm/15_PDF (e.g. 2020_04/16-05/15_PDF) and place the PDFs there",
              "Name each PDF {ID}.pdf (e.g. 0001.pdf)"
            ]
          ]
        ],
        "Payment rule": [
          "Expenses submitted by the 15th are paid on the 25th"
        ]
      },
      "In advance": [
        "When exceeding 50,000 JPY per person per month"
      ]
    },
    "Flow": [
      "Record the link, amount, product name, and intended use in RecordWork",
      "Record it in each person's sheet",
      "On the 15th, apply under the same rules as after-the-fact expenses"
    ],
    "Post-application processing (common to transportation and other expenses)": {
      "Expense verification and entry in confirmed sales": [
        "Cross-check the sheet against the receipts (Akase)",
        "Append 'verified' to the end of the sheet file name (Akase)",
        "Record the amount in confirmed sales and color the row light blue (billed) (Akase)"
      ],
      "Payment": [
        "Make the payment",
        "Change the end of the expense application sheet file name to 'paid'",
        "Color the corresponding confirmed-sales record green (paid)"
      ]
    }
  }
}
|
yokinakaoto/keihi
|
[
"region:us"
] |
2023-09-08T05:32:50+00:00
|
{}
|
2023-09-08T05:33:24+00:00
|
[] |
[] |
TAGS
#region-us
|
{
  "Expense application rules": {
    "What counts as an expense": [
      "Items used for work",
      "Items used for skill development"
    ],
    "Tools to use": [
      "Expense application spreadsheet",
      "Transportation expense application spreadsheet"
    ],
    "Application rules": {
      "After the fact": {
        "Timing": [
          "The 15th of every month"
        ],
        "Covered expenses": [
          "Transportation",
          "Books and PC peripherals (up to 50,000 JPY per person per month)",
          "Postage"
        ],
        "Application rules (transportation: commuter pass)": [
          "Starting point: when joining the company or moving house",
          "Look up the one-month commuter pass fare for the route",
          "Enter the route and the one-month pass fare in RecordWork"
        ],
        "Application rules (transportation)": [
          "Create a work item from the template in RecordWork",
          [
            "Admin: transportation expense application (other than commuter pass) {name} - y/m/15"
          ],
          "The applicant creates their own transportation expense application spreadsheet from the template",
          [
            "Paste the expense application spreadsheet URL into the RecordWork work item"
          ]
        ],
        "Application rules (other)": [
          "The applicant creates a work item from the template in RecordWork",
          [
            "Admin: expense application (other) {name} - y/m/15"
          ],
          "The applicant creates their own expense application spreadsheet from the template",
          [
            "Paste the expense application spreadsheet URL into the RecordWork work item"
          ],
          "Handling of receipts",
          [
            "Originals on paper (e.g. till receipts)",
            [
              "Mail by Letter Pack to 4-13-6 Kugayama, Suginami-ku, Tokyo 168-0082"
            ],
            "Originals as PDFs",
            [
              "In your own folder inside URL, create a folder named yyyy_mm/16-mm/15_PDF (e.g. 2020_04/16-05/15_PDF) and place the PDFs there",
              "Name each PDF {ID}.pdf (e.g. URL)"
            ]
          ]
        ],
        "Payment rule": [
          "Expenses submitted by the 15th are paid on the 25th"
        ]
      },
      "In advance": [
        "When exceeding 50,000 JPY per person per month"
      ]
    },
    "Flow": [
      "Record the link, amount, product name, and intended use in RecordWork",
      "Record it in each person's sheet",
      "On the 15th, apply under the same rules as after-the-fact expenses"
    ],
    "Post-application processing (common to transportation and other expenses)": {
      "Expense verification and entry in confirmed sales": [
        "Cross-check the sheet against the receipts (Akase)",
        "Append 'verified' to the end of the sheet file name (Akase)",
        "Record the amount in confirmed sales and color the row light blue (billed) (Akase)"
      ],
      "Payment": [
        "Make the payment",
        "Change the end of the expense application sheet file name to 'paid'",
        "Color the corresponding confirmed-sales record green (paid)"
      ]
    }
  }
}
|
[] |
[
"TAGS\n#region-us \n"
] |
[
6
] |
[
"passage: TAGS\n#region-us \n"
] |
7c050b6421b688ccf3103d3129e7d4037ff5a6d1
|
# Dataset Card for "EY_speed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Gummybear05/EY_speed
|
[
"region:us"
] |
2023-09-08T05:39:17+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "audio", "struct": [{"name": "array", "sequence": "float64"}, {"name": "path", "dtype": "string"}, {"name": "sample_rate", "dtype": "int64"}]}, {"name": "text", "dtype": "string"}, {"name": "scriptId", "dtype": "int64"}, {"name": "fileNm", "dtype": "string"}, {"name": "recrdTime", "dtype": "float64"}, {"name": "recrdQuality", "dtype": "int64"}, {"name": "recrdDt", "dtype": "string"}, {"name": "scriptSetNo", "dtype": "string"}, {"name": "recrdEnvrn", "dtype": "string"}, {"name": "colctUnitCode", "dtype": "string"}, {"name": "cityCode", "dtype": "string"}, {"name": "recrdUnit", "dtype": "string"}, {"name": "convrsThema", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "recorderId", "dtype": "string"}, {"name": "age", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 4865314660, "num_examples": 5400}], "download_size": 2492360968, "dataset_size": 4865314660}}
|
2023-09-08T05:51:59+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "EY_speed"
More Information needed
|
[
"# Dataset Card for \"EY_speed\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"EY_speed\"\n\nMore Information needed"
] |
[
6,
13
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"EY_speed\"\n\nMore Information needed"
] |
1c4efda7f4ebcd0882de1d3f92938d00f9a288a7
|
# Dataset Card for "bus_few4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
FanChen0116/bus_few4
|
[
"region:us"
] |
2023-09-08T05:42:21+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "tokens", "sequence": "string"}, {"name": "labels", "sequence": {"class_label": {"names": {"0": "O", "1": "I-from_location", "2": "B-from_location", "3": "B-leaving_date", "4": "I-leaving_date", "5": "I-to_location", "6": "B-to_location"}}}}, {"name": "request_slot", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 11159, "num_examples": 60}, {"name": "validation", "num_bytes": 1913, "num_examples": 10}, {"name": "test", "num_bytes": 70618, "num_examples": 377}], "download_size": 22954, "dataset_size": 83690}}
|
2023-09-08T07:01:08+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "bus_few4"
More Information needed
|
[
"# Dataset Card for \"bus_few4\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"bus_few4\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"bus_few4\"\n\nMore Information needed"
] |
ad8b80a8588f0b08ad3c0cac5002e135b6449141
|
# Dataset Card for "bus_few4_empty"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
FanChen0116/bus_few4_empty
|
[
"region:us"
] |
2023-09-08T05:43:04+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "tokens", "sequence": "string"}, {"name": "labels", "sequence": {"class_label": {"names": {"0": "O", "1": "I-from_location", "2": "B-from_location", "3": "B-leaving_date", "4": "I-leaving_date", "5": "I-to_location", "6": "B-to_location"}}}}, {"name": "request_slot", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 9935, "num_examples": 60}, {"name": "validation", "num_bytes": 1684, "num_examples": 10}, {"name": "test", "num_bytes": 70618, "num_examples": 377}], "download_size": 20471, "dataset_size": 82237}}
|
2023-09-08T07:01:21+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "bus_few4_empty"
More Information needed
|
[
"# Dataset Card for \"bus_few4_empty\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"bus_few4_empty\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"bus_few4_empty\"\n\nMore Information needed"
] |
e9263dcb38a87869c09b5473e444a73edbb4f043
|
# Dataset Card for "autotree_pmlb_100000_Hill_Valley_without_noise_sgosdt_l256_dim10_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
yzhuang/autotree_pmlb_100000_Hill_Valley_without_noise_sgosdt_l256_dim10_d3_sd0
|
[
"region:us"
] |
2023-09-08T05:57:21+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "input_x", "sequence": {"sequence": "float32"}}, {"name": "input_y", "sequence": {"sequence": "float32"}}, {"name": "input_y_clean", "sequence": {"sequence": "float32"}}, {"name": "rtg", "sequence": "float64"}, {"name": "status", "sequence": {"sequence": "float32"}}, {"name": "split_threshold", "sequence": {"sequence": "float32"}}, {"name": "split_dimension", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 2364400000, "num_examples": 100000}, {"name": "validation", "num_bytes": 236440000, "num_examples": 10000}], "download_size": 1014035692, "dataset_size": 2600840000}}
|
2023-09-08T05:58:08+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "autotree_pmlb_100000_Hill_Valley_without_noise_sgosdt_l256_dim10_d3_sd0"
More Information needed
|
[
"# Dataset Card for \"autotree_pmlb_100000_Hill_Valley_without_noise_sgosdt_l256_dim10_d3_sd0\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"autotree_pmlb_100000_Hill_Valley_without_noise_sgosdt_l256_dim10_d3_sd0\"\n\nMore Information needed"
] |
[
6,
46
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"autotree_pmlb_100000_Hill_Valley_without_noise_sgosdt_l256_dim10_d3_sd0\"\n\nMore Information needed"
] |
17a0174b1afdb78d05fc82274183577886348ce0
|
# Dataset Card for "EY_freq_speed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Gummybear05/EY_freq_speed
|
[
"region:us"
] |
2023-09-08T06:07:21+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "audio", "struct": [{"name": "array", "sequence": "float64"}, {"name": "path", "dtype": "string"}, {"name": "sample_rate", "dtype": "int64"}]}, {"name": "text", "dtype": "string"}, {"name": "scriptId", "dtype": "int64"}, {"name": "fileNm", "dtype": "string"}, {"name": "recrdTime", "dtype": "float64"}, {"name": "recrdQuality", "dtype": "int64"}, {"name": "recrdDt", "dtype": "string"}, {"name": "scriptSetNo", "dtype": "string"}, {"name": "recrdEnvrn", "dtype": "string"}, {"name": "colctUnitCode", "dtype": "string"}, {"name": "cityCode", "dtype": "string"}, {"name": "recrdUnit", "dtype": "string"}, {"name": "convrsThema", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "recorderId", "dtype": "string"}, {"name": "age", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 4865314660, "num_examples": 5400}], "download_size": 2492988610, "dataset_size": 4865314660}}
|
2023-09-08T09:29:02+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "EY_freq_speed"
More Information needed
|
[
"# Dataset Card for \"EY_freq_speed\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"EY_freq_speed\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"EY_freq_speed\"\n\nMore Information needed"
] |
80255e6e161985ed10b184444136b8586579234c
|
# A Multi-Domain Corpus for Measurement Extraction (Seq2Seq variant)
A detailed description of corpus creation can be found [here](https://aclanthology.org/2023.bionlp-1.1/).
This dataset contains the training, validation, and test data for each of the three datasets `measeval`, `bm`, and `msp`. The `measeval` and `msp` datasets were adapted from the [MeasEval (Harper et al., 2021)](https://github.com/harperco/MeasEval) and the [Material Synthesis Procedural (Mysore et al., 2019)](https://github.com/olivettigroup/annotated-materials-syntheses) corpus, respectively.
This repository aggregates extractions to paragraph level for msp and measeval. Labels are given in JSON format as preparation for seq2seq training.
# How to load
```python
from datasets import load_dataset
# Only train, all domains
train_dataset = load_dataset("liy140/multidomain-measextract-corpus", "all", split="train")
# All measeval data
measeval_dataset = load_dataset("liy140/multidomain-measextract-corpus", "measeval", split=["train", "val", "test"])
```
# Create Seq2Seq samples
One standard instruction is given, so that a prompt can be generated by merging the text and extraction columns:
```
### Instruction
You are an expert at extracting quantity, units and their related context from text.
Given a paragraph below identify each quantity and its related unit and related context, i.e. the measured entity and measured property if they exist.
### Paragraph
The H/H+ transition in the MC09 model occurs near 1.4Rp. If we replace the gray approximation with the full solar spectrum in this model, the H/H+ transition moves higher to 2–3Rp. This is because photons with different energies penetrate to different depths in the atmosphere, extending the heating profile in altitude around the heating peak. This is why the temperature at the 30 nbar level in the C2 model is 3800 K and not 1000 K. In order to test the effect of higher temperatures in the lower thermosphere, we extended the MC09 model to p0 = 1 μbar (with T0 = 1300 K) and again used the full solar spectrum for heating and ionization. With these conditions, the H/H+ transition moves up to 3.4Rp, in agreement with the C2 model. We conclude that the unrealistic boundary conditions and the gray approximation adopted by Murray-Clay et al. (2009) and Guo (2011) lead to an underestimated overall density of H and an overestimated ion fraction. Thus their density profiles yield a H Lyman α transit depth of the order of 2–3% i.e., not significantly higher than the visible transit depth.
### Extractions
[
{
"docId": "S0019103513005058-3154",
"measured_entity": "Soluble sulfate",
"measured_property": null,
"quantity": "1.3 \u00b1 0.5 wt.%",
"unit": "wt.%"
},
{
"docId": "S0019103513005058-3154",
"measured_entity": "soil",
"measured_property": "perchlorate (ClO4-)",
"quantity": "\u223c0.5 wt.%",
"unit": "wt.%"
},
{
"docId": "S0019103513005058-3154",
"measured_entity": "perchlorate-sensitive electrode",
"measured_property": "sensitive to nitrate",
"quantity": "1000 times",
"unit": "times"
},
{
"docId": "S0019103513005058-3154",
"measured_entity": "Viking 1 and Viking 2 landing sites",
"measured_property": "perchlorate",
"quantity": "\u2a7d1.6%",
"unit": "%"
},
{
"docId": "S0019103513005058-3154",
"measured_entity": "martian meteorite EETA79001",
"measured_property": "Native perchlorate",
"quantity": "<1 ppm by mass",
"unit": "ppm by mass"
}
]
```
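As a rough illustration (not the authors' preprocessing code), one row can be turned into a seq2seq pair as in the sketch below; the column names `text` and `extractions` are assumptions and may differ from the actual schema.
```python
# Sketch: build (source, target) pairs by merging the instruction, the paragraph and the labels.
# Column names `text` and `extractions` are assumed; adjust them to the real schema.
import json
from datasets import load_dataset

INSTRUCTION = (
    "You are an expert at extracting quantity, units and their related context from text.\n"
    "Given a paragraph below identify each quantity and its related unit and related context, "
    "i.e. the measured entity and measured property if they exist."
)

def to_seq2seq(example):
    source = (
        f"### Instruction\n{INSTRUCTION}\n"
        f"### Paragraph\n{example['text']}\n"
        "### Extractions\n"
    )
    target = json.dumps(example["extractions"], indent=4)
    return {"source": source, "target": target}

train = load_dataset("liy140/multidomain-measextract-corpus", "measeval", split="train")
train = train.map(to_seq2seq)
```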
# Citation
```
@inproceedings{li-etal-2023-multi-source,
title = "Multi-Source (Pre-)Training for Cross-Domain Measurement, Unit and Context Extraction",
author = "Li, Yueling and
Martschat, Sebastian and
Ponzetto, Simone Paolo",
booktitle = "The 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.bionlp-1.1",
pages = "1--25",
abstract = "We present a cross-domain approach for automated measurement and context extraction based on pre-trained language models. We construct a multi-source, multi-domain corpus and train an end-to-end extraction pipeline. We then apply multi-source task-adaptive pre-training and fine-tuning to benchmark the cross-domain generalization capability of our model. Further, we conceptualize and apply a task-specific error analysis and derive insights for future work. Our results suggest that multi-source training leads to the best overall results, while single-source training yields the best results for the respective individual domain. While our setup is successful at extracting quantity values and units, more research is needed to improve the extraction of contextual entities. We make the cross-domain corpus used in this work available online.",
}
```
|
liy140/multidomain-measextract-corpus
|
[
"task_categories:token-classification",
"size_categories:n<1K",
"language:en",
"chemistry",
"biology",
"region:us"
] |
2023-09-08T06:08:47+00:00
|
{"language": ["en"], "size_categories": ["n<1K"], "task_categories": ["token-classification"], "configs": [{"config_name": "measeval", "data_files": [{"split": "train", "path": "measeval_paragraph_level_no_spans_train.json"}, {"split": "val", "path": "measeval_paragraph_level_no_spans_val.json"}, {"split": "test", "path": "measeval_paragraph_level_no_spans_test.json"}]}, {"config_name": "bm", "data_files": [{"split": "train", "path": "bm_paragraph_level_no_spans_train.json"}, {"split": "val", "path": "bm_paragraph_level_no_spans_val.json"}, {"split": "test", "path": "bm_paragraph_level_no_spans_test.json"}]}, {"config_name": "msp", "data_files": [{"split": "train", "path": "msp_paragraph_level_no_spans_train.json"}, {"split": "val", "path": "msp_paragraph_level_no_spans_val.json"}, {"split": "test", "path": "msp_paragraph_level_no_spans_test.json"}]}, {"config_name": "all", "data_files": [{"split": "train", "path": ["measeval_paragraph_level_no_spans_train.json", "bm_paragraph_level_no_spans_train.json", "msp_paragraph_level_no_spans_train.json"]}, {"split": "val", "path": ["measeval_paragraph_level_no_spans_val.json", "bm_paragraph_level_no_spans_val.json", "msp_paragraph_level_no_spans_val.json"]}, {"split": "test", "path": ["measeval_paragraph_level_no_spans_test.json", "bm_paragraph_level_no_spans_test.json", "msp_paragraph_level_no_spans_test.json"]}]}], "tags": ["chemistry", "biology"]}
|
2023-09-12T07:09:43+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-token-classification #size_categories-n<1K #language-English #chemistry #biology #region-us
|
# A Multi-Domain Corpus for Measurement Extraction (Seq2Seq variant)
A detailed description of corpus creation can be found here.
This dataset contains the training, validation, and test data for each of the three datasets 'measeval', 'bm', and 'msp'. The 'measeval' and 'msp' datasets were adapted from the MeasEval (Harper et al., 2021) and the Material Synthesis Procedural (Mysore et al., 2019) corpus, respectively.
This repository aggregates extractions to paragraph level for msp and measeval. Labels are given in JSON format as preparation for seq2seq training.
# How to load
# Create Seq2Seq samples
One standard instruction is given, so that a prompt can be generated by merging the text and extraction columns:
|
[
"# A Multi-Domain Corpus for Measurement Extraction (Seq2Seq variant)\n\n\nA detailed description of corpus creation can be found here.\n\nThis dataset contains the training and validation and test data for each of the three datasets 'measeval', 'bm', and 'msp'. The 'measeval', and 'msp' datasets were adapted from the MeasEval (Harper et al., 2021) and the Material Synthesis Procedual (Mysore et al., 2019) corpus respectively.\n\nThis repository aggregates extraction to paragraph-level for msp and measeval. Labels are given in json-format as preparation for seq2seq training.",
"# How to load",
"# Create Seq2Seq samples\n\nOne standard instruction is given, such that such a prompt can be generated by merging text and extraction columns:"
] |
[
"TAGS\n#task_categories-token-classification #size_categories-n<1K #language-English #chemistry #biology #region-us \n",
"# A Multi-Domain Corpus for Measurement Extraction (Seq2Seq variant)\n\n\nA detailed description of corpus creation can be found here.\n\nThis dataset contains the training and validation and test data for each of the three datasets 'measeval', 'bm', and 'msp'. The 'measeval', and 'msp' datasets were adapted from the MeasEval (Harper et al., 2021) and the Material Synthesis Procedual (Mysore et al., 2019) corpus respectively.\n\nThis repository aggregates extraction to paragraph-level for msp and measeval. Labels are given in json-format as preparation for seq2seq training.",
"# How to load",
"# Create Seq2Seq samples\n\nOne standard instruction is given, such that such a prompt can be generated by merging text and extraction columns:"
] |
[
39,
165,
4,
36
] |
[
"passage: TAGS\n#task_categories-token-classification #size_categories-n<1K #language-English #chemistry #biology #region-us \n# A Multi-Domain Corpus for Measurement Extraction (Seq2Seq variant)\n\n\nA detailed description of corpus creation can be found here.\n\nThis dataset contains the training and validation and test data for each of the three datasets 'measeval', 'bm', and 'msp'. The 'measeval', and 'msp' datasets were adapted from the MeasEval (Harper et al., 2021) and the Material Synthesis Procedual (Mysore et al., 2019) corpus respectively.\n\nThis repository aggregates extraction to paragraph-level for msp and measeval. Labels are given in json-format as preparation for seq2seq training.# How to load# Create Seq2Seq samples\n\nOne standard instruction is given, such that such a prompt can be generated by merging text and extraction columns:"
] |
3146a446c27e7c8cec03f288138b5e86cc9e40a2
|
# Dataset Card for "OMR-forms"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
saurabh1896/OMR-forms
|
[
"region:us"
] |
2023-09-08T06:15:42+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8632972.0, "num_examples": 14}, {"name": "test", "num_bytes": 1629831.0, "num_examples": 4}], "download_size": 7181972, "dataset_size": 10262803.0}}
|
2023-09-08T06:24:42+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "OMR-forms"
More Information needed
|
[
"# Dataset Card for \"OMR-forms\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"OMR-forms\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"OMR-forms\"\n\nMore Information needed"
] |
455403a1a31d3dc2540dff856a7489b5181df13a
|
# Dataset Card for "bus_few4_pvi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
FanChen0116/bus_few4_pvi
|
[
"region:us"
] |
2023-09-08T07:02:36+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "tokens", "sequence": "string"}, {"name": "labels", "sequence": {"class_label": {"names": {"0": "O", "1": "I-from_location", "2": "B-from_location", "3": "B-leaving_date", "4": "I-leaving_date", "5": "I-to_location", "6": "B-to_location"}}}}, {"name": "request_slot", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 11159, "num_examples": 60}, {"name": "validation", "num_bytes": 1913, "num_examples": 10}, {"name": "test", "num_bytes": 70618, "num_examples": 377}], "download_size": 0, "dataset_size": 83690}}
|
2023-09-08T07:13:03+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "bus_few4_pvi"
More Information needed
|
[
"# Dataset Card for \"bus_few4_pvi\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"bus_few4_pvi\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"bus_few4_pvi\"\n\nMore Information needed"
] |
b396b88da8da3256bea19a0deea74529e6348109
|
Here is a collective list of the instruction datasets used for Neural Chat fine-tuning. The total numbers of instruction samples and tokens are about 1.5M and 5M, respectively.
| Type | Language | Dataset | Number |
|--| ---- |--------|----|
| HC3 | en | [HC3](https://huggingface.co/datasets/Hello-SimpleAI/HC3) | 24K |
| dolly | en | [databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) | 15K |
| alpaca-zh | zh | [tigerbot-alpaca-zh-0.5m](https://huggingface.co/datasets/TigerResearch/tigerbot-alpaca-zh-0.5m) | 500K |
| alpaca-en | en | [TigerResearch/tigerbot-alpaca-en-50k](https://huggingface.co/datasets/TigerResearch/tigerbot-alpaca-en-50k) | 50K |
| math | en | [tigerbot-gsm-8k-en](https://huggingface.co/datasets/TigerResearch/tigerbot-gsm-8k-en) | 8K |
| general | en | [tigerbot-stackexchange-qa-en-0.5m](https://huggingface.co/datasets/TigerResearch/tigerbot-stackexchange-qa-en-0.5m) | 500K |
| OpenOrca | en | [Open-Orca/OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca) | 400K (sampled) |
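For reference, the per-source counts above sum to roughly the quoted total: 24K + 15K + 500K + 50K + 8K + 500K + 400K ≈ 1.5M samples.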
The collective dataset has been validated on multiple LLMs (such as MPT, Llama, and Llama 2) by the NeuralChat team (Kaokao Lv, Wenxin Zhang, Xuhui Ren, and Haihao Shen) from Intel/SATG/AIA/AIPT. Thanks to [Hello-SimpleAI](https://huggingface.co/Hello-SimpleAI), [databricks](https://huggingface.co/databricks), [TigerResearch/TigerBot](https://github.com/TigerResearch/TigerBot), and [Open-Orca](https://huggingface.co/Open-Orca) for releasing these open-source instruction datasets.
|
Intel/neural-chat-dataset-v2
|
[
"license:apache-2.0",
"region:us"
] |
2023-09-08T07:08:53+00:00
|
{"license": "apache-2.0"}
|
2023-09-08T07:16:02+00:00
|
[] |
[] |
TAGS
#license-apache-2.0 #region-us
|
Here is a collective list of the instruction datasets used for Neural Chat fine-tuning. The total numbers of instruction samples and tokens are about 1.5M and 5M, respectively.
The collective dataset has been validated on multiple LLMs (such as MPT, Llama, and Llama 2) by the NeuralChat team (Kaokao Lv, Wenxin Zhang, Xuhui Ren, and Haihao Shen) from Intel/SATG/AIA/AIPT. Thanks to Hello-SimpleAI, databricks, TigerResearch/TigerBot, and Open-Orca for releasing these open-source instruction datasets.
|
[] |
[
"TAGS\n#license-apache-2.0 #region-us \n"
] |
[
14
] |
[
"passage: TAGS\n#license-apache-2.0 #region-us \n"
] |
9691b1fc278d3008c3fd3c25f4f0d3c0d5fc9743
|
# Martin-test
Created from AIOD platform
|
mtkinit/Martin-test
|
[
"region:us"
] |
2023-09-08T07:17:25+00:00
|
{"pretty_name": "Martin-test"}
|
2023-09-08T07:17:27+00:00
|
[] |
[] |
TAGS
#region-us
|
# Martin-test
Created from AIOD platform
|
[
"# Martin-test\nCreated from AIOD platform"
] |
[
"TAGS\n#region-us \n",
"# Martin-test\nCreated from AIOD platform"
] |
[
6,
10
] |
[
"passage: TAGS\n#region-us \n# Martin-test\nCreated from AIOD platform"
] |
474fbeb97198ab2ac55be2dca3bbcd29d92fd834
|
# Dataset Card for "python_code_instructions_18k_alpaca-standardized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
khalidalt/python_code_instructions_18k_alpaca-standardized
|
[
"region:us"
] |
2023-09-08T07:18:06+00:00
|
{"dataset_info": {"features": [{"name": "message", "dtype": "string"}, {"name": "message_type", "dtype": "string"}, {"name": "message_id", "dtype": "int64"}, {"name": "conversation_id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 27190878, "num_examples": 74448}], "download_size": 7233038, "dataset_size": 27190878}}
|
2023-09-08T07:18:15+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "python_code_instructions_18k_alpaca-standardized"
More Information needed
|
[
"# Dataset Card for \"python_code_instructions_18k_alpaca-standardized\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"python_code_instructions_18k_alpaca-standardized\"\n\nMore Information needed"
] |
[
6,
26
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"python_code_instructions_18k_alpaca-standardized\"\n\nMore Information needed"
] |
06b78b18a6bcbfdd79c7f35208fa4e78d2d65d6a
|
# Dataset Card for "SAS_Python_Conversion"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ashwincv0112/SAS_Python_Conversion
|
[
"region:us"
] |
2023-09-08T07:23:28+00:00
|
{"dataset_info": {"features": [{"name": "SAS Code", "dtype": "string"}, {"name": "Converted Python Code", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6362, "num_examples": 30}], "download_size": 5247, "dataset_size": 6362}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-08T07:23:29+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "SAS_Python_Conversion"
More Information needed
|
[
"# Dataset Card for \"SAS_Python_Conversion\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"SAS_Python_Conversion\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"SAS_Python_Conversion\"\n\nMore Information needed"
] |
c43f7d248c8e3887da9933203da146db79608cdc
|
# Dataset Card for "something"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
alpayariyak/something
|
[
"region:us"
] |
2023-09-08T07:23:38+00:00
|
{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 62321431, "num_examples": 56037}], "download_size": 30816818, "dataset_size": 62321431}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-08T07:23:50+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "something"
More Information needed
|
[
"# Dataset Card for \"something\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"something\"\n\nMore Information needed"
] |
[
6,
12
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"something\"\n\nMore Information needed"
] |
32c89f7c52f16e46115d9c31e6000ddb081b3de3
|
# Dataset Card for EntityCS
- Repository: https://github.com/huawei-noah/noah-research/tree/master/NLP/EntityCS
- Paper: https://aclanthology.org/2022.findings-emnlp.499.pdf
- Point of Contact: [Fenia Christopoulou](mailto:[email protected]), [Chenxi Whitehouse](mailto:[email protected])
## Dataset Description
We use the English Wikipedia and leverage entity information from Wikidata to construct an entity-based Code Switching corpus.
To achieve this, we make use of wikilinks in Wikipedia, i.e. links from one page to another.
We use the English [Wikipedia dump](https://dumps.wikimedia.org/enwiki/latest/) (November 2021) and extract raw text with [WikiExtractor](https://github.com/attardi/wikiextractor) while keeping track of wikilinks.
Since we are interested in creating entity-level CS instances, we only keep sentences containing at least one wikilink.
Given an English sentence with wikilinks, we first map the entity in each wikilink to its corresponding Wikidata ID and
retrieve its available translations from Wikidata.
For each sentence, we check which languages have translations for all entities in that sentence, and consider those as candidates for code-switching.
We ensure all entities are code-switched to the same target language in a single sentence, avoiding noise from including too many languages.
To control the size of the corpus, we generate up to five code-switched sentences for each English sentence.
In particular, if fewer than five languages have translations available for all the entities in a sentence, we create code-switched instances with all of them.
Otherwise, we randomly select five target languages from the candidates.
If no candidate languages can be found, we do not code-switch the sentence; instead, we keep it as part of the English corpus.
Finally, we surround each entity with entity indicators (`<e>`, `</e>`).
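The procedure above can be summarised with the following illustrative sketch; the `entity_translations` structure and the function itself are hypothetical placeholders rather than released tooling.

```python
import random

def code_switch(sentence, entity_translations, max_langs=5):
    """Illustrative sketch of the corpus construction described above.

    `entity_translations` maps each entity surface form in `sentence` to a
    dict {language_code: translation} retrieved from Wikidata (an assumed
    helper structure, not part of any released code).
    """
    # Candidate languages must provide translations for *all* entities in the sentence.
    candidates = set.intersection(
        *(set(translations) for translations in entity_translations.values())
    )
    if not candidates:
        return []  # the sentence stays in the English-only part of the corpus

    targets = random.sample(sorted(candidates), min(max_langs, len(candidates)))
    cs_sentences = []
    for lang in targets:
        cs = sentence
        for entity, translations in entity_translations.items():
            # Entity indicators follow the data instance shown later in this card, e.g. <de>...</de>.
            cs = cs.replace(entity, f"<{lang}>{translations[lang]}</{lang}>")
        cs_sentences.append(cs)
    return cs_sentences
```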
## Supported Tasks and Leaderboards
The dataset was developed for intermediate pre-training of language models.
In the paper we further fine-tune models on entity-centric downstream tasks, such as NER.
## Languages
The dataset covers 93 languages in total, including English.
## Data Statistics
| Statistic | Count |
|:------------------------------|------------:|
| Languages | 93 |
| English Sentences | 54,469,214 |
| English Entities | 104,593,076 |
| Average Sentence Length | 23.37 |
| Average Entities per Sentence | 2 |
| CS Sentences per EN Sentence | ≤ 5 |
| CS Sentences | 231,124,422 |
| CS Entities | 420,907,878 |
## Data Fields
Each instance contains 4 fields:
- `id`: Unique ID of each sentence
- `language`: The language of choice for entity code-switching of the given sentence
- `en_sentence`: The original English sentence
- `cs_sentence`: The code-switched sentence
In the case of the English subset, the `cs_sentence` field does not exist as the sentences are not code-switched.
An example of what a data instance looks like:
```
{
'id': 19,
'en_sentence': 'The subs then enter a <en>coral reef</en> with many bright reflective colors.',
'cs_sentence': 'The subs then enter a <de>Korallenriff</de> with many bright reflective colors.',
'language': 'de'
}
```
## Data Splits
There is a single data split for each language. You can randomly select a few examples from each language to serve as a validation set; a minimal sketch follows below.
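The per-language configuration name ("de" here) is an assumption; check the repository for the exact configuration names before use.

```python
from datasets import load_dataset

# Assumption: each language is exposed as its own configuration (here "de").
ds = load_dataset("huawei-noah/entity_cs", "de", split="train")

# Only a train split exists per language, so carve out a small validation set.
splits = ds.train_test_split(test_size=1000, seed=42)
train_set, validation_set = splits["train"], splits["test"]
print(train_set[0])
```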
## Limitations
An important limitation of the work is that before code-switching an entity, its morphological inflection is not checked.
This can lead to potential errors as the form of the CS entity might not agree with the surrounding context (e.g. plural).
There should be few such cases, as we are only switching entities. However, this should be improved in a later version of the corpus.
Secondly, the diversity of languages used to construct the EntityCS corpus is restricted to the overlap between the available languages in WikiData and XLM-R pre-training.
This choice was for a better comparison between models, however it is possible to extend the corpus with more languages that XLM-R does not cover, following
the procedure presented in the paper.
## Citation
**BibTeX**
```html
@inproceedings{whitehouse-etal-2022-entitycs,
title = "{E}ntity{CS}: Improving Zero-Shot Cross-lingual Transfer with Entity-Centric Code Switching",
author = "Whitehouse, Chenxi and
Christopoulou, Fenia and
Iacobacci, Ignacio",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2022",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-emnlp.499",
pages = "6698--6714"
}
```
**APA**
```html
Whitehouse, C., Christopoulou, F., & Iacobacci, I. (2022). EntityCS: Improving Zero-Shot Cross-lingual Transfer with Entity-Centric Code Switching. In Findings of the Association for Computational Linguistics: EMNLP 2022.
```
|
huawei-noah/entity_cs
|
[
"size_categories:100M<n<1B",
"language:af",
"language:am",
"language:ar",
"language:as",
"language:az",
"language:be",
"language:bg",
"language:bn",
"language:br",
"language:bs",
"language:ca",
"language:cs",
"language:cy",
"language:da",
"language:de",
"language:en",
"language:el",
"language:eo",
"language:es",
"language:et",
"language:eu",
"language:fa",
"language:fi",
"language:fr",
"language:fy",
"language:ga",
"language:gd",
"language:gl",
"language:gu",
"language:ha",
"language:he",
"language:hi",
"language:hr",
"language:hu",
"language:hy",
"language:id",
"language:is",
"language:it",
"language:ja",
"language:jv",
"language:ka",
"language:kk",
"language:km",
"language:kn",
"language:ko",
"language:ku",
"language:ky",
"language:la",
"language:lo",
"language:lt",
"language:lv",
"language:mg",
"language:mk",
"language:ml",
"language:mn",
"language:mr",
"language:ms",
"language:my",
"language:ne",
"language:nl",
"language:nb",
"language:om",
"language:or",
"language:pa",
"language:pl",
"language:ps",
"language:pt",
"language:ro",
"language:ru",
"language:sa",
"language:sd",
"language:si",
"language:sk",
"language:sl",
"language:so",
"language:sq",
"language:sr",
"language:su",
"language:sv",
"language:sw",
"language:ta",
"language:te",
"language:th",
"language:tl",
"language:tr",
"language:ug",
"language:uk",
"language:ur",
"language:uz",
"language:vi",
"language:xh",
"language:yi",
"language:zh",
"license:apache-2.0",
"region:us"
] |
2023-09-08T07:44:07+00:00
|
{"language": ["af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "en", "el", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "hu", "hy", "id", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "nb", "om", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sa", "sd", "si", "sk", "sl", "so", "sq", "sr", "su", "sv", "sw", "ta", "te", "th", "tl", "tr", "ug", "uk", "ur", "uz", "vi", "xh", "yi", "zh"], "license": "apache-2.0", "size_categories": ["100M<n<1B"]}
|
2023-09-20T06:05:07+00:00
|
[] |
[
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"en",
"el",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"nb",
"om",
"or",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sa",
"sd",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"su",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"ug",
"uk",
"ur",
"uz",
"vi",
"xh",
"yi",
"zh"
] |
TAGS
#size_categories-100M<n<1B #language-Afrikaans #language-Amharic #language-Arabic #language-Assamese #language-Azerbaijani #language-Belarusian #language-Bulgarian #language-Bengali #language-Breton #language-Bosnian #language-Catalan #language-Czech #language-Welsh #language-Danish #language-German #language-English #language-Modern Greek (1453-) #language-Esperanto #language-Spanish #language-Estonian #language-Basque #language-Persian #language-Finnish #language-French #language-Western Frisian #language-Irish #language-Scottish Gaelic #language-Galician #language-Gujarati #language-Hausa #language-Hebrew #language-Hindi #language-Croatian #language-Hungarian #language-Armenian #language-Indonesian #language-Icelandic #language-Italian #language-Japanese #language-Javanese #language-Georgian #language-Kazakh #language-Khmer #language-Kannada #language-Korean #language-Kurdish #language-Kirghiz #language-Latin #language-Lao #language-Lithuanian #language-Latvian #language-Malagasy #language-Macedonian #language-Malayalam #language-Mongolian #language-Marathi #language-Malay (macrolanguage) #language-Burmese #language-Nepali (macrolanguage) #language-Dutch #language-Norwegian Bokmål #language-Oromo #language-Oriya (macrolanguage) #language-Panjabi #language-Polish #language-Pushto #language-Portuguese #language-Romanian #language-Russian #language-Sanskrit #language-Sindhi #language-Sinhala #language-Slovak #language-Slovenian #language-Somali #language-Albanian #language-Serbian #language-Sundanese #language-Swedish #language-Swahili (macrolanguage) #language-Tamil #language-Telugu #language-Thai #language-Tagalog #language-Turkish #language-Uighur #language-Ukrainian #language-Urdu #language-Uzbek #language-Vietnamese #language-Xhosa #language-Yiddish #language-Chinese #license-apache-2.0 #region-us
|
Dataset Card for EntityCS
=========================
* Repository: URL
* Paper: URL
* Point of Contact: Fenia Christopoulou, Chenxi Whitehouse
Dataset Description
-------------------
We use the English Wikipedia and leverage entity information from Wikidata to construct an entity-based Code Switching corpus.
To achieve this, we make use of wikilinks in Wikipedia, i.e. links from one page to another.
We use the English Wikipedia dump (November 2021) and extract raw text with WikiExtractor while keeping track of wikilinks.
Since we are interested in creating entity-level CS instances, we only keep sentences containing at least one wikilink.
Given an English sentence with wikilinks, we first map the entity in each wikilink to its corresponding Wikidata ID and
retrieve its available translations from Wikidata.
For each sentence, we check which languages have translations for all entities in that sentence, and consider those as candidates for code-switching.
We ensure all entities are code-switched to the same target language in a single sentence, avoiding noise from including too many languages.
To control the size of the corpus, we generate up to five code-switched sentences for each English sentence.
In particular, if fewer than five languages have translations available for all the entities in a sentence, we create code-switched instances with all of them.
Otherwise, we randomly select five target languages from the candidates.
If no candidate languages can be found, we do not code-switch the sentence; instead, we keep it as part of the English corpus.
Finally, we surround each entity with entity indicators ('', '').
Supported Tasks and Leaderboards
--------------------------------
The dataset was developed for intermediate pre-training of language models.
In the paper we further fine-tune models on entity-centric downstream tasks, such as NER.
Languages
---------
The dataset covers 93 languages in total, including English.
Data Statistics
---------------
Data Fields
-----------
Each instance contains 4 fields:
* 'id': Unique ID of each sentence
* 'language': The language of choice for entity code-switching of the given sentence
* 'en\_sentence': The original English sentence
* 'cs\_sentence': The code-switched sentence
In the case of the English subset, the 'cs\_sentence' field does not exist as the sentences are not code-switched.
An example of what a data instance looks like:
Data Splits
-----------
There is a single data split for each language. You can randomly select a few examples from each language to serve as a validation set.
Limitations
-----------
An important limitation of the work is that before code-switching an entity, its morphological inflection is not checked.
This can lead to potential errors as the form of the CS entity might not agree with the surrounding context (e.g. plural).
There should be few such cases, as we are only switching entities. However, this should be improved in a later version of the corpus.
Secondly, the diversity of languages used to construct the EntityCS corpus is restricted to the overlap between the available languages in WikiData and XLM-R pre-training.
This choice was for a better comparison between models, however it is possible to extend the corpus with more languages that XLM-R does not cover, following
the procedure presented in the paper.
BibTeX
APA
|
[] |
[
"TAGS\n#size_categories-100M<n<1B #language-Afrikaans #language-Amharic #language-Arabic #language-Assamese #language-Azerbaijani #language-Belarusian #language-Bulgarian #language-Bengali #language-Breton #language-Bosnian #language-Catalan #language-Czech #language-Welsh #language-Danish #language-German #language-English #language-Modern Greek (1453-) #language-Esperanto #language-Spanish #language-Estonian #language-Basque #language-Persian #language-Finnish #language-French #language-Western Frisian #language-Irish #language-Scottish Gaelic #language-Galician #language-Gujarati #language-Hausa #language-Hebrew #language-Hindi #language-Croatian #language-Hungarian #language-Armenian #language-Indonesian #language-Icelandic #language-Italian #language-Japanese #language-Javanese #language-Georgian #language-Kazakh #language-Khmer #language-Kannada #language-Korean #language-Kurdish #language-Kirghiz #language-Latin #language-Lao #language-Lithuanian #language-Latvian #language-Malagasy #language-Macedonian #language-Malayalam #language-Mongolian #language-Marathi #language-Malay (macrolanguage) #language-Burmese #language-Nepali (macrolanguage) #language-Dutch #language-Norwegian Bokmål #language-Oromo #language-Oriya (macrolanguage) #language-Panjabi #language-Polish #language-Pushto #language-Portuguese #language-Romanian #language-Russian #language-Sanskrit #language-Sindhi #language-Sinhala #language-Slovak #language-Slovenian #language-Somali #language-Albanian #language-Serbian #language-Sundanese #language-Swedish #language-Swahili (macrolanguage) #language-Tamil #language-Telugu #language-Thai #language-Tagalog #language-Turkish #language-Uighur #language-Ukrainian #language-Urdu #language-Uzbek #language-Vietnamese #language-Xhosa #language-Yiddish #language-Chinese #license-apache-2.0 #region-us \n"
] |
[
567
] |
[
"passage: "
] |
550bf2a482042f78f8239b23af29e03dbf1bd5b2
|
# Dataset Card for "autotree_automl_100000_house_16H_sgosdt_l256_dim10_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
yzhuang/autotree_automl_100000_house_16H_sgosdt_l256_dim10_d3_sd0
|
[
"region:us"
] |
2023-09-08T08:06:46+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "input_x", "sequence": {"sequence": "float32"}}, {"name": "input_y", "sequence": {"sequence": "float32"}}, {"name": "input_y_clean", "sequence": {"sequence": "float32"}}, {"name": "rtg", "sequence": "float64"}, {"name": "status", "sequence": {"sequence": "float32"}}, {"name": "split_threshold", "sequence": {"sequence": "float32"}}, {"name": "split_dimension", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 2364400000, "num_examples": 100000}, {"name": "validation", "num_bytes": 236440000, "num_examples": 10000}], "download_size": 948866270, "dataset_size": 2600840000}}
|
2023-09-08T08:07:23+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "autotree_automl_100000_house_16H_sgosdt_l256_dim10_d3_sd0"
More Information needed
|
[
"# Dataset Card for \"autotree_automl_100000_house_16H_sgosdt_l256_dim10_d3_sd0\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"autotree_automl_100000_house_16H_sgosdt_l256_dim10_d3_sd0\"\n\nMore Information needed"
] |
[
6,
39
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"autotree_automl_100000_house_16H_sgosdt_l256_dim10_d3_sd0\"\n\nMore Information needed"
] |
19ab07e1332d50da712fce043473da631ee23f4f
|
# Bangumi Image Base of Kara No Kyoukai
This is the image base of bangumi Kara no Kyoukai. We detected 20 characters and 1626 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% clean; they may still contain noisy samples.** If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 415 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 79 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 49 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 400 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 50 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 21 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 15 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 20 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 111 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 24 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 28 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 64 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 27 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 156 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 11 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 23 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 20 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 49 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 13 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 51 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
BangumiBase/karanokyoukai
|
[
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
] |
2023-09-08T08:07:15+00:00
|
{"license": "mit", "size_categories": ["1K<n<10K"], "tags": ["art"]}
|
2023-09-29T05:35:33+00:00
|
[] |
[] |
TAGS
#size_categories-1K<n<10K #license-mit #art #region-us
|
Bangumi Image Base of Kara No Kyoukai
=====================================
This is the image base of bangumi Kara no Kyoukai. We detected 20 characters and 1626 images in total. The full dataset is here.
Please note that these image bases are not guaranteed to be 100% clean; they may still contain noisy samples. If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
|
[] |
[
"TAGS\n#size_categories-1K<n<10K #license-mit #art #region-us \n"
] |
[
25
] |
[
"passage: TAGS\n#size_categories-1K<n<10K #license-mit #art #region-us \n"
] |
a258649f2f6db2f8c2a396521fcef8abd1c2d5e5
|
# Ripe Strawberries Detection
The dataset consists of photos of strawberries for the identification and recognition of **ripe berries**.
The images are annotated with **bounding boxes** that accurately demarcate the location of the ripe strawberries within the image.
Each image in the dataset showcases a strawberry plantation, and includes a diverse range of *backgrounds, lighting conditions, and orientations*. The photos are captured from various *angles and distances*, providing a realistic representation of strawberries.
The dataset can be utilised for enabling advancements in *strawberry production, quality control, and greater precision in agricultural practices*.

# Get the dataset
### This is just an example of the data
Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=ripe-strawberries-detection) to discuss your requirements, learn about the price and buy the dataset.
# Dataset structure
- **images** - contains the original images of strawberries
- **boxes** - includes bounding box labeling for the original images
- **annotations.xml** - contains the coordinates of the bounding boxes and labels created for the original photos
# Data Format
Each image from the `images` folder is accompanied by an XML annotation in the `annotations.xml` file indicating the coordinates of the bounding boxes for ripe strawberries detection. For each point, the x and y coordinates are provided. Visibility of the ripe strawberry is also indicated by the attribute **occluded** (0, 1).
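A minimal parsing sketch is given below. It assumes a CVAT-style `annotations.xml` with `<image>` elements containing `<box>` children that carry `xtl`/`ytl`/`xbr`/`ybr` coordinates and an `occluded` attribute; verify the element and attribute names against the actual file, since the exact schema is only shown as an image below.

```python
import xml.etree.ElementTree as ET

# Sketch only: element and attribute names are assumptions based on CVAT-style exports.
tree = ET.parse("annotations.xml")
for image in tree.getroot().iter("image"):
    name = image.get("name")
    for box in image.iter("box"):
        coords = tuple(float(box.get(key)) for key in ("xtl", "ytl", "xbr", "ybr"))
        occluded = box.get("occluded")
        print(name, box.get("label"), coords, occluded)
```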
# Example of XML file structure

# Strawberry Detection might be made in accordance with your requirements.
## [**TrainingData**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=ripe-strawberries-detection) provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets**
|
TrainingDataPro/ripe-strawberries-detection
|
[
"task_categories:image-classification",
"task_categories:image-to-image",
"task_categories:object-detection",
"language:en",
"license:cc-by-nc-nd-4.0",
"code",
"biology",
"region:us"
] |
2023-09-08T08:29:07+00:00
|
{"language": ["en"], "license": "cc-by-nc-nd-4.0", "task_categories": ["image-classification", "image-to-image", "object-detection"], "tags": ["code", "biology"], "dataset_info": {"features": [{"name": "id", "dtype": "int32"}, {"name": "name", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "mask", "dtype": "image"}, {"name": "width", "dtype": "uint16"}, {"name": "height", "dtype": "uint16"}, {"name": "shapes", "sequence": [{"name": "label", "dtype": {"class_label": {"names": {"0": "strawberry"}}}}, {"name": "type", "dtype": "string"}, {"name": "points", "sequence": {"sequence": "float32"}}, {"name": "rotation", "dtype": "float32"}, {"name": "attributes", "sequence": [{"name": "name", "dtype": "string"}, {"name": "text", "dtype": "string"}]}]}], "splits": [{"name": "train", "num_bytes": 127730244, "num_examples": 40}], "download_size": 126412271, "dataset_size": 127730244}}
|
2023-09-26T07:38:14+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-image-classification #task_categories-image-to-image #task_categories-object-detection #language-English #license-cc-by-nc-nd-4.0 #code #biology #region-us
|
# Ripe Strawberries Detection
The dataset consists of photos of strawberries for the identification and recognition of ripe berries.
The images are annotated with bounding boxes that accurately demarcate the location of the ripe strawberries within the image.
Each image in the dataset showcases a strawberry plantation, and includes a diverse range of *backgrounds, lighting conditions, and orientations*. The photos are captured from various *angles and distances*, providing a realistic representation of strawberries.
The dataset can be utilised for enabling advancements in *strawberry production, quality control, and greater precision in agricultural practices*.
.
# Example of XML file structure
|
lklimkiewicz/ds1000-instruction-output
|
[
"region:us"
] |
2023-09-08T08:50:48+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1160376, "num_examples": 1000}], "download_size": 452997, "dataset_size": 1160376}}
|
2023-09-08T08:50:51+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "ds1000-instruction-output"
More Information needed
|
[
"# Dataset Card for \"ds1000-instruction-output\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"ds1000-instruction-output\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"ds1000-instruction-output\"\n\nMore Information needed"
] |
7d0db2432fb882c081766a502ab5798ea3c5a249
|
# Shor_Sentiment_Dataset
Created from AIOD platform
|
mtkinit/Shor_Sentiment_Dataset
|
[
"sentiment",
"region:us"
] |
2023-09-08T09:09:26+00:00
|
{"pretty_name": "Shor_Sentiment_Dataset", "tags": ["sentiment"]}
|
2023-09-08T09:09:27+00:00
|
[] |
[] |
TAGS
#sentiment #region-us
|
# Shor_Sentiment_Dataset
Created from AIOD platform
|
[
"# Shor_Sentiment_Dataset\nCreated from AIOD platform"
] |
[
"TAGS\n#sentiment #region-us \n",
"# Shor_Sentiment_Dataset\nCreated from AIOD platform"
] |
[
9,
16
] |
[
"passage: TAGS\n#sentiment #region-us \n# Shor_Sentiment_Dataset\nCreated from AIOD platform"
] |
d89a86fd56e54f7c4c24ab202bb6ec9b7c8ff0aa
|
# Dataset Card for "cls-slu-aug-v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Cherishh/cls-slu-aug-v1
|
[
"region:us"
] |
2023-09-08T09:49:48+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Ki\u1ec3m tra t\u00ecnh tr\u1ea1ng thi\u1ebft b\u1ecb", "1": "B\u1eadt thi\u1ebft b\u1ecb", "2": "M\u1edf thi\u1ebft b\u1ecb", "3": "\u0110\u00f3ng thi\u1ebft b\u1ecb", "4": "Gi\u1ea3m \u0111\u1ed9 s\u00e1ng c\u1ee7a thi\u1ebft b\u1ecb", "5": "T\u0103ng m\u1ee9c \u0111\u1ed9 c\u1ee7a thi\u1ebft b\u1ecb", "6": "T\u0103ng \u0111\u1ed9 s\u00e1ng c\u1ee7a thi\u1ebft b\u1ecb", "7": "T\u1eaft thi\u1ebft b\u1ecb", "8": "T\u0103ng nhi\u1ec7t \u0111\u1ed9 c\u1ee7a thi\u1ebft b\u1ecb", "9": "Gi\u1ea3m m\u1ee9c \u0111\u1ed9 c\u1ee7a thi\u1ebft b\u1ecb", "10": "Gi\u1ea3m \u00e2m l\u01b0\u1ee3ng c\u1ee7a thi\u1ebft b\u1ecb", "11": "Gi\u1ea3m nhi\u1ec7t \u0111\u1ed9 c\u1ee7a thi\u1ebft b\u1ecb", "12": "T\u0103ng \u00e2m l\u01b0\u1ee3ng c\u1ee7a thi\u1ebft b\u1ecb", "13": "H\u1ee7y ho\u1ea1t c\u1ea3nh", "14": "K\u00edch ho\u1ea1t c\u1ea3nh"}}}}, {"name": "input_ids", "sequence": "int32"}, {"name": "token_type_ids", "sequence": "int8"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 3074283, "num_examples": 17630}, {"name": "val", "num_bytes": 344317, "num_examples": 1959}, {"name": "test", "num_bytes": 128301, "num_examples": 749}], "download_size": 1076953, "dataset_size": 3546901}}
|
2023-09-08T09:49:50+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "cls-slu-aug-v1"
More Information needed
|
[
"# Dataset Card for \"cls-slu-aug-v1\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"cls-slu-aug-v1\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"cls-slu-aug-v1\"\n\nMore Information needed"
] |
7f3a18e711ecbba514c08557440a92f06843fc73
|
# Dataset Card for "ner-slu-aug-v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Cherishh/ner-slu-aug-v1
|
[
"region:us"
] |
2023-09-08T09:50:19+00:00
|
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "token_type_ids", "sequence": "int8"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 3933942, "num_examples": 17604}, {"name": "val", "num_bytes": 439860, "num_examples": 1957}, {"name": "test", "num_bytes": 163660, "num_examples": 749}], "download_size": 649747, "dataset_size": 4537462}}
|
2023-09-09T03:47:22+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "ner-slu-aug-v1"
More Information needed
|
[
"# Dataset Card for \"ner-slu-aug-v1\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"ner-slu-aug-v1\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"ner-slu-aug-v1\"\n\nMore Information needed"
] |
add3917614048951171cbd3aeeb7810dbf592b4c
|
This file contains embeddings of the English text of Bhagwad Gita Chapter 2 from this site (https://vedabase.io/en/library/bg/2/),
generated using the sentence-transformers model sentence-transformers/all-MiniLM-L6-v2 at Hugging Face.
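A minimal sketch of how such embeddings are typically produced with this model is shown below; the example sentence is a placeholder rather than actual verse text.

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
verses = ["<placeholder: an English verse from Bhagwad Gita Chapter 2>"]
embeddings = model.encode(verses)  # all-MiniLM-L6-v2 yields 384-dimensional vectors
print(embeddings.shape)
```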
|
nimrita/BhagwadGitaChapter2Embeddings
|
[
"license:afl-3.0",
"region:us"
] |
2023-09-08T10:00:21+00:00
|
{"license": "afl-3.0"}
|
2023-09-09T02:51:35+00:00
|
[] |
[] |
TAGS
#license-afl-3.0 #region-us
|
This file contains embeddings of the English text of Bhagwad Gita Chapter 2 from this site (URL),
generated using the sentence-transformers model sentence-transformers/all-MiniLM-L6-v2 at Hugging Face.
|
[] |
[
"TAGS\n#license-afl-3.0 #region-us \n"
] |
[
14
] |
[
"passage: TAGS\n#license-afl-3.0 #region-us \n"
] |
6f43c0a8b8e4894553f30b6e234c33e280722a78
|
# Dataset Card for "llama2-SST2-SFT-with-system-prompt"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
OneFly7/llama2-SST2-SFT-with-system-prompt
|
[
"region:us"
] |
2023-09-08T10:15:26+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "label_text", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 22798484, "num_examples": 67349}, {"name": "validation", "num_bytes": 329484, "num_examples": 872}], "download_size": 4382265, "dataset_size": 23127968}}
|
2023-09-08T10:15:31+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "llama2-SST2-SFT-with-system-prompt"
More Information needed
|
[
"# Dataset Card for \"llama2-SST2-SFT-with-system-prompt\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"llama2-SST2-SFT-with-system-prompt\"\n\nMore Information needed"
] |
[
6,
26
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"llama2-SST2-SFT-with-system-prompt\"\n\nMore Information needed"
] |
4d13035789a193b04fb97b32bc1ddac62e965faf
|
# Dataset Card for "CsFEVERv2"
## Dataset Description
CsFEVERv2_pvi is a dataset for Czech fact-checking (NLI) developed as part of a bachelor thesis at the Artificial Intelligence Center of the Faculty of Electrical Engineering of
the Czech Technical University in Prague.
### Languages
Czech
## Dataset Usage Example
```python
from datasets import load_dataset
dataset = load_dataset("ctu-aic/csfever_v2_pvi")
```
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```json
{'id': 155439,
'label': 2,
'claim': 'Newcastle United FC vyhrál pět ligových titulů.',
'evidence': "Ronnie Simpson. Ronnie Simpson (21. října 1930, Glasgow – 19. dubna 2004, Edinburgh) byl skotský fotbalový brankář..."}
```
### Data Fields
- `id`: an `int32` feature.
- `label`: an `int32` feature.
- `claim`: a `string` feature.
- `evidence`: a `string` feature.
### Data Splits
| | train | dev | test |
|----------|-------:|-----:|------:|
| num_rows | 106209 | 6319 | 6261 |
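As a quick sanity check, the class balance can be inspected directly from the loaded `DatasetDict` (this reuses `dataset` from the usage example above):

```python
from collections import Counter

# Count label occurrences per split without assuming specific split names.
for split_name, split in dataset.items():
    print(split_name, Counter(split["label"]))
```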
# Citation
```bibtex
@article{Ullrich_2023,
doi = {10.1007/s10579-023-09654-3},
url = {https://doi.org/10.1007%2Fs10579-023-09654-3},
year = 2023,
month = {may},
publisher = {Springer Science and Business Media {LLC}},
author = {Herbert Ullrich and Jan Drchal and Martin Rýpar and Hana Vincourová and Václav Moravec},
title = {{CsFEVER} and {CTKFacts}: acquiring Czech data for fact verification},
journal = {Language Resources and Evaluation},
archivePrefix={arXiv},
eprint={2201.11115},
}
```
```bibtex
@misc{ethayarajh2022understanding,
title={Understanding Dataset Difficulty with $\mathcal{V}$-Usable Information},
author={Kawin Ethayarajh and Yejin Choi and Swabha Swayamdipta},
year={2022},
eprint={2110.08420},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@thesis{Mlynar_2023,
author = {Mlynář, Tomáš},
type = {Bachelor's Thesis},
title = {Automated Fact Checking Based on Czech Wikipedia},
institution = {Czech Technical University in Prague, Faculty of Electrical Engineering},
date = {2023},
url = {http://hdl.handle.net/10467/109219}
}
```
|
ctu-aic/csfever_v2_pvi
|
[
"task_categories:text-classification",
"task_ids:natural-language-inference",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:fever",
"language:cs",
"license:cc-by-sa-3.0",
"Fact-checking",
"arxiv:2201.11115",
"arxiv:2110.08420",
"region:us"
] |
2023-09-08T10:26:03+00:00
|
{"language": ["cs"], "license": "cc-by-sa-3.0", "multilinguality": "monolingual", "size_categories": ["100K<n<1M"], "source_datasets": "fever", "task_categories": ["text-classification"], "task_ids": ["natural-language-inference"], "pretty_name": "CsFEVERv2-PVI", "tags": ["Fact-checking"]}
|
2023-09-08T10:33:32+00:00
|
[
"2201.11115",
"2110.08420"
] |
[
"cs"
] |
TAGS
#task_categories-text-classification #task_ids-natural-language-inference #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-fever #language-Czech #license-cc-by-sa-3.0 #Fact-checking #arxiv-2201.11115 #arxiv-2110.08420 #region-us
|
Dataset Card for "CsFEVERv2"
============================
Dataset Description
-------------------
CsFEVERv2\_pvi is a dataset for Czech fact-checking (NLI) developed as part of a bachelor thesis at the Artificial Intelligence Center of the Faculty of Electrical Engineering of
the Czech Technical University in Prague.
### Languages
Czech
Dataset Usage Example
---------------------
Dataset Structure
-----------------
### Data Instances
An example of 'train' looks as follows.
### Data Fields
* 'id': an 'int32' feature.
* 'label': an 'int32' feature.
* 'claim': a 'string' feature.
* 'evidence': a 'string' feature.
### Data Splits
|
[
"### Languages\n\n\nCzech\n\n\nDataset Usage Example\n---------------------\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nAn example of 'train' looks as follows.",
"### Data Fields\n\n\n* 'id': a 'int32' feature.\n* 'label': a 'int32' feature.\n* 'claim': a 'string' feature.\n* 'evidence': a 'string' feature.",
"### Data Splits"
] |
[
"TAGS\n#task_categories-text-classification #task_ids-natural-language-inference #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-fever #language-Czech #license-cc-by-sa-3.0 #Fact-checking #arxiv-2201.11115 #arxiv-2110.08420 #region-us \n",
"### Languages\n\n\nCzech\n\n\nDataset Usage Example\n---------------------\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nAn example of 'train' looks as follows.",
"### Data Fields\n\n\n* 'id': a 'int32' feature.\n* 'label': a 'int32' feature.\n* 'claim': a 'string' feature.\n* 'evidence': a 'string' feature.",
"### Data Splits"
] |
[
100,
21,
18,
53,
5
] |
[
"passage: TAGS\n#task_categories-text-classification #task_ids-natural-language-inference #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-fever #language-Czech #license-cc-by-sa-3.0 #Fact-checking #arxiv-2201.11115 #arxiv-2110.08420 #region-us \n### Languages\n\n\nCzech\n\n\nDataset Usage Example\n---------------------\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nAn example of 'train' looks as follows.### Data Fields\n\n\n* 'id': a 'int32' feature.\n* 'label': a 'int32' feature.\n* 'claim': a 'string' feature.\n* 'evidence': a 'string' feature.### Data Splits"
] |
8c6ce2bdf9dc835f9f4b130ccbb9c321a4b65de9
|
# Dataset Card for "hh-rlhf_with_features_flan_t5_large_flan_t5_zeroshot"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dongyoung4091/hh-rlhf_with_features_flan_t5_large_flan_t5_zeroshot
|
[
"region:us"
] |
2023-09-08T10:37:07+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "chosen", "dtype": "string"}, {"name": "rejected", "dtype": "string"}, {"name": "helpfulness_chosen", "dtype": "int64"}, {"name": "helpfulness_rejected", "dtype": "int64"}, {"name": "specificity_chosen", "dtype": "int64"}, {"name": "specificity_rejected", "dtype": "int64"}, {"name": "intent_chosen", "dtype": "int64"}, {"name": "intent_rejected", "dtype": "int64"}, {"name": "factuality_chosen", "dtype": "int64"}, {"name": "factuality_rejected", "dtype": "int64"}, {"name": "easy-to-understand_chosen", "dtype": "int64"}, {"name": "easy-to-understand_rejected", "dtype": "int64"}, {"name": "relevance_chosen", "dtype": "int64"}, {"name": "relevance_rejected", "dtype": "int64"}, {"name": "readability_chosen", "dtype": "int64"}, {"name": "readability_rejected", "dtype": "int64"}, {"name": "enough-detail_chosen", "dtype": "int64"}, {"name": "enough-detail_rejected", "dtype": "int64"}, {"name": "biased:_chosen", "dtype": "int64"}, {"name": "biased:_rejected", "dtype": "int64"}, {"name": "fail-to-consider-individual-preferences_chosen", "dtype": "int64"}, {"name": "fail-to-consider-individual-preferences_rejected", "dtype": "int64"}, {"name": "repetetive_chosen", "dtype": "int64"}, {"name": "repetetive_rejected", "dtype": "int64"}, {"name": "fail-to-consider-context_chosen", "dtype": "int64"}, {"name": "fail-to-consider-context_rejected", "dtype": "int64"}, {"name": "too-long_chosen", "dtype": "int64"}, {"name": "too-long_rejected", "dtype": "int64"}, {"name": "human", "dtype": "string"}, {"name": "assistant_chosen", "dtype": "string"}, {"name": "assistant_rejected", "dtype": "string"}, {"name": "log_score_chosen", "dtype": "float64"}, {"name": "log_score_rejected", "dtype": "float64"}, {"name": "labels", "dtype": "string"}, {"name": "zeroshot_helpfulness_chosen", "dtype": "int64"}, {"name": "zeroshot_helpfulness_rejected", "dtype": "int64"}, {"name": "zeroshot_specificity_chosen", "dtype": "int64"}, {"name": "zeroshot_specificity_rejected", "dtype": "int64"}, {"name": "zeroshot_intent_chosen", "dtype": "int64"}, {"name": "zeroshot_intent_rejected", "dtype": "int64"}, {"name": "zeroshot_factuality_chosen", "dtype": "int64"}, {"name": "zeroshot_factuality_rejected", "dtype": "int64"}, {"name": "zeroshot_easy-to-understand_chosen", "dtype": "int64"}, {"name": "zeroshot_easy-to-understand_rejected", "dtype": "int64"}, {"name": "zeroshot_relevance_chosen", "dtype": "int64"}, {"name": "zeroshot_relevance_rejected", "dtype": "int64"}, {"name": "zeroshot_readability_chosen", "dtype": "int64"}, {"name": "zeroshot_readability_rejected", "dtype": "int64"}, {"name": "zeroshot_enough-detail_chosen", "dtype": "int64"}, {"name": "zeroshot_enough-detail_rejected", "dtype": "int64"}, {"name": "zeroshot_biased:_chosen", "dtype": "int64"}, {"name": "zeroshot_biased:_rejected", "dtype": "int64"}, {"name": "zeroshot_fail-to-consider-individual-preferences_chosen", "dtype": "int64"}, {"name": "zeroshot_fail-to-consider-individual-preferences_rejected", "dtype": "int64"}, {"name": "zeroshot_repetetive_chosen", "dtype": "int64"}, {"name": "zeroshot_repetetive_rejected", "dtype": "int64"}, {"name": "zeroshot_fail-to-consider-context_chosen", "dtype": "int64"}, {"name": "zeroshot_fail-to-consider-context_rejected", "dtype": "int64"}, {"name": "zeroshot_too-long_chosen", "dtype": "int64"}, {"name": "zeroshot_too-long_rejected", "dtype": 
"int64"}], "splits": [{"name": "train", "num_bytes": 16425816, "num_examples": 9574}, {"name": "test", "num_bytes": 16369741, "num_examples": 9574}], "download_size": 16115109, "dataset_size": 32795557}}
|
2023-09-08T10:37:16+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "hh-rlhf_with_features_flan_t5_large_flan_t5_zeroshot"
More Information needed
|
[
"# Dataset Card for \"hh-rlhf_with_features_flan_t5_large_flan_t5_zeroshot\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"hh-rlhf_with_features_flan_t5_large_flan_t5_zeroshot\"\n\nMore Information needed"
] |
[
6,
36
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"hh-rlhf_with_features_flan_t5_large_flan_t5_zeroshot\"\n\nMore Information needed"
] |
0e4f864f220e00e48001f5028ea66e59b514212d
|
# Dataset Card for "hh-generated_flan_t5_large_flan_t5_zeroshot"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dongyoung4091/hh-generated_flan_t5_large_flan_t5_zeroshot
|
[
"region:us"
] |
2023-09-08T10:53:41+00:00
|
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "zeroshot_helpfulness", "dtype": "float64"}, {"name": "zeroshot_specificity", "dtype": "float64"}, {"name": "zeroshot_intent", "dtype": "int64"}, {"name": "zeroshot_factuality", "dtype": "int64"}, {"name": "zeroshot_easy-to-understand", "dtype": "int64"}, {"name": "zeroshot_relevance", "dtype": "int64"}, {"name": "zeroshot_readability", "dtype": "float64"}, {"name": "zeroshot_enough-detail", "dtype": "float64"}, {"name": "zeroshot_biased:", "dtype": "int64"}, {"name": "zeroshot_fail-to-consider-individual-preferences", "dtype": "int64"}, {"name": "zeroshot_repetetive", "dtype": "int64"}, {"name": "zeroshot_fail-to-consider-context", "dtype": "int64"}, {"name": "zeroshot_too-long", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 6352357, "num_examples": 25600}], "download_size": 798475, "dataset_size": 6352357}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-08T10:53:45+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "hh-generated_flan_t5_large_flan_t5_zeroshot"
More Information needed
|
[
"# Dataset Card for \"hh-generated_flan_t5_large_flan_t5_zeroshot\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"hh-generated_flan_t5_large_flan_t5_zeroshot\"\n\nMore Information needed"
] |
[
6,
30
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"hh-generated_flan_t5_large_flan_t5_zeroshot\"\n\nMore Information needed"
] |
e80248af8879ee8f07ea9d83b805242e4d5c81c6
|
# fin-eng-dataset
# Updated 29th October 2023
New version. Covers around 30K individual words and around 10K sentences, phrases etc.
# Updated 19th September 2023
New version. Over 20K unique words and over 2K sentences/paragraphs fin-eng versions.
# Updated 10th September 2023
Updated version.
Around 15K different words and a couple of thousand sentences, paragraphs, quotes, questions and answers.
# English
The file fine-eng-dataset.json contains over 9000 individual Finnish words with their English translations. Since some of the words are names of places, people, etc., the exact number of Finnish words is unknown.
Part of the data includes a list of Finnish words along with their English translations. However, the majority of the data consists of Finnish sentences, questions, statements, etc., that have been translated into English.
The data begins with a list of the thousand most common Finnish words with their translations. Following that are sentences, including quotes from Martti Ahtisaari, Public Domain books like "Open Life," Maila Talvio's "The Destruction of Dark Cabin," as well as sentences from free novellas "Midsummer Gift for Readers" and "Erotic Novella: Towards Malaysia."
In addition, sentences, quotes from movies, basic sentences produced by artificial intelligence, personal messages, etc., have been added, totaling over a thousand entries. Random paragraphs from Finnish Wikipedia's "random article" have also been included.
The work is intended to continue indefinitely. Help is needed; please contact [email protected].
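Since the JSON schema is not documented above, a cautious first step is to inspect the file before assuming any field names; a minimal sketch:

```python
import json

# Sketch only: the structure of fine-eng-dataset.json (list vs. mapping) is not documented,
# so inspect it before relying on specific keys.
with open("fine-eng-dataset.json", encoding="utf-8") as f:
    data = json.load(f)

entries = data if isinstance(data, list) else list(data.items())
print(type(data).__name__, len(entries), "entries; first entry:", entries[0])
```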
# Suomeksi
fine-eng-dataset.json sisältää yli 9000 yksittäistä suomenkielistä sanaa englanninkielisenä käännöksenään. Koska osa sanoista on paikkojen-, ihmisten-, jne, nimiä niin tarkkaa määrää suomenkielisestä sanoista ei tiedetä.
Osassa dataa on syötetty lista suomenkielisiä sanoja sekä niiden englanninkieliset käännökset. Suurin osa datasta on kuitenkin suomenkielisiä lauseita, kysymyksiä, toteamuksia jne. jotka on käännetty englanniksi.
Data alkaa luettelolla tuhannesta yleisimmästä suomenkilisestä sanasta käännöksineen. Tämän jälkeen tulee lauseita, mm. lainauksia Martti Ahtesaaresta, Public Domain kirjoista "Avoin Elämä", Maila Talvion "Pimeänpirtin hävitys", sekä lauseita ilmaisista novelleista "Juhannustalahja lukijoille" ja "Erottiinen novelli: Kohti Malesiaa".
Lisäksi on syötetty lauseita, lainauksia elokuvista, tekoälyn tuottamia peruslauseita, omia viestejä jne. kaiken kaikkiaan yli tuhannen kappaleen verran sekä otettu satunnaisia kappaleita suomenkielisestä wikipediasta "satunnainen artikkeli".
Tarkoitus on jatkaa työtä toistaiseksi. Apua tarvitaan, ota yhteyttä [email protected]
|
EkBass/fin-eng-dataset
|
[
"task_categories:translation",
"language:fi",
"language:en",
"license:gpl-3.0",
"text",
"translation",
"finnish",
"english",
"region:us"
] |
2023-09-08T11:31:35+00:00
|
{"language": ["fi", "en"], "license": "gpl-3.0", "task_categories": ["translation"], "pretty_name": "fin-eng-dataset-6k", "tags": ["text", "translation", "finnish", "english"]}
|
2023-10-29T08:32:14+00:00
|
[] |
[
"fi",
"en"
] |
TAGS
#task_categories-translation #language-Finnish #language-English #license-gpl-3.0 #text #translation #finnish #english #region-us
|
# fin-eng-dataset
# Updated 29th October 2023
New version. Covers around 30K individual words and around 10K sentences, phrases etc.
# Updated 19th September 2023
New version. Over 20K unique words and over 2K sentences/paragraphs fin-eng versions.
# Updated 10th September 2023
Updated version.
Around 15K different words and a couple of thousand sentences, paragraphs, quotes, questions and answers.
# English
The file URL contains over 9000 individual Finnish words with their English translations. Since some of the words are names of places, people, etc., the exact number of Finnish words is unknown.
Part of the data includes a list of Finnish words along with their English translations. However, the majority of the data consists of Finnish sentences, questions, statements, etc., that have been translated into English.
The data begins with a list of the thousand most common Finnish words with their translations. Following that are sentences, including quotes from Martti Ahtisaari, Public Domain books like "Open Life," Maila Talvio's "The Destruction of Dark Cabin," as well as sentences from free novellas "Midsummer Gift for Readers" and "Erotic Novella: Towards Malaysia."
In addition, sentences, quotes from movies, basic sentences produced by artificial intelligence, personal messages, etc., have been added, totaling over a thousand entries. Random paragraphs from Finnish Wikipedia's "random article" have also been included.
The work is intended to continue indefinitely. Help is needed; please contact krisu.virtanen@URL.
# Suomeksi
URL sisältää yli 9000 yksittäistä suomenkielistä sanaa englanninkielisenä käännöksenään. Koska osa sanoista on paikkojen-, ihmisten-, jne, nimiä niin tarkkaa määrää suomenkielisestä sanoista ei tiedetä.
Osassa dataa on syötetty lista suomenkielisiä sanoja sekä niiden englanninkieliset käännökset. Suurin osa datasta on kuitenkin suomenkielisiä lauseita, kysymyksiä, toteamuksia jne. jotka on käännetty englanniksi.
Data alkaa luettelolla tuhannesta yleisimmästä suomenkilisestä sanasta käännöksineen. Tämän jälkeen tulee lauseita, mm. lainauksia Martti Ahtesaaresta, Public Domain kirjoista "Avoin Elämä", Maila Talvion "Pimeänpirtin hävitys", sekä lauseita ilmaisista novelleista "Juhannustalahja lukijoille" ja "Erottiinen novelli: Kohti Malesiaa".
Lisäksi on syötetty lauseita, lainauksia elokuvista, tekoälyn tuottamia peruslauseita, omia viestejä jne. kaiken kaikkiaan yli tuhannen kappaleen verran sekä otettu satunnaisia kappaleita suomenkielisestä wikipediasta "satunnainen artikkeli".
Tarkoitus on jatkaa työtä toistaiseksi. Apua tarvitaan, ota yhteyttä krisu.virtanen@URL
|
[
"# fin-eng-dataset",
"# Updated 29th October 2023\nNew version. Covers around 30K individual words and around 10K sentences, phrases etc.",
"# Updated 19th September 2023\nNew version. Over 20K unique words and over 2K sentences/paragraphs fin-eng versions.",
"# Updated 10th September 2023\nUpdated version.\nAround 15K different words and a couple of thousands of sentences, paragraphs, quots, questions and answers.",
"# English\nThe file URL contains over 9000 individual Finnish words with their English translations. Since some of the words are names of places, people, etc., the exact number of Finnish words is unknown.\nPart of the data includes a list of Finnish words along with their English translations. However, the majority of the data consists of Finnish sentences, questions, statements, etc., that have been translated into English.\nThe data begins with a list of the thousand most common Finnish words with their translations. Following that are sentences, including quotes from Martti Ahtisaari, Public Domain books like \"Open Life,\" Maila Talvio's \"The Destruction of Dark Cabin,\" as well as sentences from free novellas \"Midsummer Gift for Readers\" and \"Erotic Novella: Towards Malaysia.\"\nIn addition, sentences, quotes from movies, basic sentences produced by artificial intelligence, personal messages, etc., have been added, totaling over a thousand entries. Random paragraphs from Finnish Wikipedia's \"random article\" have also been included.\nThe work is intended to continue indefinitely. Help is needed; please contact krisu.virtanen@URL.",
"# Suomeksi\nURL sisältää yli 9000 yksittäistä suomenkielistä sanaa englanninkielisenä käännöksenään. Koska osa sanoista on paikkojen-, ihmisten-, jne, nimiä niin tarkkaa määrää suomenkielisestä sanoista ei tiedetä.\nOsassa dataa on syötetty lista suomenkielisiä sanoja sekä niiden englanninkieliset käännökset. Suurin osa datasta on kuitenkin suomenkielisiä lauseita, kysymyksiä, toteamuksia jne. jotka on käännetty englanniksi.\n\nData alkaa luettelolla tuhannesta yleisimmästä suomenkilisestä sanasta käännöksineen. Tämän jälkeen tulee lauseita, mm. lainauksia Martti Ahtesaaresta, Public Domain kirjoista \"Avoin Elämä\", Maila Talvion \"Pimeänpirtin hävitys\", sekä lauseita ilmaisista novelleista \"Juhannustalahja lukijoille\" ja \"Erottiinen novelli: Kohti Malesiaa\".\n\nLisäksi on syötetty lauseita, lainauksia elokuvista, tekoälyn tuottamia peruslauseita, omia viestejä jne. kaiken kaikkiaan yli tuhannen kappaleen verran sekä otettu satunnaisia kappaleita suomenkielisestä wikipediasta \"satunnainen artikkeli\".\nTarkoitus on jatkaa työtä toistaiseksi. Apua tarvitaan, ota yhteyttä krisu.virtanen@URL"
] |
[
"TAGS\n#task_categories-translation #language-Finnish #language-English #license-gpl-3.0 #text #translation #finnish #english #region-us \n",
"# fin-eng-dataset",
"# Updated 29th October 2023\nNew version. Covers around 30K individual words and around 10K sentences, phrases etc.",
"# Updated 19th September 2023\nNew version. Over 20K unique words and over 2K sentences/paragraphs fin-eng versions.",
"# Updated 10th September 2023\nUpdated version.\nAround 15K different words and a couple of thousands of sentences, paragraphs, quots, questions and answers.",
"# English\nThe file URL contains over 9000 individual Finnish words with their English translations. Since some of the words are names of places, people, etc., the exact number of Finnish words is unknown.\nPart of the data includes a list of Finnish words along with their English translations. However, the majority of the data consists of Finnish sentences, questions, statements, etc., that have been translated into English.\nThe data begins with a list of the thousand most common Finnish words with their translations. Following that are sentences, including quotes from Martti Ahtisaari, Public Domain books like \"Open Life,\" Maila Talvio's \"The Destruction of Dark Cabin,\" as well as sentences from free novellas \"Midsummer Gift for Readers\" and \"Erotic Novella: Towards Malaysia.\"\nIn addition, sentences, quotes from movies, basic sentences produced by artificial intelligence, personal messages, etc., have been added, totaling over a thousand entries. Random paragraphs from Finnish Wikipedia's \"random article\" have also been included.\nThe work is intended to continue indefinitely. Help is needed; please contact krisu.virtanen@URL.",
"# Suomeksi\nURL sisältää yli 9000 yksittäistä suomenkielistä sanaa englanninkielisenä käännöksenään. Koska osa sanoista on paikkojen-, ihmisten-, jne, nimiä niin tarkkaa määrää suomenkielisestä sanoista ei tiedetä.\nOsassa dataa on syötetty lista suomenkielisiä sanoja sekä niiden englanninkieliset käännökset. Suurin osa datasta on kuitenkin suomenkielisiä lauseita, kysymyksiä, toteamuksia jne. jotka on käännetty englanniksi.\n\nData alkaa luettelolla tuhannesta yleisimmästä suomenkilisestä sanasta käännöksineen. Tämän jälkeen tulee lauseita, mm. lainauksia Martti Ahtesaaresta, Public Domain kirjoista \"Avoin Elämä\", Maila Talvion \"Pimeänpirtin hävitys\", sekä lauseita ilmaisista novelleista \"Juhannustalahja lukijoille\" ja \"Erottiinen novelli: Kohti Malesiaa\".\n\nLisäksi on syötetty lauseita, lainauksia elokuvista, tekoälyn tuottamia peruslauseita, omia viestejä jne. kaiken kaikkiaan yli tuhannen kappaleen verran sekä otettu satunnaisia kappaleita suomenkielisestä wikipediasta \"satunnainen artikkeli\".\nTarkoitus on jatkaa työtä toistaiseksi. Apua tarvitaan, ota yhteyttä krisu.virtanen@URL"
] |
[
44,
7,
28,
31,
37,
272,
290
] |
[
"passage: TAGS\n#task_categories-translation #language-Finnish #language-English #license-gpl-3.0 #text #translation #finnish #english #region-us \n# fin-eng-dataset# Updated 29th October 2023\nNew version. Covers around 30K individual words and around 10K sentences, phrases etc.# Updated 19th September 2023\nNew version. Over 20K unique words and over 2K sentences/paragraphs fin-eng versions.# Updated 10th September 2023\nUpdated version.\nAround 15K different words and a couple of thousands of sentences, paragraphs, quots, questions and answers.# English\nThe file URL contains over 9000 individual Finnish words with their English translations. Since some of the words are names of places, people, etc., the exact number of Finnish words is unknown.\nPart of the data includes a list of Finnish words along with their English translations. However, the majority of the data consists of Finnish sentences, questions, statements, etc., that have been translated into English.\nThe data begins with a list of the thousand most common Finnish words with their translations. Following that are sentences, including quotes from Martti Ahtisaari, Public Domain books like \"Open Life,\" Maila Talvio's \"The Destruction of Dark Cabin,\" as well as sentences from free novellas \"Midsummer Gift for Readers\" and \"Erotic Novella: Towards Malaysia.\"\nIn addition, sentences, quotes from movies, basic sentences produced by artificial intelligence, personal messages, etc., have been added, totaling over a thousand entries. Random paragraphs from Finnish Wikipedia's \"random article\" have also been included.\nThe work is intended to continue indefinitely. Help is needed; please contact krisu.virtanen@URL."
] |
03914ec948ff81354d31fe68d104bdcc96709c53
|
# Dataset Card for "3d-school_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Falah/3d-school_prompts
|
[
"region:us"
] |
2023-09-08T11:43:46+00:00
|
{"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7611567, "num_examples": 10000}], "download_size": 824978, "dataset_size": 7611567}}
|
2023-09-08T11:43:51+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "3d-school_prompts"
More Information needed
|
[
"# Dataset Card for \"3d-school_prompts\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"3d-school_prompts\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"3d-school_prompts\"\n\nMore Information needed"
] |
85344fc432fff7534a1b5d108eb3decd3bdcd7b8
|
# Super-sentiment
Created from AIOD platform
|
mtkinit/Super-sentiment
|
[
"region:us"
] |
2023-09-08T12:03:30+00:00
|
{"pretty_name": "Super-sentiment"}
|
2023-09-08T12:03:31+00:00
|
[] |
[] |
TAGS
#region-us
|
# Super-sentiment
Created from AIOD platform
|
[
"# Super-sentiment\nCreated from AIOD platform"
] |
[
"TAGS\n#region-us \n",
"# Super-sentiment\nCreated from AIOD platform"
] |
[
6,
11
] |
[
"passage: TAGS\n#region-us \n# Super-sentiment\nCreated from AIOD platform"
] |
b057b0a1fe8be3f6dfa3ed3f4cb86efa3c8ec243
|
# Megadiff, a dataset of source code changes
If you use Megadiff, please cite the following technical report:
"[Megadiff: A Dataset of 600k Java Source Code Changes Categorized by Diff Size](http://arxiv.org/pdf/2108.04631)". Technical Report 2108.04631, Arxiv; 2021.
```
@techreport{megadiff,
TITLE = {{Megadiff: A Dataset of 600k Java Source Code Changes Categorized by Diff Size}},
AUTHOR = {Martin Monperrus and Matias Martinez and He Ye and Fernanda Madeiral and Thomas Durieux and Zhongxing Yu},
URL = {http://arxiv.org/pdf/2108.04631},
INSTITUTION = {Arxiv},
NUMBER = {2108.04631},
YEAR = {2021},
}
```
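A minimal loading sketch (an assumption of standard `datasets` usage, not part of the original report; streaming is used because the download is several GB):

```python
# Hypothetical usage sketch with the Hugging Face `datasets` library.
from datasets import load_dataset

# Stream the single train split to avoid materializing ~5 GB locally.
megadiff = load_dataset("ASSERT-KTH/megadiff", split="train", streaming=True)

for example in megadiff.take(3):
    # Each record carries a unified diff plus, for single-function changes,
    # the buggy and fixed versions of that function.
    print(example["is_single_function"], len(example["diff"]))
```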
|
ASSERT-KTH/megadiff
|
[
"size_categories:100K<n<1M",
"language:code",
"arxiv:2108.04631",
"region:us"
] |
2023-09-08T12:37:13+00:00
|
{"language": ["code"], "size_categories": ["100K<n<1M"], "pretty_name": "megadiff", "dataset_info": {"features": [{"name": "diff", "dtype": "string"}, {"name": "is_single_chunk", "dtype": "bool"}, {"name": "is_single_function", "dtype": "bool"}, {"name": "buggy_function", "dtype": "string"}, {"name": "fixed_function", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 16904390254, "num_examples": 656785}], "download_size": 5369285762, "dataset_size": 16904390254}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-08T12:56:32+00:00
|
[
"2108.04631"
] |
[
"code"
] |
TAGS
#size_categories-100K<n<1M #language-code #arxiv-2108.04631 #region-us
|
# Megadiff, a dataset of source code changes
If you use Megadiff, please cite the following technical report:
"Megadiff: A Dataset of 600k Java Source Code Changes Categorized by Diff Size". Technical Report 2108.04631, Arxiv; 2021.
|
[
"# Megadiff, a dataset of source code changes\n\nIf you use Megadiff, please cite the following technical report:\n\n\"Megadiff: A Dataset of 600k Java Source Code Changes Categorized by Diff Size\". Technical Report 2108.04631, Arxiv; 2021."
] |
[
"TAGS\n#size_categories-100K<n<1M #language-code #arxiv-2108.04631 #region-us \n",
"# Megadiff, a dataset of source code changes\n\nIf you use Megadiff, please cite the following technical report:\n\n\"Megadiff: A Dataset of 600k Java Source Code Changes Categorized by Diff Size\". Technical Report 2108.04631, Arxiv; 2021."
] |
[
31,
62
] |
[
"passage: TAGS\n#size_categories-100K<n<1M #language-code #arxiv-2108.04631 #region-us \n# Megadiff, a dataset of source code changes\n\nIf you use Megadiff, please cite the following technical report:\n\n\"Megadiff: A Dataset of 600k Java Source Code Changes Categorized by Diff Size\". Technical Report 2108.04631, Arxiv; 2021."
] |
bce541e23a4ef90433fa16a490560050d1ceadb7
|
[Tigerbot](https://github.com/TigerResearch/TigerBot) open-source project: a collection of Chinese fine-tuning (sft-zh) data.
This collection covers the other Chinese SFT datasets open-sourced under this organization, so there is no need to download them separately.
## Usage
```python
import datasets
ds_sft = datasets.load_dataset('TigerResearch/sft_zh')
```
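To pull only one of the subsets listed in the table below, a single JSON file can be passed via `data_files` (a sketch; the file name is taken from that table):

```python
import datasets

# Hypothetical example: load just the Zhihu QA subset instead of the whole collection.
ds_zhihu = datasets.load_dataset(
    'TigerResearch/sft_zh',
    data_files='tigerbot-zhihu-zh-10k.json',
)
```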
## File breakdown
| Type | Language | Dataset file | Count |
| ------------ | ---- | -------------------------------------------------------------------------------------------------------------------------------- | ----------- |
| Alpaca (Chinese) | Chinese | [tigerbot-alpaca-zh-0.5m](https://huggingface.co/datasets/TigerResearch/sft_zh/blob/main/tigerbot-alpaca-zh-0.5m.json) | 500k |
| Wiki QA | Chinese | [tigerbot-wiki-qa-1k](https://huggingface.co/datasets/TigerResearch/sft_zh/blob/main/tigerbot-wiki-qa-zh-1k.json) | 1k |
| Classic book QA | Chinese | [tigerbot-book-qa-1k](https://huggingface.co/datasets/TigerResearch/sft_zh/blob/main/tigerbot-book-qa-1k.json) | 1k |
| Riddles | Chinese | [tigerbot-riddle-qa-1k](https://huggingface.co/datasets/TigerResearch/sft_zh/blob/main/tigerbot-riddle-qa-1k.json) | 1k |
| Reading comprehension | Chinese | [tigerbot-superclue-c3-zh-5k](https://huggingface.co/datasets/TigerResearch/sft_zh/blob/main/tigerbot-superclue-c3-zh-5k.json) | 5k |
| QA | Chinese | [tigerbot-hc3-zh-12k](https://huggingface.co/datasets/TigerResearch/sft_zh/blob/main/tigerbot-hc3-zh-12k.json) | 12k |
| Zhihu QA | Chinese | [tigerbot-zhihu-zh-10k](https://huggingface.co/datasets/TigerResearch/sft_zh/blob/main/tigerbot-zhihu-zh-10k.json) | 10k |
| Firefly SFT | Chinese | [tigerbot-firefly-zh-20k](https://huggingface.co/datasets/TigerResearch/tigerbot-firefly-zh-20k) | 20k |
|
ticoAg/tiger-sft-zh
|
[
"language:zh",
"license:apache-2.0",
"region:us"
] |
2023-09-08T12:52:14+00:00
|
{"language": ["zh"], "license": "apache-2.0"}
|
2023-09-08T12:56:58+00:00
|
[] |
[
"zh"
] |
TAGS
#language-Chinese #license-apache-2.0 #region-us
|
Tigerbot open-source project: a collection of Chinese fine-tuning (sft-zh) data
This collection covers the other Chinese SFT datasets open-sourced under this organization, so there is no need to download them separately
Usage
-----
File breakdown
----
|
[] |
[
"TAGS\n#language-Chinese #license-apache-2.0 #region-us \n"
] |
[
19
] |
[
"passage: TAGS\n#language-Chinese #license-apache-2.0 #region-us \n"
] |
331922adcccb10d3a100e6c3b1ae996b71f72fe9
|
# Dataset Card for "logits-kmt-it-512"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
amitness/logits-kmt-it-512
|
[
"region:us"
] |
2023-09-08T13:09:43+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "token_type_ids", "sequence": "int8"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}, {"name": "teacher_logits", "sequence": {"sequence": "float64"}}, {"name": "teacher_indices", "sequence": {"sequence": "int64"}}, {"name": "teacher_mask_indices", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 42895572910.60717, "num_examples": 2721582}, {"name": "test", "num_bytes": 7569819964.089419, "num_examples": 480280}], "download_size": 18116725008, "dataset_size": 50465392874.69659}}
|
2023-09-08T15:35:38+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "logits-kmt-it-512"
More Information needed
|
[
"# Dataset Card for \"logits-kmt-it-512\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"logits-kmt-it-512\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"logits-kmt-it-512\"\n\nMore Information needed"
] |
510708118a1a84babf275c3d7a3e6dd522d58268
|
A 10k subset of the OpenOrca dataset, focusing on multiple-choice questions.
Credit to Tian Xia.
|
beaugogh/openorca-multiplechoice-10k
|
[
"license:apache-2.0",
"region:us"
] |
2023-09-08T13:10:24+00:00
|
{"license": "apache-2.0"}
|
2023-09-09T05:21:21+00:00
|
[] |
[] |
TAGS
#license-apache-2.0 #region-us
|
A 10k subset of the OpenOrca dataset, focusing on multiple-choice questions.
Credit to Tian Xia.
|
[] |
[
"TAGS\n#license-apache-2.0 #region-us \n"
] |
[
14
] |
[
"passage: TAGS\n#license-apache-2.0 #region-us \n"
] |
60700ab9b6549cf77208bb9576526df0644f9703
|
# OCR Race Numbers Detection
The dataset consists of photos of runners participating in various races. Each photo captures a runner wearing a race number on their attire.
The dataset provides **bounding box** annotations indicating the location of the race number in each photo and includes corresponding OCR annotations, where the digit sequences on the race numbers are transcribed.
This dataset combines the domains of sports, computer vision, and OCR technology, providing a valuable resource for advancing the field of race number detection and OCR in the context of athletic events.
# Get the dataset
### This is just an example of the data
Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=race-numbers-detection-and-ocr) to discuss your requirements, learn about the price and buy the dataset.
# Dataset structure
- **images** - contains the original images of athletes
- **boxes** - includes bounding box labeling for the original images
- **annotations.xml** - contains the coordinates of the bounding boxes and the transcribed text for each original photo
# Data Format
Each image from the `images` folder is accompanied by an XML annotation in the `annotations.xml` file indicating the coordinates of the bounding boxes for text detection. For each point, the x and y coordinates are provided.
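A rough parsing sketch is shown below; the tag and attribute names (`image`, `box`, `xtl`, `ytl`, `xbr`, `ybr`) are assumptions modeled on common CVAT-style exports, so they should be checked against the actual `annotations.xml`.

```python
# Sketch under an assumed CVAT-like schema -- verify element/attribute names
# against the real annotations.xml before relying on this.
import xml.etree.ElementTree as ET

root = ET.parse("annotations.xml").getroot()
for image in root.iter("image"):          # one element per annotated photo (assumed)
    for box in image.iter("box"):         # one element per race-number bounding box (assumed)
        x1, y1 = float(box.get("xtl")), float(box.get("ytl"))
        x2, y2 = float(box.get("xbr")), float(box.get("ybr"))
        print(image.get("name"), (x1, y1, x2, y2))
```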
# Example of XML file structure
(image: example of the `annotations.xml` structure)
# Race Numbers Detection might be made in accordance with your requirements.
## [**TrainingData**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=race-numbers-detection-and-ocr) provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets**
|
TrainingDataPro/race-numbers-detection-and-ocr
|
[
"task_categories:image-to-text",
"task_categories:object-detection",
"language:en",
"license:cc-by-nc-nd-4.0",
"code",
"biology",
"region:us"
] |
2023-09-08T13:19:46+00:00
|
{"language": ["en"], "license": "cc-by-nc-nd-4.0", "task_categories": ["image-to-text", "object-detection"], "tags": ["code", "biology"], "dataset_info": {"features": [{"name": "id", "dtype": "int32"}, {"name": "name", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "mask", "dtype": "image"}, {"name": "width", "dtype": "uint16"}, {"name": "height", "dtype": "uint16"}, {"name": "shapes", "sequence": [{"name": "label", "dtype": {"class_label": {"names": {"0": "number"}}}}, {"name": "type", "dtype": "string"}, {"name": "points", "sequence": {"sequence": "float32"}}, {"name": "rotation", "dtype": "float32"}, {"name": "attributes", "sequence": [{"name": "name", "dtype": "string"}, {"name": "text", "dtype": "string"}]}]}], "splits": [{"name": "train", "num_bytes": 106715580, "num_examples": 30}], "download_size": 105575371, "dataset_size": 106715580}}
|
2023-09-26T07:13:51+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-image-to-text #task_categories-object-detection #language-English #license-cc-by-nc-nd-4.0 #code #biology #region-us
|
# OCR Race Numbers Detection
The dataset consists of photos of runners participating in various races. Each photo captures a runner wearing a race number on their attire.
The dataset provides bounding box annotations indicating the location of the race number in each photo and includes corresponding OCR annotations, where the digit sequences on the race numbers are transcribed.
This dataset combines the domains of sports, computer vision, and OCR technology, providing a valuable resource for advancing the field of race number detection and OCR in the context of athletic events.

|
amitness/logits-mt-ar-512
|
[
"region:us"
] |
2023-09-08T13:26:59+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "token_type_ids", "sequence": "int8"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}, {"name": "teacher_logits", "sequence": {"sequence": "float64"}}, {"name": "teacher_indices", "sequence": {"sequence": "int64"}}, {"name": "teacher_mask_indices", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 17102298098.701529, "num_examples": 940987}, {"name": "test", "num_bytes": 3018061158.5240602, "num_examples": 166057}], "download_size": 7348415360, "dataset_size": 20120359257.22559}}
|
2023-09-09T17:12:14+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "logits-mt-ar-512"
More Information needed
|
[
"# Dataset Card for \"logits-mt-ar-512\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"logits-mt-ar-512\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"logits-mt-ar-512\"\n\nMore Information needed"
] |
12cbbecab2a3eb6e33ea371668afdd4716a78868
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains 100 samples from the [trivia_qa](https://huggingface.co/datasets/trivia_qa) dataset. It is used mainly for testing purposes.
### Languages
English.
## Dataset Structure
### Data Instances
Total data size: 8Kb.
### Data Fields
- `question`: string feature, containing the question to be answered.
- `answer`: string feature, the answer to the question.
### Data Splits
Only the `test` split, which contains 100 rows, is supported.
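A minimal loading sketch (assuming standard `datasets` usage):

```python
from datasets import load_dataset

# Only a `test` split is published; it holds 100 question/answer pairs.
tiny = load_dataset("SpeedOfMagic/trivia_qa_tiny", split="test")
print(tiny[0]["question"], "->", tiny[0]["answer"])
```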
|
SpeedOfMagic/trivia_qa_tiny
|
[
"size_categories:n<1K",
"language:en",
"region:us"
] |
2023-09-08T13:32:44+00:00
|
{"language": ["en"], "size_categories": ["n<1K"]}
|
2023-09-08T15:39:19+00:00
|
[] |
[
"en"
] |
TAGS
#size_categories-n<1K #language-English #region-us
|
# Dataset Card for Dataset Name
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
This dataset contains 100 samples from the trivia_qa dataset. It is used mainly for testing purposes.
### Languages
English.
## Dataset Structure
### Data Instances
Total data size: 8Kb.
### Data Fields
- 'question': string feature, containing the question to be answered.
- 'answer': string feature, the answer to the question.
### Data Splits
Only the 'test' split, which contains 100 rows, is supported.
|
[
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset contains 100 samples from trivia_qa dataset. It is used mainly for testing purposes.",
"### Languages\n\nEnglish.",
"## Dataset Structure",
"### Data Instances\n\nTotal data size: 8Kb.",
"### Data Fields\n\n- 'question': string feature, containing question to be answered. \n- 'answer: string feature, answer to the question.",
"### Data Splits\n\nOnly 'test' split, that contains 100 rows, is supported."
] |
[
"TAGS\n#size_categories-n<1K #language-English #region-us \n",
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset contains 100 samples from trivia_qa dataset. It is used mainly for testing purposes.",
"### Languages\n\nEnglish.",
"## Dataset Structure",
"### Data Instances\n\nTotal data size: 8Kb.",
"### Data Fields\n\n- 'question': string feature, containing question to be answered. \n- 'answer: string feature, answer to the question.",
"### Data Splits\n\nOnly 'test' split, that contains 100 rows, is supported."
] |
[
20,
8,
24,
32,
6,
6,
14,
34,
23
] |
[
"passage: TAGS\n#size_categories-n<1K #language-English #region-us \n# Dataset Card for Dataset Name## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:### Dataset Summary\n\nThis dataset contains 100 samples from trivia_qa dataset. It is used mainly for testing purposes.### Languages\n\nEnglish.## Dataset Structure### Data Instances\n\nTotal data size: 8Kb.### Data Fields\n\n- 'question': string feature, containing question to be answered. \n- 'answer: string feature, answer to the question.### Data Splits\n\nOnly 'test' split, that contains 100 rows, is supported."
] |
1e0be5987d57950c869719b242189d23fb1ba387
|
# Dataset Card for "MNIST"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AiresPucrs/MNIST-digit
|
[
"region:us"
] |
2023-09-08T13:45:14+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "label", "dtype": "int64"}, {"name": "1x1", "dtype": "int64"}, {"name": "1x2", "dtype": "int64"}, {"name": "1x3", "dtype": "int64"}, {"name": "1x4", "dtype": "int64"}, {"name": "1x5", "dtype": "int64"}, {"name": "1x6", "dtype": "int64"}, {"name": "1x7", "dtype": "int64"}, {"name": "1x8", "dtype": "int64"}, {"name": "1x9", "dtype": "int64"}, {"name": "1x10", "dtype": "int64"}, {"name": "1x11", "dtype": "int64"}, {"name": "1x12", "dtype": "int64"}, {"name": "1x13", "dtype": "int64"}, {"name": "1x14", "dtype": "int64"}, {"name": "1x15", "dtype": "int64"}, {"name": "1x16", "dtype": "int64"}, {"name": "1x17", "dtype": "int64"}, {"name": "1x18", "dtype": "int64"}, {"name": "1x19", "dtype": "int64"}, {"name": "1x20", "dtype": "int64"}, {"name": "1x21", "dtype": "int64"}, {"name": "1x22", "dtype": "int64"}, {"name": "1x23", "dtype": "int64"}, {"name": "1x24", "dtype": "int64"}, {"name": "1x25", "dtype": "int64"}, {"name": "1x26", "dtype": "int64"}, {"name": "1x27", "dtype": "int64"}, {"name": "1x28", "dtype": "int64"}, {"name": "2x1", "dtype": "int64"}, {"name": "2x2", "dtype": "int64"}, {"name": "2x3", "dtype": "int64"}, {"name": "2x4", "dtype": "int64"}, {"name": "2x5", "dtype": "int64"}, {"name": "2x6", "dtype": "int64"}, {"name": "2x7", "dtype": "int64"}, {"name": "2x8", "dtype": "int64"}, {"name": "2x9", "dtype": "int64"}, {"name": "2x10", "dtype": "int64"}, {"name": "2x11", "dtype": "int64"}, {"name": "2x12", "dtype": "int64"}, {"name": "2x13", "dtype": "int64"}, {"name": "2x14", "dtype": "int64"}, {"name": "2x15", "dtype": "int64"}, {"name": "2x16", "dtype": "int64"}, {"name": "2x17", "dtype": "int64"}, {"name": "2x18", "dtype": "int64"}, {"name": "2x19", "dtype": "int64"}, {"name": "2x20", "dtype": "int64"}, {"name": "2x21", "dtype": "int64"}, {"name": "2x22", "dtype": "int64"}, {"name": "2x23", "dtype": "int64"}, {"name": "2x24", "dtype": "int64"}, {"name": "2x25", "dtype": "int64"}, {"name": "2x26", "dtype": "int64"}, {"name": "2x27", "dtype": "int64"}, {"name": "2x28", "dtype": "int64"}, {"name": "3x1", "dtype": "int64"}, {"name": "3x2", "dtype": "int64"}, {"name": "3x3", "dtype": "int64"}, {"name": "3x4", "dtype": "int64"}, {"name": "3x5", "dtype": "int64"}, {"name": "3x6", "dtype": "int64"}, {"name": "3x7", "dtype": "int64"}, {"name": "3x8", "dtype": "int64"}, {"name": "3x9", "dtype": "int64"}, {"name": "3x10", "dtype": "int64"}, {"name": "3x11", "dtype": "int64"}, {"name": "3x12", "dtype": "int64"}, {"name": "3x13", "dtype": "int64"}, {"name": "3x14", "dtype": "int64"}, {"name": "3x15", "dtype": "int64"}, {"name": "3x16", "dtype": "int64"}, {"name": "3x17", "dtype": "int64"}, {"name": "3x18", "dtype": "int64"}, {"name": "3x19", "dtype": "int64"}, {"name": "3x20", "dtype": "int64"}, {"name": "3x21", "dtype": "int64"}, {"name": "3x22", "dtype": "int64"}, {"name": "3x23", "dtype": "int64"}, {"name": "3x24", "dtype": "int64"}, {"name": "3x25", "dtype": "int64"}, {"name": "3x26", "dtype": "int64"}, {"name": "3x27", "dtype": "int64"}, {"name": "3x28", "dtype": "int64"}, {"name": "4x1", "dtype": "int64"}, {"name": "4x2", "dtype": "int64"}, {"name": "4x3", "dtype": "int64"}, {"name": "4x4", "dtype": "int64"}, {"name": "4x5", "dtype": "int64"}, {"name": "4x6", "dtype": "int64"}, {"name": "4x7", "dtype": "int64"}, {"name": "4x8", "dtype": "int64"}, {"name": "4x9", "dtype": "int64"}, {"name": "4x10", "dtype": 
"int64"}, {"name": "4x11", "dtype": "int64"}, {"name": "4x12", "dtype": "int64"}, {"name": "4x13", "dtype": "int64"}, {"name": "4x14", "dtype": "int64"}, {"name": "4x15", "dtype": "int64"}, {"name": "4x16", "dtype": "int64"}, {"name": "4x17", "dtype": "int64"}, {"name": "4x18", "dtype": "int64"}, {"name": "4x19", "dtype": "int64"}, {"name": "4x20", "dtype": "int64"}, {"name": "4x21", "dtype": "int64"}, {"name": "4x22", "dtype": "int64"}, {"name": "4x23", "dtype": "int64"}, {"name": "4x24", "dtype": "int64"}, {"name": "4x25", "dtype": "int64"}, {"name": "4x26", "dtype": "int64"}, {"name": "4x27", "dtype": "int64"}, {"name": "4x28", "dtype": "int64"}, {"name": "5x1", "dtype": "int64"}, {"name": "5x2", "dtype": "int64"}, {"name": "5x3", "dtype": "int64"}, {"name": "5x4", "dtype": "int64"}, {"name": "5x5", "dtype": "int64"}, {"name": "5x6", "dtype": "int64"}, {"name": "5x7", "dtype": "int64"}, {"name": "5x8", "dtype": "int64"}, {"name": "5x9", "dtype": "int64"}, {"name": "5x10", "dtype": "int64"}, {"name": "5x11", "dtype": "int64"}, {"name": "5x12", "dtype": "int64"}, {"name": "5x13", "dtype": "int64"}, {"name": "5x14", "dtype": "int64"}, {"name": "5x15", "dtype": "int64"}, {"name": "5x16", "dtype": "int64"}, {"name": "5x17", "dtype": "int64"}, {"name": "5x18", "dtype": "int64"}, {"name": "5x19", "dtype": "int64"}, {"name": "5x20", "dtype": "int64"}, {"name": "5x21", "dtype": "int64"}, {"name": "5x22", "dtype": "int64"}, {"name": "5x23", "dtype": "int64"}, {"name": "5x24", "dtype": "int64"}, {"name": "5x25", "dtype": "int64"}, {"name": "5x26", "dtype": "int64"}, {"name": "5x27", "dtype": "int64"}, {"name": "5x28", "dtype": "int64"}, {"name": "6x1", "dtype": "int64"}, {"name": "6x2", "dtype": "int64"}, {"name": "6x3", "dtype": "int64"}, {"name": "6x4", "dtype": "int64"}, {"name": "6x5", "dtype": "int64"}, {"name": "6x6", "dtype": "int64"}, {"name": "6x7", "dtype": "int64"}, {"name": "6x8", "dtype": "int64"}, {"name": "6x9", "dtype": "int64"}, {"name": "6x10", "dtype": "int64"}, {"name": "6x11", "dtype": "int64"}, {"name": "6x12", "dtype": "int64"}, {"name": "6x13", "dtype": "int64"}, {"name": "6x14", "dtype": "int64"}, {"name": "6x15", "dtype": "int64"}, {"name": "6x16", "dtype": "int64"}, {"name": "6x17", "dtype": "int64"}, {"name": "6x18", "dtype": "int64"}, {"name": "6x19", "dtype": "int64"}, {"name": "6x20", "dtype": "int64"}, {"name": "6x21", "dtype": "int64"}, {"name": "6x22", "dtype": "int64"}, {"name": "6x23", "dtype": "int64"}, {"name": "6x24", "dtype": "int64"}, {"name": "6x25", "dtype": "int64"}, {"name": "6x26", "dtype": "int64"}, {"name": "6x27", "dtype": "int64"}, {"name": "6x28", "dtype": "int64"}, {"name": "7x1", "dtype": "int64"}, {"name": "7x2", "dtype": "int64"}, {"name": "7x3", "dtype": "int64"}, {"name": "7x4", "dtype": "int64"}, {"name": "7x5", "dtype": "int64"}, {"name": "7x6", "dtype": "int64"}, {"name": "7x7", "dtype": "int64"}, {"name": "7x8", "dtype": "int64"}, {"name": "7x9", "dtype": "int64"}, {"name": "7x10", "dtype": "int64"}, {"name": "7x11", "dtype": "int64"}, {"name": "7x12", "dtype": "int64"}, {"name": "7x13", "dtype": "int64"}, {"name": "7x14", "dtype": "int64"}, {"name": "7x15", "dtype": "int64"}, {"name": "7x16", "dtype": "int64"}, {"name": "7x17", "dtype": "int64"}, {"name": "7x18", "dtype": "int64"}, {"name": "7x19", "dtype": "int64"}, {"name": "7x20", "dtype": "int64"}, {"name": "7x21", "dtype": "int64"}, {"name": "7x22", "dtype": "int64"}, {"name": "7x23", "dtype": "int64"}, {"name": "7x24", "dtype": "int64"}, {"name": "7x25", "dtype": "int64"}, 
{"name": "7x26", "dtype": "int64"}, {"name": "7x27", "dtype": "int64"}, {"name": "7x28", "dtype": "int64"}, {"name": "8x1", "dtype": "int64"}, {"name": "8x2", "dtype": "int64"}, {"name": "8x3", "dtype": "int64"}, {"name": "8x4", "dtype": "int64"}, {"name": "8x5", "dtype": "int64"}, {"name": "8x6", "dtype": "int64"}, {"name": "8x7", "dtype": "int64"}, {"name": "8x8", "dtype": "int64"}, {"name": "8x9", "dtype": "int64"}, {"name": "8x10", "dtype": "int64"}, {"name": "8x11", "dtype": "int64"}, {"name": "8x12", "dtype": "int64"}, {"name": "8x13", "dtype": "int64"}, {"name": "8x14", "dtype": "int64"}, {"name": "8x15", "dtype": "int64"}, {"name": "8x16", "dtype": "int64"}, {"name": "8x17", "dtype": "int64"}, {"name": "8x18", "dtype": "int64"}, {"name": "8x19", "dtype": "int64"}, {"name": "8x20", "dtype": "int64"}, {"name": "8x21", "dtype": "int64"}, {"name": "8x22", "dtype": "int64"}, {"name": "8x23", "dtype": "int64"}, {"name": "8x24", "dtype": "int64"}, {"name": "8x25", "dtype": "int64"}, {"name": "8x26", "dtype": "int64"}, {"name": "8x27", "dtype": "int64"}, {"name": "8x28", "dtype": "int64"}, {"name": "9x1", "dtype": "int64"}, {"name": "9x2", "dtype": "int64"}, {"name": "9x3", "dtype": "int64"}, {"name": "9x4", "dtype": "int64"}, {"name": "9x5", "dtype": "int64"}, {"name": "9x6", "dtype": "int64"}, {"name": "9x7", "dtype": "int64"}, {"name": "9x8", "dtype": "int64"}, {"name": "9x9", "dtype": "int64"}, {"name": "9x10", "dtype": "int64"}, {"name": "9x11", "dtype": "int64"}, {"name": "9x12", "dtype": "int64"}, {"name": "9x13", "dtype": "int64"}, {"name": "9x14", "dtype": "int64"}, {"name": "9x15", "dtype": "int64"}, {"name": "9x16", "dtype": "int64"}, {"name": "9x17", "dtype": "int64"}, {"name": "9x18", "dtype": "int64"}, {"name": "9x19", "dtype": "int64"}, {"name": "9x20", "dtype": "int64"}, {"name": "9x21", "dtype": "int64"}, {"name": "9x22", "dtype": "int64"}, {"name": "9x23", "dtype": "int64"}, {"name": "9x24", "dtype": "int64"}, {"name": "9x25", "dtype": "int64"}, {"name": "9x26", "dtype": "int64"}, {"name": "9x27", "dtype": "int64"}, {"name": "9x28", "dtype": "int64"}, {"name": "10x1", "dtype": "int64"}, {"name": "10x2", "dtype": "int64"}, {"name": "10x3", "dtype": "int64"}, {"name": "10x4", "dtype": "int64"}, {"name": "10x5", "dtype": "int64"}, {"name": "10x6", "dtype": "int64"}, {"name": "10x7", "dtype": "int64"}, {"name": "10x8", "dtype": "int64"}, {"name": "10x9", "dtype": "int64"}, {"name": "10x10", "dtype": "int64"}, {"name": "10x11", "dtype": "int64"}, {"name": "10x12", "dtype": "int64"}, {"name": "10x13", "dtype": "int64"}, {"name": "10x14", "dtype": "int64"}, {"name": "10x15", "dtype": "int64"}, {"name": "10x16", "dtype": "int64"}, {"name": "10x17", "dtype": "int64"}, {"name": "10x18", "dtype": "int64"}, {"name": "10x19", "dtype": "int64"}, {"name": "10x20", "dtype": "int64"}, {"name": "10x21", "dtype": "int64"}, {"name": "10x22", "dtype": "int64"}, {"name": "10x23", "dtype": "int64"}, {"name": "10x24", "dtype": "int64"}, {"name": "10x25", "dtype": "int64"}, {"name": "10x26", "dtype": "int64"}, {"name": "10x27", "dtype": "int64"}, {"name": "10x28", "dtype": "int64"}, {"name": "11x1", "dtype": "int64"}, {"name": "11x2", "dtype": "int64"}, {"name": "11x3", "dtype": "int64"}, {"name": "11x4", "dtype": "int64"}, {"name": "11x5", "dtype": "int64"}, {"name": "11x6", "dtype": "int64"}, {"name": "11x7", "dtype": "int64"}, {"name": "11x8", "dtype": "int64"}, {"name": "11x9", "dtype": "int64"}, {"name": "11x10", "dtype": "int64"}, {"name": "11x11", "dtype": "int64"}, {"name": "11x12", 
"dtype": "int64"}, {"name": "11x13", "dtype": "int64"}, {"name": "11x14", "dtype": "int64"}, {"name": "11x15", "dtype": "int64"}, {"name": "11x16", "dtype": "int64"}, {"name": "11x17", "dtype": "int64"}, {"name": "11x18", "dtype": "int64"}, {"name": "11x19", "dtype": "int64"}, {"name": "11x20", "dtype": "int64"}, {"name": "11x21", "dtype": "int64"}, {"name": "11x22", "dtype": "int64"}, {"name": "11x23", "dtype": "int64"}, {"name": "11x24", "dtype": "int64"}, {"name": "11x25", "dtype": "int64"}, {"name": "11x26", "dtype": "int64"}, {"name": "11x27", "dtype": "int64"}, {"name": "11x28", "dtype": "int64"}, {"name": "12x1", "dtype": "int64"}, {"name": "12x2", "dtype": "int64"}, {"name": "12x3", "dtype": "int64"}, {"name": "12x4", "dtype": "int64"}, {"name": "12x5", "dtype": "int64"}, {"name": "12x6", "dtype": "int64"}, {"name": "12x7", "dtype": "int64"}, {"name": "12x8", "dtype": "int64"}, {"name": "12x9", "dtype": "int64"}, {"name": "12x10", "dtype": "int64"}, {"name": "12x11", "dtype": "int64"}, {"name": "12x12", "dtype": "int64"}, {"name": "12x13", "dtype": "int64"}, {"name": "12x14", "dtype": "int64"}, {"name": "12x15", "dtype": "int64"}, {"name": "12x16", "dtype": "int64"}, {"name": "12x17", "dtype": "int64"}, {"name": "12x18", "dtype": "int64"}, {"name": "12x19", "dtype": "int64"}, {"name": "12x20", "dtype": "int64"}, {"name": "12x21", "dtype": "int64"}, {"name": "12x22", "dtype": "int64"}, {"name": "12x23", "dtype": "int64"}, {"name": "12x24", "dtype": "int64"}, {"name": "12x25", "dtype": "int64"}, {"name": "12x26", "dtype": "int64"}, {"name": "12x27", "dtype": "int64"}, {"name": "12x28", "dtype": "int64"}, {"name": "13x1", "dtype": "int64"}, {"name": "13x2", "dtype": "int64"}, {"name": "13x3", "dtype": "int64"}, {"name": "13x4", "dtype": "int64"}, {"name": "13x5", "dtype": "int64"}, {"name": "13x6", "dtype": "int64"}, {"name": "13x7", "dtype": "int64"}, {"name": "13x8", "dtype": "int64"}, {"name": "13x9", "dtype": "int64"}, {"name": "13x10", "dtype": "int64"}, {"name": "13x11", "dtype": "int64"}, {"name": "13x12", "dtype": "int64"}, {"name": "13x13", "dtype": "int64"}, {"name": "13x14", "dtype": "int64"}, {"name": "13x15", "dtype": "int64"}, {"name": "13x16", "dtype": "int64"}, {"name": "13x17", "dtype": "int64"}, {"name": "13x18", "dtype": "int64"}, {"name": "13x19", "dtype": "int64"}, {"name": "13x20", "dtype": "int64"}, {"name": "13x21", "dtype": "int64"}, {"name": "13x22", "dtype": "int64"}, {"name": "13x23", "dtype": "int64"}, {"name": "13x24", "dtype": "int64"}, {"name": "13x25", "dtype": "int64"}, {"name": "13x26", "dtype": "int64"}, {"name": "13x27", "dtype": "int64"}, {"name": "13x28", "dtype": "int64"}, {"name": "14x1", "dtype": "int64"}, {"name": "14x2", "dtype": "int64"}, {"name": "14x3", "dtype": "int64"}, {"name": "14x4", "dtype": "int64"}, {"name": "14x5", "dtype": "int64"}, {"name": "14x6", "dtype": "int64"}, {"name": "14x7", "dtype": "int64"}, {"name": "14x8", "dtype": "int64"}, {"name": "14x9", "dtype": "int64"}, {"name": "14x10", "dtype": "int64"}, {"name": "14x11", "dtype": "int64"}, {"name": "14x12", "dtype": "int64"}, {"name": "14x13", "dtype": "int64"}, {"name": "14x14", "dtype": "int64"}, {"name": "14x15", "dtype": "int64"}, {"name": "14x16", "dtype": "int64"}, {"name": "14x17", "dtype": "int64"}, {"name": "14x18", "dtype": "int64"}, {"name": "14x19", "dtype": "int64"}, {"name": "14x20", "dtype": "int64"}, {"name": "14x21", "dtype": "int64"}, {"name": "14x22", "dtype": "int64"}, {"name": "14x23", "dtype": "int64"}, {"name": "14x24", "dtype": "int64"}, {"name": 
"14x25", "dtype": "int64"}, {"name": "14x26", "dtype": "int64"}, {"name": "14x27", "dtype": "int64"}, {"name": "14x28", "dtype": "int64"}, {"name": "15x1", "dtype": "int64"}, {"name": "15x2", "dtype": "int64"}, {"name": "15x3", "dtype": "int64"}, {"name": "15x4", "dtype": "int64"}, {"name": "15x5", "dtype": "int64"}, {"name": "15x6", "dtype": "int64"}, {"name": "15x7", "dtype": "int64"}, {"name": "15x8", "dtype": "int64"}, {"name": "15x9", "dtype": "int64"}, {"name": "15x10", "dtype": "int64"}, {"name": "15x11", "dtype": "int64"}, {"name": "15x12", "dtype": "int64"}, {"name": "15x13", "dtype": "int64"}, {"name": "15x14", "dtype": "int64"}, {"name": "15x15", "dtype": "int64"}, {"name": "15x16", "dtype": "int64"}, {"name": "15x17", "dtype": "int64"}, {"name": "15x18", "dtype": "int64"}, {"name": "15x19", "dtype": "int64"}, {"name": "15x20", "dtype": "int64"}, {"name": "15x21", "dtype": "int64"}, {"name": "15x22", "dtype": "int64"}, {"name": "15x23", "dtype": "int64"}, {"name": "15x24", "dtype": "int64"}, {"name": "15x25", "dtype": "int64"}, {"name": "15x26", "dtype": "int64"}, {"name": "15x27", "dtype": "int64"}, {"name": "15x28", "dtype": "int64"}, {"name": "16x1", "dtype": "int64"}, {"name": "16x2", "dtype": "int64"}, {"name": "16x3", "dtype": "int64"}, {"name": "16x4", "dtype": "int64"}, {"name": "16x5", "dtype": "int64"}, {"name": "16x6", "dtype": "int64"}, {"name": "16x7", "dtype": "int64"}, {"name": "16x8", "dtype": "int64"}, {"name": "16x9", "dtype": "int64"}, {"name": "16x10", "dtype": "int64"}, {"name": "16x11", "dtype": "int64"}, {"name": "16x12", "dtype": "int64"}, {"name": "16x13", "dtype": "int64"}, {"name": "16x14", "dtype": "int64"}, {"name": "16x15", "dtype": "int64"}, {"name": "16x16", "dtype": "int64"}, {"name": "16x17", "dtype": "int64"}, {"name": "16x18", "dtype": "int64"}, {"name": "16x19", "dtype": "int64"}, {"name": "16x20", "dtype": "int64"}, {"name": "16x21", "dtype": "int64"}, {"name": "16x22", "dtype": "int64"}, {"name": "16x23", "dtype": "int64"}, {"name": "16x24", "dtype": "int64"}, {"name": "16x25", "dtype": "int64"}, {"name": "16x26", "dtype": "int64"}, {"name": "16x27", "dtype": "int64"}, {"name": "16x28", "dtype": "int64"}, {"name": "17x1", "dtype": "int64"}, {"name": "17x2", "dtype": "int64"}, {"name": "17x3", "dtype": "int64"}, {"name": "17x4", "dtype": "int64"}, {"name": "17x5", "dtype": "int64"}, {"name": "17x6", "dtype": "int64"}, {"name": "17x7", "dtype": "int64"}, {"name": "17x8", "dtype": "int64"}, {"name": "17x9", "dtype": "int64"}, {"name": "17x10", "dtype": "int64"}, {"name": "17x11", "dtype": "int64"}, {"name": "17x12", "dtype": "int64"}, {"name": "17x13", "dtype": "int64"}, {"name": "17x14", "dtype": "int64"}, {"name": "17x15", "dtype": "int64"}, {"name": "17x16", "dtype": "int64"}, {"name": "17x17", "dtype": "int64"}, {"name": "17x18", "dtype": "int64"}, {"name": "17x19", "dtype": "int64"}, {"name": "17x20", "dtype": "int64"}, {"name": "17x21", "dtype": "int64"}, {"name": "17x22", "dtype": "int64"}, {"name": "17x23", "dtype": "int64"}, {"name": "17x24", "dtype": "int64"}, {"name": "17x25", "dtype": "int64"}, {"name": "17x26", "dtype": "int64"}, {"name": "17x27", "dtype": "int64"}, {"name": "17x28", "dtype": "int64"}, {"name": "18x1", "dtype": "int64"}, {"name": "18x2", "dtype": "int64"}, {"name": "18x3", "dtype": "int64"}, {"name": "18x4", "dtype": "int64"}, {"name": "18x5", "dtype": "int64"}, {"name": "18x6", "dtype": "int64"}, {"name": "18x7", "dtype": "int64"}, {"name": "18x8", "dtype": "int64"}, {"name": "18x9", "dtype": "int64"}, {"name": 
"18x10", "dtype": "int64"}, {"name": "18x11", "dtype": "int64"}, {"name": "18x12", "dtype": "int64"}, {"name": "18x13", "dtype": "int64"}, {"name": "18x14", "dtype": "int64"}, {"name": "18x15", "dtype": "int64"}, {"name": "18x16", "dtype": "int64"}, {"name": "18x17", "dtype": "int64"}, {"name": "18x18", "dtype": "int64"}, {"name": "18x19", "dtype": "int64"}, {"name": "18x20", "dtype": "int64"}, {"name": "18x21", "dtype": "int64"}, {"name": "18x22", "dtype": "int64"}, {"name": "18x23", "dtype": "int64"}, {"name": "18x24", "dtype": "int64"}, {"name": "18x25", "dtype": "int64"}, {"name": "18x26", "dtype": "int64"}, {"name": "18x27", "dtype": "int64"}, {"name": "18x28", "dtype": "int64"}, {"name": "19x1", "dtype": "int64"}, {"name": "19x2", "dtype": "int64"}, {"name": "19x3", "dtype": "int64"}, {"name": "19x4", "dtype": "int64"}, {"name": "19x5", "dtype": "int64"}, {"name": "19x6", "dtype": "int64"}, {"name": "19x7", "dtype": "int64"}, {"name": "19x8", "dtype": "int64"}, {"name": "19x9", "dtype": "int64"}, {"name": "19x10", "dtype": "int64"}, {"name": "19x11", "dtype": "int64"}, {"name": "19x12", "dtype": "int64"}, {"name": "19x13", "dtype": "int64"}, {"name": "19x14", "dtype": "int64"}, {"name": "19x15", "dtype": "int64"}, {"name": "19x16", "dtype": "int64"}, {"name": "19x17", "dtype": "int64"}, {"name": "19x18", "dtype": "int64"}, {"name": "19x19", "dtype": "int64"}, {"name": "19x20", "dtype": "int64"}, {"name": "19x21", "dtype": "int64"}, {"name": "19x22", "dtype": "int64"}, {"name": "19x23", "dtype": "int64"}, {"name": "19x24", "dtype": "int64"}, {"name": "19x25", "dtype": "int64"}, {"name": "19x26", "dtype": "int64"}, {"name": "19x27", "dtype": "int64"}, {"name": "19x28", "dtype": "int64"}, {"name": "20x1", "dtype": "int64"}, {"name": "20x2", "dtype": "int64"}, {"name": "20x3", "dtype": "int64"}, {"name": "20x4", "dtype": "int64"}, {"name": "20x5", "dtype": "int64"}, {"name": "20x6", "dtype": "int64"}, {"name": "20x7", "dtype": "int64"}, {"name": "20x8", "dtype": "int64"}, {"name": "20x9", "dtype": "int64"}, {"name": "20x10", "dtype": "int64"}, {"name": "20x11", "dtype": "int64"}, {"name": "20x12", "dtype": "int64"}, {"name": "20x13", "dtype": "int64"}, {"name": "20x14", "dtype": "int64"}, {"name": "20x15", "dtype": "int64"}, {"name": "20x16", "dtype": "int64"}, {"name": "20x17", "dtype": "int64"}, {"name": "20x18", "dtype": "int64"}, {"name": "20x19", "dtype": "int64"}, {"name": "20x20", "dtype": "int64"}, {"name": "20x21", "dtype": "int64"}, {"name": "20x22", "dtype": "int64"}, {"name": "20x23", "dtype": "int64"}, {"name": "20x24", "dtype": "int64"}, {"name": "20x25", "dtype": "int64"}, {"name": "20x26", "dtype": "int64"}, {"name": "20x27", "dtype": "int64"}, {"name": "20x28", "dtype": "int64"}, {"name": "21x1", "dtype": "int64"}, {"name": "21x2", "dtype": "int64"}, {"name": "21x3", "dtype": "int64"}, {"name": "21x4", "dtype": "int64"}, {"name": "21x5", "dtype": "int64"}, {"name": "21x6", "dtype": "int64"}, {"name": "21x7", "dtype": "int64"}, {"name": "21x8", "dtype": "int64"}, {"name": "21x9", "dtype": "int64"}, {"name": "21x10", "dtype": "int64"}, {"name": "21x11", "dtype": "int64"}, {"name": "21x12", "dtype": "int64"}, {"name": "21x13", "dtype": "int64"}, {"name": "21x14", "dtype": "int64"}, {"name": "21x15", "dtype": "int64"}, {"name": "21x16", "dtype": "int64"}, {"name": "21x17", "dtype": "int64"}, {"name": "21x18", "dtype": "int64"}, {"name": "21x19", "dtype": "int64"}, {"name": "21x20", "dtype": "int64"}, {"name": "21x21", "dtype": "int64"}, {"name": "21x22", "dtype": "int64"}, 
{"name": "21x23", "dtype": "int64"}, {"name": "21x24", "dtype": "int64"}, {"name": "21x25", "dtype": "int64"}, {"name": "21x26", "dtype": "int64"}, {"name": "21x27", "dtype": "int64"}, {"name": "21x28", "dtype": "int64"}, {"name": "22x1", "dtype": "int64"}, {"name": "22x2", "dtype": "int64"}, {"name": "22x3", "dtype": "int64"}, {"name": "22x4", "dtype": "int64"}, {"name": "22x5", "dtype": "int64"}, {"name": "22x6", "dtype": "int64"}, {"name": "22x7", "dtype": "int64"}, {"name": "22x8", "dtype": "int64"}, {"name": "22x9", "dtype": "int64"}, {"name": "22x10", "dtype": "int64"}, {"name": "22x11", "dtype": "int64"}, {"name": "22x12", "dtype": "int64"}, {"name": "22x13", "dtype": "int64"}, {"name": "22x14", "dtype": "int64"}, {"name": "22x15", "dtype": "int64"}, {"name": "22x16", "dtype": "int64"}, {"name": "22x17", "dtype": "int64"}, {"name": "22x18", "dtype": "int64"}, {"name": "22x19", "dtype": "int64"}, {"name": "22x20", "dtype": "int64"}, {"name": "22x21", "dtype": "int64"}, {"name": "22x22", "dtype": "int64"}, {"name": "22x23", "dtype": "int64"}, {"name": "22x24", "dtype": "int64"}, {"name": "22x25", "dtype": "int64"}, {"name": "22x26", "dtype": "int64"}, {"name": "22x27", "dtype": "int64"}, {"name": "22x28", "dtype": "int64"}, {"name": "23x1", "dtype": "int64"}, {"name": "23x2", "dtype": "int64"}, {"name": "23x3", "dtype": "int64"}, {"name": "23x4", "dtype": "int64"}, {"name": "23x5", "dtype": "int64"}, {"name": "23x6", "dtype": "int64"}, {"name": "23x7", "dtype": "int64"}, {"name": "23x8", "dtype": "int64"}, {"name": "23x9", "dtype": "int64"}, {"name": "23x10", "dtype": "int64"}, {"name": "23x11", "dtype": "int64"}, {"name": "23x12", "dtype": "int64"}, {"name": "23x13", "dtype": "int64"}, {"name": "23x14", "dtype": "int64"}, {"name": "23x15", "dtype": "int64"}, {"name": "23x16", "dtype": "int64"}, {"name": "23x17", "dtype": "int64"}, {"name": "23x18", "dtype": "int64"}, {"name": "23x19", "dtype": "int64"}, {"name": "23x20", "dtype": "int64"}, {"name": "23x21", "dtype": "int64"}, {"name": "23x22", "dtype": "int64"}, {"name": "23x23", "dtype": "int64"}, {"name": "23x24", "dtype": "int64"}, {"name": "23x25", "dtype": "int64"}, {"name": "23x26", "dtype": "int64"}, {"name": "23x27", "dtype": "int64"}, {"name": "23x28", "dtype": "int64"}, {"name": "24x1", "dtype": "int64"}, {"name": "24x2", "dtype": "int64"}, {"name": "24x3", "dtype": "int64"}, {"name": "24x4", "dtype": "int64"}, {"name": "24x5", "dtype": "int64"}, {"name": "24x6", "dtype": "int64"}, {"name": "24x7", "dtype": "int64"}, {"name": "24x8", "dtype": "int64"}, {"name": "24x9", "dtype": "int64"}, {"name": "24x10", "dtype": "int64"}, {"name": "24x11", "dtype": "int64"}, {"name": "24x12", "dtype": "int64"}, {"name": "24x13", "dtype": "int64"}, {"name": "24x14", "dtype": "int64"}, {"name": "24x15", "dtype": "int64"}, {"name": "24x16", "dtype": "int64"}, {"name": "24x17", "dtype": "int64"}, {"name": "24x18", "dtype": "int64"}, {"name": "24x19", "dtype": "int64"}, {"name": "24x20", "dtype": "int64"}, {"name": "24x21", "dtype": "int64"}, {"name": "24x22", "dtype": "int64"}, {"name": "24x23", "dtype": "int64"}, {"name": "24x24", "dtype": "int64"}, {"name": "24x25", "dtype": "int64"}, {"name": "24x26", "dtype": "int64"}, {"name": "24x27", "dtype": "int64"}, {"name": "24x28", "dtype": "int64"}, {"name": "25x1", "dtype": "int64"}, {"name": "25x2", "dtype": "int64"}, {"name": "25x3", "dtype": "int64"}, {"name": "25x4", "dtype": "int64"}, {"name": "25x5", "dtype": "int64"}, {"name": "25x6", "dtype": "int64"}, {"name": "25x7", "dtype": "int64"}, 
{"name": "25x8", "dtype": "int64"}, {"name": "25x9", "dtype": "int64"}, {"name": "25x10", "dtype": "int64"}, {"name": "25x11", "dtype": "int64"}, {"name": "25x12", "dtype": "int64"}, {"name": "25x13", "dtype": "int64"}, {"name": "25x14", "dtype": "int64"}, {"name": "25x15", "dtype": "int64"}, {"name": "25x16", "dtype": "int64"}, {"name": "25x17", "dtype": "int64"}, {"name": "25x18", "dtype": "int64"}, {"name": "25x19", "dtype": "int64"}, {"name": "25x20", "dtype": "int64"}, {"name": "25x21", "dtype": "int64"}, {"name": "25x22", "dtype": "int64"}, {"name": "25x23", "dtype": "int64"}, {"name": "25x24", "dtype": "int64"}, {"name": "25x25", "dtype": "int64"}, {"name": "25x26", "dtype": "int64"}, {"name": "25x27", "dtype": "int64"}, {"name": "25x28", "dtype": "int64"}, {"name": "26x1", "dtype": "int64"}, {"name": "26x2", "dtype": "int64"}, {"name": "26x3", "dtype": "int64"}, {"name": "26x4", "dtype": "int64"}, {"name": "26x5", "dtype": "int64"}, {"name": "26x6", "dtype": "int64"}, {"name": "26x7", "dtype": "int64"}, {"name": "26x8", "dtype": "int64"}, {"name": "26x9", "dtype": "int64"}, {"name": "26x10", "dtype": "int64"}, {"name": "26x11", "dtype": "int64"}, {"name": "26x12", "dtype": "int64"}, {"name": "26x13", "dtype": "int64"}, {"name": "26x14", "dtype": "int64"}, {"name": "26x15", "dtype": "int64"}, {"name": "26x16", "dtype": "int64"}, {"name": "26x17", "dtype": "int64"}, {"name": "26x18", "dtype": "int64"}, {"name": "26x19", "dtype": "int64"}, {"name": "26x20", "dtype": "int64"}, {"name": "26x21", "dtype": "int64"}, {"name": "26x22", "dtype": "int64"}, {"name": "26x23", "dtype": "int64"}, {"name": "26x24", "dtype": "int64"}, {"name": "26x25", "dtype": "int64"}, {"name": "26x26", "dtype": "int64"}, {"name": "26x27", "dtype": "int64"}, {"name": "26x28", "dtype": "int64"}, {"name": "27x1", "dtype": "int64"}, {"name": "27x2", "dtype": "int64"}, {"name": "27x3", "dtype": "int64"}, {"name": "27x4", "dtype": "int64"}, {"name": "27x5", "dtype": "int64"}, {"name": "27x6", "dtype": "int64"}, {"name": "27x7", "dtype": "int64"}, {"name": "27x8", "dtype": "int64"}, {"name": "27x9", "dtype": "int64"}, {"name": "27x10", "dtype": "int64"}, {"name": "27x11", "dtype": "int64"}, {"name": "27x12", "dtype": "int64"}, {"name": "27x13", "dtype": "int64"}, {"name": "27x14", "dtype": "int64"}, {"name": "27x15", "dtype": "int64"}, {"name": "27x16", "dtype": "int64"}, {"name": "27x17", "dtype": "int64"}, {"name": "27x18", "dtype": "int64"}, {"name": "27x19", "dtype": "int64"}, {"name": "27x20", "dtype": "int64"}, {"name": "27x21", "dtype": "int64"}, {"name": "27x22", "dtype": "int64"}, {"name": "27x23", "dtype": "int64"}, {"name": "27x24", "dtype": "int64"}, {"name": "27x25", "dtype": "int64"}, {"name": "27x26", "dtype": "int64"}, {"name": "27x27", "dtype": "int64"}, {"name": "27x28", "dtype": "int64"}, {"name": "28x1", "dtype": "int64"}, {"name": "28x2", "dtype": "int64"}, {"name": "28x3", "dtype": "int64"}, {"name": "28x4", "dtype": "int64"}, {"name": "28x5", "dtype": "int64"}, {"name": "28x6", "dtype": "int64"}, {"name": "28x7", "dtype": "int64"}, {"name": "28x8", "dtype": "int64"}, {"name": "28x9", "dtype": "int64"}, {"name": "28x10", "dtype": "int64"}, {"name": "28x11", "dtype": "int64"}, {"name": "28x12", "dtype": "int64"}, {"name": "28x13", "dtype": "int64"}, {"name": "28x14", "dtype": "int64"}, {"name": "28x15", "dtype": "int64"}, {"name": "28x16", "dtype": "int64"}, {"name": "28x17", "dtype": "int64"}, {"name": "28x18", "dtype": "int64"}, {"name": "28x19", "dtype": "int64"}, {"name": "28x20", "dtype": 
"int64"}, {"name": "28x21", "dtype": "int64"}, {"name": "28x22", "dtype": "int64"}, {"name": "28x23", "dtype": "int64"}, {"name": "28x24", "dtype": "int64"}, {"name": "28x25", "dtype": "int64"}, {"name": "28x26", "dtype": "int64"}, {"name": "28x27", "dtype": "int64"}, {"name": "28x28", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 376800000, "num_examples": 60000}, {"name": "test", "num_bytes": 62800000, "num_examples": 10000}], "download_size": 55424916, "dataset_size": 439600000}}
|
2023-09-08T13:45:23+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "MNIST"
More Information needed
|
[
"# Dataset Card for \"MNIST\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"MNIST\"\n\nMore Information needed"
] |
[
6,
12
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"MNIST\"\n\nMore Information needed"
] |
0c10f3f0f15cbe166eafd9c171827dc4e2bf613b
|
# Dataset Card for "augmented_bangla_money10k_80_20"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
fia24/augmented_bangla_money10k_80_20
|
[
"region:us"
] |
2023-09-08T13:50:43+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "1", "1": "10", "2": "100", "3": "1000", "4": "2", "5": "20", "6": "200", "7": "5", "8": "50", "9": "500"}}}}], "splits": [{"name": "train", "num_bytes": 75522474.4, "num_examples": 8000}, {"name": "test", "num_bytes": 18872730.6, "num_examples": 2000}], "download_size": 88787135, "dataset_size": 94395205.0}}
|
2023-09-08T13:50:53+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "augmented_bangla_money10k_80_20"
More Information needed
|
[
"# Dataset Card for \"augmented_bangla_money10k_80_20\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"augmented_bangla_money10k_80_20\"\n\nMore Information needed"
] |
[
6,
25
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"augmented_bangla_money10k_80_20\"\n\nMore Information needed"
] |
1b0e0238f753abe22a1ded78931e3a8d14014969
|
# Dataset Card for "wizard_alpaca_dolly_orca_uncensored_masked"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
jtatman/wizard_alpaca_dolly_orca_uncensored_masked
|
[
"region:us"
] |
2023-09-08T13:51:06+00:00
|
{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "system", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "masked_instruction", "dtype": "string"}, {"name": "masked_input", "dtype": "string"}, {"name": "masked_system", "dtype": "string"}, {"name": "masked_output", "dtype": "string"}, {"name": "filled_instruction", "list": [{"name": "score", "dtype": "float64"}, {"name": "sequence", "dtype": "string"}, {"name": "token", "dtype": "int64"}, {"name": "token_str", "dtype": "string"}]}, {"name": "filled_input", "list": [{"name": "score", "dtype": "float64"}, {"name": "sequence", "dtype": "string"}, {"name": "token", "dtype": "int64"}, {"name": "token_str", "dtype": "string"}]}, {"name": "filled_system", "list": [{"name": "score", "dtype": "float64"}, {"name": "sequence", "dtype": "string"}, {"name": "token", "dtype": "int64"}, {"name": "token_str", "dtype": "string"}]}, {"name": "filled_output", "list": [{"name": "score", "dtype": "float64"}, {"name": "sequence", "dtype": "string"}, {"name": "token", "dtype": "int64"}, {"name": "token_str", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 739400762, "num_examples": 104179}], "download_size": 217086597, "dataset_size": 739400762}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-08T13:51:27+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "wizard_alpaca_dolly_orca_uncensored_masked"
More Information needed
|
[
"# Dataset Card for \"wizard_alpaca_dolly_orca_uncensored_masked\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"wizard_alpaca_dolly_orca_uncensored_masked\"\n\nMore Information needed"
] |
[
6,
28
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"wizard_alpaca_dolly_orca_uncensored_masked\"\n\nMore Information needed"
] |
1895931ce53425d46db1ffeb055d74ca1d912f17
|
# Dataset Card for Calc-SVAMP
## Summary
The dataset is a collection of simple math word problems focused on arithmetics. It is derived from <https://github.com/arkilpatel/SVAMP/>.
The main addition in this dataset variant is the `chain` column. It was created by converting the solution to a simple html-like language that can be easily
parsed (e.g. by BeautifulSoup). The data contains 3 types of tags:
- gadget: A tag whose content is intended to be evaluated by calling an external tool (sympy-based calculator in this case)
- output: An output of the external tool
- result: The final answer to the mathematical problem (a number)
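As an illustration, here is a minimal sketch of pulling the three tag types out of a chain with BeautifulSoup (the chain string below is invented for the example, not taken from the dataset):

```python
# Illustrative sketch only; the chain text is made up for demonstration.
from bs4 import BeautifulSoup

chain = (
    "<gadget>24 / 8</gadget> <output>3</output> "
    "Each child gets <result>3</result> apples."
)

soup = BeautifulSoup(chain, "html.parser")
calls = [g.get_text() for g in soup.find_all("gadget")]    # expressions for the external tool
outputs = [o.get_text() for o in soup.find_all("output")]  # what the tool returned
result = soup.find("result").get_text()                    # final answer as a string
print(calls, outputs, result)
```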
## Supported Tasks
This variant of the dataset is intended for training Chain-of-Thought reasoning models able to use external tools to enhance the factuality of their responses.
This dataset presents in-context scenarios where models can outsource the computations in the reasoning chain to a calculator.
## Construction process
We created the dataset by converting the **equation** attribute in the original dataset to a sequence (chain) of calculations, with the final one being the result of the math problem.
We also perform in-dataset and cross-dataset data-leak detection within the [Calc-X collection](https://huggingface.co/collections/MU-NLPC/calc-x-652fee9a6b838fd820055483).
However, for SVAMP specifically, we detected no data leaks and filtered no data.
## Content and data splits
The dataset contains the same data instances as the original dataset except for a correction of inconsistency between `equation` and `answer` in one data instance.
To the best of our knowledge, the original dataset does not contain an official train-test split. We treat the whole dataset as a testing benchmark.
## Attributes:
- **id**: problem id from the original dataset
- **question**: the question intended to answer
- **chain**: series of simple operations (derived from `equation`) that leads to the solution
- **result**: the result (number) as a string
- **result_float**: result converted to a floating point
- **equation**: a nested expression that evaluates to the correct result
- **problem_type**: a category of the problem
Attributes **id**, **question**, **chain**, and **result** are present in all datasets in [Calc-X collection](https://huggingface.co/collections/MU-NLPC/calc-x-652fee9a6b838fd820055483).
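A minimal loading sketch (standard `datasets` usage; field names follow the attribute list above):

```python
from datasets import load_dataset

# The whole benchmark is published as a single test split (see "Content and data splits").
svamp = load_dataset("MU-NLPC/Calc-svamp", split="test")

row = svamp[0]
print(row["question"])
print(row["chain"])
print(row["result"], row["result_float"])  # string result and its floating-point form
```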
## Related work
This dataset was created as a part of a larger effort in training models capable of using a calculator during inference, which we call Calcformers.
- [**Calc-X collection**](https://huggingface.co/collections/MU-NLPC/calc-x-652fee9a6b838fd820055483) - datasets for training Calcformers
- [**Calcformers collection**](https://huggingface.co/collections/MU-NLPC/calcformers-65367392badc497807b3caf5) - calculator-using models we trained and published on HF
- [**Calc-X and Calcformers paper**](https://arxiv.org/abs/2305.15017)
- [**Calc-X and Calcformers repo**](https://github.com/prompteus/calc-x)
Here are links to the original dataset:
- [**original SVAMP dataset and repo**](https://github.com/arkilpatel/SVAMP/)
- [**original SVAMP paper**](https://www.semanticscholar.org/paper/Are-NLP-Models-really-able-to-Solve-Simple-Math-Patel-Bhattamishra/13c4e5a6122f3fa2663f63e49537091da6532f35)
## Licence
MIT, consistent with the original source dataset linked above.
## Cite
If you use this version of dataset in research, please cite the original [SVAMP paper](https://www.semanticscholar.org/paper/Are-NLP-Models-really-able-to-Solve-Simple-Math-Patel-Bhattamishra/13c4e5a6122f3fa2663f63e49537091da6532f35), and [Calc-X collection](https://arxiv.org/abs/2305.15017) as follows:
```bibtex
@inproceedings{kadlcik-etal-2023-soft,
title = "Calc-X and Calcformers: Empowering Arithmetical Chain-of-Thought through Interaction with Symbolic Systems",
author = "Marek Kadlčík and Michal Štefánik and Ondřej Sotolář and Vlastimil Martinek",
    booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Main track",
month = dec,
year = "2023",
address = "Singapore, Singapore",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2305.15017",
}
```
|
MU-NLPC/Calc-svamp
|
[
"task_categories:text-generation",
"size_categories:n<1K",
"language:en",
"license:mit",
"math world problems",
"math",
"arithmetics",
"arxiv:2305.15017",
"region:us"
] |
2023-09-08T13:56:46+00:00
|
{"language": ["en"], "license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-generation"], "tags": ["math world problems", "math", "arithmetics"], "dataset_info": [{"config_name": "default", "features": [{"name": "id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "chain", "dtype": "string"}, {"name": "result", "dtype": "string"}, {"name": "result_float", "dtype": "float64"}, {"name": "equation", "dtype": "string"}, {"name": "problem_type", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 335744, "num_examples": 1000}], "download_size": 116449, "dataset_size": 335744}, {"config_name": "original-splits", "features": [{"name": "id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "chain", "dtype": "string"}, {"name": "result", "dtype": "string"}, {"name": "result_float", "dtype": "float64"}, {"name": "equation", "dtype": "string"}, {"name": "problem_type", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 335744, "num_examples": 1000}], "download_size": 116449, "dataset_size": 335744}], "configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}, {"config_name": "original-splits", "data_files": [{"split": "test", "path": "original-splits/test-*"}]}]}
|
2023-10-30T15:05:26+00:00
|
[
"2305.15017"
] |
[
"en"
] |
TAGS
#task_categories-text-generation #size_categories-n<1K #language-English #license-mit #math world problems #math #arithmetics #arxiv-2305.15017 #region-us
|
# Dataset Card for Calc-SVAMP
## Summary
The dataset is a collection of simple math word problems focused on arithmetics. It is derived from <URL
The main addition in this dataset variant is the 'chain' column. It was created by converting the solution to a simple html-like language that can be easily
parsed (e.g. by BeautifulSoup). The data contains 3 types of tags:
- gadget: A tag whose content is intended to be evaluated by calling an external tool (sympy-based calculator in this case)
- output: An output of the external tool
- result: The final answer to the mathematical problem (a number)
## Supported Tasks
This variant of the dataset is intended for training Chain-of-Thought reasoning models able to use external tools to enhance the factuality of their responses.
This dataset presents in-context scenarios where models can outsource the computations in the reasoning chain to a calculator.
## Construction process
We created the dataset by converting the equation attribute in the original dataset to a sequence (chain) of calculations, with the final one being the result of the math problem.
We also perform in-dataset and cross-dataset data-leak detection within the Calc-X collection.
However, for SVAMP specifically, we detected no data leaks and filtered no data.
## Content and data splits
The dataset contains the same data instances as the original dataset except for a correction of inconsistency between 'equation' and 'answer' in one data instance.
To the best of our knowledge, the original dataset does not contain an official train-test split. We treat the whole dataset as a testing benchmark.
## Attributes:
- id: problem id from the original dataset
- question: the question to be answered
- chain: series of simple operations (derived from 'equation') that leads to the solution
- result: the result (number) as a string
- result_float: result converted to a floating point
- equation: a nested expression that evaluates to the correct result
- problem_type: a category of the problem
Attributes id, question, chain, and result are present in all datasets in Calc-X collection.
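A minimal sketch of reading these attributes with the `datasets` library; the repository id and the test-only split follow from this card, and nothing else is assumed:

```python
# Minimal sketch: load the benchmark and print the documented attributes of one row.
from datasets import load_dataset

svamp = load_dataset("MU-NLPC/Calc-svamp", split="test")  # only a test split is published

row = svamp[0]
for field in ("id", "question", "chain", "result", "result_float", "equation", "problem_type"):
    print(f"{field}: {row[field]}")
```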
## Related work
This dataset was created as a part of a larger effort in training models capable of using a calculator during inference, which we call Calcformers.
- Calc-X collection - datasets for training Calcformers
- Calcformers collection - calculator-using models we trained and published on HF
- Calc-X and Calcformers paper
- Calc-X and Calcformers repo
Here are links to the original dataset:
- original SVAMP dataset and repo
- original SVAMP paper
## Licence
MIT, consistent with the original source dataset linked above.
## Cite
If you use this version of dataset in research, please cite the original SVAMP paper, and Calc-X collection as follows:
|
[
"# Dataset Card for Calc-SVAMP",
"## Summary\n\nThe dataset is a collection of simple math word problems focused on arithmetics. It is derived from <URL\n\nThe main addition in this dataset variant is the 'chain' column. It was created by converting the solution to a simple html-like language that can be easily\nparsed (e.g. by BeautifulSoup). The data contains 3 types of tags:\n\n- gadget: A tag whose content is intended to be evaluated by calling an external tool (sympy-based calculator in this case)\n- output: An output of the external tool\n- result: The final answer to the mathematical problem (a number)",
"## Supported Tasks\n\nThis variant of the dataset is intended for training Chain-of-Thought reasoning models able to use external tools to enhance the factuality of their responses.\nThis dataset presents in-context scenarios where models can outsource the computations in the reasoning chain to a calculator.",
"## Construction process\n\nWe created the dataset by converting the equation attribute in the original dataset to a sequence (chain) of calculations, with final one being the result to the math problem.\nWe also perform in-dataset and cross-dataset data-leak detection within the Calc-X collection.\nHowever, for SVAMP specifically, we detected no data leaks and filtered no data.",
"## Content and data splits\n\nThe dataset contains the same data instances as the original dataset except for a correction of inconsistency between 'equation' and 'answer' in one data instance.\nTo the best of our knowledge, the original dataset does not contain an official train-test split. We treat the whole dataset as a testing benchmark.",
"## Attributes:\n\n- id: problem id from the original dataset\n- question: the question intended to answer\n- chain: series of simple operations (derived from 'equation') that leads to the solution\n- result: the result (number) as a string\n- result_float: result converted to a floating point\n- equation: a nested expression that evaluates to the correct result\n- problem_type: a category of the problem\n\nAttributes id, question, chain, and result are present in all datasets in Calc-X collection.",
"## Related work\n\nThis dataset was created as a part of a larger effort in training models capable of using a calculator during inference, which we call Calcformers.\n\n- Calc-X collection - datasets for training Calcformers\n- Calcformers collection - calculator-using models we trained and published on HF\n- Calc-X and Calcformers paper\n- Calc-X and Calcformers repo\n\nHere are links to the original dataset:\n\n- original SVAMP dataset and repo\n- original SVAMP paper",
"## Licence\n\nMIT, consistent with the original source dataset linked above.",
"## Cite\n\nIf you use this version of dataset in research, please cite the original SVAMP paper, and Calc-X collection as follows:"
] |
[
"TAGS\n#task_categories-text-generation #size_categories-n<1K #language-English #license-mit #math world problems #math #arithmetics #arxiv-2305.15017 #region-us \n",
"# Dataset Card for Calc-SVAMP",
"## Summary\n\nThe dataset is a collection of simple math word problems focused on arithmetics. It is derived from <URL\n\nThe main addition in this dataset variant is the 'chain' column. It was created by converting the solution to a simple html-like language that can be easily\nparsed (e.g. by BeautifulSoup). The data contains 3 types of tags:\n\n- gadget: A tag whose content is intended to be evaluated by calling an external tool (sympy-based calculator in this case)\n- output: An output of the external tool\n- result: The final answer to the mathematical problem (a number)",
"## Supported Tasks\n\nThis variant of the dataset is intended for training Chain-of-Thought reasoning models able to use external tools to enhance the factuality of their responses.\nThis dataset presents in-context scenarios where models can outsource the computations in the reasoning chain to a calculator.",
"## Construction process\n\nWe created the dataset by converting the equation attribute in the original dataset to a sequence (chain) of calculations, with final one being the result to the math problem.\nWe also perform in-dataset and cross-dataset data-leak detection within the Calc-X collection.\nHowever, for SVAMP specifically, we detected no data leaks and filtered no data.",
"## Content and data splits\n\nThe dataset contains the same data instances as the original dataset except for a correction of inconsistency between 'equation' and 'answer' in one data instance.\nTo the best of our knowledge, the original dataset does not contain an official train-test split. We treat the whole dataset as a testing benchmark.",
"## Attributes:\n\n- id: problem id from the original dataset\n- question: the question intended to answer\n- chain: series of simple operations (derived from 'equation') that leads to the solution\n- result: the result (number) as a string\n- result_float: result converted to a floating point\n- equation: a nested expression that evaluates to the correct result\n- problem_type: a category of the problem\n\nAttributes id, question, chain, and result are present in all datasets in Calc-X collection.",
"## Related work\n\nThis dataset was created as a part of a larger effort in training models capable of using a calculator during inference, which we call Calcformers.\n\n- Calc-X collection - datasets for training Calcformers\n- Calcformers collection - calculator-using models we trained and published on HF\n- Calc-X and Calcformers paper\n- Calc-X and Calcformers repo\n\nHere are links to the original dataset:\n\n- original SVAMP dataset and repo\n- original SVAMP paper",
"## Licence\n\nMIT, consistent with the original source dataset linked above.",
"## Cite\n\nIf you use this version of dataset in research, please cite the original SVAMP paper, and Calc-X collection as follows:"
] |
[
56,
10,
140,
70,
91,
79,
125,
115,
16,
32
] |
[
"passage: TAGS\n#task_categories-text-generation #size_categories-n<1K #language-English #license-mit #math world problems #math #arithmetics #arxiv-2305.15017 #region-us \n# Dataset Card for Calc-SVAMP## Summary\n\nThe dataset is a collection of simple math word problems focused on arithmetics. It is derived from <URL\n\nThe main addition in this dataset variant is the 'chain' column. It was created by converting the solution to a simple html-like language that can be easily\nparsed (e.g. by BeautifulSoup). The data contains 3 types of tags:\n\n- gadget: A tag whose content is intended to be evaluated by calling an external tool (sympy-based calculator in this case)\n- output: An output of the external tool\n- result: The final answer to the mathematical problem (a number)## Supported Tasks\n\nThis variant of the dataset is intended for training Chain-of-Thought reasoning models able to use external tools to enhance the factuality of their responses.\nThis dataset presents in-context scenarios where models can outsource the computations in the reasoning chain to a calculator.## Construction process\n\nWe created the dataset by converting the equation attribute in the original dataset to a sequence (chain) of calculations, with final one being the result to the math problem.\nWe also perform in-dataset and cross-dataset data-leak detection within the Calc-X collection.\nHowever, for SVAMP specifically, we detected no data leaks and filtered no data.## Content and data splits\n\nThe dataset contains the same data instances as the original dataset except for a correction of inconsistency between 'equation' and 'answer' in one data instance.\nTo the best of our knowledge, the original dataset does not contain an official train-test split. We treat the whole dataset as a testing benchmark."
] |
f6ef10665dc18dd314d7144d95506c514a0f58e9
|
# Dataset Card for "large_ft_spkn55"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
linhqyy/large_ft_spkn55
|
[
"region:us"
] |
2023-09-08T13:58:53+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "pred_str", "dtype": "string"}, {"name": "w2v2_baseline_norm", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 208306, "num_examples": 1299}], "download_size": 108976, "dataset_size": 208306}}
|
2023-09-08T13:58:55+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "large_ft_spkn55"
More Information needed
|
[
"# Dataset Card for \"large_ft_spkn55\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"large_ft_spkn55\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"large_ft_spkn55\"\n\nMore Information needed"
] |
60bdd2bebd9857eb4e445b76116df52fc05317ce
|
# Dataset Card for "manga_art_style_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Falah/manga_art_style_prompts
|
[
"region:us"
] |
2023-09-08T14:21:42+00:00
|
{"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 119589, "num_examples": 1000}], "download_size": 15846, "dataset_size": 119589}}
|
2023-09-08T14:21:43+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "manga_art_style_prompts"
More Information needed
|
[
"# Dataset Card for \"manga_art_style_prompts\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"manga_art_style_prompts\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"manga_art_style_prompts\"\n\nMore Information needed"
] |
f9f32e9d92a9028391bf3c9b61dc576d91053448
|
# Dataset Card for "lotr-book"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ruslanasenov/lotr-book
|
[
"region:us"
] |
2023-09-08T14:25:37+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2432593, "num_examples": 1}], "download_size": 0, "dataset_size": 2432593}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-08T14:26:09+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "lotr-book"
More Information needed
|
[
"# Dataset Card for \"lotr-book\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"lotr-book\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"lotr-book\"\n\nMore Information needed"
] |
b9f0e603433c64ae39c79508fd9c963c9359f16a
|
# Dataset Card for "llm-tolkien"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ruslanasenov/llm-tolkien
|
[
"region:us"
] |
2023-09-08T14:26:11+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}], "splits": [{"name": "train", "num_bytes": 2196528.0, "num_examples": 268}, {"name": "test", "num_bytes": 245880.0, "num_examples": 30}], "download_size": 1124977, "dataset_size": 2442408.0}}
|
2023-09-08T14:26:19+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "llm-tolkien"
More Information needed
|
[
"# Dataset Card for \"llm-tolkien\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"llm-tolkien\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"llm-tolkien\"\n\nMore Information needed"
] |
b9d6e884524edab14fbbd0927e2fa508995390e5
|
# Dataset Card for "shp_with_features_20k_flan_t5_large_flan_t5_zeroshot"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dongyoung4091/shp_with_features_20k_flan_t5_large_flan_t5_zeroshot
|
[
"region:us"
] |
2023-09-08T14:44:32+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "post_id", "dtype": "string"}, {"name": "domain", "dtype": "string"}, {"name": "upvote_ratio", "dtype": "float64"}, {"name": "history", "dtype": "string"}, {"name": "c_root_id_A", "dtype": "string"}, {"name": "c_root_id_B", "dtype": "string"}, {"name": "created_at_utc_A", "dtype": "int64"}, {"name": "created_at_utc_B", "dtype": "int64"}, {"name": "score_A", "dtype": "int64"}, {"name": "score_B", "dtype": "int64"}, {"name": "human_ref_A", "dtype": "string"}, {"name": "human_ref_B", "dtype": "string"}, {"name": "labels", "dtype": "int64"}, {"name": "seconds_difference", "dtype": "float64"}, {"name": "score_ratio", "dtype": "float64"}, {"name": "helpfulness_A", "dtype": "float64"}, {"name": "helpfulness_B", "dtype": "float64"}, {"name": "specificity_A", "dtype": "float64"}, {"name": "specificity_B", "dtype": "float64"}, {"name": "intent_A", "dtype": "float64"}, {"name": "intent_B", "dtype": "float64"}, {"name": "factuality_A", "dtype": "float64"}, {"name": "factuality_B", "dtype": "float64"}, {"name": "easy-to-understand_A", "dtype": "float64"}, {"name": "easy-to-understand_B", "dtype": "float64"}, {"name": "relevance_A", "dtype": "float64"}, {"name": "relevance_B", "dtype": "float64"}, {"name": "readability_A", "dtype": "float64"}, {"name": "readability_B", "dtype": "float64"}, {"name": "enough-detail_A", "dtype": "float64"}, {"name": "enough-detail_B", "dtype": "float64"}, {"name": "biased:_A", "dtype": "float64"}, {"name": "biased:_B", "dtype": "float64"}, {"name": "fail-to-consider-individual-preferences_A", "dtype": "float64"}, {"name": "fail-to-consider-individual-preferences_B", "dtype": "float64"}, {"name": "repetetive_A", "dtype": "float64"}, {"name": "repetetive_B", "dtype": "float64"}, {"name": "fail-to-consider-context_A", "dtype": "float64"}, {"name": "fail-to-consider-context_B", "dtype": "float64"}, {"name": "too-long_A", "dtype": "float64"}, {"name": "too-long_B", "dtype": "float64"}, {"name": "__index_level_0__", "dtype": "int64"}, {"name": "log_score_A", "dtype": "float64"}, {"name": "log_score_B", "dtype": "float64"}, {"name": "zeroshot_helpfulness_A", "dtype": "int64"}, {"name": "zeroshot_helpfulness_B", "dtype": "int64"}, {"name": "zeroshot_specificity_A", "dtype": "int64"}, {"name": "zeroshot_specificity_B", "dtype": "int64"}, {"name": "zeroshot_intent_A", "dtype": "int64"}, {"name": "zeroshot_intent_B", "dtype": "int64"}, {"name": "zeroshot_factuality_A", "dtype": "int64"}, {"name": "zeroshot_factuality_B", "dtype": "int64"}, {"name": "zeroshot_easy-to-understand_A", "dtype": "int64"}, {"name": "zeroshot_easy-to-understand_B", "dtype": "int64"}, {"name": "zeroshot_relevance_A", "dtype": "int64"}, {"name": "zeroshot_relevance_B", "dtype": "int64"}, {"name": "zeroshot_readability_A", "dtype": "int64"}, {"name": "zeroshot_readability_B", "dtype": "int64"}, {"name": "zeroshot_enough-detail_A", "dtype": "int64"}, {"name": "zeroshot_enough-detail_B", "dtype": "int64"}, {"name": "zeroshot_biased:_A", "dtype": "int64"}, {"name": "zeroshot_biased:_B", "dtype": "int64"}, {"name": "zeroshot_fail-to-consider-individual-preferences_A", "dtype": "int64"}, {"name": "zeroshot_fail-to-consider-individual-preferences_B", "dtype": "int64"}, {"name": "zeroshot_repetetive_A", "dtype": "int64"}, {"name": "zeroshot_repetetive_B", "dtype": "int64"}, {"name": "zeroshot_fail-to-consider-context_A", "dtype": 
"int64"}, {"name": "zeroshot_fail-to-consider-context_B", "dtype": "int64"}, {"name": "zeroshot_too-long_A", "dtype": "int64"}, {"name": "zeroshot_too-long_B", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 22674534, "num_examples": 9459}, {"name": "test", "num_bytes": 22627412, "num_examples": 9459}], "download_size": 12130286, "dataset_size": 45301946}}
|
2023-09-08T14:45:01+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "shp_with_features_20k_flan_t5_large_flan_t5_zeroshot"
More Information needed
|
[
"# Dataset Card for \"shp_with_features_20k_flan_t5_large_flan_t5_zeroshot\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"shp_with_features_20k_flan_t5_large_flan_t5_zeroshot\"\n\nMore Information needed"
] |
[
6,
36
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"shp_with_features_20k_flan_t5_large_flan_t5_zeroshot\"\n\nMore Information needed"
] |
e67d4171b142859d0f805fb3ac57edfd0642945f
|
# Dataset Card for "shp-generated_flan_t5_large_flan_t5_zeroshot"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dongyoung4091/shp-generated_flan_t5_large_flan_t5_zeroshot
|
[
"region:us"
] |
2023-09-08T14:48:00+00:00
|
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "zeroshot_helpfulness", "dtype": "float64"}, {"name": "zeroshot_specificity", "dtype": "float64"}, {"name": "zeroshot_intent", "dtype": "float64"}, {"name": "zeroshot_factuality", "dtype": "float64"}, {"name": "zeroshot_easy-to-understand", "dtype": "int64"}, {"name": "zeroshot_relevance", "dtype": "int64"}, {"name": "zeroshot_readability", "dtype": "int64"}, {"name": "zeroshot_enough-detail", "dtype": "int64"}, {"name": "zeroshot_biased:", "dtype": "float64"}, {"name": "zeroshot_fail-to-consider-individual-preferences", "dtype": "float64"}, {"name": "zeroshot_repetetive", "dtype": "float64"}, {"name": "zeroshot_fail-to-consider-context", "dtype": "float64"}, {"name": "zeroshot_too-long", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 29519465, "num_examples": 25600}], "download_size": 1900231, "dataset_size": 29519465}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-09T01:42:23+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "shp-generated_flan_t5_large_flan_t5_zeroshot"
More Information needed
|
[
"# Dataset Card for \"shp-generated_flan_t5_large_flan_t5_zeroshot\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"shp-generated_flan_t5_large_flan_t5_zeroshot\"\n\nMore Information needed"
] |
[
6,
31
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"shp-generated_flan_t5_large_flan_t5_zeroshot\"\n\nMore Information needed"
] |
4b5450fc49eb3fb03a1794b021c38759a551da96
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
sanket29/dsaDemo
|
[
"region:us"
] |
2023-09-08T15:22:22+00:00
|
{}
|
2023-09-08T15:26:52+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Dataset Name
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
8,
24,
32,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Dataset Name## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
8c0d7fa3617483e87616aeb4b9aab3e8e5794fe3
|
# Dataset Card for "autotree_automl_100000_Higgs_sgosdt_l256_dim10_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
yzhuang/autotree_automl_100000_Higgs_sgosdt_l256_dim10_d3_sd0
|
[
"region:us"
] |
2023-09-08T15:22:25+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "input_x", "sequence": {"sequence": "float32"}}, {"name": "input_y", "sequence": {"sequence": "float32"}}, {"name": "input_y_clean", "sequence": {"sequence": "float32"}}, {"name": "rtg", "sequence": "float64"}, {"name": "status", "sequence": {"sequence": "float32"}}, {"name": "split_threshold", "sequence": {"sequence": "float32"}}, {"name": "split_dimension", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 2364400000, "num_examples": 100000}, {"name": "validation", "num_bytes": 236440000, "num_examples": 10000}], "download_size": 1584979274, "dataset_size": 2600840000}}
|
2023-09-08T15:23:06+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "autotree_automl_100000_Higgs_sgosdt_l256_dim10_d3_sd0"
More Information needed
|
[
"# Dataset Card for \"autotree_automl_100000_Higgs_sgosdt_l256_dim10_d3_sd0\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"autotree_automl_100000_Higgs_sgosdt_l256_dim10_d3_sd0\"\n\nMore Information needed"
] |
[
6,
38
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"autotree_automl_100000_Higgs_sgosdt_l256_dim10_d3_sd0\"\n\nMore Information needed"
] |
b906d68fc7138c35f55cc66a689a5fcf29f41844
|
# Dataset Card for "autotree_pmlb_100000_magic_sgosdt_l256_dim10_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
yzhuang/autotree_pmlb_100000_magic_sgosdt_l256_dim10_d3_sd0
|
[
"region:us"
] |
2023-09-08T15:34:12+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "input_x", "sequence": {"sequence": "float32"}}, {"name": "input_y", "sequence": {"sequence": "float32"}}, {"name": "input_y_clean", "sequence": {"sequence": "float32"}}, {"name": "rtg", "sequence": "float64"}, {"name": "status", "sequence": {"sequence": "float32"}}, {"name": "split_threshold", "sequence": {"sequence": "float32"}}, {"name": "split_dimension", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 2364400000, "num_examples": 100000}, {"name": "validation", "num_bytes": 236440000, "num_examples": 10000}], "download_size": 1062661836, "dataset_size": 2600840000}}
|
2023-09-08T15:34:51+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "autotree_pmlb_100000_magic_sgosdt_l256_dim10_d3_sd0"
More Information needed
|
[
"# Dataset Card for \"autotree_pmlb_100000_magic_sgosdt_l256_dim10_d3_sd0\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"autotree_pmlb_100000_magic_sgosdt_l256_dim10_d3_sd0\"\n\nMore Information needed"
] |
[
6,
37
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"autotree_pmlb_100000_magic_sgosdt_l256_dim10_d3_sd0\"\n\nMore Information needed"
] |
360c5d237dea73c8a8db39d293c74102f634a2ac
|
# Dataset Card for InToxiCat
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Example](#example)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Website:** https://zenodo.org/record/7973926 #TODO
- **Point of Contact:** [email protected]
### Dataset Summary
InToxiCat is a dataset for the detection of abusive language (defined by the aim to harm someone, individual, group, etc.) in Catalan, produced by the BSC LangTech unit.
The dataset consists of 29,809 sentences obtained from internet forums annotated as to whether or not they are abusive. The 6047 instances annotated as abusive are further annotated for the following features: abusive span, target span, target type and the implicit or explicit nature of the abusiveness in the message.
The dataset is split, in a balanced abusive/non-abusive distribution, into 23,847 training samples, 2981 validation samples, and 2981 test samples.
### Supported Tasks and Leaderboards
Abusive Language Detection
### Languages
The dataset is in Catalan (`ca-ES`).
## Dataset Structure
### Data Instances
Three JSON files, one for each split.
### Example:
<pre>
{
"id": "9472844_16_0",
"context": "Aquest tiu no té ni puta idea del que és una guerra ni del que s'espera d'un soldat.I què s'empatolla de despeses mèdiques. A veure si li passaré com al Hollande i sortiré la factura del seu perruquer (o taxidermista, no sé)",
"sentence": "Aquest tiu no té ni puta idea del que és una guerra ni del que s'espera d'un soldat.I què s'empatolla de despeses mèdiques.",
"topic": "Internacional",
"key_words": [
"puta"
],
"annotation": {
"is_abusive": "abusive",
"abusiveness_agreement": "full",
"context_needed": "no",
"abusive_spans": [
[
"no té ni puta idea",
"11:29"
]
],
"target_spans": [
[
"Aquest tiu",
"0:10"
]
],
"target_type": [
"INDIVIDUAL"
],
"is_implicit": "yes"
}
}
</pre>
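As an illustrative sketch (not part of the dataset tooling), the "start:end" offsets in abusive_spans and target_spans can be resolved against the sentence. Reading them as character indices into the sentence field is our interpretation; it checks out on the example above:

```python
# Sketch: resolve "start:end" span offsets against the sentence of the example above.
# Treating the offsets as character indices into 'sentence' is an assumption.
sentence = (
    "Aquest tiu no té ni puta idea del que és una guerra "
    "ni del que s'espera d'un soldat.I què s'empatolla de despeses mèdiques."
)
spans = [["no té ni puta idea", "11:29"], ["Aquest tiu", "0:10"]]

for text, index in spans:
    start, end = (int(i) for i in index.split(":"))
    assert sentence[start:end] == text, (sentence[start:end], text)
    print(f"[{start}:{end}] -> {sentence[start:end]}")
```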
### Data Fields
- ``id`` (a string feature): unique identifier of the instance.
- ``context`` (a string feature): complete text message from the user surrounding the sentence (it can coincide totally or only partially with the sentence).
- ``sentence`` (a string feature): text message where the abusiveness is evaluated.
- ``topic`` (a string feature): category from Racó Català forums where the sentence comes from.
- ``keywords`` (a list of strings): keywords used to select the candidate messages to annotate.
- ``context_needed`` (a string feature): "yes" / "no" if all the annotators consulted / did not consult the context to decide on the sentence's abusiveness, "maybe" if there was not agreement about it.
- ``is_abusive`` (a bool feature): "abusive" or "not_abusive".
- ``abusiveness_agreement`` (a string feature): "full" if the two annotators agreed on the abusiveness/not-abusiveness of the sentence, and "partial" if the abusiveness had to be decided by a third annotator.
- ``abusive_spans`` (a dictionary with field 'text' (list of strings) and 'index' (list of strings)): the sequence of words that contribute to the text's abusiveness.
- ``is_implicit`` (a string): whether the abusiveness is explicit (contains a profanity, slur or threat) or implicit (does not contain a profanity or slur, but is likely to contain irony, sarcasm or similar resources).
- ``target_spans`` (a dictionary with field 'text' (list of strings) and 'index' (list of strings)): if found in the message, the sequence(s) of words that refer to the target of the text's abusiveness.
- ``target_type`` (a dictionary with field 'text' (list of strings) and 'index' (list of strings)): three possible categories. The categories are non-exclusive, as some targets may have a dual identity and more than one target may be detected in a single message.
- ``individual``: a famous person, a named person or an unnamed person interacting in the conversation.
- ``group``: considered to be a unit based on the same ethnicity, gender or sexual orientation, political affiliation, religious belief or something else.
- ``other``; e.g. an organization, a situation, an event, or an issue.
### Data Splits
* train.json: 23847 examples
* dev.json: 2981 examples
* test.json: 2981 examples
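A small sketch of reading the three split files once they are downloaded locally; it assumes each file holds a JSON list of instances shaped like the example above:

```python
# Sketch: count abusive sentences per split (local file layout assumed, see note above).
import json

for name in ("train", "dev", "test"):
    with open(f"{name}.json", encoding="utf-8") as f:
        rows = json.load(f)
    n_abusive = sum(r["annotation"]["is_abusive"] == "abusive" for r in rows)
    print(f"{name}: {len(rows)} instances, {n_abusive} abusive")
```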
## Dataset Creation
### Curation Rationale
We created this dataset to contribute to the development of language models in Catalan, a low-resource language.
### Source Data
#### Initial Data Collection and Normalization
The sentences to be annotated were collected from [Racó Català](https://www.racocatala.cat/forums) forums using a list of keywords (provided in Zenodo). The messages belong to different categories of Racó Català, specified in the "topic" field of the dataset. The length of the messages varies from one sentence to several sentences.
#### Who are the source language producers?
Anonymized users from Racó Català forums.
### Annotations
#### Annotation process
The annotation process was divided into the following two tasks, carried out in sequential order:
Task 1. The sentences (around 30.000) were annotated by two annotators as either abusive or not abusive. In case of ambiguity in the sentence, the annotators had the possibility to consult the context, i.e. the whole message of the user (if the sentence to be annotated was a segment contained in the message). In cases where annotators 1 and 2 disagreed about the abusiveness of a message, it was annotated by a third annotator. As a result, the sentences that are ultimately considered abusive are those that were initially annotated as abusive by both annotators or, in the case of an initial disagreement between them, those that were resolved as abusive by the third annotator.
Task 2. The sentences annotated as abusive (6047) in Task 1 were further annotated by the two main annotators for the following features, explained in the Summary section: abusive spans, implicit/explicit abusiveness, target spans, and target type.
The annotation guidelines are published and available on Zenodo.
#### Who are the annotators?
The annotators were qualified professionals with university education and a demonstrably excellent knowledge of Catalan (minimum level C1 or equivalent).
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
We hope this dataset contributes to the development of language models in Catalan, a low-resource language.
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Language Technologies Unit at the Barcelona Supercomputing Center ([email protected])
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Licensing Information
This work is licensed under a [Creative Commons Attribution Non-commercial 4.0 International](https://creativecommons.org/licenses/by-nc/4.0/).
### Citation Information
[DOI: 10.57967/hf/1719](https://doi.org/10.57967/hf/1719)
### Contributions
[N/A]
|
projecte-aina/InToxiCat
|
[
"task_categories:text-classification",
"task_categories:token-classification",
"annotations_creators:expert-generated",
"multilinguality:monolingual",
"language:ca",
"license:cc-by-nc-4.0",
"abusive-language-detection",
"abusive-language",
"toxic-language-detection",
"toxicity-detection",
"doi:10.57967/hf/1719",
"region:us"
] |
2023-09-08T15:46:54+00:00
|
{"annotations_creators": ["expert-generated"], "language": ["ca"], "license": ["cc-by-nc-4.0"], "multilinguality": ["monolingual"], "task_categories": ["text-classification", "token-classification"], "pretty_name": "InToxiCat", "tags": ["abusive-language-detection", "abusive-language", "toxic-language-detection", "toxicity-detection"]}
|
2024-01-31T16:12:32+00:00
|
[] |
[
"ca"
] |
TAGS
#task_categories-text-classification #task_categories-token-classification #annotations_creators-expert-generated #multilinguality-monolingual #language-Catalan #license-cc-by-nc-4.0 #abusive-language-detection #abusive-language #toxic-language-detection #toxicity-detection #doi-10.57967/hf/1719 #region-us
|
# Dataset Card for InToxiCat
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Example
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Website: URL #TODO
- Point of Contact: langtech@URL
### Dataset Summary
InToxiCat is a dataset for the detection of abusive language (defined by the aim to harm someone, individual, group, etc.) in Catalan, produced by the BSC LangTech unit.
The dataset consists of 29,809 sentences obtained from internet forums annotated as to whether or not they are abusive. The 6047 instances annotated as abusive are further annotated for the following features: abusive span, target span, target type and the implicit or explicit nature of the abusiveness in the message.
The dataset is split, in a balanced abusive/non-abusive distribution, into 23,847 training samples, 2981 validation samples, and 2981 test samples.
### Supported Tasks and Leaderboards
Abusive Language Detection
### Languages
The dataset is in Catalan ('ca-ES').
## Dataset Structure
### Data Instances
Three JSON files, one for each split.
### Example:
<pre>
{
"id": "9472844_16_0",
"context": "Aquest tiu no té ni puta idea del que és una guerra ni del que s'espera d'un soldat.I què s'empatolla de despeses mèdiques. A veure si li passaré com al Hollande i sortiré la factura del seu perruquer (o taxidermista, no sé)",
"sentence": "Aquest tiu no té ni puta idea del que és una guerra ni del que s'espera d'un soldat.I què s'empatolla de despeses mèdiques.",
"topic": "Internacional",
"key_words": [
"puta"
],
"annotation": {
"is_abusive": "abusive",
"abusiveness_agreement": "full",
"context_needed": "no",
"abusive_spans": [
[
"no té ni puta idea",
"11:29"
]
],
"target_spans": [
[
"Aquest tiu",
"0:10"
]
],
"target_type": [
"INDIVIDUAL"
],
"is_implicit": "yes"
}
}
</pre>
### Data Fields
- ''id'' (a string feature): unique identifier of the instance.
- ''context'' (a string feature): complete text message from the user surrounding the sentence (it can coincide totally or only partially with the sentence).
- ''sentence'' (a string feature): text message where the abusiveness is evaluated.
- ''topic'' (a string feature): category from Racó Català forums where the sentence comes from.
- ''keywords'' (a list of strings): keywords used to select the candidate messages to annotate.
- ''context_needed'' (a string feature): "yes" / "no" if all the annotators consulted / did not consult the context to decide on the sentence's abusiveness, "maybe" if there was not agreement about it.
- ''is_abusive'' (a bool feature): "abusive" or "not_abusive".
- ''abusiveness_agreement'' (a string feature): "full" if the two annotators agreed on the abusiveness/not-abusiveness of the sentence, and "partial" if the abusiveness had to be decided by a third annotator.
- ''abusive_spans'' (a dictionary with field 'text' (list of strings) and 'index' (list of strings)): the sequence of words that contribute to the text's abusiveness.
- ''is_implicit'' (a string): whether the abusiveness is explicit (contains a profanity, slur or threat) or implicit (does not contain a profanity or slur, but is likely to contain irony, sarcasm or similar resources).
- ''target_spans'' (a dictionary with field 'text' (list of strings) and 'index' (list of strings)): if found in the message, the sequence(s) of words that refer to the target of the text's abusiveness.
- ''target_type'' (a dictionary with field 'text' (list of strings) and 'index' (list of strings)): three possible categories. The categories are non-exclusive, as some targets may have a dual identity and more than one target may be detected in a single message.
- ''individual'': a famous person, a named person or an unnamed person interacting in the conversation.
- ''group'': considered to be a unit based on the same ethnicity, gender or sexual orientation, political affiliation, religious belief or something else.
- ''other''; e.g. an organization, a situation, an event, or an issue.
### Data Splits
* URL: 23847 examples
* URL: 2981 examples
* URL: 2981 examples
## Dataset Creation
### Curation Rationale
We created this dataset to contribute to the development of language models in Catalan, a low-resource language.
### Source Data
#### Initial Data Collection and Normalization
The sentences to be annotated were collected from Racó Català forums using a list of keywords (provided in Zenodo). The messages belong to different categories of Racó Català, specified in the "topic" field of the dataset. The length of the messages varies from one sentence to several sentences.
#### Who are the source language producers?
Anonymized users from Racó Català forums.
### Annotations
#### Annotation process
The annotation process was divided into the following two tasks, carried out in sequential order:
Task 1. The sentences (around 30.000) were annotated by two annotators as either abusive or not abusive. In case of ambiguity in the sentence, the annotators had the possibility to consult the context, i.e. the whole message of the user (if the sentence to be annotated was a segment contained in the message). In cases where annotators 1 and 2 disagreed about the abusiveness of a message, it was annotated by a third annotator. As a result, the sentences that are ultimately considered abusive are those that were initially annotated as abusive by both annotators or, in the case of an initial disagreement between them, those that were resolved as abusive by the third annotator.
Task 2. The sentences annotated as abusive (6047) in Task 1 were further annotated by the two main annotators for the following features, explained in the Summary section: abusive spans, implicit/explicit abusiveness, target spans, and target type.
The annotation guidelines are published and available on Zenodo.
#### Who are the annotators?
The annotators were qualified professionals with university education and a demonstrably excellent knowledge of Catalan (minimum level C1 or equivalent).
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
We hope this dataset contributes to the development of language models in Catalan, a low-resource language.
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Language Technologies Unit at the Barcelona Supercomputing Center (langtech@URL)
This work was funded by the Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya within the framework of Projecte AINA.
### Licensing Information
This work is licensed under a Creative Commons Attribution Non-commercial 4.0 International.
### Citation Information
DOI: 10.57967/hf/1719
### Contributions
[N/A]
|
[
"# Dataset Card for InToxiCat",
"### Dataset Summary\n\nInToxiCat is a dataset for the detection of abusive language (defined by the aim to harm someone, individual, group, etc.) in Catalan, produced by the BSC LangTech unit. \n\nThe dataset consists of 29,809 sentences obtained from internet forums annotated as to whether or not they are abusive. The 6047 instances annotated as abusive are further annotated for the following features: abusive span, target span, target type and the implicit or explicit nature of the abusiveness in the message.\n\nThe dataset is split, in a balanced abusive/non-abusive distribution, into 23,847 training samples, 2981 validation samples, and 2981 test samples.",
"### Supported Tasks and Leaderboards\n\nAbusive Language Detection",
"### Languages\n\nThe dataset is in Catalan ('ca-ES').",
"## Dataset Structure",
"### Data Instances\n\nThree JSON files, one for each split.",
"### Example:\n\n<pre>\n \n {\n \"id\": \"9472844_16_0\",\n \"context\": \"Aquest tiu no té ni puta idea del que és una guerra ni del que s'espera d'un soldat.I què s'empatolla de despeses mèdiques. A veure si li passaré com al Hollande i sortiré la factura del seu perruquer (o taxidermista, no sé)\",\n \"sentence\": \"Aquest tiu no té ni puta idea del que és una guerra ni del que s'espera d'un soldat.I què s'empatolla de despeses mèdiques.\",\n \"topic\": \"Internacional\",\n \"key_words\": [\n \"puta\"\n ],\n \"annotation\": {\n \"is_abusive\": \"abusive\",\n \"abusiveness_agreement\": \"full\",\n \"context_needed\": \"no\",\n \"abusive_spans\": [\n [\n \"no té ni puta idea\",\n \"11:29\"\n ]\n ],\n \"target_spans\": [\n [\n \"Aquest tiu\",\n \"0:10\"\n ]\n ],\n \"target_type\": [\n \"INDIVIDUAL\"\n ],\n \"is_implicit\": \"yes\"\n }\n }\n \n</pre>",
"### Data Fields\n\n- ''id'' (a string feature): unique identifier of the instance.\n- ''context'' (a string feature): complete text message from the user surrounding the sentence (it can coincide totally or only partially with the sentence).\n- ''sentence'' (a string feature): text message where the abusiveness is evaluated.\n- ''topic'' (a string feature): category from Racó Català forums where the sentence comes from.\n- ''keywords'' (a list of strings): keywords used to select the candidate messages to annotate.\n- ''context_needed'' (a string feature): \"yes\" / \"no\" if all the annotators consulted / did not consult the context to decide on the sentence's abusiveness, \"maybe\" if there was not agreement about it.\n- ''is_abusive'' (a bool feature): \"abusive\" or \"not_abusive\".\n- ''abusiveness_agreement'' (a string feature): \"full\" if the two annotators agreed on the abusiveness/not-abusiveness of the sentence, and \"partial\" if the abusiveness had to be decided by a third annotator.\n- ''abusive_spans'' (a dictionary with field 'text' (list of strings) and 'index' (list of strings)): the sequence of words that attribute to the text's abusiveness.\n- ''is_implicit'' (a string): whether the abusiveness is explicit (contains a profanity, slur or threat) or implicit (does not contain a profanity or slur, but is likely to contain irony, sarcasm or similar resources).\n- ''target_spans'' (a dictionary with field 'text' (list of strings) and 'index' (list of strings)): if found in the message, the sequence(s) of words that refer to the target of the text's abusiveness.\n- ''target_type'' (a dictionary with field 'text' (list of strings) and 'index' (list of strings)): three possible categories. The categories are non-exclusive, as some targets may have a dual identity and more than one target may be detected in a single message. \n - ''individual'': a famous person, a named person or an unnamed person interacting in the conversation.\n - ''group'': considered to be a unit based on the same ethnicity, gender or sexual orientation, political affiliation, religious belief or something else.\n - ''other''; e.g. an organization, a situation, an event, or an issue.",
"### Data Splits\n\n* URL: 23847 examples\n* URL: 2981 examples\n* URL: 2981 examples",
"## Dataset Creation",
"### Curation Rationale\n\nWe created this dataset to contribute to the development of language models in Catalan, a low-resource language.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nThe sentences to be annotated were collected from Racó Català forums using a list of keywords (provided in Zenodo). The messages belong to different categories of Racó Català, specified in the \"topic\" field of the dataset. The length of the messages varies from one sentence to several sentences.",
"#### Who are the source language producers?\n\nAnonymized users from Racó Català forums.",
"### Annotations",
"#### Annotation process\n\nThe annotation process was divided into the following two tasks, carried out in sequential order:\n\nTask 1. The sentences (around 30.000) were annotated by two annotators as either abusive or not abusive. In case of ambiguity in the sentence, the annotators had the possibility to consult the context, i.e. the whole message of the user (if the sentence to be annotated was a segment contained in the message). In cases where annotators 1 and 2 disagreed about the abusiveness of a message, it was annotated by a third annotator. As a result, the sentences that are ultimately considered abusive are those that were initially annotated as abusive by both annotators or, in the case of an initial disagreement between them, those that were resolved as abusive by the third annotator.\n\nTask 2. The sentences annotated as abusive (6047) in Task 1 were further annotated by the two main annotators for the following features, explained in the Summary section: abusive spans, implicit/explicit abusiveness, target spans, and target type.\n\nThe annotation guidelines are published and available on Zenodo.",
"#### Who are the annotators?\n\nThe annotators were qualified professionals with university education and a demonstrably excellent knowledge of Catalan (minimum level C1 or equivalent).",
"### Personal and Sensitive Information\n\nNo personal or sensitive information included.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nWe hope this dataset contributes to the development of language models in Catalan, a low-resource language.",
"### Discussion of Biases\n\n[N/A]",
"### Other Known Limitations\n\n[N/A]",
"## Additional Information",
"### Dataset Curators\n\nLanguage Technologies Unit at the Barcelona Supercomputing Center (langtech@URL)\n\nThis work was funded by the Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya within the framework of Projecte AINA.",
"### Licensing Information\n\nThis work is licensed under a Creative Commons Attribution Non-commercial 4.0 International.\n\n\n\n in Catalan, produced by the BSC LangTech unit. \n\nThe dataset consists of 29,809 sentences obtained from internet forums annotated as to whether or not they are abusive. The 6047 instances annotated as abusive are further annotated for the following features: abusive span, target span, target type and the implicit or explicit nature of the abusiveness in the message.\n\nThe dataset is split, in a balanced abusive/non-abusive distribution, into 23,847 training samples, 2981 validation samples, and 2981 test samples.",
"### Supported Tasks and Leaderboards\n\nAbusive Language Detection",
"### Languages\n\nThe dataset is in Catalan ('ca-ES').",
"## Dataset Structure",
"### Data Instances\n\nThree JSON files, one for each split.",
"### Example:\n\n<pre>\n \n {\n \"id\": \"9472844_16_0\",\n \"context\": \"Aquest tiu no té ni puta idea del que és una guerra ni del que s'espera d'un soldat.I què s'empatolla de despeses mèdiques. A veure si li passaré com al Hollande i sortiré la factura del seu perruquer (o taxidermista, no sé)\",\n \"sentence\": \"Aquest tiu no té ni puta idea del que és una guerra ni del que s'espera d'un soldat.I què s'empatolla de despeses mèdiques.\",\n \"topic\": \"Internacional\",\n \"key_words\": [\n \"puta\"\n ],\n \"annotation\": {\n \"is_abusive\": \"abusive\",\n \"abusiveness_agreement\": \"full\",\n \"context_needed\": \"no\",\n \"abusive_spans\": [\n [\n \"no té ni puta idea\",\n \"11:29\"\n ]\n ],\n \"target_spans\": [\n [\n \"Aquest tiu\",\n \"0:10\"\n ]\n ],\n \"target_type\": [\n \"INDIVIDUAL\"\n ],\n \"is_implicit\": \"yes\"\n }\n }\n \n</pre>",
"### Data Fields\n\n- ''id'' (a string feature): unique identifier of the instance.\n- ''context'' (a string feature): complete text message from the user surrounding the sentence (it can coincide totally or only partially with the sentence).\n- ''sentence'' (a string feature): text message where the abusiveness is evaluated.\n- ''topic'' (a string feature): category from Racó Català forums where the sentence comes from.\n- ''keywords'' (a list of strings): keywords used to select the candidate messages to annotate.\n- ''context_needed'' (a string feature): \"yes\" / \"no\" if all the annotators consulted / did not consult the context to decide on the sentence's abusiveness, \"maybe\" if there was not agreement about it.\n- ''is_abusive'' (a bool feature): \"abusive\" or \"not_abusive\".\n- ''abusiveness_agreement'' (a string feature): \"full\" if the two annotators agreed on the abusiveness/not-abusiveness of the sentence, and \"partial\" if the abusiveness had to be decided by a third annotator.\n- ''abusive_spans'' (a dictionary with field 'text' (list of strings) and 'index' (list of strings)): the sequence of words that attribute to the text's abusiveness.\n- ''is_implicit'' (a string): whether the abusiveness is explicit (contains a profanity, slur or threat) or implicit (does not contain a profanity or slur, but is likely to contain irony, sarcasm or similar resources).\n- ''target_spans'' (a dictionary with field 'text' (list of strings) and 'index' (list of strings)): if found in the message, the sequence(s) of words that refer to the target of the text's abusiveness.\n- ''target_type'' (a dictionary with field 'text' (list of strings) and 'index' (list of strings)): three possible categories. The categories are non-exclusive, as some targets may have a dual identity and more than one target may be detected in a single message. \n - ''individual'': a famous person, a named person or an unnamed person interacting in the conversation.\n - ''group'': considered to be a unit based on the same ethnicity, gender or sexual orientation, political affiliation, religious belief or something else.\n - ''other''; e.g. an organization, a situation, an event, or an issue.",
"### Data Splits\n\n* URL: 23847 examples\n* URL: 2981 examples\n* URL: 2981 examples",
"## Dataset Creation",
"### Curation Rationale\n\nWe created this dataset to contribute to the development of language models in Catalan, a low-resource language.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nThe sentences to be annotated were collected from Racó Català forums using a list of keywords (provided in Zenodo). The messages belong to different categories of Racó Català, specified in the \"topic\" field of the dataset. The length of the messages varies from one sentence to several sentences.",
"#### Who are the source language producers?\n\nAnonymized users from Racó Català forums.",
"### Annotations",
"#### Annotation process\n\nThe annotation process was divided into the following two tasks, carried out in sequential order:\n\nTask 1. The sentences (around 30.000) were annotated by two annotators as either abusive or not abusive. In case of ambiguity in the sentence, the annotators had the possibility to consult the context, i.e. the whole message of the user (if the sentence to be annotated was a segment contained in the message). In cases where annotators 1 and 2 disagreed about the abusiveness of a message, it was annotated by a third annotator. As a result, the sentences that are ultimately considered abusive are those that were initially annotated as abusive by both annotators or, in the case of an initial disagreement between them, those that were resolved as abusive by the third annotator.\n\nTask 2. The sentences annotated as abusive (6047) in Task 1 were further annotated by the two main annotators for the following features, explained in the Summary section: abusive spans, implicit/explicit abusiveness, target spans, and target type.\n\nThe annotation guidelines are published and available on Zenodo.",
"#### Who are the annotators?\n\nThe annotators were qualified professionals with university education and a demonstrably excellent knowledge of Catalan (minimum level C1 or equivalent).",
"### Personal and Sensitive Information\n\nNo personal or sensitive information included.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nWe hope this dataset contributes to the development of language models in Catalan, a low-resource language.",
"### Discussion of Biases\n\n[N/A]",
"### Other Known Limitations\n\n[N/A]",
"## Additional Information",
"### Dataset Curators\n\nLanguage Technologies Unit at the Barcelona Supercomputing Center (langtech@URL)\n\nThis work was funded by the Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya within the framework of Projecte AINA.",
"### Licensing Information\n\nThis work is licensed under a Creative Commons Attribution Non-commercial 4.0 International.\n\n\n\n in Catalan, produced by the BSC LangTech unit. \n\nThe dataset consists of 29,809 sentences obtained from internet forums annotated as to whether or not they are abusive. The 6047 instances annotated as abusive are further annotated for the following features: abusive span, target span, target type and the implicit or explicit nature of the abusiveness in the message.\n\nThe dataset is split, in a balanced abusive/non-abusive distribution, into 23,847 training samples, 2981 validation samples, and 2981 test samples.### Supported Tasks and Leaderboards\n\nAbusive Language Detection### Languages\n\nThe dataset is in Catalan ('ca-ES').## Dataset Structure### Data Instances\n\nThree JSON files, one for each split.",
"passage: ### Example:\n\n<pre>\n \n {\n \"id\": \"9472844_16_0\",\n \"context\": \"Aquest tiu no té ni puta idea del que és una guerra ni del que s'espera d'un soldat.I què s'empatolla de despeses mèdiques. A veure si li passaré com al Hollande i sortiré la factura del seu perruquer (o taxidermista, no sé)\",\n \"sentence\": \"Aquest tiu no té ni puta idea del que és una guerra ni del que s'espera d'un soldat.I què s'empatolla de despeses mèdiques.\",\n \"topic\": \"Internacional\",\n \"key_words\": [\n \"puta\"\n ],\n \"annotation\": {\n \"is_abusive\": \"abusive\",\n \"abusiveness_agreement\": \"full\",\n \"context_needed\": \"no\",\n \"abusive_spans\": [\n [\n \"no té ni puta idea\",\n \"11:29\"\n ]\n ],\n \"target_spans\": [\n [\n \"Aquest tiu\",\n \"0:10\"\n ]\n ],\n \"target_type\": [\n \"INDIVIDUAL\"\n ],\n \"is_implicit\": \"yes\"\n }\n }\n \n</pre>",
"passage: ### Data Fields\n\n- ''id'' (a string feature): unique identifier of the instance.\n- ''context'' (a string feature): complete text message from the user surrounding the sentence (it can coincide totally or only partially with the sentence).\n- ''sentence'' (a string feature): text message where the abusiveness is evaluated.\n- ''topic'' (a string feature): category from Racó Català forums where the sentence comes from.\n- ''keywords'' (a list of strings): keywords used to select the candidate messages to annotate.\n- ''context_needed'' (a string feature): \"yes\" / \"no\" if all the annotators consulted / did not consult the context to decide on the sentence's abusiveness, \"maybe\" if there was not agreement about it.\n- ''is_abusive'' (a bool feature): \"abusive\" or \"not_abusive\".\n- ''abusiveness_agreement'' (a string feature): \"full\" if the two annotators agreed on the abusiveness/not-abusiveness of the sentence, and \"partial\" if the abusiveness had to be decided by a third annotator.\n- ''abusive_spans'' (a dictionary with field 'text' (list of strings) and 'index' (list of strings)): the sequence of words that attribute to the text's abusiveness.\n- ''is_implicit'' (a string): whether the abusiveness is explicit (contains a profanity, slur or threat) or implicit (does not contain a profanity or slur, but is likely to contain irony, sarcasm or similar resources).\n- ''target_spans'' (a dictionary with field 'text' (list of strings) and 'index' (list of strings)): if found in the message, the sequence(s) of words that refer to the target of the text's abusiveness.\n- ''target_type'' (a dictionary with field 'text' (list of strings) and 'index' (list of strings)): three possible categories. The categories are non-exclusive, as some targets may have a dual identity and more than one target may be detected in a single message. \n - ''individual'': a famous person, a named person or an unnamed person interacting in the conversation.\n - ''group'': considered to be a unit based on the same ethnicity, gender or sexual orientation, political affiliation, religious belief or something else.\n - ''other''; e.g. an organization, a situation, an event, or an issue.### Data Splits\n\n* URL: 23847 examples\n* URL: 2981 examples\n* URL: 2981 examples## Dataset Creation### Curation Rationale\n\nWe created this dataset to contribute to the development of language models in Catalan, a low-resource language.### Source Data#### Initial Data Collection and Normalization\n\nThe sentences to be annotated were collected from Racó Català forums using a list of keywords (provided in Zenodo). The messages belong to different categories of Racó Català, specified in the \"topic\" field of the dataset. The length of the messages varies from one sentence to several sentences.#### Who are the source language producers?\n\nAnonymized users from Racó Català forums.### Annotations#### Annotation process\n\nThe annotation process was divided into the following two tasks, carried out in sequential order:\n\nTask 1. The sentences (around 30.000) were annotated by two annotators as either abusive or not abusive. In case of ambiguity in the sentence, the annotators had the possibility to consult the context, i.e. the whole message of the user (if the sentence to be annotated was a segment contained in the message). In cases where annotators 1 and 2 disagreed about the abusiveness of a message, it was annotated by a third annotator. 
As a result, the sentences that are ultimately considered abusive are those that were initially annotated as abusive by both annotators or, in the case of an initial disagreement between them, those that were resolved as abusive by the third annotator.\n\nTask 2. The sentences annotated as abusive (6047) in Task 1 were further annotated by the two main annotators for the following features, explained in the Summary section: abusive spans, implicit/explicit abusiveness, target spans, and target type.\n\nThe annotation guidelines are published and available on Zenodo.#### Who are the annotators?\n\nThe annotators were qualified professionals with university education and a demonstrably excellent knowledge of Catalan (minimum level C1 or equivalent).### Personal and Sensitive Information\n\nNo personal or sensitive information included.## Considerations for Using the Data"
] |
ba8f907aa3ddfc066e89c25d7a01a2152551ef9b
|
# Dataset Card for "autotree_automl_100000_MagicTelescope_sgosdt_l256_dim10_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
yzhuang/autotree_automl_100000_MagicTelescope_sgosdt_l256_dim10_d3_sd0
|
[
"region:us"
] |
2023-09-08T15:50:13+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "input_x", "sequence": {"sequence": "float32"}}, {"name": "input_y", "sequence": {"sequence": "float32"}}, {"name": "input_y_clean", "sequence": {"sequence": "float32"}}, {"name": "rtg", "sequence": "float64"}, {"name": "status", "sequence": {"sequence": "float32"}}, {"name": "split_threshold", "sequence": {"sequence": "float32"}}, {"name": "split_dimension", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 2364400000, "num_examples": 100000}, {"name": "validation", "num_bytes": 236440000, "num_examples": 10000}], "download_size": 1048362149, "dataset_size": 2600840000}}
|
2023-09-08T15:50:49+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "autotree_automl_100000_MagicTelescope_sgosdt_l256_dim10_d3_sd0"
More Information needed
|
[
"# Dataset Card for \"autotree_automl_100000_MagicTelescope_sgosdt_l256_dim10_d3_sd0\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"autotree_automl_100000_MagicTelescope_sgosdt_l256_dim10_d3_sd0\"\n\nMore Information needed"
] |
[
6,
39
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"autotree_automl_100000_MagicTelescope_sgosdt_l256_dim10_d3_sd0\"\n\nMore Information needed"
] |
47b6fa80eb6fb6e1875092ed39abf1ffcc152ec4
|

# 📔 **DATASET**
| **Dataset** | Class | Number of Questions |
| ------- | ----------------------------------------------------------------- | ------------------------ |
| **FLAN_CoT(zs)** | Reasoning 、 MATH 、 ScienceQA 、 Commonsense | 8000 |
| **Prm800k** | Reasoning 、 MATH | 6713 |
| **ScienceQA** | ScienceQA | 5177 |
| **SciBench** | ScienceQA | 695 |
| **ReClor** | Reasoning | 1624 |
| **TheoremQA** | Commonsense 、 MATH 、 ScienceQA | 800 |
| **OpenBookQA** | Text_Understanding 、 Reasoning 、 Commonsense 、 ScienceQA | 5957 |
| **ARB** | Reasoning 、 MATH 、 ScienceQA 、 Commonsense 、 Text_Understanding | 605 |
| **Openassistant-guanaco** | Commonsense 、 Text_Understanding 、 Reasoning | 802 |
| **SAT** | Text_Understanding 、 Reasoning 、 MATH | 426 |
| **GRE、GMAT** | Reasoning 、 MATH | 254 |
| **AMC、AIME** | Reasoning 、 MATH | 1000 |
| **LSAT** | Reasoning 、 LAW | 1009 |
# 📌 **Method**
## *Improving the dataset*
Based on the content of the "Textbooks are all you need" paper, we want to try fine-tuning using advanced questions.
## *Dataset Format Definition*
Use "instruction、input、output" tend to lean towards guided datasets. In this format, each sample includes an instruction, an input, and an expected output. The instruction provides guidance on how to process the input to generate the output. This format of dataset is often used to train models to perform specific tasks, as they explicitly indicate the operations the model should perform.
```
{
"input": "",
"output": "",
"instruction": ""
}
```
- ### [FLAN_V2 COT(ZS)](https://huggingface.co/datasets/conceptofmind/cot_submix_original/tree/main)
We only extract the 'zs_opt' from COT and categorize each task.
- ### SAT、GRE、GMAT、AMC、AIME、LSAT
We will configure the input for datasets such as GRE, GMAT, SAT, etc. as "Please read the question and options carefully, then select the most appropriate answer and provide the corresponding explanation." Meanwhile, for the math input, it will be set as "Please provide the answer along with a corresponding explanation based on the given question." Moreover, the questions will be arranged in ascending order of difficulty. This is done because, according to the ORCA paper, they started training the model using GPT-3.5 and later transitioned to GPT-4; to prevent the student model from acquiring knowledge beyond its scope and thereby delivering suboptimal results, a progressive learning strategy was utilized. This approach was found to be effective; therefore, for datasets like AMC and AIME, which have various levels of difficulty, we have arranged the questions in a way that embodies this gradual, progressive learning technique.
Furthermore, their question and options are combined to form the instruction, and the label and solution are merged to become the output.
Lastly, for the LSAT dataset, since it doesn't involve step-by-step processes, the passage is transformed into the instruction, while the combination of the question and options serves as the input, and the label represents the output (see the illustrative sketch after this list).
- ### [OTHER](https://github.com/arielnlee/Platypus/tree/main/data_pipeline)
Prm800k, ScienceQA, SciBench, ReClor, TheoremQA, OpenBookQA, ARB, and OpenAssistant-Guanaco datasets adopt the same format as Platypus.
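As an illustration of the SAT/GRE/GMAT and LSAT conversions described above, a minimal sketch could look like the following. Note that the field names (`question`, `options`, `label`, `solution`, `passage`) are assumptions for illustration only; the actual source files may use different names.
```py
# Illustrative sketch only — the field names are assumed, not taken from the real source files.

def convert_gre_gmat_sat(item):
    # question + options -> instruction; label + solution -> output
    return {
        "input": "Please read the question and options carefully, then select the most "
                 "appropriate answer and provide the corresponding explanation.",
        "instruction": item["question"] + "\n" + "\n".join(item["options"]),
        "output": f'{item["label"]},solution:{item["solution"]}',
    }

def convert_lsat(item):
    # LSAT has no step-by-step solution: passage -> instruction,
    # question + options -> input, label -> output
    return {
        "input": item["question"] + "\n" + "\n".join(item["options"]),
        "instruction": item["passage"],
        "output": item["label"],
    }
```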
## *Sampling Algorithms*
Since the flan_v2 cot dataset includes tasks like:
- cot_esnli
- cot_strategyqa
- cot_qasc
- stream_qed
- cot_gsm8k
- cot_ecqa
- cot_creak
- stream_aqua
To ensure this dataset contains diverse high-quality data, we first select zs_opt questions. Then, we keep only the questions whose output length is at least the average output length; this step aims to help the model learn richer reasoning steps. After that, we perform stratified sampling. Initially, we attempted stratified sampling before the length-based filtering, but we found that this approach resulted in varying sample sizes, making it challenging to reproduce. Thus, we decided to first filter by length and then perform stratified sampling.
```py
import json
import random

with open("cot_ORIGINAL.json", "r") as f:
    abc = json.load(f)

# --- part1 ---
zsopt_data = []  # "zs_opt"
for i in abc:
    if i["template_type"] == "zs_opt":
        zsopt_data.append(i)

# --- part2 ---
output_lengths = [len(i["targets"]) for i in zsopt_data]
average_length = sum(output_lengths) / len(output_lengths)  # average length

filtered_data = []
for a in zsopt_data:
    if len(a["targets"]) >= average_length:
        filtered_data.append(a)  # output length needs to be >= average_length

class_counts = {}  # Count the number of samples for each class
for a in filtered_data:
    task_name = a["task_name"]
    if task_name in class_counts:
        class_counts[task_name] += 1
    else:
        class_counts[task_name] = 1

# --- part3 ---
total_samples = 8000  # we plan to select a total of 8000 samples

sample_ratios = {}
for task_name, count in class_counts.items():
    sample_ratios[task_name] = count / len(filtered_data)

sample_sizes = {}
for task_name, sample_ratio in sample_ratios.items():
    sample_sizes[task_name] = round(sample_ratio * total_samples)

stratified_samples = {}  # Perform stratified sampling for each class
for task_name, sample_size in sample_sizes.items():
    class_samples = []
    for data in filtered_data:
        if data["task_name"] == task_name:
            class_samples.append(data)
    selected_samples = random.sample(class_samples, sample_size)
    stratified_samples[task_name] = selected_samples

final_samples = []  # Convert to the specified format
for task_name, samples in stratified_samples.items():
    for sample in samples:
        final_samples.append(
            {
                "input": "",  # use ""
                "output": sample["targets"],  # output
                "instruction": sample["inputs"],  # question
            }
        )

with open("cot_change.json", "w") as f:
    json.dump(final_samples, f, indent=2)
```
LSAT arranged according to LEVEL
```py
import json

with open("math-json.json", "r", encoding="utf-8") as f:
    data_list = json.load(f)

sorted_data = sorted(data_list, key=lambda x: x["other"]["level"])

output_data = [
    {
        "input": "Please provide the answer along with a corresponding explanation based on the given question.",
        "output": f"{item['answer']},solution:{item['other']['solution']}",
        "instruction": item["question"],
    }
    for item in sorted_data
]

with open("math_convert.json", "w", encoding="utf-8") as output_file:
    json.dump(output_data, output_file, ensure_ascii=False, indent=4)
```
|
huangyt/FINETUNE3
|
[
"license:openrail",
"region:us"
] |
2023-09-08T15:55:22+00:00
|
{"license": "openrail"}
|
2023-09-11T12:42:55+00:00
|
[] |
[] |
TAGS
#license-openrail #region-us
|
!Change can be sunshine if you let it in..png
DATASET
=======
Dataset: FLAN\_CoT(zs), Class: Reasoning 、 MATH 、 ScienceQA 、 Commonsense, Number of Questions: 8000
Dataset: Prm800k, Class: Reasoning 、 MATH, Number of Questions: 6713
Dataset: ScienceQA, Class: ScienceQA, Number of Questions: 5177
Dataset: SciBench, Class: ScienceQA, Number of Questions: 695
Dataset: ReClor, Class: Reasoning, Number of Questions: 1624
Dataset: TheoremQA, Class: Commonsense 、 MATH 、 ScienceQA, Number of Questions: 800
Dataset: OpenBookQA, Class: Text\_Understanding 、 Reasoning 、 Commonsense 、 ScienceQA, Number of Questions: 5957
Dataset: ARB, Class: Reasoning 、 MATH 、 ScienceQA 、 Commonsense 、 Text\_Understanding, Number of Questions: 605
Dataset: Openassistant-guanaco, Class: Commonsense 、 Text\_Understanding 、 Reasoning, Number of Questions: 802
Dataset: SAT, Class: Text\_Understanding 、 Reasoning 、 MATH, Number of Questions: 426
Dataset: GRE、GMAT, Class: Reasoning 、 MATH, Number of Questions: 254
Dataset: AMC、AIME, Class: Reasoning 、 MATH, Number of Questions: 1000
Dataset: LSAT, Class: Reasoning 、 LAW, Number of Questions: 1009
Method
======
*Improving the dataset*
-----------------------
Based on the content of the "Textbooks are all you need" paper, We want to try fine-tuning using advanced questions.
*Dataset Format Definition*
---------------------------
Use "instruction、input、output" tend to lean towards guided datasets. In this format, each sample includes an instruction, an input, and an expected output. The instruction provides guidance on how to process the input to generate the output. This format of dataset is often used to train models to perform specific tasks, as they explicitly indicate the operations the model should perform.
* ### FLAN\_V2 COT(ZS)
We only extract the 'zs\_opt' from COT and categorize each task.
* ### SAT、GRE、GMAT、AMC、AIME、LSAT
We will configure the input for datasets such as GRE, GMAT, SAT etc. as "Please read the question and options carefully, then select the most appropriate answer and provide the corresponding explanation." Meanwhile, for the math input, it will be set as "Please provide the answer along with a corresponding explanation based on the given question." Moreover, the questions will be arranged in ascending order of difficulty levels. This is done because, according to the ORCA paper, they started training the model using GPT-3.5 and later transitioned to GPT-4. To avoid the student model from acquiring knowledge beyond its scope and thereby delivering suboptimal results, a progressive learning strategy was utilized. This approach was found to be effective, therefore, in datasets like AMC, AIME which have various levels of difficulty, I have arranged them in a way that embodies this gradual, progressive learning technique.
Furthermore, their question and options are combined to form the instruction, and the label and solution are merged to become the output.
Lastly, for the LSAT dataset, since it doesn't involve step-by-step processes, the passage is transformed into instruction, while the combination of the question and options serves as the input, and the label represents the output.
* ### OTHER
Prm800k, ScienceQA, SciBench, ReClor, TheoremQA, OpenBookQA, ARB, and OpenAssistant-Guanaco datasets adopt the same format as Platypus.
*Sampling Algorithms*
---------------------
Since the flan\_v2 cot dataset includes tasks like:
* cot\_esnli
* cot\_strategyqa
* cot\_qasc
* stream\_qed
* cot\_gsm8k
* cot\_ecqa
* cot\_creak
* stream\_aqua
To ensure this dataset contains diverse high-quality data, we first select zs\_opt questions. Then, we keep only the questions whose output length is at least the average output length; this step aims to help the model learn richer reasoning steps. After that, we perform stratified sampling. Initially, we attempted stratified sampling before the length-based filtering, but we found that this approach resulted in varying sample sizes, making it challenging to reproduce. Thus, we decided to first filter by length and then perform stratified sampling.
LSAT arranged according to LEVEL
|
[
"### FLAN\\_V2 COT(ZS)\n\n\nWe only extract the 'zs\\_opt' from COT and categorize each task.\n* ### SAT、GRE、GMAT、AMC、AIME、LSAT\n\n\nWe will configure the input for datasets such as GRE, GMAT, SAT etc. as \"Please read the question and options carefully, then select the most appropriate answer and provide the corresponding explanation.\" Meanwhile, for the math input, it will be set as \"Please provide the answer along with a corresponding explanation based on the given question.\" Moreover, the questions will be arranged in ascending order of difficulty levels. This is done because, according to the ORCA paper, they started training the model using GPT-3.5 and later transitioned to GPT-4. To avoid the student model from acquiring knowledge beyond its scope and thereby delivering suboptimal results, a progressive learning strategy was utilized. This approach was found to be effective, therefore, in datasets like AMC, AIME which have various levels of difficulty, I have arranged them in a way that embodies this gradual, progressive learning technique.\n\n\nFurthermore, their question and options are combined to form the instruction, and the label and solution are merged to become the output.\n\n\nLastly, for the LSAT dataset, since it doesn't involve step-by-step processes, the passage is transformed into instruction, while the combination of the question and options serves as the input, and the label represents the output.\n* ### OTHER\n\n\nPrm800k, ScienceQA, SciBench, ReClor, TheoremQA, OpenBookQA, ARB, and OpenAssistant-Guanaco datasets adopt the same format as Platypus.\n\n\n*Sampling Algorithms*\n---------------------\n\n\nSince the flan\\_v2 cot dataset includes tasks like:\n\n\n* cot\\_esnli\n* cot\\_strategyqa\n* cot\\_qasc\n* stream\\_qed\n* cot\\_gsm8k\n* cot\\_ecqa\n* cot\\_creak\n* stream\\_aqua\n\n\nTo ensure this dataset contains diverse high-quality data, we first select zs\\_opt questions. Then, we filter out questions with output lengths exceeding the average length. This step aims to help the model learn richer reasoning steps. After that, we perform stratified sampling. Initially, we attempted stratified sampling before the length-based filtering, but we found that this approach resulted in varying sample sizes, making it challenging to reproduce. Thus, we decided to first filter by length and then perform stratified sampling.\n\n\nLSAT arranged according to LEVEL"
] |
[
"TAGS\n#license-openrail #region-us \n",
"### FLAN\\_V2 COT(ZS)\n\n\nWe only extract the 'zs\\_opt' from COT and categorize each task.\n* ### SAT、GRE、GMAT、AMC、AIME、LSAT\n\n\nWe will configure the input for datasets such as GRE, GMAT, SAT etc. as \"Please read the question and options carefully, then select the most appropriate answer and provide the corresponding explanation.\" Meanwhile, for the math input, it will be set as \"Please provide the answer along with a corresponding explanation based on the given question.\" Moreover, the questions will be arranged in ascending order of difficulty levels. This is done because, according to the ORCA paper, they started training the model using GPT-3.5 and later transitioned to GPT-4. To avoid the student model from acquiring knowledge beyond its scope and thereby delivering suboptimal results, a progressive learning strategy was utilized. This approach was found to be effective, therefore, in datasets like AMC, AIME which have various levels of difficulty, I have arranged them in a way that embodies this gradual, progressive learning technique.\n\n\nFurthermore, their question and options are combined to form the instruction, and the label and solution are merged to become the output.\n\n\nLastly, for the LSAT dataset, since it doesn't involve step-by-step processes, the passage is transformed into instruction, while the combination of the question and options serves as the input, and the label represents the output.\n* ### OTHER\n\n\nPrm800k, ScienceQA, SciBench, ReClor, TheoremQA, OpenBookQA, ARB, and OpenAssistant-Guanaco datasets adopt the same format as Platypus.\n\n\n*Sampling Algorithms*\n---------------------\n\n\nSince the flan\\_v2 cot dataset includes tasks like:\n\n\n* cot\\_esnli\n* cot\\_strategyqa\n* cot\\_qasc\n* stream\\_qed\n* cot\\_gsm8k\n* cot\\_ecqa\n* cot\\_creak\n* stream\\_aqua\n\n\nTo ensure this dataset contains diverse high-quality data, we first select zs\\_opt questions. Then, we filter out questions with output lengths exceeding the average length. This step aims to help the model learn richer reasoning steps. After that, we perform stratified sampling. Initially, we attempted stratified sampling before the length-based filtering, but we found that this approach resulted in varying sample sizes, making it challenging to reproduce. Thus, we decided to first filter by length and then perform stratified sampling.\n\n\nLSAT arranged according to LEVEL"
] |
[
12,
611
] |
[
"passage: TAGS\n#license-openrail #region-us \n"
] |
518fcd2c8b89917c7696770672688217a2eabf88
|
Dataset Classes
* joy:0
* sad:1
* anger:2
* disgust:3
* fear:4
* surprise:5
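A minimal loading sketch with the Hugging Face `datasets` library (the label mapping simply mirrors the class list above):
```python
from datasets import load_dataset

# Load the train/test splits of SeyedAli/Persian-Text-Emotion
ds = load_dataset("SeyedAli/Persian-Text-Emotion")

# Map the integer labels back to the class names listed above
label2name = {0: "joy", 1: "sad", 2: "anger", 3: "disgust", 4: "fear", 5: "surprise"}

example = ds["train"][0]
print(example["text"], "->", label2name[example["label"]])
```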
|
SeyedAli/Persian-Text-Emotion
|
[
"task_categories:text-classification",
"language:fa",
"license:mit",
"region:us"
] |
2023-09-08T16:28:40+00:00
|
{"language": ["fa"], "license": "mit", "task_categories": ["text-classification"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1612793, "num_examples": 5558}, {"name": "test", "num_bytes": 409414, "num_examples": 1390}], "download_size": 1143196, "dataset_size": 2022207}}
|
2023-09-09T14:44:06+00:00
|
[] |
[
"fa"
] |
TAGS
#task_categories-text-classification #language-Persian #license-mit #region-us
|
Dataset Classes
* joy:0
* sad:1
* anger:2
* disgust:3
* fear:4
* surprise:5
|
[] |
[
"TAGS\n#task_categories-text-classification #language-Persian #license-mit #region-us \n"
] |
[
27
] |
[
"passage: TAGS\n#task_categories-text-classification #language-Persian #license-mit #region-us \n"
] |
4843707dfcfe06fe806988f22d472ffef10f88df
|
# Dataset Card for "autotree_pmlb_100000_twonorm_sgosdt_l256_dim10_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
yzhuang/autotree_pmlb_100000_twonorm_sgosdt_l256_dim10_d3_sd0
|
[
"region:us"
] |
2023-09-08T16:38:26+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "input_x", "sequence": {"sequence": "float32"}}, {"name": "input_y", "sequence": {"sequence": "float32"}}, {"name": "input_y_clean", "sequence": {"sequence": "float32"}}, {"name": "rtg", "sequence": "float64"}, {"name": "status", "sequence": {"sequence": "float32"}}, {"name": "split_threshold", "sequence": {"sequence": "float32"}}, {"name": "split_dimension", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 2364400000, "num_examples": 100000}, {"name": "validation", "num_bytes": 236440000, "num_examples": 10000}], "download_size": 809891483, "dataset_size": 2600840000}}
|
2023-09-08T16:38:59+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "autotree_pmlb_100000_twonorm_sgosdt_l256_dim10_d3_sd0"
More Information needed
|
[
"# Dataset Card for \"autotree_pmlb_100000_twonorm_sgosdt_l256_dim10_d3_sd0\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"autotree_pmlb_100000_twonorm_sgosdt_l256_dim10_d3_sd0\"\n\nMore Information needed"
] |
[
6,
37
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"autotree_pmlb_100000_twonorm_sgosdt_l256_dim10_d3_sd0\"\n\nMore Information needed"
] |
7254789e183b1f79ba750b2fefc51f9b4a60d433
|
# Dataset of rurina/ルリナ/야청 (Pokémon)
This is the dataset of rurina/ルリナ/야청 (Pokémon), containing 500 images and their tags.
The core tags of this character are `dark-skinned_female, dark_skin, multicolored_hair, long_hair, two-tone_hair, black_hair, blue_hair, blue_eyes, earrings, hoop_earrings, hair_bun, single_hair_bun, breasts, eyeshadow`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 710.66 MiB | [Download](https://huggingface.co/datasets/CyberHarem/rurina_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 375.60 MiB | [Download](https://huggingface.co/datasets/CyberHarem/rurina_pokemon/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1296 | 827.33 MiB | [Download](https://huggingface.co/datasets/CyberHarem/rurina_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 617.09 MiB | [Download](https://huggingface.co/datasets/CyberHarem/rurina_pokemon/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1296 | 1.17 GiB | [Download](https://huggingface.co/datasets/CyberHarem/rurina_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/rurina_pokemon',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 8 |  |  |  |  |  | 1girl, armlet, belly_chain, crop_top, holding_poke_ball, midriff, navel, poke_ball_(basic), single_glove, solo, cowboy_shot, looking_at_viewer, necklace, tankini, closed_mouth, hand_on_hip, smile, bike_shorts, blue_eyeshadow, partially_fingerless_gloves |
| 1 | 6 |  |  |  |  |  | 1girl, armlet, belly_chain, crop_top, holding_poke_ball, looking_at_viewer, makeup, midriff, navel, necklace, poke_ball_(basic), single_glove, solo, cowboy_shot, parted_lips, tankini, very_long_hair, bike_shorts, smile |
| 2 | 6 |  |  |  |  |  | 1girl, armlet, belly_chain, crop_top, full_body, holding_poke_ball, looking_at_viewer, midriff, poke_ball_(basic), sandals, simple_background, single_glove, solo, tankini, white_background, makeup, navel, necklace, shorts, hand_on_hip, standing, parted_lips, sportswear |
| 3 | 5 |  |  |  |  |  | 1girl, armlet, belly_chain, crop_top, holding_poke_ball, looking_at_viewer, makeup, medium_breasts, midriff, navel, necklace, poke_ball_(basic), shorts, single_glove, solo, tankini, armpits, sportswear, arms_up, very_long_hair, arm_up, blue_background, cowboy_shot, parted_lips |
| 4 | 10 |  |  |  |  |  | 1girl, holding_poke_ball, poke_ball_(basic), armlet, looking_at_viewer, necklace, solo, gloves, upper_body, closed_mouth, crop_top, blue_eyeshadow |
| 5 | 5 |  |  |  |  |  | 1girl, blue_eyeshadow, jewelry, simple_background, solo, bare_shoulders, closed_mouth, eyelashes, looking_at_viewer, upper_body, white_background, armlet, from_side |
| 6 | 9 |  |  |  |  |  | 1girl, blue_eyeshadow, jewelry, off-shoulder_shirt, solo, alternate_costume, bare_shoulders, hat, looking_at_viewer, sidelocks, eyelashes, smile, closed_mouth, black_headwear, blue_shirt, collarbone, grey_headwear, holding, long_sleeves, food |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | armlet | belly_chain | crop_top | holding_poke_ball | midriff | navel | poke_ball_(basic) | single_glove | solo | cowboy_shot | looking_at_viewer | necklace | tankini | closed_mouth | hand_on_hip | smile | bike_shorts | blue_eyeshadow | partially_fingerless_gloves | makeup | parted_lips | very_long_hair | full_body | sandals | simple_background | white_background | shorts | standing | sportswear | medium_breasts | armpits | arms_up | arm_up | blue_background | gloves | upper_body | jewelry | bare_shoulders | eyelashes | from_side | off-shoulder_shirt | alternate_costume | hat | sidelocks | black_headwear | blue_shirt | collarbone | grey_headwear | holding | long_sleeves | food |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------|:--------------|:-----------|:--------------------|:----------|:--------|:--------------------|:---------------|:-------|:--------------|:--------------------|:-----------|:----------|:---------------|:--------------|:--------|:--------------|:-----------------|:------------------------------|:---------|:--------------|:-----------------|:------------|:----------|:--------------------|:-------------------|:---------|:-----------|:-------------|:-----------------|:----------|:----------|:---------|:------------------|:---------|:-------------|:----------|:-----------------|:------------|:------------|:---------------------|:--------------------|:------|:------------|:-----------------|:-------------|:-------------|:----------------|:----------|:---------------|:-------|
| 0 | 8 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | X | X | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | | X | X | X | | X | | | | | X | X | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | X | X | X | | | | | X | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | |
| 4 | 10 |  |  |  |  |  | X | X | | X | X | | | X | | X | | X | X | | X | | | | X | | | | | | | | | | | | | | | | | X | X | | | | | | | | | | | | | | | |
| 5 | 5 |  |  |  |  |  | X | X | | | | | | | | X | | X | | | X | | | | X | | | | | | | X | X | | | | | | | | | | X | X | X | X | X | | | | | | | | | | | |
| 6 | 9 |  |  |  |  |  | X | | | | | | | | | X | | X | | | X | | X | | X | | | | | | | | | | | | | | | | | | | X | X | X | | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/rurina_pokemon
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-08T16:47:06+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-16T16:30:06+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of rurina/ルリナ/야청 (Pokémon)
==================================
This is the dataset of rurina/ルリナ/야청 (Pokémon), containing 500 images and their tags.
The core tags of this character are 'dark-skinned\_female, dark\_skin, multicolored\_hair, long\_hair, two-tone\_hair, black\_hair, blue\_hair, blue\_eyes, earrings, hoop\_earrings, hair\_bun, single\_hair\_bun, breasts, eyeshadow', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
4d93629b0d74bcf0d3887c5ab2e43de33b967c06
|
# Dataset Card for "nyannyan_blip2_captions"
A dataset consisting of photos of my beloved cat, Nyannyan, with captions generated using [Salesforce/blip2-opt-2.7b](https://huggingface.co/Salesforce/blip2-opt-2.7b).
The motivation for this dataset is to serve as a sample dataset for fine-tuning unconditional or text-to-image models.
All images are photographed using a cellphone camera between the years 2013 and 2020, and saved in JPEG format.
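As a quick usage sketch (assuming the Hugging Face `datasets` library), the image/caption pairs can be loaded as follows:
```python
from datasets import load_dataset

# Load the image/caption pairs (this dataset only has a train split)
ds = load_dataset("STomoya/nyannyan_blip2_captions", split="train")

sample = ds[0]
sample["image"].show()   # PIL image of the cat
print(sample["text"])    # BLIP-2 generated caption
```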
## License
Creative Commons Attribution Non Commercial 4.0
## Author
[STomoya](https://huggingface.co/STomoya)
|
STomoya/nyannyan_blip2_captions
|
[
"task_categories:text-to-image",
"task_categories:unconditional-image-generation",
"size_categories:n<1K",
"language:en",
"license:cc-by-nc-4.0",
"region:us"
] |
2023-09-08T17:04:16+00:00
|
{"language": ["en"], "license": "cc-by-nc-4.0", "size_categories": ["n<1K"], "task_categories": ["text-to-image", "unconditional-image-generation"], "pretty_name": "Nyannyan", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 534134738, "num_examples": 296}], "download_size": 533953151, "dataset_size": 534134738}}
|
2023-09-08T17:39:01+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-text-to-image #task_categories-unconditional-image-generation #size_categories-n<1K #language-English #license-cc-by-nc-4.0 #region-us
|
# Dataset Card for "nyannyan_blip2_captions"
A dataset consisting of photos of my beloved cat, Nyannyan, with captions generated using Salesforce/blip2-opt-2.7b.
The motivation for this dataset is to serve as a sample dataset for fine-tuning unconditional or text-to-image models.
All images are photographed using a cellphone camera between the years 2013 and 2020, and saved in JPEG format.
## License
Creative Commons Attribution Non Commercial 4.0
## Author
STomoya
|
[
"# Dataset Card for \"nyannyan_blip2_captions\"\n\nA dataset consisting photos of my beloved cat named Nyannyan, with captions generated using Salesforce/blip2-opt-2.7b.\nMotivation of this dataset is to be used as a sample dataset for fine-tuning unconditional or text-to-image models.\nAll images are photographed using a cellphone camera between the years 2013 and 2020, and saved in JPEG format.",
"## License\n\nCreative Commons Attribution Non Commercial 4.0",
"## Author\n\nSTomoya"
] |
[
"TAGS\n#task_categories-text-to-image #task_categories-unconditional-image-generation #size_categories-n<1K #language-English #license-cc-by-nc-4.0 #region-us \n",
"# Dataset Card for \"nyannyan_blip2_captions\"\n\nA dataset consisting photos of my beloved cat named Nyannyan, with captions generated using Salesforce/blip2-opt-2.7b.\nMotivation of this dataset is to be used as a sample dataset for fine-tuning unconditional or text-to-image models.\nAll images are photographed using a cellphone camera between the years 2013 and 2020, and saved in JPEG format.",
"## License\n\nCreative Commons Attribution Non Commercial 4.0",
"## Author\n\nSTomoya"
] |
[
58,
107,
10,
5
] |
[
"passage: TAGS\n#task_categories-text-to-image #task_categories-unconditional-image-generation #size_categories-n<1K #language-English #license-cc-by-nc-4.0 #region-us \n# Dataset Card for \"nyannyan_blip2_captions\"\n\nA dataset consisting photos of my beloved cat named Nyannyan, with captions generated using Salesforce/blip2-opt-2.7b.\nMotivation of this dataset is to be used as a sample dataset for fine-tuning unconditional or text-to-image models.\nAll images are photographed using a cellphone camera between the years 2013 and 2020, and saved in JPEG format.## License\n\nCreative Commons Attribution Non Commercial 4.0## Author\n\nSTomoya"
] |
1ea1e02a64c1574771bf255918c36fa5c03145e5
|
Dataset Classes
* negative :0
* positive :1
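A minimal sketch (assuming the Hugging Face `datasets` library) for checking the class balance of the two labels:
```python
from collections import Counter

from datasets import load_dataset

# Load the train/test splits of SeyedAli/Persian-Text-Sentiment
ds = load_dataset("SeyedAli/Persian-Text-Sentiment")

# Count negative (0) and positive (1) examples in each split
for split in ("train", "test"):
    counts = Counter(ds[split]["label"])
    print(split, {"negative": counts.get(0, 0), "positive": counts.get(1, 0)})
```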
|
SeyedAli/Persian-Text-Sentiment
|
[
"task_categories:text-classification",
"language:fa",
"license:mit",
"region:us"
] |
2023-09-08T17:09:45+00:00
|
{"language": ["fa"], "license": "mit", "task_categories": ["text-classification"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 10222986, "num_examples": 55852}, {"name": "test", "num_bytes": 2575303, "num_examples": 13964}], "download_size": 6076096, "dataset_size": 12798289}}
|
2023-09-09T14:42:06+00:00
|
[] |
[
"fa"
] |
TAGS
#task_categories-text-classification #language-Persian #license-mit #region-us
|
Dataset Classes
* negative :0
* positive :1
|
[] |
[
"TAGS\n#task_categories-text-classification #language-Persian #license-mit #region-us \n"
] |
[
27
] |
[
"passage: TAGS\n#task_categories-text-classification #language-Persian #license-mit #region-us \n"
] |
e345d0871df34f7e44c6f44438f5ed93a0811eb5
|
# Dataset Card for "spider-natsql-wikisql-instruct"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
bugdaryan/spider-natsql-wikisql-instruct
|
[
"region:us"
] |
2023-09-08T17:14:12+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 73793618, "num_examples": 92413}], "download_size": 19744066, "dataset_size": 73793618}}
|
2023-09-08T17:28:03+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "spider-natsql-wikisql-instruct"
More Information needed
|
[
"# Dataset Card for \"spider-natsql-wikisql-instruct\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"spider-natsql-wikisql-instruct\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"spider-natsql-wikisql-instruct\"\n\nMore Information needed"
] |
2ead6e8ca29d2431a017a30b5e21349a50d6935e
|
# Dataset Card for "autotree_automl_100000_MiniBooNE_sgosdt_l256_dim10_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
yzhuang/autotree_automl_100000_MiniBooNE_sgosdt_l256_dim10_d3_sd0
|
[
"region:us"
] |
2023-09-08T17:20:52+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "input_x", "sequence": {"sequence": "float32"}}, {"name": "input_y", "sequence": {"sequence": "float32"}}, {"name": "input_y_clean", "sequence": {"sequence": "float32"}}, {"name": "rtg", "sequence": "float64"}, {"name": "status", "sequence": {"sequence": "float32"}}, {"name": "split_threshold", "sequence": {"sequence": "float32"}}, {"name": "split_dimension", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 2364400000, "num_examples": 100000}, {"name": "validation", "num_bytes": 236440000, "num_examples": 10000}], "download_size": 1613809341, "dataset_size": 2600840000}}
|
2023-09-08T17:21:37+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "autotree_automl_100000_MiniBooNE_sgosdt_l256_dim10_d3_sd0"
More Information needed
|
[
"# Dataset Card for \"autotree_automl_100000_MiniBooNE_sgosdt_l256_dim10_d3_sd0\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"autotree_automl_100000_MiniBooNE_sgosdt_l256_dim10_d3_sd0\"\n\nMore Information needed"
] |
[
6,
39
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"autotree_automl_100000_MiniBooNE_sgosdt_l256_dim10_d3_sd0\"\n\nMore Information needed"
] |
e9548387bef276a344a27eddcad5077074ae4104
|
# Dataset Card for "fc019c9e"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/fc019c9e
|
[
"region:us"
] |
2023-09-08T17:39:16+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 182, "num_examples": 10}], "download_size": 1340, "dataset_size": 182}}
|
2023-09-08T17:39:17+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "fc019c9e"
More Information needed
|
[
"# Dataset Card for \"fc019c9e\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"fc019c9e\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"fc019c9e\"\n\nMore Information needed"
] |
5fd29d01e761a0670db9b3261aa227ad05ef3fe4
|
# Dataset Card for "autotree_automl_100000_jannis_sgosdt_l256_dim10_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
yzhuang/autotree_automl_100000_jannis_sgosdt_l256_dim10_d3_sd0
|
[
"region:us"
] |
2023-09-08T17:45:21+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "input_x", "sequence": {"sequence": "float32"}}, {"name": "input_y", "sequence": {"sequence": "float32"}}, {"name": "input_y_clean", "sequence": {"sequence": "float32"}}, {"name": "rtg", "sequence": "float64"}, {"name": "status", "sequence": {"sequence": "float32"}}, {"name": "split_threshold", "sequence": {"sequence": "float32"}}, {"name": "split_dimension", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 2364400000, "num_examples": 100000}, {"name": "validation", "num_bytes": 236440000, "num_examples": 10000}], "download_size": 1611785428, "dataset_size": 2600840000}}
|
2023-09-08T17:46:08+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "autotree_automl_100000_jannis_sgosdt_l256_dim10_d3_sd0"
More Information needed
|
[
"# Dataset Card for \"autotree_automl_100000_jannis_sgosdt_l256_dim10_d3_sd0\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"autotree_automl_100000_jannis_sgosdt_l256_dim10_d3_sd0\"\n\nMore Information needed"
] |
[
6,
37
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"autotree_automl_100000_jannis_sgosdt_l256_dim10_d3_sd0\"\n\nMore Information needed"
] |
bcb699082923ab6a25f121f6701e0e8e0a9640e0
| ERROR: type should be string, got "\nhttps://www.kaggle.com/datasets/mustfkeskin/turkish-movie-sentiment-analysis-dataset" |
TFLai/turkish_movie_sentiment
|
[
"language:tr",
"license:cc0-1.0",
"region:us"
] |
2023-09-08T18:09:29+00:00
|
{"language": ["tr"], "license": "cc0-1.0"}
|
2023-10-03T11:21:31+00:00
|
[] |
[
"tr"
] |
TAGS
#language-Turkish #license-cc0-1.0 #region-us
|
URL
|
[] |
[
"TAGS\n#language-Turkish #license-cc0-1.0 #region-us \n"
] |
[
20
] |
[
"passage: TAGS\n#language-Turkish #license-cc0-1.0 #region-us \n"
] |
f6e24654398c03ee007f43add885d911b93a0a6d
|
# Dataset Card for "attempt_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ninjaiam/attempt_2
|
[
"region:us"
] |
2023-09-08T18:12:05+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1380303, "num_examples": 5011}], "download_size": 525686, "dataset_size": 1380303}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-08T18:12:08+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "attempt_2"
More Information needed
|
[
"# Dataset Card for \"attempt_2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"attempt_2\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"attempt_2\"\n\nMore Information needed"
] |
271ba4118831eabdf9b95a435ec4af40672f8e0d
|
# GuacaMol: Benchmarks for Molecular Design

For an in-depth explanation of the types of benchmarks and baseline scores,
please consult the paper
[Benchmarking Models for De Novo Molecular Design](https://arxiv.org/abs/1811.09621)
## Leaderboard
See [https://www.benevolent.com/guacamol](https://www.benevolent.com/guacamol).
|
katielink/GuacaMol
|
[
"license:mit",
"chemistry",
"molecular design",
"arxiv:1811.09621",
"region:us"
] |
2023-09-08T18:14:20+00:00
|
{"license": "mit", "tags": ["chemistry", "molecular design"]}
|
2023-09-08T18:19:45+00:00
|
[
"1811.09621"
] |
[] |
TAGS
#license-mit #chemistry #molecular design #arxiv-1811.09621 #region-us
|
# GuacaMol: Benchmarks for Molecular Design
!guacamol
For an in-depth explanation of the types of benchmarks and baseline scores,
please consult the paper
Benchmarking Models for De Novo Molecular Design
## Leaderboard
See URL
|
[
"# GuacaMol: Benchmarks for Molecular Design\n\n!guacamol\n\nFor an in-depth explanation of the types of benchmarks and baseline scores,\nplease consult the paper \nBenchmarking Models for De Novo Molecular Design",
"## Leaderboard\n\nSee URL"
] |
[
"TAGS\n#license-mit #chemistry #molecular design #arxiv-1811.09621 #region-us \n",
"# GuacaMol: Benchmarks for Molecular Design\n\n!guacamol\n\nFor an in-depth explanation of the types of benchmarks and baseline scores,\nplease consult the paper \nBenchmarking Models for De Novo Molecular Design",
"## Leaderboard\n\nSee URL"
] |
[
29,
51,
5
] |
[
"passage: TAGS\n#license-mit #chemistry #molecular design #arxiv-1811.09621 #region-us \n# GuacaMol: Benchmarks for Molecular Design\n\n!guacamol\n\nFor an in-depth explanation of the types of benchmarks and baseline scores,\nplease consult the paper \nBenchmarking Models for De Novo Molecular Design## Leaderboard\n\nSee URL"
] |
93e0e9dd4fd07875ab140b74fb6f7248396abe09
|
# Dataset Card for "corpus_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
HydraLM/corpus_1
|
[
"region:us"
] |
2023-09-08T18:27:34+00:00
|
{"dataset_info": {"features": [{"name": "message", "dtype": "string"}, {"name": "message_type", "dtype": "string"}, {"name": "message_id", "dtype": "int64"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "dataset_id", "dtype": "string"}, {"name": "unique_conversation_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5194729893, "num_examples": 6320610}], "download_size": 2478345344, "dataset_size": 5194729893}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-08T18:39:51+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "corpus_1"
More Information needed
|
[
"# Dataset Card for \"corpus_1\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"corpus_1\"\n\nMore Information needed"
] |
[
6,
13
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"corpus_1\"\n\nMore Information needed"
] |
f3e52c0fb5c3be6406db135143951247a89f9fe5
|
# Dataset Card for "stackoverflowVQA-filtered-small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mirzaei2114/stackoverflowVQA-filtered-small
|
[
"task_categories:visual-question-answering",
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:en",
"license:mit",
"code",
"region:us"
] |
2023-09-08T18:59:46+00:00
|
{"language": ["en"], "license": "mit", "size_categories": ["10K<n<100K"], "task_categories": ["visual-question-answering", "question-answering"], "pretty_name": "StackOverflowVQA-filtered-small", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Id", "dtype": "int64"}, {"name": "PostTypeId", "dtype": "int64"}, {"name": "AcceptedAnswerId", "dtype": "int64"}, {"name": "Question", "dtype": "string"}, {"name": "Answer", "dtype": "string"}, {"name": "Image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 1279702996.2801416, "num_examples": 18412}, {"name": "test", "num_bytes": 147966346.50829053, "num_examples": 2046}], "download_size": 1288722919, "dataset_size": 1427669342.7884321}, "tags": ["code"]}
|
2023-12-02T17:11:40+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-visual-question-answering #task_categories-question-answering #size_categories-10K<n<100K #language-English #license-mit #code #region-us
|
# Dataset Card for "stackoverflowVQA-filtered-small"
More Information needed
|
[
"# Dataset Card for \"stackoverflowVQA-filtered-small\"\n\nMore Information needed"
] |
[
"TAGS\n#task_categories-visual-question-answering #task_categories-question-answering #size_categories-10K<n<100K #language-English #license-mit #code #region-us \n",
"# Dataset Card for \"stackoverflowVQA-filtered-small\"\n\nMore Information needed"
] |
[
56,
22
] |
[
"passage: TAGS\n#task_categories-visual-question-answering #task_categories-question-answering #size_categories-10K<n<100K #language-English #license-mit #code #region-us \n# Dataset Card for \"stackoverflowVQA-filtered-small\"\n\nMore Information needed"
] |
db6ce4c46f9e53eabca0cc49680bb6c807d66728
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
victor-buhl/COLREGS_test_bank_ALPACA
|
[
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:en",
"legal",
"region:us"
] |
2023-09-08T19:40:59+00:00
|
{"language": ["en"], "size_categories": ["10K<n<100K"], "task_categories": ["question-answering"], "tags": ["legal"]}
|
2023-09-08T19:52:59+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-question-answering #size_categories-10K<n<100K #language-English #legal #region-us
|
# Dataset Card for Dataset Name
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#task_categories-question-answering #size_categories-10K<n<100K #language-English #legal #region-us \n",
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
36,
8,
24,
32,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#task_categories-question-answering #size_categories-10K<n<100K #language-English #legal #region-us \n# Dataset Card for Dataset Name## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
2e9cfd7b43a7baf615f1cd4eab0eb65a15f69d09
|
# Machine Identity Spectra Dataset
<img src="https://huggingface.co/datasets/Venafi/Machine-Identity-Spectra/resolve/main/VExperimentalSpectra.svg" alt="Spectra Dataset" width="250">
## Summary
Venafi is excited to release the Machine Identity Spectra large dataset.
This collection of data contains extracted features from 19m+ certificates discovered over HTTPS (port 443) on the
public internet between July 20 and July 26, 2023.
The features are a combination of X.509 certificate features, RFC5280 compliance checks,
and other attributes intended to be used for clustering, feature analysis, and as a base for supervised learning tasks (labels not included).
Some rows may contain NaN values and as such could require additional pre-processing for certain tasks.
This project is part of Venafi Athena. Venafi is committed to enabling the data science community to increase the adoption of machine learning techniques
to identify machine identity threats and solutions.
Phillip Maraveyias at Venafi is the lead researcher for this dataset.
## Data Structure
The extracted features are contained in the Data folder as certificateFeatures.csv.gz. The unarchived data size is
approximately 10GB and contains 98 extracted features for approximately 19m certificates. A description of the features
and expected data types is contained in the base folder as features.csv.
The Data folder also contains a 500k row sample of the data in parquet format. This is displayed in the Data Viewer
for easy visual inspection of the dataset.
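As a quick, unofficial illustration, the sample can be loaded directly with pandas; the path below is the one configured for the Data Viewer in this repository:

```python
import pandas as pd

# Path of the 500k-row sample referenced above (as configured for the Data Viewer).
sample = pd.read_parquet("Data/CertificateFeatures-sample.parquet")

print(sample.shape)  # rows x extracted features
# Share of missing (NaN) values per feature, largest first.
print(sample.isna().mean().sort_values(ascending=False).head(10))
```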
## Clustering and PCA Example
To demonstrate a potential use of the data, clustering and Principal Component Analysis (PCA) were
conducted on the binary data features in the dataset. 10 clusters were generated and PCA conducted with the top 3 components preserved.
KMeans clustering was performed to generate a total of 10 clusters. In this case we are primarily
interested in visualizing the data and understanding better how it may be used, so the choice of 10 clusters is mostly
for illustrative purposes.
The top three PCA components accounted for approximately 61%, 10%, and 6% of the total explained variance
(for a total of 77% of the overall data variance). Plots of the first 2 components in 2D space and top 3 components in
3D space grouped into the 10 clusters are shown below.
### Clusters in 2 Dimensions

### Clusters in 3 Dimensions

## Contact
Please contact [email protected] if you have any questions about this dataset.
## References and Acknowledgement
The following papers provided inspiration for this project:
- Li, J.; Zhang, Z.; Guo, C. Machine Learning-Based Malicious X.509 Certificates’ Detection. Appl. Sci. 2021, 11, 2164. https://doi.org/10.3390/app11052164
- Liu, J.; Luktarhan, N.; Chang, Y.; Yu, W. Malcertificate: Research and Implementation of a Malicious Certificate Detection Algorithm Based on GCN. Appl. Sci. 2022, 12, 4440. https://doi.org/10.3390/app12094440
|
Venafi/Machine-Identity-Spectra
|
[
"task_categories:feature-extraction",
"size_categories:10M<n<100M",
"language:en",
"license:apache-2.0",
"certificates",
"machine identity",
"security",
"region:us"
] |
2023-09-08T20:15:42+00:00
|
{"language": ["en"], "license": "apache-2.0", "size_categories": ["10M<n<100M"], "task_categories": ["feature-extraction"], "pretty_name": "Machine Identity Spectra Dataset", "tags": ["certificates", "machine identity", "security"], "configs": [{"config_name": "sample_data", "data_files": "Data/CertificateFeatures-sample.parquet"}]}
|
2023-09-17T16:21:41+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-feature-extraction #size_categories-10M<n<100M #language-English #license-apache-2.0 #certificates #machine identity #security #region-us
|
# Machine Identity Spectra Dataset
<img src="URL alt="Spectra Dataset" width="250">
## Summary
Venafi is excited to release the Machine Identity Spectra large dataset.
This collection of data contains extracted features from 19m+ certificates discovered over HTTPS (port 443) on the
public internet between July 20 and July 26, 2023.
The features are a combination of X.509 certificate features, RFC5280 compliance checks,
and other attributes intended to be used for clustering, feature analysis, and as a base for supervised learning tasks (labels not included).
Some rows may contain NaN values and as such could require additional pre-processing for certain tasks.
This project is part of Venafi Athena. Venafi is committed to enabling the data science community to increase the adoption of machine learning techniques
to identify machine identity threats and solutions.
Phillip Maraveyias at Venafi is the lead researcher for this dataset.
## Data Structure
The extracted features are contained in the Data folder as URL. The unarchived data size is
approximately 10GB and contains 98 extracted features for approximately 19m certificates. A description of the features
and expected data types is contained in the base folder as URL.
The Data folder also contains a 500k row sample of the data in parquet format. This is displayed in the Data Viewer
for easy visual inspection of the dataset.
## Clustering and PCA Example
To demonstrate a potential use of the data, clustering and Principal Component Analysis (PCA) were
conducted on the binary data features in the dataset. 10 clusters were generated and PCA conducted with the top 3 components preserved.
KMeans clustering was performed to generate a total of 10 clusters. In this case we are primarily
interested in visualizing the data and understanding better how it may be used, so the choice of 10 clusters is mostly
for illustrative purposes.
The top three PCA components accounted for approximately 61%, 10%, and 6% of the total explained variance
(for a total of 77% of the overall data variance). Plots of the first 2 components in 2D space and top 3 components in
3D space grouped into the 10 clusters are shown below.
### Clusters in 2 Dimensions

### Clusters in 3 Dimensions

## Contact
Please contact athena-community@URL if you have any questions about this dataset.
## References and Acknowledgement
The following papers provided inspiration for this project:
- Li, J.; Zhang, Z.; Guo, C. Machine Learning-Based Malicious X.509 Certificates’ Detection. Appl. Sci. 2021, 11, 2164. URL 10.3390/app11052164
- Liu, J.; Luktarhan, N.; Chang, Y.; Yu, W. Malcertificate: Research and Implementation of a Malicious Certificate Detection Algorithm Based on GCN. Appl. Sci. 2022,12,4440. URL 10.3390/app12094440
|
[
"# Machine Identity Spectra Dataset\n<img src=\"URL alt=\"Spectra Dataset\" width=\"250\">",
"## Summary\nVenafi is excited to release of the Machine Identity Spectra large dataset. \nThis collection of data contains extracted features from 19m+ certificates discovered over HTTPS (port 443) on the \npublic internet between July 20 and July 26, 2023.\nThe features are a combination of X.509 certificate features, RFC5280 compliance checks, \nand other attributes intended to be used for clustering, features analysis, and a base for supervised learning tasks (labels not included).\nSome rows may contain nan values as well and as such could require additional pre-processing for certain tasks.\n\nThis project is part of Venafi Athena. Venafi is committed to enabling the data science community to increase the adoption of machine learning techniques \nto identify machine identity threats and solutions. \nPhillip Maraveyias at Venafi is the lead researcher for this dataset.",
"## Data Structure\nThe extracted features are contained in the Data folder as URL. The unarchived data size is \napproximately 10GB and contains 98 extracted features for approximately 19m certificates. A description of the features\nand expected data types is contained in the base folder as URL.\n\nThe Data folder also contains a 500k row sample of the data in parquet format. This is displayed in the Data Viewer \nfor easy visual inspection of the dataset.",
"## Clustering and PCA Example\n\nTo demonstrate a potential use of the data, clustering and Principal Component Analysis (PCA) were \nconducted on the binary data features in the dataset. 10 clusters were generated and PCA conducted with the top 3 components preserved.\n\nKMeans clustering was performed to generate a total of 10 clusters. In this case we are primarily \ninterested in visualizing the data and understanding better how it may be used, so the choice of 10 clusters is mostly\nfor illustrative purposes.\n\nThe top three PCA components accounted for approximately 61%, 10%, and 6% of the total explained variance\n(for a total of 77% of the overall data variance). Plots of the first 2 components in 2D space and top 3 components in \n3D space grouped into the 10 clusters are shown below.",
"### Clusters in 2 Dimensions\n",
"### Clusters in 3 Dimensions\n",
"## Contact\nPlease contact athena-community@URL if you have any questions about this dataset.",
"## References and Acknowledgement\nThe following papers provided inspiration for this project:\n- Li, J.; Zhang, Z.; Guo, C. Machine Learning-Based Malicious X.509 Certificates’ Detection. Appl. Sci. 2021, 11, 2164. URL 10.3390/app11052164\n- Liu, J.; Luktarhan, N.; Chang, Y.; Yu, W. Malcertificate: Research and Implementation of a Malicious Certificate Detection Algorithm Based on GCN. Appl. Sci. 2022,12,4440. URL 10.3390/app12094440"
] |
[
"TAGS\n#task_categories-feature-extraction #size_categories-10M<n<100M #language-English #license-apache-2.0 #certificates #machine identity #security #region-us \n",
"# Machine Identity Spectra Dataset\n<img src=\"URL alt=\"Spectra Dataset\" width=\"250\">",
"## Summary\nVenafi is excited to release of the Machine Identity Spectra large dataset. \nThis collection of data contains extracted features from 19m+ certificates discovered over HTTPS (port 443) on the \npublic internet between July 20 and July 26, 2023.\nThe features are a combination of X.509 certificate features, RFC5280 compliance checks, \nand other attributes intended to be used for clustering, features analysis, and a base for supervised learning tasks (labels not included).\nSome rows may contain nan values as well and as such could require additional pre-processing for certain tasks.\n\nThis project is part of Venafi Athena. Venafi is committed to enabling the data science community to increase the adoption of machine learning techniques \nto identify machine identity threats and solutions. \nPhillip Maraveyias at Venafi is the lead researcher for this dataset.",
"## Data Structure\nThe extracted features are contained in the Data folder as URL. The unarchived data size is \napproximately 10GB and contains 98 extracted features for approximately 19m certificates. A description of the features\nand expected data types is contained in the base folder as URL.\n\nThe Data folder also contains a 500k row sample of the data in parquet format. This is displayed in the Data Viewer \nfor easy visual inspection of the dataset.",
"## Clustering and PCA Example\n\nTo demonstrate a potential use of the data, clustering and Principal Component Analysis (PCA) were \nconducted on the binary data features in the dataset. 10 clusters were generated and PCA conducted with the top 3 components preserved.\n\nKMeans clustering was performed to generate a total of 10 clusters. In this case we are primarily \ninterested in visualizing the data and understanding better how it may be used, so the choice of 10 clusters is mostly\nfor illustrative purposes.\n\nThe top three PCA components accounted for approximately 61%, 10%, and 6% of the total explained variance\n(for a total of 77% of the overall data variance). Plots of the first 2 components in 2D space and top 3 components in \n3D space grouped into the 10 clusters are shown below.",
"### Clusters in 2 Dimensions\n",
"### Clusters in 3 Dimensions\n",
"## Contact\nPlease contact athena-community@URL if you have any questions about this dataset.",
"## References and Acknowledgement\nThe following papers provided inspiration for this project:\n- Li, J.; Zhang, Z.; Guo, C. Machine Learning-Based Malicious X.509 Certificates’ Detection. Appl. Sci. 2021, 11, 2164. URL 10.3390/app11052164\n- Liu, J.; Luktarhan, N.; Chang, Y.; Yu, W. Malcertificate: Research and Implementation of a Malicious Certificate Detection Algorithm Based on GCN. Appl. Sci. 2022,12,4440. URL 10.3390/app12094440"
] |
[
51,
28,
189,
101,
190,
21,
21,
23,
143
] |
[
"passage: TAGS\n#task_categories-feature-extraction #size_categories-10M<n<100M #language-English #license-apache-2.0 #certificates #machine identity #security #region-us \n# Machine Identity Spectra Dataset\n<img src=\"URL alt=\"Spectra Dataset\" width=\"250\">## Summary\nVenafi is excited to release of the Machine Identity Spectra large dataset. \nThis collection of data contains extracted features from 19m+ certificates discovered over HTTPS (port 443) on the \npublic internet between July 20 and July 26, 2023.\nThe features are a combination of X.509 certificate features, RFC5280 compliance checks, \nand other attributes intended to be used for clustering, features analysis, and a base for supervised learning tasks (labels not included).\nSome rows may contain nan values as well and as such could require additional pre-processing for certain tasks.\n\nThis project is part of Venafi Athena. Venafi is committed to enabling the data science community to increase the adoption of machine learning techniques \nto identify machine identity threats and solutions. \nPhillip Maraveyias at Venafi is the lead researcher for this dataset.## Data Structure\nThe extracted features are contained in the Data folder as URL. The unarchived data size is \napproximately 10GB and contains 98 extracted features for approximately 19m certificates. A description of the features\nand expected data types is contained in the base folder as URL.\n\nThe Data folder also contains a 500k row sample of the data in parquet format. This is displayed in the Data Viewer \nfor easy visual inspection of the dataset."
] |
38c10053efeafd20ab6ff4e08c3ec17de26c19b7
|
# Dataset Card for Calc-MAWPS
## Summary
The dataset is a collection of simple math word problems focused on arithmetics. It is derived from <https://huggingface.co/datasets/omarxadel/MaWPS-ar>.
The main addition in this dataset variant is the `chain` column. It was created by converting the solution to a simple html-like language that can be easily
parsed (e.g. by BeautifulSoup). The data contains 3 types of tags:
- gadget: A tag whose content is intended to be evaluated by calling an external tool (sympy-based calculator in this case)
- output: An output of the external tool
- result: The final answer to the mathematical problem (a number)
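As a sketch of how such a chain can be consumed, the markup can be parsed with BeautifulSoup as mentioned above; the chain text in this example is made up for illustration, and only the tag names come from the list above.

```python
from bs4 import BeautifulSoup

# A made-up chain for illustration; only the tag names come from the description above.
chain = "<gadget>2 + 3</gadget><output>5</output>Final answer: <result>5</result>"

soup = BeautifulSoup(chain, "html.parser")
gadget_calls = [g.get_text() for g in soup.find_all("gadget")]   # expressions for the calculator
tool_outputs = [o.get_text() for o in soup.find_all("output")]   # calculator outputs
final_result = soup.find("result").get_text()                    # final answer
print(gadget_calls, tool_outputs, final_result)
```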
## Supported Tasks
This variant of the dataset is intended for training Chain-of-Thought reasoning models able to use external tools to enhance the factuality of their responses.
This dataset presents in-context scenarios where models can outsource the computations in the reasoning chain to a calculator.
## Data splits
We provide 2 variants of the dataset. In the first one, the data splits correspond to the original one and can be loaded using:
```python
datasets.load_dataset("MU-NLPC/calc-mawps", "original-splits")
```
The second one is filtered to prevent data leaks (overly similar examples between train and test/val splits) within and across datasets in the [Calc-X collection](https://huggingface.co/collections/MU-NLPC/calc-x-652fee9a6b838fd820055483).
Specifically, we filtered out around 2,500 near-duplicates from the train set that were similar to some instances in the MAWPS val and test splits and ASDiv-A test split. You can load this variant via:
```python
datasets.load_dataset("MU-NLPC/calc-mawps")
```
## Attributes:
- **id**: id of the example
- **question**: problem description in English
- **question_arabic**: problem description in Arabic
- **chain**: series of simple operations (derived from **expression**) that lead to the solution
- **result**: the solution for x as a number or fraction (string)
- **result_float**: same as `result` but converted to a float
- **equation**: an equation that needs to be solved for `x` to obtain the result. Usually in the form of "x = ..." but not always.
- **expression**: arithmetic expression derived from `equation` that solves it for `x`
Attributes **id**, **question**, **chain**, and **result** are present in all datasets in [Calc-X collection](https://huggingface.co/collections/MU-NLPC/calc-x-652fee9a6b838fd820055483).
## Related work
This dataset was created as a part of a larger effort in training models capable of using a calculator during inference, which we call Calcformers.
- [**Calc-X collection**](https://huggingface.co/collections/MU-NLPC/calc-x-652fee9a6b838fd820055483) - datasets for training Calcformers
- [**Calcformers collection**](https://huggingface.co/collections/MU-NLPC/calcformers-65367392badc497807b3caf5) - calculator-using models we trained and published on HF
- [**Calc-X and Calcformers paper**](https://arxiv.org/abs/2305.15017)
- [**Calc-X and Calcformers repo**](https://github.com/prompteus/calc-x)
Here are links to the original dataset:
- [**original MAWPS dataset**](http://lang.ee.washington.edu/MAWPS)
- [**MAWPS dataset variant in Arabic**](https://huggingface.co/datasets/omarxadel/MaWPS-ar)
- [**original MAWPS paper**](https://aclanthology.org/N16-1136/)
- [**original MAWPS repo**](https://github.com/sroy9/mawps)
## Licence
MIT, consistent with the original source dataset linked above.
## Cite
If you use this version of the dataset in research, please cite the original [MAWPS paper](https://aclanthology.org/N16-1136/), and [Calc-X paper](https://arxiv.org/abs/2305.15017) as follows:
```bibtex
@inproceedings{kadlcik-etal-2023-soft,
title = "Calc-X and Calcformers: Empowering Arithmetical Chain-of-Thought through Interaction with Symbolic Systems",
author = "Marek Kadlčík and Michal Štefánik and Ondřej Sotolář and Vlastimil Martinek",
booktitle = "Proceedings of the The 2023 Conference on Empirical Methods in Natural Language Processing: Main track",
month = dec,
year = "2023",
address = "Singapore, Singapore",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2305.15017",
}
```
|
MU-NLPC/Calc-mawps
|
[
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"math world problems",
"math",
"arithmetics",
"arxiv:2305.15017",
"region:us"
] |
2023-09-08T20:19:20+00:00
|
{"language": ["en"], "license": "mit", "size_categories": ["1K<n<10K"], "task_categories": ["text-generation"], "tags": ["math world problems", "math", "arithmetics"], "dataset_info": [{"config_name": "default", "features": [{"name": "id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "chain", "dtype": "string"}, {"name": "result", "dtype": "string"}, {"name": "result_float", "dtype": "float64"}, {"name": "equation", "dtype": "string"}, {"name": "expression", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 298347, "num_examples": 1089}, {"name": "validation", "num_bytes": 285321, "num_examples": 1040}, {"name": "test", "num_bytes": 142648, "num_examples": 520}], "download_size": 0, "dataset_size": 726316}, {"config_name": "original-splits", "features": [{"name": "id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "chain", "dtype": "string"}, {"name": "result", "dtype": "string"}, {"name": "result_float", "dtype": "float64"}, {"name": "equation", "dtype": "string"}, {"name": "expression", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1000546, "num_examples": 3636}, {"name": "test", "num_bytes": 142648, "num_examples": 520}, {"name": "validation", "num_bytes": 285321, "num_examples": 1040}], "download_size": 128730, "dataset_size": 1428515}], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}, {"config_name": "original-splits", "data_files": [{"split": "train", "path": "original-splits/train-*"}, {"split": "test", "path": "original-splits/test-*"}, {"split": "validation", "path": "original-splits/validation-*"}]}]}
|
2023-10-30T15:55:30+00:00
|
[
"2305.15017"
] |
[
"en"
] |
TAGS
#task_categories-text-generation #size_categories-1K<n<10K #language-English #license-mit #math world problems #math #arithmetics #arxiv-2305.15017 #region-us
|
# Dataset Card for Calc-MAWPS
## Summary
The dataset is a collection of simple math word problems focused on arithmetics. It is derived from <URL
The main addition in this dataset variant is the 'chain' column. It was created by converting the solution to a simple html-like language that can be easily
parsed (e.g. by BeautifulSoup). The data contains 3 types of tags:
- gadget: A tag whose content is intended to be evaluated by calling an external tool (sympy-based calculator in this case)
- output: An output of the external tool
- result: The final answer to the mathematical problem (a number)
## Supported Tasks
This variant of the dataset is intended for training Chain-of-Thought reasoning models able to use external tools to enhance the factuality of their responses.
This dataset presents in-context scenarios where models can outsource the computations in the reasoning chain to a calculator.
## Data splits
We provide 2 variants of the dataset. In the first one, the data splits correspond to the original one and can be loaded using:
The second one is filtered to prevent data leaks (overly similar examples between train and test/val splits) within and across datasets in the Calc-X collection.
Specifically, we filtered out around 2,500 near-duplicates from the train set that were similar to some instances in the MAWPS val and test splits and ASDiv-A test split. You can load this variant via:
## Attributes:
- id: id of the example
- question: problem description in English
- question_arabic: problem description in Arabic
- chain: series of simple operations (derived from expression) that lead to the solution
- result: the solution for x as a number or fraction (string)
- result_float: same as 'result' but converted to a float
- equation: an equation that needs to be solved for 'x' to obtain the result. Usually in the form of "x = ..." but not always.
- expression: arithmetic expression derived from 'equation' that solves it for 'x'
Attributes id, question, chain, and result are present in all datasets in Calc-X collection.
## Related work
This dataset was created as a part of a larger effort in training models capable of using a calculator during inference, which we call Calcformers.
- Calc-X collection - datasets for training Calcformers
- Calcformers collection - calculator-using models we trained and published on HF
- Calc-X and Calcformers paper
- Calc-X and Calcformers repo
Here are links to the original dataset:
- original MAWPS dataset
- MAWPS dataset variant in Arabic
- original MAWPS paper
- original MAWPS repo
## Licence
MIT, consistent with the original source dataset linked above.
## Cite
If you use this version of the dataset in research, please cite the original MAWPS paper, and Calc-X paper as follows:
|
[
"# Dataset Card for Calc-MAWPS",
"## Summary\n\nThe dataset is a collection of simple math word problems focused on arithmetics. It is derived from <URL\n\nThe main addition in this dataset variant is the 'chain' column. It was created by converting the solution to a simple html-like language that can be easily\nparsed (e.g. by BeautifulSoup). The data contains 3 types of tags:\n\n- gadget: A tag whose content is intended to be evaluated by calling an external tool (sympy-based calculator in this case)\n- output: An output of the external tool\n- result: The final answer to the mathematical problem (a number)",
"## Supported Tasks\n\nThis variant of the dataset is intended for training Chain-of-Thought reasoning models able to use external tools to enhance the factuality of their responses.\nThis dataset presents in-context scenarios where models can outsource the computations in the reasoning chain to a calculator.",
"## Data splits\n\nWe provide 2 variants of the dataset. In the first one, the data splits correspond to the original one and can be loaded using:\n\n\n\nThe second one is filtered to prevent data leaks (overly similar examples in train and test/val splits) in between and across datasets in Calc-X collection.\nSpecifically, we filtered out around 2,500 near-duplicates from the train set that were similar to some instances in the MAWPS val and test splits and ASDiv-A test split. You can load this variant via:",
"## Attributes:\n\n- id: id of the example\n- question: problem description in English\n- question_arabic: problem description in Arabic\n- chain: series of simple operations (derived from expression) that lead to the solution\n- result: the solution for x as a number or fraction (string)\n- result_float: same as 'result' but converted to a float\n- equation: an equation that needs to be solved for 'x' to obtain the result. Usually in the form of \"x = ...\" but not always.\n- expression: arithmetic expression derived from 'equation' that solves it for 'x'\n\nAttributes id, question, chain, and result are present in all datasets in Calc-X collection.",
"## Related work\n\nThis dataset was created as a part of a larger effort in training models capable of using a calculator during inference, which we call Calcformers.\n\n- Calc-X collection - datasets for training Calcformers\n- Calcformers collection - calculator-using models we trained and published on HF\n- Calc-X and Calcformers paper\n- Calc-X and Calcformers repo\n\nHere are links to the original dataset:\n\n- original MAWPS dataset\n- MAWPS dataset variant in Arabic\n- original MAWPS paper\n- original MAWPS repo",
"## Licence\n\nMIT, consistent with the original source dataset linked above.",
"## Cite\n\nIf you use this version of the dataset in research, please cite the original MAWPS paper, and Calc-X paper as follows:"
] |
[
"TAGS\n#task_categories-text-generation #size_categories-1K<n<10K #language-English #license-mit #math world problems #math #arithmetics #arxiv-2305.15017 #region-us \n",
"# Dataset Card for Calc-MAWPS",
"## Summary\n\nThe dataset is a collection of simple math word problems focused on arithmetics. It is derived from <URL\n\nThe main addition in this dataset variant is the 'chain' column. It was created by converting the solution to a simple html-like language that can be easily\nparsed (e.g. by BeautifulSoup). The data contains 3 types of tags:\n\n- gadget: A tag whose content is intended to be evaluated by calling an external tool (sympy-based calculator in this case)\n- output: An output of the external tool\n- result: The final answer to the mathematical problem (a number)",
"## Supported Tasks\n\nThis variant of the dataset is intended for training Chain-of-Thought reasoning models able to use external tools to enhance the factuality of their responses.\nThis dataset presents in-context scenarios where models can outsource the computations in the reasoning chain to a calculator.",
"## Data splits\n\nWe provide 2 variants of the dataset. In the first one, the data splits correspond to the original one and can be loaded using:\n\n\n\nThe second one is filtered to prevent data leaks (overly similar examples in train and test/val splits) in between and across datasets in Calc-X collection.\nSpecifically, we filtered out around 2,500 near-duplicates from the train set that were similar to some instances in the MAWPS val and test splits and ASDiv-A test split. You can load this variant via:",
"## Attributes:\n\n- id: id of the example\n- question: problem description in English\n- question_arabic: problem description in Arabic\n- chain: series of simple operations (derived from expression) that lead to the solution\n- result: the solution for x as a number or fraction (string)\n- result_float: same as 'result' but converted to a float\n- equation: an equation that needs to be solved for 'x' to obtain the result. Usually in the form of \"x = ...\" but not always.\n- expression: arithmetic expression derived from 'equation' that solves it for 'x'\n\nAttributes id, question, chain, and result are present in all datasets in Calc-X collection.",
"## Related work\n\nThis dataset was created as a part of a larger effort in training models capable of using a calculator during inference, which we call Calcformers.\n\n- Calc-X collection - datasets for training Calcformers\n- Calcformers collection - calculator-using models we trained and published on HF\n- Calc-X and Calcformers paper\n- Calc-X and Calcformers repo\n\nHere are links to the original dataset:\n\n- original MAWPS dataset\n- MAWPS dataset variant in Arabic\n- original MAWPS paper\n- original MAWPS repo",
"## Licence\n\nMIT, consistent with the original source dataset linked above.",
"## Cite\n\nIf you use this version of the dataset in research, please cite the original MAWPS paper, and Calc-X paper as follows:"
] |
[
58,
11,
140,
70,
127,
172,
130,
16,
34
] |
[
"passage: TAGS\n#task_categories-text-generation #size_categories-1K<n<10K #language-English #license-mit #math world problems #math #arithmetics #arxiv-2305.15017 #region-us \n# Dataset Card for Calc-MAWPS## Summary\n\nThe dataset is a collection of simple math word problems focused on arithmetics. It is derived from <URL\n\nThe main addition in this dataset variant is the 'chain' column. It was created by converting the solution to a simple html-like language that can be easily\nparsed (e.g. by BeautifulSoup). The data contains 3 types of tags:\n\n- gadget: A tag whose content is intended to be evaluated by calling an external tool (sympy-based calculator in this case)\n- output: An output of the external tool\n- result: The final answer to the mathematical problem (a number)## Supported Tasks\n\nThis variant of the dataset is intended for training Chain-of-Thought reasoning models able to use external tools to enhance the factuality of their responses.\nThis dataset presents in-context scenarios where models can outsource the computations in the reasoning chain to a calculator.## Data splits\n\nWe provide 2 variants of the dataset. In the first one, the data splits correspond to the original one and can be loaded using:\n\n\n\nThe second one is filtered to prevent data leaks (overly similar examples in train and test/val splits) in between and across datasets in Calc-X collection.\nSpecifically, we filtered out around 2,500 near-duplicates from the train set that were similar to some instances in the MAWPS val and test splits and ASDiv-A test split. You can load this variant via:"
] |
173ae356c12f5c818c70b9033f63e3ab2a0ecdcf
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
victor-buhl/COLREGS_ALPACA_SHORT
|
[
"task_categories:question-answering",
"language:en",
"legal",
"region:us"
] |
2023-09-08T20:29:02+00:00
|
{"language": ["en"], "task_categories": ["question-answering"], "tags": ["legal"]}
|
2023-09-08T20:58:50+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-question-answering #language-English #legal #region-us
|
# Dataset Card for Dataset Name
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#task_categories-question-answering #language-English #legal #region-us \n",
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
24,
8,
24,
32,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#task_categories-question-answering #language-English #legal #region-us \n# Dataset Card for Dataset Name## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
0211faa155a737d7a59b053bfec547eca3fe4b38
|
# Dataset Card for "v"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Minglii/v
|
[
"region:us"
] |
2023-09-08T21:58:54+00:00
|
{"dataset_info": {"features": [{"name": "data", "struct": [{"name": "conversations", "list": [{"name": "from", "dtype": "string"}, {"name": "markdown", "struct": [{"name": "answer", "dtype": "string"}, {"name": "index", "dtype": "int64"}, {"name": "type", "dtype": "string"}]}, {"name": "text", "dtype": "string"}, {"name": "value", "dtype": "string"}]}, {"name": "id", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 644558921, "num_examples": 117213}], "download_size": 262396682, "dataset_size": 644558921}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-08T22:27:29+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "v"
More Information needed
|
[
"# Dataset Card for \"v\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"v\"\n\nMore Information needed"
] |
[
6,
11
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"v\"\n\nMore Information needed"
] |
7fd61f4b53fa3569b1308f05c519da163efba557
|
Data from `https://zenodo.org/record/5851729`, dataset `comments_2017-02.bz2`
In format of:
`score: {score of post}\n{post}`
Encoded using the tokenizer from:
`gmongaras/wizardLM-7B-HF-8bit`
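A minimal sketch of recovering the original text from a row, assuming the dataset and tokenizer ids above resolve as written:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Both repo ids are taken from the description above; adjust if they differ.
ds = load_dataset("gmongaras/reddit_political_2019", split="train", streaming=True)
tok = AutoTokenizer.from_pretrained("gmongaras/wizardLM-7B-HF-8bit")

row = next(iter(ds))
# Decoding the stored token ids should give back "score: {score of post}\n{post}".
print(tok.decode(row["input_ids"], skip_special_tokens=True))
```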
|
gmongaras/reddit_political_2019
|
[
"region:us"
] |
2023-09-08T22:06:26+00:00
|
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "labels", "sequence": "int64"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 10851489936, "num_examples": 6474636}], "download_size": 1146425292, "dataset_size": 10851489936}}
|
2023-09-10T14:25:41+00:00
|
[] |
[] |
TAGS
#region-us
|
Data from 'URL dataset 'comments_2017-02.bz2'
In format of:
'score: {score of post}\n{post}'
Encoded using the tokenizer from:
'gmongaras/wizardLM-7B-HF-8bit'
|
[] |
[
"TAGS\n#region-us \n"
] |
[
6
] |
[
"passage: TAGS\n#region-us \n"
] |
d7be74439ea670ae928993b6bda43ce60368ec9f
|
# Vigil: LLM Jailbreak all-MiniLM-L6-v2
- **Repo:** [github.com/deadbits/vigil-llm](https://github.com/deadbits/vigil-llm)
`Vigil` is a Python framework and REST API for assessing Large Language Model (LLM) prompts against a set of scanners to detect prompt injections, jailbreaks, and other potentially risky inputs.
This repository contains `all-MiniLM-L6-v2` embeddings for all "jailbreak" prompts used by [Vigil](https://github.com/deadbits/vigil-llm).
You can use the [parquet2vdb.py](https://github.com/deadbits/vigil-llm/blob/main/vigil/utils/parquet2vdb.py) utility to load the embeddings in the Vigil chromadb instance, or use them in your own application.
## Format
```json
[
{
"text": str,
"embedding": [],
"model": "all-MiniLM-L6-v2"
}
]
```
Jailbreak prompts sourced from: https://github.com/laiyer-ai/llm-guard/blob/399cb2eea70afc78482db226253ddd1d85f296e3/llm_guard/resources/jailbreak.json
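For the "use them in your own application" case, a hedged sketch is to load the stored embeddings and score an incoming prompt by cosine similarity with the same model; the parquet file name below is an assumption, not a documented path:

```python
import numpy as np
import pandas as pd
from sentence_transformers import SentenceTransformer

# The parquet file name is hypothetical -- point it at the file this repo actually ships.
emb = pd.read_parquet("jailbreak-all-MiniLM-L6-v2.parquet")
vectors = np.vstack(emb["embedding"].to_numpy())
vectors = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)

model = SentenceTransformer("all-MiniLM-L6-v2")
query = model.encode("Ignore all previous instructions and ...", normalize_embeddings=True)

# Cosine similarity of the incoming prompt against every stored jailbreak embedding.
scores = vectors @ query
print(emb["text"].iloc[int(scores.argmax())], float(scores.max()))
```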
|
deadbits/vigil-jailbreak-all-MiniLM-L6-v2
|
[
"embeddings",
"text",
"security",
"region:us"
] |
2023-09-08T22:24:15+00:00
|
{"pretty_name": "Vigil: LLM Jailbreak all-MiniLM-L6-v2", "tags": ["embeddings", "text", "security"]}
|
2023-09-09T01:39:20+00:00
|
[] |
[] |
TAGS
#embeddings #text #security #region-us
|
# Vigil: LLM Jailbreak all-MiniLM-L6-v2
- Repo: URL
'Vigil' is a Python framework and REST API for assessing Large Language Model (LLM) prompts against a set of scanners to detect prompt injections, jailbreaks, and other potentially risky inputs.
This repository contains 'all-MiniLM-L6-v2' embeddings for all "jailbreak" prompts used by Vigil.
You can use the URL utility to load the embeddings in the Vigil chromadb instance, or use them in your own application.
## Format
Jailbreak prompts sourced from: URL
|
[
"# Vigil: LLM Jailbreak all-MiniLM-L6-v2\n- Repo: URL\n\n'Vigil' is a Python framework and REST API for assessing Large Language Model (LLM) prompts against a set of scanners to detect prompt injections, jailbreaks, and other potentially risky inputs.\n\nThis repository contains 'all-MiniLM-L6-v2' embeddings for all \"jailbreak\" prompts used by Vigil.\n\nYou can use the URL utility to load the embeddings in the Vigil chromadb instance, or use them in your own application.",
"## Format\n\n\nJailbreak prompts sourced from: URL"
] |
[
"TAGS\n#embeddings #text #security #region-us \n",
"# Vigil: LLM Jailbreak all-MiniLM-L6-v2\n- Repo: URL\n\n'Vigil' is a Python framework and REST API for assessing Large Language Model (LLM) prompts against a set of scanners to detect prompt injections, jailbreaks, and other potentially risky inputs.\n\nThis repository contains 'all-MiniLM-L6-v2' embeddings for all \"jailbreak\" prompts used by Vigil.\n\nYou can use the URL utility to load the embeddings in the Vigil chromadb instance, or use them in your own application.",
"## Format\n\n\nJailbreak prompts sourced from: URL"
] |
[
14,
141,
12
] |
[
"passage: TAGS\n#embeddings #text #security #region-us \n# Vigil: LLM Jailbreak all-MiniLM-L6-v2\n- Repo: URL\n\n'Vigil' is a Python framework and REST API for assessing Large Language Model (LLM) prompts against a set of scanners to detect prompt injections, jailbreaks, and other potentially risky inputs.\n\nThis repository contains 'all-MiniLM-L6-v2' embeddings for all \"jailbreak\" prompts used by Vigil.\n\nYou can use the URL utility to load the embeddings in the Vigil chromadb instance, or use them in your own application.## Format\n\n\nJailbreak prompts sourced from: URL"
] |
62e7108031e9d9a3e2a9fe9be5fcadf02a84cbb5
|
# Vigil: LLM Instruction Bypass all-MiniLM-L6-v2
- **Repo:** [github.com/deadbits/vigil-llm](https://github.com/deadbits/vigil-llm)
`Vigil` is a Python framework and REST API for assessing Large Language Model (LLM) prompts against a set of scanners to detect prompt injections, jailbreaks, and other potentially risky inputs.
This repository contains `all-MiniLM-L6-v2` embeddings for all Instruction Bypass style prompts ("Ignore instructions ...") used by [Vigil](https://github.com/deadbits/prompt-injection-defense).
You can use the [parquet2vdb.py](https://github.com/deadbits/prompt-injection-defense/blob/main/vigil/utils/parquet2vdb.py) utility to load the embeddings in the Vigil chromadb instance, or use them in your own application.
## Format
```json
[
{
"text": str,
"embedding": [],
"model": "all-MiniLM-L6-v2"
}
]
```
Instruction bypass prompts generated with: https://gist.github.com/deadbits/e93a90aa36c9aa7b5ce1179597a6fe3d#file-generate-phrases-py
|
deadbits/vigil-instruction-bypass-all-MiniLM-L6-v2
|
[
"embeddings",
"text",
"security",
"region:us"
] |
2023-09-08T22:43:52+00:00
|
{"pretty_name": "Vigil: LLM Instruction Bypass all-MiniLM-L6-v2 ", "tags": ["embeddings", "text", "security"]}
|
2023-09-09T01:40:35+00:00
|
[] |
[] |
TAGS
#embeddings #text #security #region-us
|
# Vigil: LLM Instruction Bypass all-MiniLM-L6-v2
- Repo: URL
'Vigil' is a Python framework and REST API for assessing Large Language Model (LLM) prompts against a set of scanners to detect prompt injections, jailbreaks, and other potentially risky inputs.
This repository contains 'all-MiniLM-L6-v2' embeddings for all Instruction Bypass style prompts ("Ignore instructions ...") used by Vigil.
You can use the URL utility to load the embeddings in the Vigil chromadb instance, or use them in your own application.
## Format
Instruction bypass prompts generated with: URL
|
[
"# Vigil: LLM Instruction Bypass all-MiniLM-L6-v2 \n- Repo: URL\n\n'Vigil' is a Python framework and REST API for assessing Large Language Model (LLM) prompts against a set of scanners to detect prompt injections, jailbreaks, and other potentially risky inputs.\n\nThis repository contains 'all-MiniLM-L6-v2' embeddings for all Instruction Bypass style prompts (\"Ignore instructions ...\") used by Vigil.\n\nYou can use the URL utility to load the embeddings in the Vigil chromadb instance, or use them in your own application.",
"## Format\n\n\nInstruction bypass prompts generated with: URL"
] |
[
"TAGS\n#embeddings #text #security #region-us \n",
"# Vigil: LLM Instruction Bypass all-MiniLM-L6-v2 \n- Repo: URL\n\n'Vigil' is a Python framework and REST API for assessing Large Language Model (LLM) prompts against a set of scanners to detect prompt injections, jailbreaks, and other potentially risky inputs.\n\nThis repository contains 'all-MiniLM-L6-v2' embeddings for all Instruction Bypass style prompts (\"Ignore instructions ...\") used by Vigil.\n\nYou can use the URL utility to load the embeddings in the Vigil chromadb instance, or use them in your own application.",
"## Format\n\n\nInstruction bypass prompts generated with: URL"
] |
[
14,
149,
13
] |
[
"passage: TAGS\n#embeddings #text #security #region-us \n# Vigil: LLM Instruction Bypass all-MiniLM-L6-v2 \n- Repo: URL\n\n'Vigil' is a Python framework and REST API for assessing Large Language Model (LLM) prompts against a set of scanners to detect prompt injections, jailbreaks, and other potentially risky inputs.\n\nThis repository contains 'all-MiniLM-L6-v2' embeddings for all Instruction Bypass style prompts (\"Ignore instructions ...\") used by Vigil.\n\nYou can use the URL utility to load the embeddings in the Vigil chromadb instance, or use them in your own application.## Format\n\n\nInstruction bypass prompts generated with: URL"
] |
72963e8bf63f39f87c22d81a7e18811478a73b1b
|
# Vigil: LLM Jailbreak all-mpnet-base-v2
- **Repo:** [github.com/deadbits/vigil-llm](https://github.com/deadbits/vigil-llm)
`Vigil` is a Python framework and REST API for assessing Large Language Model (LLM) prompts against a set of scanners to detect prompt injections, jailbreaks, and other potentially risky inputs.
This repository contains `all-mpnet-base-v2` embeddings for all "jailbreak" prompts used by [Vigil](https://github.com/deadbits/vigil-llm).
You can use the [parquet2vdb.py](https://github.com/deadbits/vigil-llm/blob/main/vigil/utils/parquet2vdb.py) utility to load the embeddings in the Vigil chromadb instance, or use them in your own application.
## Format
```json
[
{
"text": str,
"embedding": [],
"model": "all-mpnet-base-v2"
}
]
```
Jailbreak prompts sourced from: https://github.com/laiyer-ai/llm-guard/blob/399cb2eea70afc78482db226253ddd1d85f296e3/llm_guard/resources/jailbreak.json
|
deadbits/vigil-jailbreak-all-mpnet-base-v2
|
[
"embeddings",
"text",
"security",
"region:us"
] |
2023-09-08T22:47:38+00:00
|
{"pretty_name": "Vigil: LLM Jailbreak all-mpnet-base-v2", "tags": ["embeddings", "text", "security"]}
|
2023-09-09T01:40:17+00:00
|
[] |
[] |
TAGS
#embeddings #text #security #region-us
|
# Vigil: LLM Jailbreak all-mpnet-base-v2
- Repo: URL
'Vigil' is a Python framework and REST API for assessing Large Language Model (LLM) prompts against a set of scanners to detect prompt injections, jailbreaks, and other potentially risky inputs.
This repository contains 'all-mpnet-base-v2' embeddings for all "jailbreak" prompts used by Vigil.
You can use the URL utility to load the embeddings in the Vigil chromadb instance, or use them in your own application.
## Format
Jailbreak prompts sourced from: URL
|
[
"# Vigil: LLM Jailbreak all-mpnet-base-v2\n- Repo: URL\n\n'Vigil' is a Python framework and REST API for assessing Large Language Model (LLM) prompts against a set of scanners to detect prompt injections, jailbreaks, and other potentially risky inputs.\n\nThis repository contains 'all-mpnet-base-v2' embeddings for all \"jailbreak\" prompts used by Vigil.\n\nYou can use the URL utility to load the embeddings in the Vigil chromadb instance, or use them in your own application.",
"## Format\n\n\nJailbreak prompts sourced from: URL"
] |
[
"TAGS\n#embeddings #text #security #region-us \n",
"# Vigil: LLM Jailbreak all-mpnet-base-v2\n- Repo: URL\n\n'Vigil' is a Python framework and REST API for assessing Large Language Model (LLM) prompts against a set of scanners to detect prompt injections, jailbreaks, and other potentially risky inputs.\n\nThis repository contains 'all-mpnet-base-v2' embeddings for all \"jailbreak\" prompts used by Vigil.\n\nYou can use the URL utility to load the embeddings in the Vigil chromadb instance, or use them in your own application.",
"## Format\n\n\nJailbreak prompts sourced from: URL"
] |
[
14,
139,
12
] |
[
"passage: TAGS\n#embeddings #text #security #region-us \n# Vigil: LLM Jailbreak all-mpnet-base-v2\n- Repo: URL\n\n'Vigil' is a Python framework and REST API for assessing Large Language Model (LLM) prompts against a set of scanners to detect prompt injections, jailbreaks, and other potentially risky inputs.\n\nThis repository contains 'all-mpnet-base-v2' embeddings for all \"jailbreak\" prompts used by Vigil.\n\nYou can use the URL utility to load the embeddings in the Vigil chromadb instance, or use them in your own application.## Format\n\n\nJailbreak prompts sourced from: URL"
] |
e9205bdcbf038d31d15450d59eafde327df4e28c
|
# Vigil: LLM Instruction Bypass all-mpnet-base-v2
- **Repo:** [github.com/deadbits/vigil-llm](https://github.com/deadbits/vigil-llm)
`Vigil` is a Python framework and REST API for assessing Large Language Model (LLM) prompts against a set of scanners to detect prompt injections, jailbreaks, and other potentially risky inputs.
This repository contains `all-mpnet-base-v2` embeddings for all Instruction Bypass style prompts ("Ignore instructions ...") used by [Vigil](https://github.com/deadbits/prompt-injection-defense).
You can use the [parquet2vdb.py](https://github.com/deadbits/prompt-injection-defense/blob/main/vigil/utils/parquet2vdb.py) utility to load the embeddings in the Vigil chromadb instance, or use them in your own application.
## Format
```json
[
{
"text": str,
"embedding": [],
"model": "all-mpnet-base-v2"
}
]
```
Instruction bypass prompts generated with: https://gist.github.com/deadbits/e93a90aa36c9aa7b5ce1179597a6fe3d#file-generate-phrases-py
|
deadbits/vigil-instruction-bypass-all-mpnet-base-v2
|
[
"embeddings",
"text",
"security",
"region:us"
] |
2023-09-08T22:51:12+00:00
|
{"pretty_name": "Vigil: LLM Instruction Bypass all-mpnet-base-v2", "tags": ["embeddings", "text", "security"]}
|
2023-09-09T01:40:03+00:00
|
[] |
[] |
TAGS
#embeddings #text #security #region-us
|
# Vigil: LLM Instruction Bypass all-mpnet-base-v2
- Repo: URL
'Vigil' is a Python framework and REST API for assessing Large Language Model (LLM) prompts against a set of scanners to detect prompt injections, jailbreaks, and other potentially risky inputs.
This repository contains 'all-mpnet-base-v2' embeddings for all Instruction Bypass style prompts ("Ignore instructions ...") used by Vigil.
You can use the URL utility to load the embeddings in the Vigil chromadb instance, or use them in your own application.
## Format
Instruction bypass prompts generated with: URL
|
[
"# Vigil: LLM Instruction Bypass all-mpnet-base-v2\n- Repo: URL\n\n'Vigil' is a Python framework and REST API for assessing Large Language Model (LLM) prompts against a set of scanners to detect prompt injections, jailbreaks, and other potentially risky inputs.\n\nThis repository contains 'all-mpnet-base-v2' embeddings for all Instruction Bypass style prompts (\"Ignore instructions ...\") used by Vigil.\n\nYou can use the URL utility to load the embeddings in the Vigil chromadb instance, or use them in your own application.",
"## Format\n\n\nInstruction bypass prompts generated with: URL"
] |
[
"TAGS\n#embeddings #text #security #region-us \n",
"# Vigil: LLM Instruction Bypass all-mpnet-base-v2\n- Repo: URL\n\n'Vigil' is a Python framework and REST API for assessing Large Language Model (LLM) prompts against a set of scanners to detect prompt injections, jailbreaks, and other potentially risky inputs.\n\nThis repository contains 'all-mpnet-base-v2' embeddings for all Instruction Bypass style prompts (\"Ignore instructions ...\") used by Vigil.\n\nYou can use the URL utility to load the embeddings in the Vigil chromadb instance, or use them in your own application.",
"## Format\n\n\nInstruction bypass prompts generated with: URL"
] |
[
14,
147,
13
] |
[
"passage: TAGS\n#embeddings #text #security #region-us \n# Vigil: LLM Instruction Bypass all-mpnet-base-v2\n- Repo: URL\n\n'Vigil' is a Python framework and REST API for assessing Large Language Model (LLM) prompts against a set of scanners to detect prompt injections, jailbreaks, and other potentially risky inputs.\n\nThis repository contains 'all-mpnet-base-v2' embeddings for all Instruction Bypass style prompts (\"Ignore instructions ...\") used by Vigil.\n\nYou can use the URL utility to load the embeddings in the Vigil chromadb instance, or use them in your own application.## Format\n\n\nInstruction bypass prompts generated with: URL"
] |
fe5ace6d3edb8568b6a4f608a460d3f7aef7bc0b
|
### Dataset Description:
This is a split version of [cerebras/SlimPajama-627B](https://huggingface.co/datasets/cerebras/SlimPajama-627B) that divides data based on its sources.
The content of this dataset is the same as SlimPajama-627B.
We divide data from different sources based on the "redpajama_setname" field and save each source in its own directory, which is convenient for future research on dataset combinations.
This dataset consists of 15,967 jsonl files and is ~883 GB compressed.
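Because each source lives in its own directory, a single subset can be loaded on its own. A hedged sketch with the `datasets` library follows; the directory name is an assumption based on the "redpajama_setname" values, so check the repository file listing for the actual layout.
```python
# Hedged sketch: stream one source subset instead of downloading everything.
from datasets import load_dataset

arxiv = load_dataset(
    "MBZUAI-LLM/SlimPajama-627B-DC",
    data_dir="RedPajamaArXiv",  # assumed per-source directory name
    split="train",
    streaming=True,             # ~883 GB compressed, so avoid a full download
)

for example in arxiv.take(2):   # peek at a couple of documents
    print(example["text"][:200])
```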
### Primary Usage:
This dataset is used for our study: [SlimPajama-DC: Understanding Data Combinations for LLM Training](https://arxiv.org/abs/2309.10818).
For more details about the content in this dataset, please refer to the original [cerebras/SlimPajama-627B](https://huggingface.co/datasets/cerebras/SlimPajama-627B).
### License:
Please refer to the licenses of the data subsets you use.
- [Common Crawl Foundation Terms of Use](https://commoncrawl.org/terms-of-use/full/)
- [C4 license](https://huggingface.co/datasets/allenai/c4#license)
- GitHub was limited to MIT, BSD, or Apache licenses only
- Books: [the_pile_books3 license](https://huggingface.co/datasets/the_pile_books3#licensing-information) and [pg19 license](https://huggingface.co/datasets/pg19#licensing-information)
- [ArXiv Terms of Use](https://info.arxiv.org/help/api/tou.html)
- [Wikipedia License](https://huggingface.co/datasets/wikipedia#licensing-information)
- [StackExchange license on the Internet Archive](https://archive.org/details/stackexchange)
|
MBZUAI-LLM/SlimPajama-627B-DC
|
[
"task_categories:text-generation",
"language:en",
"license:mit",
"arxiv:2309.10818",
"region:us"
] |
2023-09-08T22:58:27+00:00
|
{"language": ["en"], "license": "mit", "task_categories": ["text-generation"], "pretty_name": "SlimPajama-627B-divided"}
|
2023-09-20T05:26:19+00:00
|
[
"2309.10818"
] |
[
"en"
] |
TAGS
#task_categories-text-generation #language-English #license-mit #arxiv-2309.10818 #region-us
|
### Dataset Description:
This is a split version of cerebras/SlimPajama-627B that divides data based on its sources.
The content of this dataset is the same as SlimPajama-627B.
We divide data from different sources based on the "redpajama_setname" and save them in different directories, which is convenient for future dataset combination related research.
This dataset consists of 15,967 jsonl files and is ~ 883G compressed.
### Primary Usage:
This dataset is used for our study: SlimPajama-DC: Understanding Data Combinations for LLM Training.
For more details about the content in this dataset, please refer to the original cerebras/SlimPajama-627B.
### License:
Please refer to the licenses of the data subsets you use.
- Common Crawl Foundation Terms of Use
- C4 license
- GitHub was limited to MIT, BSD, or Apache licenses only
- Books: the_pile_books3 license and pg19 license
- ArXiv Terms of Use
- Wikipedia License
- StackExchange license on the Internet Archive
|
[
"### Dataset Description:\nThis is a split version of cerebras/SlimPajama-627B that divides data based on its sources. \nThe content of this dataset is the same as SlimPajama-627B.\nWe divide data from different sources based on the \"redpajama_setname\" and save them in different directories, which is convenient for future dataset combination related research.\n\nThis dataset consists of 15,967 jsonl files and is ~ 883G compressed.",
"### Primary Usage:\n\nThis dataset is used for our study: SlimPajama-DC: Understanding Data Combinations for LLM Training.\n\nFor more details about the content in this dataset, please refer to the original cerebras/SlimPajama-627B.",
"### License:\nPlease refer to the licenses of the data subsets you use.\n- Common Crawl Foundation Terms of Use\n- C4 license\n- GitHub was limited to MIT, BSD, or Apache licenses only\n- Books: the_pile_books3 license and pg19 license\n- ArXiv Terms of Use\n- Wikipedia License\n- StackExchange license on the Internet Archive"
] |
[
"TAGS\n#task_categories-text-generation #language-English #license-mit #arxiv-2309.10818 #region-us \n",
"### Dataset Description:\nThis is a split version of cerebras/SlimPajama-627B that divides data based on its sources. \nThe content of this dataset is the same as SlimPajama-627B.\nWe divide data from different sources based on the \"redpajama_setname\" and save them in different directories, which is convenient for future dataset combination related research.\n\nThis dataset consists of 15,967 jsonl files and is ~ 883G compressed.",
"### Primary Usage:\n\nThis dataset is used for our study: SlimPajama-DC: Understanding Data Combinations for LLM Training.\n\nFor more details about the content in this dataset, please refer to the original cerebras/SlimPajama-627B.",
"### License:\nPlease refer to the licenses of the data subsets you use.\n- Common Crawl Foundation Terms of Use\n- C4 license\n- GitHub was limited to MIT, BSD, or Apache licenses only\n- Books: the_pile_books3 license and pg19 license\n- ArXiv Terms of Use\n- Wikipedia License\n- StackExchange license on the Internet Archive"
] |
[
34,
107,
60,
85
] |
[
"passage: TAGS\n#task_categories-text-generation #language-English #license-mit #arxiv-2309.10818 #region-us \n### Dataset Description:\nThis is a split version of cerebras/SlimPajama-627B that divides data based on its sources. \nThe content of this dataset is the same as SlimPajama-627B.\nWe divide data from different sources based on the \"redpajama_setname\" and save them in different directories, which is convenient for future dataset combination related research.\n\nThis dataset consists of 15,967 jsonl files and is ~ 883G compressed.### Primary Usage:\n\nThis dataset is used for our study: SlimPajama-DC: Understanding Data Combinations for LLM Training.\n\nFor more details about the content in this dataset, please refer to the original cerebras/SlimPajama-627B.### License:\nPlease refer to the licenses of the data subsets you use.\n- Common Crawl Foundation Terms of Use\n- C4 license\n- GitHub was limited to MIT, BSD, or Apache licenses only\n- Books: the_pile_books3 license and pg19 license\n- ArXiv Terms of Use\n- Wikipedia License\n- StackExchange license on the Internet Archive"
] |
fd1fb662a2ce89fc608c11f6a4c996d88040d1e0
|
# Dataset Card for "guanaco-llama2-1k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
rdarisi/guanaco-llama2-1k
|
[
"region:us"
] |
2023-09-09T00:02:21+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1654448, "num_examples": 1000}], "download_size": 966693, "dataset_size": 1654448}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-09T00:02:26+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "guanaco-llama2-1k"
More Information needed
|
[
"# Dataset Card for \"guanaco-llama2-1k\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"guanaco-llama2-1k\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"guanaco-llama2-1k\"\n\nMore Information needed"
] |