| sha (stringlengths 40..40) | text (stringlengths 1..13.4M) | id (stringlengths 2..117) | tags (listlengths 1..7.91k) | created_at (stringlengths 25..25) | metadata (stringlengths 2..875k) | last_modified (stringlengths 25..25) | arxiv (listlengths 0..25) | languages (listlengths 0..7.91k) | tags_str (stringlengths 17..159k) | text_str (stringlengths 1..447k) | text_lists (listlengths 0..352) | processed_texts (listlengths 1..353) | tokens_length (listlengths 1..353) | input_texts (listlengths 1..40) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
810f47e893b002376104525db898ca4d2dcdcdb3 | # Dataset Card for "MyPubChem10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Lollitor/MyPubChem10 | [
"region:us"
]
| 2023-10-31T13:02:30+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1482327.0, "num_examples": 9000}, {"name": "validation", "num_bytes": 164703.0, "num_examples": 1000}], "download_size": 514907, "dataset_size": 1647030.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}]} | 2023-10-31T13:03:18+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "MyPubChem10"
More Information needed | [
"# Dataset Card for \"MyPubChem10\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"MyPubChem10\"\n\nMore Information needed"
]
| [
6,
15
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"MyPubChem10\"\n\nMore Information needed"
]
|
6a8ea150826384d8ce8581d0222daab2286ae15b | # Dataset Card for "medflex"
dataset = load_dataset("kosta-naumenko/medflex", split='train', download_mode='force_redownload', verification_mode='no_checks')
'tokens' - a list of lists of sentence words (is_split_into_words=True during tokenization)
'ner_tags' - a list of lists of word classes
- 0 - not a symptom
- 1 - beginning of a symptom
- 2 - continuation of a symptom
An example of further processing - https://huggingface.co/learn/nlp-course/chapter7/2
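Below is a minimal sketch of that further processing, assuming only the 'tokens' and 'ner_tags' features described above; the tokenizer checkpoint is an illustrative choice, not part of the dataset:
```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Load the dataset and a (hypothetically chosen) multilingual tokenizer.
dataset = load_dataset("kosta-naumenko/medflex", split="train")
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")

example = dataset[0]
# Tokenize pre-split words so word-level tags can be aligned to subword tokens.
encoding = tokenizer(example["tokens"], is_split_into_words=True, truncation=True)

# Propagate each word's tag (0 = not a symptom, 1 = symptom start, 2 = symptom continuation)
# to its subword tokens; special tokens get -100 so the loss ignores them.
labels = [
    -100 if word_id is None else example["ner_tags"][word_id]
    for word_id in encoding.word_ids()
]
print(encoding.tokens()[:10], labels[:10])
```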
| kosta-naumenko/medflex | [
"region:us"
]
| 2023-10-31T13:14:13+00:00 | {"dataset_info": {"features": [{"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 2574069, "num_examples": 1934}], "download_size": 314783, "dataset_size": 2574069}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-17T12:38:41+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "medflex"
dataset = load_dataset("kosta-naumenko/medflex", split='train', download_mode='force_redownload', verification_mode='no_checks')
'tokens' - a list of lists of sentence words (is_split_into_words=True during tokenization)
'ner_tags' - a list of lists of word classes
- 0 - not a symptom
- 1 - beginning of a symptom
- 2 - continuation of a symptom
An example of further processing - URL
| [
"# Dataset Card for \"medflex\"\ndataset = load_dataset(\"kosta-naumenko/medflex\", split='train', download_mode='force_redownload', verification_mode='no_checks')\n\n'tokens' - список списков слов предложений (is_split_into_words=True при токенизации)\n\n'ner_tags' - список списков классов слов\n\n\n- 0 - не симптом\n- 1 - начало симптома\n- 2 - продолжение симптома\n\nПример дальнейшей обработки - URL"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"medflex\"\ndataset = load_dataset(\"kosta-naumenko/medflex\", split='train', download_mode='force_redownload', verification_mode='no_checks')\n\n'tokens' - список списков слов предложений (is_split_into_words=True при токенизации)\n\n'ner_tags' - список списков классов слов\n\n\n- 0 - не симптом\n- 1 - начало симптома\n- 2 - продолжение симптома\n\nПример дальнейшей обработки - URL"
]
| [
6,
124
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"medflex\"\ndataset = load_dataset(\"kosta-naumenko/medflex\", split='train', download_mode='force_redownload', verification_mode='no_checks')\n\n'tokens' - список списков слов предложений (is_split_into_words=True при токенизации)\n\n'ner_tags' - список списков классов слов\n\n\n- 0 - не симптом\n- 1 - начало симптома\n- 2 - продолжение симптома\n\nПример дальнейшей обработки - URL"
]
|
267adba472d8d85cd78568410cdb66381331b3fc | # Dataset Card for "fc49f34a"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | result-kand2-sdxl-wuerst-karlo/fc49f34a | [
"region:us"
]
| 2023-10-31T13:15:13+00:00 | {"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 233, "num_examples": 10}], "download_size": 1394, "dataset_size": 233}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-10-31T13:15:14+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "fc49f34a"
More Information needed | [
"# Dataset Card for \"fc49f34a\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"fc49f34a\"\n\nMore Information needed"
]
| [
6,
15
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"fc49f34a\"\n\nMore Information needed"
]
|
153838a217e46caf8f444cf4a56ca4a6aa8c3140 | # ML4SE23_G1_EvolInstruct-SCoT-1k
EvolInstruct enhanced 1k entries dataset with Structured-Chain-of-Thought | AISE-TUDelft/ML4SE23_G1_EvolInstruct-SCoT-1k | [
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:en",
"code",
"region:us"
]
| 2023-10-31T13:22:58+00:00 | {"language": ["en"], "size_categories": ["1K<n<10K"], "task_categories": ["text-generation"], "pretty_name": "EvolInstruct enhanced 1k entries dataset with Structured-Chain-of-Thought", "tags": ["code"]} | 2023-10-31T13:24:09+00:00 | []
| [
"en"
]
| TAGS
#task_categories-text-generation #size_categories-1K<n<10K #language-English #code #region-us
| # ML4SE23_G1_EvolInstruct-SCoT-1k
EvolInstruct enhanced 1k entries dataset with Structured-Chain-of-Thought | [
"# ML4SE23_G1_EvolInstruct-SCoT-1k \n\nEvolInstruct enhanced 1k entries dataset with Structured-Chain-of-Thought"
]
| [
"TAGS\n#task_categories-text-generation #size_categories-1K<n<10K #language-English #code #region-us \n",
"# ML4SE23_G1_EvolInstruct-SCoT-1k \n\nEvolInstruct enhanced 1k entries dataset with Structured-Chain-of-Thought"
]
| [
35,
45
]
| [
"passage: TAGS\n#task_categories-text-generation #size_categories-1K<n<10K #language-English #code #region-us \n# ML4SE23_G1_EvolInstruct-SCoT-1k \n\nEvolInstruct enhanced 1k entries dataset with Structured-Chain-of-Thought"
]
|
6b512118adb60dddaaa06f441e3b31002e7eeb02 | # ML4SE23_G1_HumanEval-SCoT
HumanEval dataset enhanced with Structured-Chain-of-Thought | AISE-TUDelft/ML4SE23_G1_HumanEval-SCoT | [
"task_categories:text-generation",
"size_categories:n<1K",
"language:en",
"code",
"region:us"
]
| 2023-10-31T13:27:26+00:00 | {"language": ["en"], "size_categories": ["n<1K"], "task_categories": ["text-generation"], "pretty_name": "HumanEval dataset enhanced with Structured-Chain-of-Thought", "tags": ["code"]} | 2023-10-31T13:28:52+00:00 | []
| [
"en"
]
| TAGS
#task_categories-text-generation #size_categories-n<1K #language-English #code #region-us
| # ML4SE23_G1_HumanEval-SCoT
HumanEval dataset enhanced with Structured-Chain-of-Thought | [
"# ML4SE23_G1_HumanEval-SCoT\n\nHumanEval dataset enhanced with Structured-Chain-of-Thought"
]
| [
"TAGS\n#task_categories-text-generation #size_categories-n<1K #language-English #code #region-us \n",
"# ML4SE23_G1_HumanEval-SCoT\n\nHumanEval dataset enhanced with Structured-Chain-of-Thought"
]
| [
33,
37
]
| [
"passage: TAGS\n#task_categories-text-generation #size_categories-n<1K #language-English #code #region-us \n# ML4SE23_G1_HumanEval-SCoT\n\nHumanEval dataset enhanced with Structured-Chain-of-Thought"
]
|
e207def5646ac4f4b8d3fad7b0862c80cea7a31b | # ML4SE23_G1_MBPP-SCoT
MBPP enhanced dataset with Structured-Chain-of-Thought | AISE-TUDelft/ML4SE23_G1_MBPP-SCoT | [
"task_categories:text-generation",
"size_categories:n<1K",
"language:en",
"code",
"region:us"
]
| 2023-10-31T13:30:41+00:00 | {"language": ["en"], "size_categories": ["n<1K"], "task_categories": ["text-generation"], "pretty_name": "MBPP enhanced dataset with Structured-Chain-of-Thought", "tags": ["code"]} | 2023-10-31T13:31:41+00:00 | []
| [
"en"
]
| TAGS
#task_categories-text-generation #size_categories-n<1K #language-English #code #region-us
| # ML4SE23_G1_MBPP-SCoT
MBPP enhanced dataset with Structured-Chain-of-Thought | [
"# ML4SE23_G1_MBPP-SCoT\n\nMBPP enhanced dataset with Structured-Chain-of-Thought"
]
| [
"TAGS\n#task_categories-text-generation #size_categories-n<1K #language-English #code #region-us \n",
"# ML4SE23_G1_MBPP-SCoT\n\nMBPP enhanced dataset with Structured-Chain-of-Thought"
]
| [
33,
35
]
| [
"passage: TAGS\n#task_categories-text-generation #size_categories-n<1K #language-English #code #region-us \n# ML4SE23_G1_MBPP-SCoT\n\nMBPP enhanced dataset with Structured-Chain-of-Thought"
]
|
e18f81318d8ccc5fb221e6c4e9c9c988827ddde9 | # ML4SE23_G1_MBCPP-SCoT
MBCPP enhanced dataset with Structured-Chain-of-Thought | AISE-TUDelft/ML4SE23_G1_MBCPP-SCoT | [
"task_categories:text-generation",
"size_categories:n<1K",
"language:en",
"code",
"region:us"
]
| 2023-10-31T13:32:13+00:00 | {"language": ["en"], "size_categories": ["n<1K"], "task_categories": ["text-generation"], "pretty_name": "MBCPP enhanced dataset with Structured-Chain-of-Thought", "tags": ["code"]} | 2023-10-31T13:33:04+00:00 | []
| [
"en"
]
| TAGS
#task_categories-text-generation #size_categories-n<1K #language-English #code #region-us
| # ML4SE23_G1_MBCPP-SCoT
MBCPP enhanced dataset with Structured-Chain-of-Thought | [
"# ML4SE23_G1_MBCPP-SCoT\n\nMBCPP enhanced dataset with Structured-Chain-of-Thought"
]
| [
"TAGS\n#task_categories-text-generation #size_categories-n<1K #language-English #code #region-us \n",
"# ML4SE23_G1_MBCPP-SCoT\n\nMBCPP enhanced dataset with Structured-Chain-of-Thought"
]
| [
33,
36
]
| [
"passage: TAGS\n#task_categories-text-generation #size_categories-n<1K #language-English #code #region-us \n# ML4SE23_G1_MBCPP-SCoT\n\nMBCPP enhanced dataset with Structured-Chain-of-Thought"
]
|
16e4092c986940d9e32a6e94d0ccbe4285e1e325 | # Dataset Card for "autotrain-data-qozf-4adi-9pul"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | mhmtcrkglu/autotrain-data-qozf-4adi-9pul | [
"region:us"
]
| 2023-10-31T13:35:18+00:00 | {"dataset_info": {"features": [{"name": "instruction", "dtype": "null"}, {"name": "input", "dtype": "null"}, {"name": "output", "dtype": "null"}, {"name": "autotrain_text", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}, {"name": "validation", "num_bytes": 0, "num_examples": 0}], "download_size": 2318, "dataset_size": 0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}]} | 2023-10-31T13:35:19+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "autotrain-data-qozf-4adi-9pul"
More Information needed | [
"# Dataset Card for \"autotrain-data-qozf-4adi-9pul\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"autotrain-data-qozf-4adi-9pul\"\n\nMore Information needed"
]
| [
6,
23
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"autotrain-data-qozf-4adi-9pul\"\n\nMore Information needed"
]
|
cdb941e193dd4c15b0c8c4a43f91e64cdc620baf | # Dataset Card for "find_first_sent_train_50_eval_10_sentbefore"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/find_first_sent_train_50_eval_10_sentbefore | [
"region:us"
]
| 2023-10-31T13:38:22+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 222236, "num_examples": 170}, {"name": "validation", "num_bytes": 9027, "num_examples": 10}], "download_size": 79508, "dataset_size": 231263}} | 2023-10-31T14:58:37+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "find_first_sent_train_50_eval_10_sentbefore"
More Information needed | [
"# Dataset Card for \"find_first_sent_train_50_eval_10_sentbefore\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"find_first_sent_train_50_eval_10_sentbefore\"\n\nMore Information needed"
]
| [
6,
30
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"find_first_sent_train_50_eval_10_sentbefore\"\n\nMore Information needed"
]
|
28b2473fd4716c3c66f4ae55c2fadee88aad7fc4 | # Dataset Card for "find_second_sent_train_50_eval_10_sentbefore"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/find_second_sent_train_50_eval_10_sentbefore | [
"region:us"
]
| 2023-10-31T13:38:29+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 220505, "num_examples": 170}, {"name": "validation", "num_bytes": 9071, "num_examples": 10}], "download_size": 92636, "dataset_size": 229576}} | 2023-10-31T14:58:45+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "find_second_sent_train_50_eval_10_sentbefore"
More Information needed | [
"# Dataset Card for \"find_second_sent_train_50_eval_10_sentbefore\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"find_second_sent_train_50_eval_10_sentbefore\"\n\nMore Information needed"
]
| [
6,
29
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"find_second_sent_train_50_eval_10_sentbefore\"\n\nMore Information needed"
]
|
1bf78aaa31f491514259a1171ba815a488128765 | # Dataset Card for "find_last_sent_train_50_eval_10_sentbefore"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/find_last_sent_train_50_eval_10_sentbefore | [
"region:us"
]
| 2023-10-31T13:38:35+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 220781, "num_examples": 170}, {"name": "validation", "num_bytes": 8961, "num_examples": 10}], "download_size": 102996, "dataset_size": 229742}} | 2023-10-31T14:58:53+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "find_last_sent_train_50_eval_10_sentbefore"
More Information needed | [
"# Dataset Card for \"find_last_sent_train_50_eval_10_sentbefore\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"find_last_sent_train_50_eval_10_sentbefore\"\n\nMore Information needed"
]
| [
6,
29
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"find_last_sent_train_50_eval_10_sentbefore\"\n\nMore Information needed"
]
|
18c33f6c0a6e8c368ef161b80b74f3da4aee2357 | # Dataset Card for "apt_pretrain_textbook_16k-1k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | communityai/apt_pretrain_textbook_16k-1k | [
"region:us"
]
| 2023-10-31T13:41:43+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 101687189.03313944, "num_examples": 1000}], "download_size": 51289141, "dataset_size": 101687189.03313944}} | 2023-10-31T13:41:49+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "apt_pretrain_textbook_16k-1k"
More Information needed | [
"# Dataset Card for \"apt_pretrain_textbook_16k-1k\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"apt_pretrain_textbook_16k-1k\"\n\nMore Information needed"
]
| [
6,
23
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"apt_pretrain_textbook_16k-1k\"\n\nMore Information needed"
]
|
9fd0a56557783ed9b04f023cf68782882d3d4c14 | # Dataset Card for "empty_function_kaggle"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | liyongsea/empty_function_kaggle | [
"region:us"
]
| 2023-10-31T13:45:38+00:00 | {"dataset_info": {"features": [{"name": "file_id", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "local_path", "dtype": "string"}, {"name": "kaggle_dataset_name", "dtype": "string"}, {"name": "kaggle_dataset_owner", "dtype": "string"}, {"name": "kversion", "dtype": "string"}, {"name": "kversion_datasetsources", "dtype": "string"}, {"name": "dataset_versions", "dtype": "string"}, {"name": "datasets", "dtype": "string"}, {"name": "users", "dtype": "string"}, {"name": "script", "dtype": "string"}, {"name": "df_info", "dtype": "string"}, {"name": "has_data_info", "dtype": "bool"}, {"name": "nb_filenames", "dtype": "int64"}, {"name": "retreived_data_description", "dtype": "string"}, {"name": "script_nb_tokens", "dtype": "int64"}, {"name": "upvotes", "dtype": "int64"}, {"name": "tokens_description", "dtype": "int64"}, {"name": "tokens_script", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1895686.5998786655, "num_examples": 84}], "download_size": 1763341, "dataset_size": 1895686.5998786655}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-10-31T13:46:01+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "empty_function_kaggle"
More Information needed | [
"# Dataset Card for \"empty_function_kaggle\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"empty_function_kaggle\"\n\nMore Information needed"
]
| [
6,
17
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"empty_function_kaggle\"\n\nMore Information needed"
]
|
03ab716e1b83b04b84a472bfe9e45dbe63a78958 | # Dataset Card for "llm-MIDI3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | youyu0105/llm-MIDI3 | [
"region:us"
]
| 2023-10-31T13:45:49+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 559354, "num_examples": 248}], "download_size": 135879, "dataset_size": 559354}} | 2023-10-31T13:45:53+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "llm-MIDI3"
More Information needed | [
"# Dataset Card for \"llm-MIDI3\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"llm-MIDI3\"\n\nMore Information needed"
]
| [
6,
16
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"llm-MIDI3\"\n\nMore Information needed"
]
|
d4a6328b47ca00fdd9535686e3bb2f8d3e1f8da2 | * 文档和问答对都来自 [Multi-Doc-QA-Chinese](https://huggingface.co/datasets/yuyijiong/Multi-Doc-QA-Chinese),通过随机抽取和组合形成多轮问答形式。
* 推荐直接使用原始数据集[Multi-Doc-QA-Chinese](https://huggingface.co/datasets/yuyijiong/Multi-Doc-QA-Chinese)自己生成指令微调数据,可以控制参考文档和问答的数量
* 经过随机组合,每条数据形成了 20-60个参考文档 + 10个问答对的形式
* chat格式为[chatml](https://github.com/openai/openai-python/blob/main/chatml.md) | yuyijiong/Multi-Doc-Multi-QA-Chinese | [
"size_categories:1K<n<10K",
"language:zh",
"license:cc-by-nc-4.0",
"region:us"
]
| 2023-10-31T13:49:23+00:00 | {"language": ["zh"], "license": "cc-by-nc-4.0", "size_categories": ["1K<n<10K"]} | 2023-11-22T08:20:23+00:00 | []
| [
"zh"
]
| TAGS
#size_categories-1K<n<10K #language-Chinese #license-cc-by-nc-4.0 #region-us
| * The documents and QA pairs all come from Multi-Doc-QA-Chinese, combined into a multi-turn QA format by random sampling and recombination.
* It is recommended to use the original dataset Multi-Doc-QA-Chinese directly to generate your own instruction-tuning data, which lets you control the number of reference documents and QA pairs.
* After random combination, each entry consists of 20-60 reference documents plus 10 QA pairs.
* The chat format is chatml | []
| [
"TAGS\n#size_categories-1K<n<10K #language-Chinese #license-cc-by-nc-4.0 #region-us \n"
]
| [
34
]
| [
"passage: TAGS\n#size_categories-1K<n<10K #language-Chinese #license-cc-by-nc-4.0 #region-us \n"
]
|
5d220fe7dad887162bfd0e5eec75fc016410788a | # Dataset Card for "llm-MIDI4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | youyu0105/llm-MIDI4 | [
"region:us"
]
| 2023-10-31T13:55:41+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 570535, "num_examples": 335}], "download_size": 131987, "dataset_size": 570535}} | 2023-10-31T13:55:47+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "llm-MIDI4"
More Information needed | [
"# Dataset Card for \"llm-MIDI4\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"llm-MIDI4\"\n\nMore Information needed"
]
| [
6,
16
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"llm-MIDI4\"\n\nMore Information needed"
]
|
15c93a0b66782812b6144635809b95005741b6e7 | # Dataset Card for "ms-marco-ai-text-generation-20k-40k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | rajendrabaskota/ms-marco-ai-text-generation-20k-40k | [
"region:us"
]
| 2023-10-31T14:13:08+00:00 | {"dataset_info": {"features": [{"name": "answers", "sequence": "string"}, {"name": "passages", "struct": [{"name": "is_selected", "sequence": "int32"}, {"name": "passage_text", "sequence": "string"}, {"name": "url", "sequence": "string"}]}, {"name": "query", "dtype": "string"}, {"name": "query_id", "dtype": "int32"}, {"name": "query_type", "dtype": "string"}, {"name": "wellFormedAnswers", "sequence": "null"}, {"name": "ai_answers", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 85546820, "num_examples": 20000}], "download_size": 42764498, "dataset_size": 85546820}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-02T15:35:21+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "ms-marco-ai-text-generation-20k-40k"
More Information needed | [
"# Dataset Card for \"ms-marco-ai-text-generation-20k-40k\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"ms-marco-ai-text-generation-20k-40k\"\n\nMore Information needed"
]
| [
6,
25
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"ms-marco-ai-text-generation-20k-40k\"\n\nMore Information needed"
]
|
9cf2834d57157f7fca77565422401426660e2a71 | # Dataset Card for "guanaco-llama2-1k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | mhmtcrkglu/guanaco-llama2-1k | [
"region:us"
]
| 2023-10-31T14:16:22+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1654448, "num_examples": 1000}], "download_size": 966693, "dataset_size": 1654448}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-10-31T14:16:24+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "guanaco-llama2-1k"
More Information needed | [
"# Dataset Card for \"guanaco-llama2-1k\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"guanaco-llama2-1k\"\n\nMore Information needed"
]
| [
6,
18
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"guanaco-llama2-1k\"\n\nMore Information needed"
]
|
11067f84bbe568fe2ab58d2902c6a79768456754 | Airoboros 3.1 dataset with the spicy/decensorship data re-added. | unalignment/spicy-3.1 | [
"license:cc-by-4.0",
"not-for-all-audiences",
"region:us"
]
| 2023-10-31T14:31:52+00:00 | {"license": "cc-by-4.0", "tags": ["not-for-all-audiences"]} | 2023-12-26T18:08:45+00:00 | []
| []
| TAGS
#license-cc-by-4.0 #not-for-all-audiences #region-us
| Airoboros 3.1 dataset with the spicy/decensorship data re-added. | []
| [
"TAGS\n#license-cc-by-4.0 #not-for-all-audiences #region-us \n"
]
| [
24
]
| [
"passage: TAGS\n#license-cc-by-4.0 #not-for-all-audiences #region-us \n"
]
|
c3da8802eabc66931efd859931414ecd68d9b1d3 | # Coding Tutorials
This comprehensive dataset consists of **500,000** documents, summing up to around **1.5 billion** tokens.
Predominantly composed of coding tutorials, it has been meticulously compiled from various web crawl datasets like **RefinedWeb**, **OSCAR**, and **Escorpius**.
The selection process involved stringent regular-expression filtering of files to ensure that the retained content contains programming code (in most cases).
These tutorials offer more than mere code snippets.
They provide an extensive context, including the rationale behind the code, the problem being addressed, and detailed step-by-step instructions.
This layered context is helpful for training a code LM, enabling it to discern the user intent behind a piece of code and thus facilitating more contextually relevant assistance.
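As an illustration of the filtering step described above, here is a minimal sketch of a regular-expression code detector; the patterns and the threshold are assumptions, not the filters actually used to build this dataset:
```python
import re

# Hypothetical patterns hinting at code content; not the actual filters used for this dataset.
CODE_PATTERNS = [
    re.compile(r"\b(def|class|import|return)\b"),      # Python-like keywords
    re.compile(r"\b(public|static|void)\b|#include"),  # C/C++/Java-like keywords and includes
    re.compile(r"[{};]\s*$", re.MULTILINE),            # lines ending in braces or semicolons
    re.compile(r"^\s{4,}\S", re.MULTILINE),            # indented blocks typical of code listings
]

def looks_like_code_tutorial(text: str, min_hits: int = 2) -> bool:
    """Return True if enough code-like patterns fire; the threshold is an assumption."""
    hits = sum(1 for pattern in CODE_PATTERNS if pattern.search(text))
    return hits >= min_hits

docs = [
    "How to add two numbers in Python:\n\n    def add(a, b):\n        return a + b\n",
    "Plain prose with no code at all.",
]
print([looks_like_code_tutorial(d) for d in docs])  # [True, False]
```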
### Programming Language Distribution
```
cpp ▏ 39% █████████████████████████
python ▏ 25% ████████████████
java ▏ 16% ██████████
csharp ▏ 3% ██
javascript ▏ 1% ▋
kotlin ▏ 1% ▋
other ▏ 14% █████████
```
### Natural language distribution
```
en ▏ 80% █████████████████████████
ru ▏ 16% █████
zh ▏ 2% ▋
es ▏ 2% ▋
``` | mponty/code_tutorials | [
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:en",
"language:ru",
"language:zh",
"language:es",
"code",
"region:us"
]
| 2023-10-31T14:32:09+00:00 | {"language": ["en", "ru", "zh", "es"], "size_categories": ["100K<n<1M"], "task_categories": ["text-generation"], "pretty_name": "k", "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "dump", "dtype": "string"}, {"name": "lang", "dtype": "string"}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3124929718.313386, "num_examples": 518410}], "download_size": 2971113091, "dataset_size": 3124929718.313386}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "tags": ["code"]} | 2023-11-01T02:22:58+00:00 | []
| [
"en",
"ru",
"zh",
"es"
]
| TAGS
#task_categories-text-generation #size_categories-100K<n<1M #language-English #language-Russian #language-Chinese #language-Spanish #code #region-us
| # Coding Tutorials
This comprehensive dataset consists of 500,000 documents, summing up to around 1.5 billion tokens.
Predominantly composed of coding tutorials, it has been meticulously compiled from various web crawl datasets like RefinedWeb, OSCAR, and Escorpius.
The selection process involved stringent regular-expression filtering of files to ensure that the retained content contains programming code (in most cases).
These tutorials offer more than mere code snippets.
They provide an extensive context, including the rationale behind the code, the problem being addressed, and detailed step-by-step instructions.
This layered context is helpful for training a code LM, enabling it to discern the user intent behind a piece of code and thus facilitating more contextually relevant assistance.
### Programming Language Distribution
### Natural language distribution
| [
"# Coding Tutorials\n\nThis comprehensive dataset consists of 500,000 documents, summing up to around 1.5 billion tokens. \nPredominantly composed of coding tutorials, it has been meticulously compiled from various web crawl datasets like RefinedWeb, OSCAR, and Escorpius.\nThe selection process involved a stringent filtering of files using regular expressions to ensure the inclusion of content that contains programming code (most of them).\n\nThese tutorials offer more than mere code snippets.\nThey provide an extensive context, including the rationale behind the code, the problem being addressed, and detailed step-by-step instructions. \nThis layered context is helpful for training a code-LM model, enabling it to discern the user intent behind a piece of code and thus facilitating more contextually relevant assistance.",
"### Programming Language Distribution",
"### Natural language distribution"
]
| [
"TAGS\n#task_categories-text-generation #size_categories-100K<n<1M #language-English #language-Russian #language-Chinese #language-Spanish #code #region-us \n",
"# Coding Tutorials\n\nThis comprehensive dataset consists of 500,000 documents, summing up to around 1.5 billion tokens. \nPredominantly composed of coding tutorials, it has been meticulously compiled from various web crawl datasets like RefinedWeb, OSCAR, and Escorpius.\nThe selection process involved a stringent filtering of files using regular expressions to ensure the inclusion of content that contains programming code (most of them).\n\nThese tutorials offer more than mere code snippets.\nThey provide an extensive context, including the rationale behind the code, the problem being addressed, and detailed step-by-step instructions. \nThis layered context is helpful for training a code-LM model, enabling it to discern the user intent behind a piece of code and thus facilitating more contextually relevant assistance.",
"### Programming Language Distribution",
"### Natural language distribution"
]
| [
50,
186,
7,
5
]
| [
"passage: TAGS\n#task_categories-text-generation #size_categories-100K<n<1M #language-English #language-Russian #language-Chinese #language-Spanish #code #region-us \n# Coding Tutorials\n\nThis comprehensive dataset consists of 500,000 documents, summing up to around 1.5 billion tokens. \nPredominantly composed of coding tutorials, it has been meticulously compiled from various web crawl datasets like RefinedWeb, OSCAR, and Escorpius.\nThe selection process involved a stringent filtering of files using regular expressions to ensure the inclusion of content that contains programming code (most of them).\n\nThese tutorials offer more than mere code snippets.\nThey provide an extensive context, including the rationale behind the code, the problem being addressed, and detailed step-by-step instructions. \nThis layered context is helpful for training a code-LM model, enabling it to discern the user intent behind a piece of code and thus facilitating more contextually relevant assistance.### Programming Language Distribution### Natural language distribution"
]
|
f6e6480c8a26cf63436a5b5be96861a86f222ad5 | # Dataset Card for "shEMO_nosplits"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | minoosh/shEMO_nosplits | [
"region:us"
]
| 2023-10-31T14:36:38+00:00 | {"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "emotion", "dtype": {"class_label": {"names": {"0": "A", "1": "H", "2": "N", "3": "S", "4": "W", "5": "F"}}}}], "splits": [{"name": "train", "num_bytes": 1063025462.0, "num_examples": 3000}], "download_size": 1043899084, "dataset_size": 1063025462.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-10-31T14:37:27+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "shEMO_nosplits"
More Information needed | [
"# Dataset Card for \"shEMO_nosplits\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"shEMO_nosplits\"\n\nMore Information needed"
]
| [
6,
17
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"shEMO_nosplits\"\n\nMore Information needed"
]
|
ad21d86b8cde0fd2e99a28335e8a5fd4bdc8a672 |
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | smanut/train-test-dataset-example | [
"region:us"
]
| 2023-10-31T14:37:31+00:00 | {} | 2023-10-31T15:22:53+00:00 | []
| []
| TAGS
#region-us
|
# Dataset Card for Dataset Name
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
]
| [
6,
34,
4,
40,
29,
3,
4,
9,
6,
5,
7,
4,
7,
10,
9,
5,
9,
8,
10,
46,
8,
7,
10,
5
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact"
]
|
762d1a49abbd281cb961d2a729a956edcc487507 | # Dataset Card for "ultrafeedback-instruction-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | alvarobartt/ultrafeedback-instruction-dataset | [
"region:us"
]
| 2023-10-31T14:51:32+00:00 | {"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "generations", "sequence": "string"}, {"name": "raw_generation_response", "sequence": "string"}, {"name": "rating", "sequence": "int64"}, {"name": "rationale", "sequence": "string"}, {"name": "raw_labelling_response", "struct": [{"name": "choices", "list": [{"name": "finish_reason", "dtype": "string"}, {"name": "index", "dtype": "int64"}, {"name": "message", "struct": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}]}, {"name": "created", "dtype": "int64"}, {"name": "id", "dtype": "string"}, {"name": "model", "dtype": "string"}, {"name": "object", "dtype": "string"}, {"name": "usage", "struct": [{"name": "completion_tokens", "dtype": "int64"}, {"name": "prompt_tokens", "dtype": "int64"}, {"name": "total_tokens", "dtype": "int64"}]}]}], "splits": [{"name": "train", "num_bytes": 167493, "num_examples": 50}], "download_size": 98372, "dataset_size": 167493}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-10-31T14:51:34+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "ultrafeedback-instruction-dataset"
More Information needed | [
"# Dataset Card for \"ultrafeedback-instruction-dataset\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"ultrafeedback-instruction-dataset\"\n\nMore Information needed"
]
| [
6,
19
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"ultrafeedback-instruction-dataset\"\n\nMore Information needed"
]
|
12d7a7639ff1d4d2582fb255838d0ec72d8bc7b4 | # Dataset Card for "find_first_sent_train_10_eval_10_sentbefore"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/find_first_sent_train_10_eval_10_sentbefore | [
"region:us"
]
| 2023-10-31T14:57:21+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 69119, "num_examples": 50}, {"name": "validation", "num_bytes": 9130, "num_examples": 10}], "download_size": 45538, "dataset_size": 78249}} | 2023-10-31T14:57:26+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "find_first_sent_train_10_eval_10_sentbefore"
More Information needed | [
"# Dataset Card for \"find_first_sent_train_10_eval_10_sentbefore\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"find_first_sent_train_10_eval_10_sentbefore\"\n\nMore Information needed"
]
| [
6,
30
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"find_first_sent_train_10_eval_10_sentbefore\"\n\nMore Information needed"
]
|
68b1db72b97753d32fb2311e2ab80ffe20b1c59c | # Dataset Card for "find_second_sent_train_10_eval_10_sentbefore"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/find_second_sent_train_10_eval_10_sentbefore | [
"region:us"
]
| 2023-10-31T14:57:26+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 68758, "num_examples": 50}, {"name": "validation", "num_bytes": 8997, "num_examples": 10}], "download_size": 47774, "dataset_size": 77755}} | 2023-10-31T14:57:32+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "find_second_sent_train_10_eval_10_sentbefore"
More Information needed | [
"# Dataset Card for \"find_second_sent_train_10_eval_10_sentbefore\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"find_second_sent_train_10_eval_10_sentbefore\"\n\nMore Information needed"
]
| [
6,
29
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"find_second_sent_train_10_eval_10_sentbefore\"\n\nMore Information needed"
]
|
71b4c1bb9b5e4f9619b0351c99c05dda1462c812 | # Dataset Card for "find_last_sent_train_10_eval_10_sentbefore"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/find_last_sent_train_10_eval_10_sentbefore | [
"region:us"
]
| 2023-10-31T14:57:32+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 68765, "num_examples": 50}, {"name": "validation", "num_bytes": 8980, "num_examples": 10}], "download_size": 52757, "dataset_size": 77745}} | 2023-10-31T14:57:37+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "find_last_sent_train_10_eval_10_sentbefore"
More Information needed | [
"# Dataset Card for \"find_last_sent_train_10_eval_10_sentbefore\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"find_last_sent_train_10_eval_10_sentbefore\"\n\nMore Information needed"
]
| [
6,
29
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"find_last_sent_train_10_eval_10_sentbefore\"\n\nMore Information needed"
]
|
84ac7cb26a5eb5177bb42799348102c8345f534e | # Dataset Card for "find_first_sent_train_100_eval_10_sentbefore"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/find_first_sent_train_100_eval_10_sentbefore | [
"region:us"
]
| 2023-10-31T14:59:06+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 435057, "num_examples": 320}, {"name": "validation", "num_bytes": 10399, "num_examples": 10}], "download_size": 136011, "dataset_size": 445456}} | 2023-10-31T14:59:11+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "find_first_sent_train_100_eval_10_sentbefore"
More Information needed | [
"# Dataset Card for \"find_first_sent_train_100_eval_10_sentbefore\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"find_first_sent_train_100_eval_10_sentbefore\"\n\nMore Information needed"
]
| [
6,
30
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"find_first_sent_train_100_eval_10_sentbefore\"\n\nMore Information needed"
]
|
93b9924bb0b6d8ac1424cccabedadf27b2fca8b1 | # Dataset Card for "find_second_sent_train_100_eval_10_sentbefore"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/find_second_sent_train_100_eval_10_sentbefore | [
"region:us"
]
| 2023-10-31T14:59:12+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 433640, "num_examples": 320}, {"name": "validation", "num_bytes": 9977, "num_examples": 10}], "download_size": 157161, "dataset_size": 443617}} | 2023-10-31T14:59:17+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "find_second_sent_train_100_eval_10_sentbefore"
More Information needed | [
"# Dataset Card for \"find_second_sent_train_100_eval_10_sentbefore\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"find_second_sent_train_100_eval_10_sentbefore\"\n\nMore Information needed"
]
| [
6,
29
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"find_second_sent_train_100_eval_10_sentbefore\"\n\nMore Information needed"
]
|
075a929079d1a631a14ae9be6dbda31414a9caf5 | # Dataset Card for "find_last_sent_train_100_eval_10_sentbefore"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/find_last_sent_train_100_eval_10_sentbefore | [
"region:us"
]
| 2023-10-31T14:59:17+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 434031, "num_examples": 320}, {"name": "validation", "num_bytes": 10271, "num_examples": 10}], "download_size": 179279, "dataset_size": 444302}} | 2023-10-31T14:59:23+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "find_last_sent_train_100_eval_10_sentbefore"
More Information needed | [
"# Dataset Card for \"find_last_sent_train_100_eval_10_sentbefore\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"find_last_sent_train_100_eval_10_sentbefore\"\n\nMore Information needed"
]
| [
6,
29
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"find_last_sent_train_100_eval_10_sentbefore\"\n\nMore Information needed"
]
|
2133d9be80abc89faa4ca1c92a56b35e4da6b26d | # Dataset Card for "veshti-controlnet-v4-canny"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | stsudharsan/veshti-controlnet-v4-canny | [
"region:us"
]
| 2023-10-31T15:07:26+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "conditioning_img", "dtype": "image"}, {"name": "caption", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 29728534.0, "num_examples": 143}], "download_size": 28847175, "dataset_size": 29728534.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-10-31T15:07:34+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "veshti-controlnet-v4-canny"
More Information needed | [
"# Dataset Card for \"veshti-controlnet-v4-canny\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"veshti-controlnet-v4-canny\"\n\nMore Information needed"
]
| [
6,
20
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"veshti-controlnet-v4-canny\"\n\nMore Information needed"
]
|
0f18cc266570137cb12dcbfc01353cb7a8887a56 | # Dataset Card for "whisper_speechcommandsV2_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | moonseok/whisper_speechcommandsV2_data | [
"region:us"
]
| 2023-10-31T15:46:28+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "file", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "yes", "1": "no", "2": "up", "3": "down", "4": "left", "5": "right", "6": "on", "7": "off", "8": "stop", "9": "go", "10": "zero", "11": "one", "12": "two", "13": "three", "14": "four", "15": "five", "16": "six", "17": "seven", "18": "eight", "19": "nine", "20": "bed", "21": "bird", "22": "cat", "23": "dog", "24": "happy", "25": "house", "26": "marvin", "27": "sheila", "28": "tree", "29": "wow", "30": "backward", "31": "forward", "32": "follow", "33": "learn", "34": "visual", "35": "_silence_"}}}}, {"name": "input_features", "sequence": {"sequence": "float32"}}], "splits": [{"name": "train", "num_bytes": 81484786167, "num_examples": 84848}, {"name": "validation", "num_bytes": 9586332258, "num_examples": 9982}, {"name": "test", "num_bytes": 4696163330, "num_examples": 4890}], "download_size": 2260418103, "dataset_size": 95767281755}} | 2023-10-31T19:39:30+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "whisper_speechcommandsV2_data"
More Information needed | [
"# Dataset Card for \"whisper_speechcommandsV2_data\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"whisper_speechcommandsV2_data\"\n\nMore Information needed"
]
| [
6,
23
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"whisper_speechcommandsV2_data\"\n\nMore Information needed"
]
|
8271215f5d311fa4fda7aa39e4f4b45c263c5a29 |
<p align="center">
💻 <a href="https://github.com/ChiyuSONG/dynamics-of-instruction-tuning" target="_blank">[Github Repo]</a> • 📃 <a href="https://arxiv.org/abs/2310.19651" target="_blank">[Paper]</a> • 👀 <a href="https://huggingface.co/datasets/ChiyuSONG/dynamics-of-instruction-tuning/blob/main/preview.json" target="_blank">[Preview]</a>
</p>
#### Update
12/01/23: Corrected ambiguous choices in the validation and test sets of the role-play chat data.
## Overview
This is a collection of over 40k human-curated instruction-output pairs in Chinese. The dataset is organized into ten representative ability categories: (1) STEM subject - Biology, (2) Humanity subject - History, (3) Code Generation, (4) Creative Writing, (5) Language proficiency - Chinese, (6) Dialogue Understanding, (7) Role-play Chat, (8) Logical Reasoning, (9) Chain of Thought, and (10) Ethics.
| Ability | Data Source | Data Size |
|---|---|---|
|STEM - Biology|[COIG - Exam](https://github.com/BAAI-Zlab/COIG#exam-instructions-63532)|1,242|
|Humanity - History|[COIG - Exam](https://github.com/BAAI-Zlab/COIG#exam-instructions-63532)|2,093|
|Code Generation|[Leetcode](https://leetcode.cn/)|5,168|
|Creative Writing|User Queries from In-House Data|1,200|
|Chinese|[COIG - Exam](https://github.com/BAAI-Zlab/COIG#exam-instructions-63532)|1,650|
|Dialogue Understanding|[C3-D](https://dataset.org/c3/)|5,085|
|Role-play Chat|[BELLE](https://huggingface.co/datasets/BelleGroup/multiturn_chat_0.8M)|1,200|
|Logical Reasoning|[LogiQA2.0](https://github.com/csitfun/LogiQA2.0)|12,951|
|COT for Grad-Math|[PRM800K](https://github.com/openai/prm800k)|11,701|
|Ethics|[COIG - Human Value](https://github.com/BAAI-Zlab/COIG#human-value-alignment-instructions-34471)|1,200|
Each data instance is meticulously reviewed by human annotators after collection to maintain quality control. For in-depth information on the annotation process and the variations in the development of each ability during instruction tuning, please refer to our [Paper](https://arxiv.org/abs/2310.19651) and [Github Repo](https://github.com/ChiyuSONG/dynamics-of-instruction-tuning).
## Data Format
```javascript
// As demonstrated in the preview
{
// "messages" contains the instruction-output pairs.
"messages":[{"role":"user", "content":"xxxxx"}, {"role":"assistant", "content":"xxxxx"}]
// Data id, ids are independent for each ability category.
"idx": 100
// Name of its ability category.
"type": "role-play"
// "0" means it is a exact-match question, "1" means it is a open-ended question
"question_format": 1
// optional, only for evaluating open-ended questions in valid and test sets.
"choices":[gold_answer, fine-grained corruption, coarse-grained corruption]
}
```
For more details on data usage in model training and evaluation, please refer to our [Paper](https://arxiv.org/abs/2310.19651) and [Github Repo](https://github.com/ChiyuSONG/dynamics-of-instruction-tuning).
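As a rough illustration of how records in this format might be consumed, here is a minimal sketch. It assumes the released files have been downloaded locally; the file name is hypothetical, and since the card does not state whether the files are JSON arrays or JSON lines, the sketch handles both.

```python
import json
from pathlib import Path

# Hypothetical local file; adjust the path to wherever the released data is stored.
raw = Path("role-play.json").read_text(encoding="utf-8")

# The release format (JSON array vs. JSON lines) is not specified here, so handle both.
try:
    records = json.loads(raw)
except json.JSONDecodeError:
    records = [json.loads(line) for line in raw.splitlines() if line.strip()]

# Split records by question format: 0 = exact-match, 1 = open-ended.
exact_match = [r for r in records if r["question_format"] == 0]
open_ended = [r for r in records if r["question_format"] == 1]

# Each record's "messages" list holds alternating user/assistant turns.
for turn in records[0]["messages"]:
    print(f"{turn['role']}: {turn['content']}")
```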
## Citation
```
@ARTICLE{2023arXiv231019651S,
author = {{Song}, Chiyu and {Zhou}, Zhanchao and {Yan}, Jianhao and {Fei}, Yuejiao and {Lan}, Zhenzhong and {Zhang}, Yue},
title = "{Dynamics of Instruction Tuning: Each Ability of Large Language Models Has Its Own Growth Pace}",
journal = {arXiv e-prints},
keywords = {Computer Science - Computation and Language},
year = 2023,
month = oct,
eid = {arXiv:2310.19651},
pages = {arXiv:2310.19651},
archivePrefix = {arXiv},
eprint = {2310.19651},
primaryClass = {cs.CL},
adsurl = {https://ui.adsabs.harvard.edu/abs/2023arXiv231019651S},
adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}
``` | ChiyuSONG/dynamics-of-instruction-tuning | [
"task_categories:text-generation",
"language:zh",
"license:mit",
"arxiv:2310.19651",
"region:us"
]
| 2023-10-31T15:52:49+00:00 | {"language": ["zh"], "license": "mit", "task_categories": ["text-generation"], "viewer": false} | 2023-12-01T18:21:45+00:00 | [
"2310.19651"
]
| [
"zh"
]
| TAGS
#task_categories-text-generation #language-Chinese #license-mit #arxiv-2310.19651 #region-us
|
[[Github Repo]](URL target=) • [[Paper]](URL target=) • [[Preview]](URL target=)
#### Update
12/01/23: Corrected ambiguous choices in the validation and test sets of the role-play chat data.
Overview
--------
This is a collection of over 40k human-curated instruction-output pairs in Chinese. The dataset is organized into ten representative ability categories: (1) STEM subject - Biology, (2) Humanity subject - History, (3) Code Generation, (4) Creative Writing, (5) Language proficiency - Chinese, (6) Dialogue Understanding, (7) Role-play Chat, (8) Logical Reasoning, (9) Chain of Thought, and (10) Ethics.
Ability: STEM - Biology, Data Source: COIG - Exam, Data Size: 1,242
Ability: Humanity - History, Data Source: COIG - Exam, Data Size: 2,093
Ability: Code Generation, Data Source: Leetcode, Data Size: 5,168
Ability: Creative Writing, Data Source: User Queries from In-House Data, Data Size: 1,200
Ability: Chinese, Data Source: COIG - Exam, Data Size: 1,650
Ability: Dialogue Understanding, Data Source: C3-D, Data Size: 5,085
Ability: Role-play Chat, Data Source: BELLE, Data Size: 1,200
Ability: Logical Reasoning, Data Source: LogiQA2.0, Data Size: 12,951
Ability: COT for Grad-Math, Data Source: PRM800K, Data Size: 11,701
Ability: Ethics, Data Source: COIG - Human Value, Data Size: 1,200
Each data instance is meticulously reviewed by human annotators after collection to maintain quality control. For in-depth information on the annotation process and the variations in the development of each ability during instruction tuning, please refer to our Paper and Github Repo.
Data Format
-----------
For more details on data usage in model training and evaluation, please refer to our Paper and Github Repo.
| [
"#### Update\n\n\n12/01/23: Corrected ambiguous choices in the validation and test sets of the role-play chat data.\n\n\nOverview\n--------\n\n\nThis is a collection of over 40k human-curated instruction-output pairs in Chinese. The dataset is organized into ten representative ability categories: (1) STEM subject - Biology, (2) Humanity subject - History, (3) Code Generation, (4) Creative Writing, (5) Language proficiency - Chinese, (6) Dialogue Understanding, (7) Role-play Chat, (8) Logical Reasoning, (9) Chain of Thought, and (10) Ethics.\n\n\nAbility: STEM - Biology, Data Source: COIG - Exam, Data Size: 1,242\nAbility: Humanity - History, Data Source: COIG - Exam, Data Size: 2,093\nAbility: Code Generation, Data Source: Leetcode, Data Size: 5,168\nAbility: Creative Writing, Data Source: User Queries from In-House Data, Data Size: 1,200\nAbility: Chinese, Data Source: COIG - Exam, Data Size: 1,650\nAbility: Dialogue Understanding, Data Source: C3-D, Data Size: 5,085\nAbility: Role-play Chat, Data Source: BELLE, Data Size: 1,200\nAbility: Logical Reasoning, Data Source: LogiQA2.0, Data Size: 12,951\nAbility: COT for Grad-Math, Data Source: PRM800K, Data Size: 11,701\nAbility: Ethics, Data Source: COIG - Human Value, Data Size: 1,200\n\n\nEach data instance is meticulously reviewed by human annotators after collection to maintain quality control. For in-depth information on the annotation process and the variations in the development of each ability during instruction tuning, please refer to our Paper and Github Repo.\n\n\nData Format\n-----------\n\n\nFor more details on data usage in model training and evaluation, please refer to our Paper and Github Repo."
]
| [
"TAGS\n#task_categories-text-generation #language-Chinese #license-mit #arxiv-2310.19651 #region-us \n",
"#### Update\n\n\n12/01/23: Corrected ambiguous choices in the validation and test sets of the role-play chat data.\n\n\nOverview\n--------\n\n\nThis is a collection of over 40k human-curated instruction-output pairs in Chinese. The dataset is organized into ten representative ability categories: (1) STEM subject - Biology, (2) Humanity subject - History, (3) Code Generation, (4) Creative Writing, (5) Language proficiency - Chinese, (6) Dialogue Understanding, (7) Role-play Chat, (8) Logical Reasoning, (9) Chain of Thought, and (10) Ethics.\n\n\nAbility: STEM - Biology, Data Source: COIG - Exam, Data Size: 1,242\nAbility: Humanity - History, Data Source: COIG - Exam, Data Size: 2,093\nAbility: Code Generation, Data Source: Leetcode, Data Size: 5,168\nAbility: Creative Writing, Data Source: User Queries from In-House Data, Data Size: 1,200\nAbility: Chinese, Data Source: COIG - Exam, Data Size: 1,650\nAbility: Dialogue Understanding, Data Source: C3-D, Data Size: 5,085\nAbility: Role-play Chat, Data Source: BELLE, Data Size: 1,200\nAbility: Logical Reasoning, Data Source: LogiQA2.0, Data Size: 12,951\nAbility: COT for Grad-Math, Data Source: PRM800K, Data Size: 11,701\nAbility: Ethics, Data Source: COIG - Human Value, Data Size: 1,200\n\n\nEach data instance is meticulously reviewed by human annotators after collection to maintain quality control. For in-depth information on the annotation process and the variations in the development of each ability during instruction tuning, please refer to our Paper and Github Repo.\n\n\nData Format\n-----------\n\n\nFor more details on data usage in model training and evaluation, please refer to our Paper and Github Repo."
]
| [
36,
435
]
| [
"passage: TAGS\n#task_categories-text-generation #language-Chinese #license-mit #arxiv-2310.19651 #region-us \n#### Update\n\n\n12/01/23: Corrected ambiguous choices in the validation and test sets of the role-play chat data.\n\n\nOverview\n--------\n\n\nThis is a collection of over 40k human-curated instruction-output pairs in Chinese. The dataset is organized into ten representative ability categories: (1) STEM subject - Biology, (2) Humanity subject - History, (3) Code Generation, (4) Creative Writing, (5) Language proficiency - Chinese, (6) Dialogue Understanding, (7) Role-play Chat, (8) Logical Reasoning, (9) Chain of Thought, and (10) Ethics.\n\n\nAbility: STEM - Biology, Data Source: COIG - Exam, Data Size: 1,242\nAbility: Humanity - History, Data Source: COIG - Exam, Data Size: 2,093\nAbility: Code Generation, Data Source: Leetcode, Data Size: 5,168\nAbility: Creative Writing, Data Source: User Queries from In-House Data, Data Size: 1,200\nAbility: Chinese, Data Source: COIG - Exam, Data Size: 1,650\nAbility: Dialogue Understanding, Data Source: C3-D, Data Size: 5,085\nAbility: Role-play Chat, Data Source: BELLE, Data Size: 1,200\nAbility: Logical Reasoning, Data Source: LogiQA2.0, Data Size: 12,951\nAbility: COT for Grad-Math, Data Source: PRM800K, Data Size: 11,701\nAbility: Ethics, Data Source: COIG - Human Value, Data Size: 1,200\n\n\nEach data instance is meticulously reviewed by human annotators after collection to maintain quality control. For in-depth information on the annotation process and the variations in the development of each ability during instruction tuning, please refer to our Paper and Github Repo.\n\n\nData Format\n-----------\n\n\nFor more details on data usage in model training and evaluation, please refer to our Paper and Github Repo."
]
|
67a005756b57489be18d3ab05ceaa29eec422f6c | # Dataset Card for "cai-conversation-dev1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | vwxyzjn/cai-conversation-dev1 | [
"region:us"
]
| 2023-10-31T15:55:26+00:00 | {"dataset_info": {"features": [{"name": "init_prompt", "struct": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "init_response", "struct": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "critic_prompt", "struct": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "critic_response", "struct": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "revision_prompt", "struct": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "revision_response", "struct": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 21802544, "num_examples": 16384}], "download_size": 8704173, "dataset_size": 21802544}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-01T17:08:11+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "cai-conversation-dev1"
More Information needed | [
"# Dataset Card for \"cai-conversation-dev1\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"cai-conversation-dev1\"\n\nMore Information needed"
]
| [
6,
19
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"cai-conversation-dev1\"\n\nMore Information needed"
]
|
1830a3f98361c5002f49bfdca6ad55e8d31f2388 | # Dataset Card for "resultspublic"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | gaia-benchmark/results_public | [
"region:us"
]
| 2023-10-31T16:03:44+00:00 | {"configs": [{"config_name": "2023", "data_files": [{"split": "test", "path": "2023/test-*"}, {"split": "validation", "path": "2023/validation-*"}]}, {"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": [{"config_name": "2023", "features": [{"name": "model", "dtype": "string"}, {"name": "score", "dtype": "float64"}, {"name": "score_level1", "dtype": "float64"}, {"name": "score_level2", "dtype": "float64"}, {"name": "score_level3", "dtype": "float64"}, {"name": "organisation", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "model_family", "dtype": "string"}, {"name": "system_prompt", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 3106, "num_examples": 8}, {"name": "validation", "num_bytes": 2943, "num_examples": 6}], "download_size": 8104, "dataset_size": 6049}, {"config_name": "default", "features": [{"name": "model", "dtype": "string"}, {"name": "score", "dtype": "float64"}, {"name": "score_level1", "dtype": "float64"}, {"name": "score_level2", "dtype": "float64"}, {"name": "score_level3", "dtype": "float64"}, {"name": "organisation", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "model_family", "dtype": "string"}, {"name": "system_prompt", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 2938, "num_examples": 6}, {"name": "validation", "num_bytes": 2943, "num_examples": 6}], "download_size": 16062, "dataset_size": 5881}]} | 2024-02-14T19:32:50+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "resultspublic"
More Information needed | [
"# Dataset Card for \"resultspublic\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"resultspublic\"\n\nMore Information needed"
]
| [
6,
14
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"resultspublic\"\n\nMore Information needed"
]
|
c6a794ad30798487ce1474f3740b1b9a79b07810 | # Dataset Card for "tokenized_t5_context_len_512"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | yardeny/tokenized_t5_context_len_512 | [
"region:us"
]
| 2023-10-31T16:05:55+00:00 | {"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 18454819544, "num_examples": 80462898}], "download_size": 6941163760, "dataset_size": 18454819544}} | 2023-10-31T16:24:50+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "tokenized_t5_context_len_512"
More Information needed | [
"# Dataset Card for \"tokenized_t5_context_len_512\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"tokenized_t5_context_len_512\"\n\nMore Information needed"
]
| [
6,
24
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"tokenized_t5_context_len_512\"\n\nMore Information needed"
]
|
d8338457d187715753d23576b91eccbc925d642c | # Dataset Card for "processed_t5_context_len_512"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | yardeny/processed_t5_context_len_512 | [
"region:us"
]
| 2023-10-31T16:33:06+00:00 | {"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 17763634104.0, "num_examples": 6917303}], "download_size": 6975018960, "dataset_size": 17763634104.0}} | 2023-10-31T17:18:26+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "processed_t5_context_len_512"
More Information needed | [
"# Dataset Card for \"processed_t5_context_len_512\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"processed_t5_context_len_512\"\n\nMore Information needed"
]
| [
6,
23
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"processed_t5_context_len_512\"\n\nMore Information needed"
]
|
d1d1f51c2b8b2a6105ac94eff75b6204ef0a1d0d | # Dataset Card for "identity_finetune_data_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | sayan1101/identity_finetune_data_2 | [
"region:us"
]
| 2023-10-31T16:49:16+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 387168, "num_examples": 1181}, {"name": "test", "num_bytes": 66396, "num_examples": 209}], "download_size": 221210, "dataset_size": 453564}} | 2023-10-31T16:49:28+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "identity_finetune_data_2"
More Information needed | [
"# Dataset Card for \"identity_finetune_data_2\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"identity_finetune_data_2\"\n\nMore Information needed"
]
| [
6,
19
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"identity_finetune_data_2\"\n\nMore Information needed"
]
|
8a48274854909717f0f9acd1c3400a320f57ba85 | # おーぷん2ちゃんねる対話コーパス
## Dataset Details
### Dataset Description
[おーぷん2ちゃんねる対話コーパス](https://github.com/1never/open2ch-dialogue-corpus) を Huggingface Datasets 向けに変換したものになります。
- **Curated by:** [More Information Needed]
- **Language:** Japanese
- **License:** Apache-2.0
### Dataset Sources
- **Repository:** https://github.com/1never/open2ch-dialogue-corpus
## Dataset Structure
- `all-corpus`: `livejupiter`, `news4vip`, `newsplus` サブセットを連結したもの
- `dialogue`: 対話データ (`list[dict]`)
- `speaker`: 話者番号。`1` または `2`。
- `content`: 発言内容
- `board`: 連結元のサブセット名
- `livejupiter`: オリジナルのデータセットでの `livejupiter.tsv` から変換されたデータ。
- `dialogue`: 対話データ (`list[dict]`)
- `speaker`: 話者番号。`1` または `2`。
- `content`: 発言内容
- `news4vip`: オリジナルのデータセットでの `news4vip.tsv` から変換されたデータ。
- 構造は同上
- `newsplus`: オリジナルのデータセットでの `newsplus.tsv` から変換されたデータ。
- 構造は同上
- `ranking`: 応答順位付けタスク用データ (オリジナルデータセットでの `ranking.zip`)
- `train` と `test` split があり、それぞれはオリジナルデータセットの `dev.tsv` と `test.tsv` に対応します。
- `dialogue`: 対話データ (`list[dict]`)
- `speaker`: 話者番号。`1` または `2`。
- `content`: 発言内容
- `next`: 対話の次に続く正解の応答 (`dict`)
- `speaker`: 話者番号。`1` または `2`
- `content`: 発言内容
- `random`: ランダムに選ばれた応答 9 個 (`list[str]`)
また、`all-corpus`, `livejupiter`, `news4vip`, `newsplus` にはそれぞれ名前に `-cleaned` が付与されたバージョンがあり、これらのサブセットではオリジナルのデータセットで配布されていた NG ワードリストを利用してフィルタリングされたものです。
オリジナルのデータセットでは各発言内の改行は `__BR__` に置換されていますが、このデータセットではすべて `\n` に置き換えられています。
## Dataset Creation
### Source Data
(オリジナルのデータセットの説明より)
> おーぷん2ちゃんねるの「なんでも実況(ジュピター)」「ニュー速VIP」「ニュース速報+」の3つの掲示板をクロールして作成した対話コーパスです. おーぷん2ちゃんねる開設時から2019年7月20日までのデータを使用して作成しました.
#### Data Collection and Processing
[オリジナルのデータセット](https://github.com/1never/open2ch-dialogue-corpus) を参照。
#### Personal and Sensitive Information
`-cleaned` ではないサブセットでは、非常に不適切な表現が多いため注意が必要です。
## Usage
```py
from datasets import load_dataset
ds = load_dataset(
"p1atdev/open2ch",
name="all-corpus",
)
print(ds)
print(ds["train"][0])
# DatasetDict({
# train: Dataset({
# features: ['dialogue', 'board'],
# num_rows: 8134707
# })
# })
# {'dialogue': {'speaker': [1, 2], 'content': ['実況スレをたてる', 'おんj民の鑑']}, 'board': 'livejupiter'}
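
# --- Additional sketch (not in the original card): loading the response-ranking subset
# --- described above. The config name and fields follow this card's structure section;
# --- treat them as assumptions if the actual schema differs.
ranking = load_dataset("p1atdev/open2ch", name="ranking", split="test")

sample = ranking[0]
print(sample["dialogue"])  # preceding dialogue turns (speaker / content)
print(sample["next"])      # the gold next response
print(sample["random"])    # 9 randomly sampled distractor responses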
``` | p1atdev/open2ch | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"size_categories:1M<n<10M",
"language:ja",
"license:apache-2.0",
"not-for-all-audiences",
"region:us"
]
| 2023-10-31T17:00:13+00:00 | {"language": ["ja"], "license": "apache-2.0", "size_categories": ["1M<n<10M"], "task_categories": ["text-generation", "text2text-generation"], "dataset_info": [{"config_name": "all-corpus", "features": [{"name": "dialogue", "sequence": [{"name": "speaker", "dtype": "int8"}, {"name": "content", "dtype": "string"}]}, {"name": "board", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1693355620, "num_examples": 8134707}], "download_size": 868453263, "dataset_size": 1693355620}, {"config_name": "all-corpus-cleaned", "features": [{"name": "dialogue", "sequence": [{"name": "speaker", "dtype": "int8"}, {"name": "content", "dtype": "string"}]}, {"name": "board", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1199092499, "num_examples": 6192730}], "download_size": 615570076, "dataset_size": 1199092499}, {"config_name": "livejupiter", "features": [{"name": "dialogue", "sequence": [{"name": "speaker", "dtype": "int8"}, {"name": "content", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1101433134, "num_examples": 5943594}], "download_size": 592924274, "dataset_size": 1101433134}, {"config_name": "livejupiter-cleaned", "features": [{"name": "dialogue", "sequence": [{"name": "speaker", "dtype": "int8"}, {"name": "content", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 807499499, "num_examples": 4650253}], "download_size": 437414714, "dataset_size": 807499499}, {"config_name": "news4vip", "features": [{"name": "dialogue", "sequence": [{"name": "speaker", "dtype": "int8"}, {"name": "content", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 420403926, "num_examples": 1973817}], "download_size": 240974172, "dataset_size": 420403926}, {"config_name": "news4vip-cleaned", "features": [{"name": "dialogue", "sequence": [{"name": "speaker", "dtype": "int8"}, {"name": "content", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 269941607, "num_examples": 1402903}], "download_size": 156934128, "dataset_size": 269941607}, {"config_name": "newsplus", "features": [{"name": "dialogue", "sequence": [{"name": "speaker", "dtype": "int8"}, {"name": "content", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 56071294, "num_examples": 217296}], "download_size": 32368053, "dataset_size": 56071294}, {"config_name": "newsplus-cleaned", "features": [{"name": "dialogue", "sequence": [{"name": "speaker", "dtype": "int8"}, {"name": "content", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 33387874, "num_examples": 139574}], "download_size": 19556120, "dataset_size": 33387874}, {"config_name": "ranking", "features": [{"name": "dialogue", "sequence": [{"name": "speaker", "dtype": "int8"}, {"name": "content", "dtype": "string"}]}, {"name": "next", "struct": [{"name": "speaker", "dtype": "int8"}, {"name": "content", "dtype": "string"}]}, {"name": "random", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 1605628, "num_examples": 2000}, {"name": "test", "num_bytes": 1604356, "num_examples": 1953}], "download_size": 2127033, "dataset_size": 3209984}], "configs": [{"config_name": "all-corpus", "data_files": [{"split": "train", "path": "all-corpus/train-*"}]}, {"config_name": "all-corpus-cleaned", "data_files": [{"split": "train", "path": "all-corpus-cleaned/train-*"}]}, {"config_name": "livejupiter", "data_files": [{"split": "train", "path": "livejupiter/train-*"}]}, {"config_name": "livejupiter-cleaned", "data_files": [{"split": "train", "path": 
"livejupiter-cleaned/train-*"}]}, {"config_name": "news4vip", "data_files": [{"split": "train", "path": "news4vip/train-*"}]}, {"config_name": "news4vip-cleaned", "data_files": [{"split": "train", "path": "news4vip-cleaned/train-*"}]}, {"config_name": "newsplus", "data_files": [{"split": "train", "path": "newsplus/train-*"}]}, {"config_name": "newsplus-cleaned", "data_files": [{"split": "train", "path": "newsplus-cleaned/train-*"}]}, {"config_name": "ranking", "data_files": [{"split": "train", "path": "ranking/train-*"}, {"split": "test", "path": "ranking/test-*"}]}], "tags": ["not-for-all-audiences"]} | 2023-11-01T01:59:51+00:00 | []
| [
"ja"
]
| TAGS
#task_categories-text-generation #task_categories-text2text-generation #size_categories-1M<n<10M #language-Japanese #license-apache-2.0 #not-for-all-audiences #region-us
| # おーぷん2ちゃんねる対話コーパス
## Dataset Details
### Dataset Description
おーぷん2ちゃんねる対話コーパス を Huggingface Datasets 向けに変換したものになります。
- Curated by:
- Language: Japanese
- License: Apache-2.0
### Dataset Sources
- Repository: URL
## Dataset Structure
- 'all-corpus': 'livejupiter', 'news4vip', 'newsplus' サブセットを連結したもの
- 'dialogue': 対話データ ('list[dict]')
- 'speaker': 話者番号。'1' または '2'。
- 'content': 発言内容
- 'board': 連結元のサブセット名
- 'livejupiter': オリジナルのデータセットでの 'URL' から変換されたデータ。
- 'dialogue': 対話データ ('list[dict]')
- 'speaker': 話者番号。'1' または '2'。
- 'content': 発言内容
- 'news4vip': オリジナルのデータセットでの 'URL' から変換されたデータ。
- 構造は同上
- 'newsplus': オリジナルのデータセットでの 'URL' から変換されたデータ。
- 構造は同上
- 'ranking': 応答順位付けタスク用データ (オリジナルデータセットでの 'URL')
- 'train' と 'test' split があり、それぞれはオリジナルデータセットの 'URL' と 'URL' に対応します。
- 'dialogue': 対話データ ('list[dict]')
- 'speaker': 話者番号。'1' または '2'。
- 'content': 発言内容
- 'next': 対話の次に続く正解の応答 ('dict')
- 'speaker': 話者番号。'1' または '2'
- 'content': 発言内容
- 'random': ランダムに選ばれた応答 9 個 ('list[str]')
また、'all-corpus', 'livejupiter', 'news4vip', 'newsplus' にはそれぞれ名前に '-cleaned' が付与されたバージョンがあり、これらのサブセットではオリジナルのデータセットで配布されていた NG ワードリストを利用してフィルタリングされたものです。
オリジナルのデータセットでは各発言内の改行は '__BR__' に置換されていますが、このデータセットではすべて '\n' に置き換えられています。
## Dataset Creation
### Source Data
(オリジナルのデータセットの説明より)
> おーぷん2ちゃんねるの「なんでも実況(ジュピター)」「ニュー速VIP」「ニュース速報+」の3つの掲示板をクロールして作成した対話コーパスです. おーぷん2ちゃんねる開設時から2019年7月20日までのデータを使用して作成しました.
#### Data Collection and Processing
オリジナルのデータセット を参照。
#### Personal and Sensitive Information
'-cleaned' ではないサブセットでは、非常に不適切な表現が多いため注意が必要です。
## Usage
| [
"# おーぷん2ちゃんねる対話コーパス",
"## Dataset Details",
"### Dataset Description\n\nおーぷん2ちゃんねる対話コーパス を Huggingface Datasets 向けに変換したものになります。\n\n- Curated by: \n- Language: Japanese\n- License: Apache-2.0",
"### Dataset Sources\n\n- Repository: URL",
"## Dataset Structure\n\n- 'all-corpus': 'livejupiter', 'news4vip', 'newsplus' サブセットを連結したもの\n - 'dialogue': 対話データ ('list[dict]')\n - 'speaker': 話者番号。'1' または '2'。\n - 'content': 発言内容\n - 'board': 連結元のサブセット名\n\n- 'livejupiter': オリジナルのデータセットでの 'URL' から変換されたデータ。\n - 'dialogue': 対話データ ('list[dict]')\n - 'speaker': 話者番号。'1' または '2'。\n - 'content': 発言内容\n- 'news4vip': オリジナルのデータセットでの 'URL' から変換されたデータ。\n - 構造は同上\n- 'newsplus': オリジナルのデータセットでの 'URL' から変換されたデータ。\n - 構造は同上\n \n- 'ranking': 応答順位付けタスク用データ (オリジナルデータセットでの 'URL')\n - 'train' と 'test' split があり、それぞれはオリジナルデータセットの 'URL' と 'URL' に対応します。\n - 'dialogue': 対話データ ('list[dict]')\n - 'speaker': 話者番号。'1' または '2'。\n - 'content': 発言内容\n - 'next': 対話の次に続く正解の応答 ('dict')\n - 'speaker': 話者番号。'1' または '2'\n - 'content': 発言内容\n - 'random': ランダムに選ばれた応答 9 個 ('list[str]')\n\nまた、'all-corpus', 'livejupiter', 'news4vip', 'newsplus' にはそれぞれ名前に '-cleaned' が付与されたバージョンがあり、これらのサブセットではオリジナルのデータセットで配布されていた NG ワードリストを利用してフィルタリングされたものです。\n\nオリジナルのデータセットでは各発言内の改行は '__BR__' に置換されていますが、このデータセットではすべて '\\n' に置き換えられています。",
"## Dataset Creation",
"### Source Data\n\n(オリジナルのデータセットの説明より)\n> おーぷん2ちゃんねるの「なんでも実況(ジュピター)」「ニュー速VIP」「ニュース速報+」の3つの掲示板をクロールして作成した対話コーパスです. おーぷん2ちゃんねる開設時から2019年7月20日までのデータを使用して作成しました.",
"#### Data Collection and Processing\n\nオリジナルのデータセット を参照。",
"#### Personal and Sensitive Information\n\n'-cleaned' ではないサブセットでは、非常に不適切な表現が多いため注意が必要です。",
"## Usage"
]
| [
"TAGS\n#task_categories-text-generation #task_categories-text2text-generation #size_categories-1M<n<10M #language-Japanese #license-apache-2.0 #not-for-all-audiences #region-us \n",
"# おーぷん2ちゃんねる対話コーパス",
"## Dataset Details",
"### Dataset Description\n\nおーぷん2ちゃんねる対話コーパス を Huggingface Datasets 向けに変換したものになります。\n\n- Curated by: \n- Language: Japanese\n- License: Apache-2.0",
"### Dataset Sources\n\n- Repository: URL",
"## Dataset Structure\n\n- 'all-corpus': 'livejupiter', 'news4vip', 'newsplus' サブセットを連結したもの\n - 'dialogue': 対話データ ('list[dict]')\n - 'speaker': 話者番号。'1' または '2'。\n - 'content': 発言内容\n - 'board': 連結元のサブセット名\n\n- 'livejupiter': オリジナルのデータセットでの 'URL' から変換されたデータ。\n - 'dialogue': 対話データ ('list[dict]')\n - 'speaker': 話者番号。'1' または '2'。\n - 'content': 発言内容\n- 'news4vip': オリジナルのデータセットでの 'URL' から変換されたデータ。\n - 構造は同上\n- 'newsplus': オリジナルのデータセットでの 'URL' から変換されたデータ。\n - 構造は同上\n \n- 'ranking': 応答順位付けタスク用データ (オリジナルデータセットでの 'URL')\n - 'train' と 'test' split があり、それぞれはオリジナルデータセットの 'URL' と 'URL' に対応します。\n - 'dialogue': 対話データ ('list[dict]')\n - 'speaker': 話者番号。'1' または '2'。\n - 'content': 発言内容\n - 'next': 対話の次に続く正解の応答 ('dict')\n - 'speaker': 話者番号。'1' または '2'\n - 'content': 発言内容\n - 'random': ランダムに選ばれた応答 9 個 ('list[str]')\n\nまた、'all-corpus', 'livejupiter', 'news4vip', 'newsplus' にはそれぞれ名前に '-cleaned' が付与されたバージョンがあり、これらのサブセットではオリジナルのデータセットで配布されていた NG ワードリストを利用してフィルタリングされたものです。\n\nオリジナルのデータセットでは各発言内の改行は '__BR__' に置換されていますが、このデータセットではすべて '\\n' に置き換えられています。",
"## Dataset Creation",
"### Source Data\n\n(オリジナルのデータセットの説明より)\n> おーぷん2ちゃんねるの「なんでも実況(ジュピター)」「ニュー速VIP」「ニュース速報+」の3つの掲示板をクロールして作成した対話コーパスです. おーぷん2ちゃんねる開設時から2019年7月20日までのデータを使用して作成しました.",
"#### Data Collection and Processing\n\nオリジナルのデータセット を参照。",
"#### Personal and Sensitive Information\n\n'-cleaned' ではないサブセットでは、非常に不適切な表現が多いため注意が必要です。",
"## Usage"
]
| [
65,
13,
4,
48,
12,
517,
5,
81,
16,
29,
3
]
| [
"passage: TAGS\n#task_categories-text-generation #task_categories-text2text-generation #size_categories-1M<n<10M #language-Japanese #license-apache-2.0 #not-for-all-audiences #region-us \n# おーぷん2ちゃんねる対話コーパス## Dataset Details### Dataset Description\n\nおーぷん2ちゃんねる対話コーパス を Huggingface Datasets 向けに変換したものになります。\n\n- Curated by: \n- Language: Japanese\n- License: Apache-2.0### Dataset Sources\n\n- Repository: URL"
]
|
42c30ae94dd47f03522b3a8cf74e97cd2e4614f1 |
<h1>Dataset Card for 16th Century(?) Black and White Style</h1>
Dataset used to train/finetune a black and white print style
Captions are generated by hand with the assistance of BLIP.
Images were sourced from:
</br> https://openclipart.org/artist/j4p4n
</br> https://openclipart.org/artist/johnny_automatic
</br> https://openclipart.org/artist/SnipsAndClips
Text file filenames correspond to image file filenames; each text file contains the caption for its image. | joshuajewell/Openclipart-Oldstyle | [
"task_categories:text-to-image",
"annotations_creators:human generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:n=103",
"source_datasets:https://openclipart.org/artist/j4p4n",
"source_datasets:https://openclipart.org/artist/johnny_automatic",
"source_datasets:https://openclipart.org/artist/SnipsAndClips",
"language:en",
"license:cc0-1.0",
"region:us"
]
| 2023-10-31T17:16:30+00:00 | {"annotations_creators": ["human generated"], "language_creators": ["other"], "language": ["en"], "license": "cc0-1.0", "multilinguality": ["monolingual"], "size_categories": ["n=103"], "source_datasets": ["https://openclipart.org/artist/j4p4n", "https://openclipart.org/artist/johnny_automatic", "https://openclipart.org/artist/SnipsAndClips"], "task_categories": ["text-to-image"], "task_ids": [], "pretty_name": "Black and White Print Images", "tags": []} | 2023-10-31T20:14:00+00:00 | []
| [
"en"
]
| TAGS
#task_categories-text-to-image #annotations_creators-human generated #language_creators-other #multilinguality-monolingual #size_categories-n=103 #source_datasets-https-//openclipart.org/artist/j4p4n #source_datasets-https-//openclipart.org/artist/johnny_automatic #source_datasets-https-//openclipart.org/artist/SnipsAndClips #language-English #license-cc0-1.0 #region-us
|
<h1>Dataset Card for 16th Century(?) Black and White Style</h1>
Dataset used to train/finetune a black and white print style
Captions are generated by hand with the assistance of BLIP.
Images were sourced from:
</br> URL
</br> URL
</br> URL
Text file filenames correspond to image file filenames; each text file contains the caption for its image. | []
| [
"TAGS\n#task_categories-text-to-image #annotations_creators-human generated #language_creators-other #multilinguality-monolingual #size_categories-n=103 #source_datasets-https-//openclipart.org/artist/j4p4n #source_datasets-https-//openclipart.org/artist/johnny_automatic #source_datasets-https-//openclipart.org/artist/SnipsAndClips #language-English #license-cc0-1.0 #region-us \n"
]
| [
141
]
| [
"passage: TAGS\n#task_categories-text-to-image #annotations_creators-human generated #language_creators-other #multilinguality-monolingual #size_categories-n=103 #source_datasets-https-//openclipart.org/artist/j4p4n #source_datasets-https-//openclipart.org/artist/johnny_automatic #source_datasets-https-//openclipart.org/artist/SnipsAndClips #language-English #license-cc0-1.0 #region-us \n"
]
|
53744f30ddad08bd9934ba41d77d326956a85122 | # Dataset Card for "tokenized_t5_context_len_64"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | yardeny/tokenized_t5_context_len_64 | [
"region:us"
]
| 2023-10-31T17:19:51+00:00 | {"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 10163799114, "num_examples": 80462898}], "download_size": 3657002292, "dataset_size": 10163799114}} | 2023-10-31T17:34:32+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "tokenized_t5_context_len_64"
More Information needed | [
"# Dataset Card for \"tokenized_t5_context_len_64\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"tokenized_t5_context_len_64\"\n\nMore Information needed"
]
| [
6,
23
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"tokenized_t5_context_len_64\"\n\nMore Information needed"
]
|
eec1115a3cb6d3811ecf0b69438274312f150b67 | A raw preview picture of this dataset is available here: <br>
[https://drive.google.com/file/d/1Sn7p-8TM36jKx7QMjClh_xlxTu5_TG1b/view?usp=sharing](https://drive.google.com/file/d/1Sn7p-8TM36jKx7QMjClh_xlxTu5_TG1b/view?usp=sharing)
Please mention me if you build a better dataset from this one, and let's work together on collecting and merging datasets for the good of the AI art community. | faizalnf1800/sidebangs_hairstyle_and_earring_anime_woman | [
"license:mit",
"region:us"
]
| 2023-10-31T17:23:51+00:00 | {"license": "mit", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "additional_feature", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6396991.0, "num_examples": 70}], "download_size": 6311020, "dataset_size": 6396991.0}} | 2023-11-08T14:03:32+00:00 | []
| []
| TAGS
#license-mit #region-us
 | A raw preview picture of this dataset is available here: <br>
URL
Please mention me if you build a better dataset from this one, and let's work together on collecting and merging datasets for the good of the AI art community. | []
| [
"TAGS\n#license-mit #region-us \n"
]
| [
11
]
| [
"passage: TAGS\n#license-mit #region-us \n"
]
|
7f9d4f237bd7496914f430fa600c73017331885f | This dataset contains a copy of the `cais/mmlu` HF dataset but without the `auxiliary_train` split, which takes a long time to regenerate each time multiple subsets of the dataset are loaded.
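A minimal loading sketch (assuming the standard `datasets` API and the `all` configuration listed in this repository's metadata):

```python
from datasets import load_dataset

# Loads the combined subjects without the slow auxiliary_train split.
mmlu = load_dataset("hails/mmlu_no_train", "all")

print(mmlu)             # expected splits: test / validation / dev
print(mmlu["test"][0])  # {'question': ..., 'subject': ..., 'choices': [...], 'answer': ...}
```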
Please visit https://huggingface.co/datasets/cais/mmlu for more information on the MMLU dataset. | hails/mmlu_no_train | [
"task_categories:question-answering",
"language:en",
"license:mit",
"region:us"
]
| 2023-10-31T17:25:54+00:00 | {"language": ["en"], "license": "mit", "task_categories": ["question-answering"], "pretty_name": "MMLU loader with no auxiliary train set", "dataset_info": {"config_name": "all", "features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}], "splits": [{"name": "test", "num_bytes": 6967453, "num_examples": 14042}, {"name": "validation", "num_bytes": 763484, "num_examples": 1531}, {"name": "dev", "num_bytes": 125353, "num_examples": 285}], "download_size": 3987384, "dataset_size": 7856290}, "configs": [{"config_name": "all", "data_files": [{"split": "test", "path": "all/test-*"}, {"split": "validation", "path": "all/validation-*"}, {"split": "dev", "path": "all/dev-*"}]}]} | 2024-01-22T20:46:30+00:00 | []
| [
"en"
]
| TAGS
#task_categories-question-answering #language-English #license-mit #region-us
 | This dataset contains a copy of the 'cais/mmlu' HF dataset but without the 'auxiliary_train' split, which takes a long time to regenerate each time multiple subsets of the dataset are loaded.
Please visit URL for more information on the MMLU dataset. | []
| [
"TAGS\n#task_categories-question-answering #language-English #license-mit #region-us \n"
]
| [
27
]
| [
"passage: TAGS\n#task_categories-question-answering #language-English #license-mit #region-us \n"
]
|
c8772375ea10e39daa6ea3babb8a69c9e09b371e | # Dataset Card for "patient_info"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | NUS-IDS/patient_info | [
"region:us"
]
| 2023-10-31T17:27:40+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "anxiety", "path": "data/anxiety-*"}, {"split": "depression", "path": "data/depression-*"}, {"split": "ptsd", "path": "data/ptsd-*"}, {"split": "bipolar", "path": "data/bipolar-*"}]}], "dataset_info": {"features": [{"name": "url", "dtype": "string"}, {"name": "comments", "list": [{"name": "author_from", "sequence": "string"}, {"name": "author_to", "sequence": "string"}, {"name": "comments", "list": [{"name": "author_from", "sequence": "string"}, {"name": "author_to", "sequence": "string"}, {"name": "content", "sequence": "string"}, {"name": "date", "sequence": "string"}]}, {"name": "content", "sequence": "string"}, {"name": "date", "sequence": "string"}, {"name": "url", "dtype": "string"}]}, {"name": "title", "dtype": "string"}, {"name": "date", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "author", "dtype": "string"}], "splits": [{"name": "anxiety", "num_bytes": 143006120, "num_examples": 27393}, {"name": "depression", "num_bytes": 49953142, "num_examples": 6982}, {"name": "ptsd", "num_bytes": 1626957, "num_examples": 349}, {"name": "bipolar", "num_bytes": 3087512, "num_examples": 474}], "download_size": 97056610, "dataset_size": 197673731}} | 2023-10-31T17:28:07+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "patient_info"
More Information needed | [
"# Dataset Card for \"patient_info\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"patient_info\"\n\nMore Information needed"
]
| [
6,
13
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"patient_info\"\n\nMore Information needed"
]
|
5f22860f627390497cd2b6368b64c04d98000242 | # Dataset Card for "beyond_blue"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | NUS-IDS/beyond_blue | [
"region:us"
]
| 2023-10-31T17:29:12+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "anxiety", "path": "data/anxiety-*"}, {"split": "depression", "path": "data/depression-*"}, {"split": "ptsd", "path": "data/ptsd-*"}]}], "dataset_info": {"features": [{"name": "url", "dtype": "string"}, {"name": "comments", "list": [{"name": "author", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "date", "dtype": "string"}, {"name": "url", "dtype": "string"}]}, {"name": "title", "dtype": "string"}, {"name": "date", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "author", "dtype": "string"}], "splits": [{"name": "anxiety", "num_bytes": 56172807, "num_examples": 6943}, {"name": "depression", "num_bytes": 60224734, "num_examples": 6008}, {"name": "ptsd", "num_bytes": 21141031, "num_examples": 1816}], "download_size": 68731517, "dataset_size": 137538572}} | 2023-10-31T17:29:31+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "beyond_blue"
More Information needed | [
"# Dataset Card for \"beyond_blue\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"beyond_blue\"\n\nMore Information needed"
]
| [
6,
15
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"beyond_blue\"\n\nMore Information needed"
]
|
6536bf51180a0ac8ab9748a0d4144f4507eb158c | # Dataset Card for "processed_t5_context_len_64"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | yardeny/processed_t5_context_len_64 | [
"region:us"
]
| 2023-10-31T17:39:36+00:00 | {"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 9745169624.0, "num_examples": 29710883}], "download_size": 3781295100, "dataset_size": 9745169624.0}} | 2023-10-31T17:54:22+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "processed_t5_context_len_64"
More Information needed | [
"# Dataset Card for \"processed_t5_context_len_64\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"processed_t5_context_len_64\"\n\nMore Information needed"
]
| [
6,
22
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"processed_t5_context_len_64\"\n\nMore Information needed"
]
|
4c500410d2b2602e86f60f05f17bf26a4cc570f7 | This dataset was created by automatically translating part of "Anthropic/hh-rlhf" into Japanese and selecting only the single-turn conversations.
You can use this dataset for RLHF and DPO.
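A minimal loading sketch is shown below; the split name is an assumption, and the column schema is not documented in this card, so inspect it after loading.

```python
from datasets import load_dataset

# Assumed split name; check the repository files if this differs.
ds = load_dataset("kunishou/hh-rlhf-49k-ja-single-turn", split="train")

print(ds.column_names)  # the card does not document the columns, so inspect them first
print(ds[0])
```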
hh-rlhf repository
https://github.com/anthropics/hh-rlhf
Anthropic/hh-rlhf
https://huggingface.co/datasets/Anthropic/hh-rlhf | kunishou/hh-rlhf-49k-ja-single-turn | [
"license:mit",
"region:us"
]
| 2023-10-31T17:47:50+00:00 | {"license": "mit"} | 2023-11-02T14:30:34+00:00 | []
| []
| TAGS
#license-mit #region-us
 | This dataset was created by automatically translating part of "Anthropic/hh-rlhf" into Japanese and selecting only the single-turn conversations.
You can use this dataset for RLHF and DPO.
hh-rlhf repository
URL
Anthropic/hh-rlhf
URL | []
| [
"TAGS\n#license-mit #region-us \n"
]
| [
11
]
| [
"passage: TAGS\n#license-mit #region-us \n"
]
|
23afb6aec278fbdeecb795ee3ff2e700dc580a99 | # Dataset Card for "test-hello"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | eltoai/test-hello | [
"region:us"
]
| 2023-10-31T18:34:15+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "data", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 45780, "num_examples": 1000}], "download_size": 19559, "dataset_size": 45780}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-10-31T18:34:15+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "test-hello"
More Information needed | [
"# Dataset Card for \"test-hello\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"test-hello\"\n\nMore Information needed"
]
| [
6,
14
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"test-hello\"\n\nMore Information needed"
]
|
6725ab4d90f3a1ecfc15d51ce21c5e01fdca07fc | # Dataset Card for "task_prediction_train2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | yuvalkirstain/task_prediction_train2 | [
"region:us"
]
| 2023-10-31T18:48:28+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "path", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "task_name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 659890949, "num_examples": 5663600}, {"name": "validation", "num_bytes": 7823929, "num_examples": 60002}], "download_size": 148156628, "dataset_size": 667714878}} | 2023-10-31T18:48:49+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "task_prediction_train2"
More Information needed | [
"# Dataset Card for \"task_prediction_train2\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"task_prediction_train2\"\n\nMore Information needed"
]
| [
6,
20
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"task_prediction_train2\"\n\nMore Information needed"
]
|
a9e0893a71769382ca02289f548f6c11eb9c0431 | # Dataset Card for "auto-batch"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
| zeio/auto-batch | [
"region:us"
]
| 2023-10-31T19:05:43+00:00 | {"dataset_info": [{"config_name": "spoken", "features": [{"name": "title", "dtype": "string"}, {"name": "speech", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "topics", "list": [{"name": "posts", "list": [{"name": "text", "dtype": "string"}]}]}], "splits": [{"name": "train", "num_bytes": 4378815049786.86, "num_examples": 875140}], "download_size": 58030117749, "dataset_size": 4378815049786.86}, {"config_name": "written", "features": [{"name": "title", "dtype": "string"}, {"name": "topics", "list": [{"name": "posts", "list": [{"name": "text", "dtype": "string"}]}]}], "splits": [{"name": "train", "num_bytes": 23170678001, "num_examples": 875140}], "download_size": 11291624575, "dataset_size": 23170678001}], "configs": [{"config_name": "spoken", "data_files": [{"split": "train", "path": "spoken/train-*"}]}, {"config_name": "written", "data_files": [{"split": "train", "path": "written/train-*"}]}]} | 2023-12-10T19:41:38+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "auto-batch"
More Information needed
| [
"# Dataset Card for \"auto-batch\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"auto-batch\"\n\nMore Information needed"
]
| [
6,
14
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"auto-batch\"\n\nMore Information needed"
]
|
15e35cd285dae08e7340ff2971ac3402eada46d8 |
# Dataset card for pale
## Table of contents
- [Dataset description](#dataset-description)
- [Dataset summary](#dataset-summary)
- [Dataset structure](#dataset-structure)
- [Dataset instance](#dataset-instance)
- [Dataset fields](#dataset-fields)
## Dataset description
- **Homepage:** [pale homepage](https://huggingface.co/datasets/zeio/pale)
- **Repository:** [pale repository](https://huggingface.co/datasets/zeio/pale)
- **Point of contact:** [Zeio Nara](mailto:[email protected])
- **Dataset version:** `30.10.2023`
### Dataset summary
This dataset contains League of Legends champions' quotes parsed from [fandom](https://leagueoflegends.fandom.com).
See a dataset usage example [on Google Colab](https://cutt.ly/3wEKDUI9).
The dataset is available in the following configurations:
1. `vanilla` - all data pulled from the website without significant modifications apart from the web page structure parsing;
1. `quotes` - truncated version of the corpus, which doesn't contain sound effects;
1. `annotated` - an extended version of the full configuration with a couple of additional columns with labels;
1. `pulled` - same as vanilla, but sound files have been pulled from the website, and `source` column is replaced with `sound`.
## Dataset structure
### Data instance
An example of an entry from the dataset is given below:
```json
{
"header": "Attack",
"subheader": "Attacking",
"text": "Kindred: \"The masks of the Kindred seek you!\"",
"source": "https://static.wikia.nocookie.net/leagueoflegends/images/1/12/Kindred_Original_Passive_Mark_Enemy_6.ogg/revision/latest?cb=20221204121356",
"champion": "kindred"
}
```
### Data fields
Each dataset entry therefore consists of the following fields:
- `header` - main category of the text;
- `subheader` - secondary category of the text (none in some cases);
- `text` - text said by the champion or description of sound made by the champion;
- `source` - link to the audio file (only `vanilla` configuration);
- `champion` - name of the champion in lowercase;
- `quote` - binary field displaying whether corresponding text contains quote or not (only `annotated` configuration);
- `sound` - audio data for the entry (only `pulled` configuration).
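
A minimal loading sketch is given below (assuming the Hugging Face `datasets` library; the repository path follows the card header, and the configuration, split, and field names are those listed above).

```python
from datasets import load_dataset

# Load the default "quotes" configuration (text only, no sound-effect lines).
quotes = load_dataset("zeio/pale", name="quotes", split="train")
print(quotes[0])  # {'header': ..., 'subheader': ..., 'text': ..., 'champion': ...}

# The "annotated" configuration adds the boolean `quote` column.
annotated = load_dataset("zeio/pale", name="annotated", split="train")
quotes_only = annotated.filter(lambda row: row["quote"])
print(len(quotes_only))
```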
| zeio/auto-pale | [
"task_categories:text-generation",
"task_categories:text-classification",
"task_categories:automatic-speech-recognition",
"language_creators:crowdsourced",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"gaming",
"region:us"
]
| 2023-10-31T19:18:21+00:00 | {"language_creators": ["crowdsourced"], "language": ["en"], "license": "apache-2.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-generation", "text-classification", "automatic-speech-recognition"], "pretty_name": "pale", "tags": ["gaming"], "annotation_creators": ["crowdsourced"], "configs": [{"config_name": "quotes", "data_files": [{"split": "train", "path": "quotes/*.parquet"}], "default": true}, {"config_name": "vanilla", "data_files": [{"split": "train", "path": "vanilla/*.parquet"}], "default": false}, {"config_name": "annotated", "data_files": [{"split": "train", "path": "annotated/*.parquet"}], "default": false}, {"config_name": "pulled", "data_files": [{"split": "train", "path": "pulled/*.parquet"}], "default": false}], "dataset_info": [{"config_name": "pulled", "features": [{"name": "header", "dtype": "string"}, {"name": "subheader", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "sound", "dtype": {"audio": {"sampling_rate": 44100}}}, {"name": "champion", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4621864509.2, "num_examples": 67575}], "download_size": 2557617774, "dataset_size": 4621864509.2}, {"config_name": "quotes", "features": [{"name": "header", "dtype": "string"}, {"name": "subheader", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "champion", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2499768, "num_examples": 31001}], "download_size": 947409, "dataset_size": 2499768}, {"config_name": "vanilla", "features": [{"name": "header", "dtype": "string"}, {"name": "subheader", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "champion", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 14430202, "num_examples": 67575}], "download_size": 2675223, "dataset_size": 14430202}, {"config_name": "annotated", "features": [{"name": "header", "dtype": "string"}, {"name": "subheader", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "champion", "dtype": "string"}, {"name": "quote", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 14339149, "num_examples": 67575}], "download_size": 2681173, "dataset_size": 14339149}]} | 2023-10-31T21:25:58+00:00 | []
| [
"en"
]
| TAGS
#task_categories-text-generation #task_categories-text-classification #task_categories-automatic-speech-recognition #language_creators-crowdsourced #size_categories-10K<n<100K #language-English #license-apache-2.0 #gaming #region-us
|
# Dataset card for pale
## Table of contents
- Dataset description
- Dataset summary
- Dataset structure
- Dataset instance
- Dataset fields
## Dataset description
- Homepage: pale homepage
- Repository: pale repository
- Point of contact: Zeio Nara
- Dataset version: '30.10.2023'
### Dataset summary
This dataset contains League of Legends champions' quotes parsed from fandom.
See a dataset usage example on Google Colab.
The dataset is available in the following configurations:
1. 'vanilla' - all data pulled from the website without significant modifications apart from the web page structure parsing;
1. 'quotes' - truncated version of the corpus, which doesn't contain sound effects;
1. 'annotated' - an extended version of the full configuration with a couple of additional columns with labels;
1. 'pulled' - same as vanilla, but sound files have been pulled from the website, and 'source' column is replaced with 'sound'.
## Dataset structure
### Data instance
An example of an entry from the dataset is given below:
### Data fields
Each dataset entry therefore consists of the following fields:
- 'header' - main category of the text;
- 'subheader' - secondary category of the text (none in some cases);
- 'text' - text said by the champion or description of sound made by the champion;
- 'source' - link to the audio file (only 'vanilla' configuration);
- 'champion' - name of the champion in lowercase;
- 'quote' - binary field displaying whether corresponding text contains quote or not (only 'annotated' configuration);
- 'sound' - audio data for the entry (only 'pulled' configuration).
| [
"# Dataset card for pale",
"## Table of contents\n\n- Dataset description\n - Dataset summary\n- Dataset structure\n - Dataset instance\n - Dataset fields",
"## Dataset description\n\n- Homepage: pale homepage\n- Repository: pale repository\n- Point of contact: Zeio Nara\n- Dataset version: '30.10.2023'",
"### Dataset summary\n\nThis dataset contains league of legends champions' quotes parsed from fandom.\nSee dataset usage example at google colab.\nThe dataset is available in the following configurations:\n\n1. 'vanilla' - all data pulled from the website without significant modifications apart from the web page structure parsing;\n1. 'quotes' - truncated version of the corpus, which does't contain sound effects;\n1. 'annotated' - an extended version of the full configuration with a couple of additional columns with labels;\n1. 'pulled' - same as vanilla, but sound files have been pulled from the website, and 'source' column is replaced with 'sound'.",
"## Dataset structure",
"### Data instance\n\nAn example of an entry from the dataset is given below:",
"### Data fields\n\nEach dataset entry therefore consists of the following fields:\n\n- 'header' - main category of the text;\n- 'subheader' - secondary category of the text (none in some cases);\n- 'text' - text said by the champion or description of sound made by the champion;\n- 'source' - link to the audio file (only 'vanilla' configuration);\n- 'champion' - name of the champion in lowercase;\n- 'quote' - binary field displaying whether corresponding text contains quote or not (only 'annotated' configuration);\n- 'sound' - audio data for the entry (only 'pulled' configuration)."
]
| [
"TAGS\n#task_categories-text-generation #task_categories-text-classification #task_categories-automatic-speech-recognition #language_creators-crowdsourced #size_categories-10K<n<100K #language-English #license-apache-2.0 #gaming #region-us \n",
"# Dataset card for pale",
"## Table of contents\n\n- Dataset description\n - Dataset summary\n- Dataset structure\n - Dataset instance\n - Dataset fields",
"## Dataset description\n\n- Homepage: pale homepage\n- Repository: pale repository\n- Point of contact: Zeio Nara\n- Dataset version: '30.10.2023'",
"### Dataset summary\n\nThis dataset contains league of legends champions' quotes parsed from fandom.\nSee dataset usage example at google colab.\nThe dataset is available in the following configurations:\n\n1. 'vanilla' - all data pulled from the website without significant modifications apart from the web page structure parsing;\n1. 'quotes' - truncated version of the corpus, which does't contain sound effects;\n1. 'annotated' - an extended version of the full configuration with a couple of additional columns with labels;\n1. 'pulled' - same as vanilla, but sound files have been pulled from the website, and 'source' column is replaced with 'sound'.",
"## Dataset structure",
"### Data instance\n\nAn example of an entry from the dataset is given below:",
"### Data fields\n\nEach dataset entry therefore consists of the following fields:\n\n- 'header' - main category of the text;\n- 'subheader' - secondary category of the text (none in some cases);\n- 'text' - text said by the champion or description of sound made by the champion;\n- 'source' - link to the audio file (only 'vanilla' configuration);\n- 'champion' - name of the champion in lowercase;\n- 'quote' - binary field displaying whether corresponding text contains quote or not (only 'annotated' configuration);\n- 'sound' - audio data for the entry (only 'pulled' configuration)."
]
| [
82,
6,
26,
39,
156,
4,
17,
150
]
| [
"passage: TAGS\n#task_categories-text-generation #task_categories-text-classification #task_categories-automatic-speech-recognition #language_creators-crowdsourced #size_categories-10K<n<100K #language-English #license-apache-2.0 #gaming #region-us \n# Dataset card for pale## Table of contents\n\n- Dataset description\n - Dataset summary\n- Dataset structure\n - Dataset instance\n - Dataset fields## Dataset description\n\n- Homepage: pale homepage\n- Repository: pale repository\n- Point of contact: Zeio Nara\n- Dataset version: '30.10.2023'### Dataset summary\n\nThis dataset contains league of legends champions' quotes parsed from fandom.\nSee dataset usage example at google colab.\nThe dataset is available in the following configurations:\n\n1. 'vanilla' - all data pulled from the website without significant modifications apart from the web page structure parsing;\n1. 'quotes' - truncated version of the corpus, which does't contain sound effects;\n1. 'annotated' - an extended version of the full configuration with a couple of additional columns with labels;\n1. 'pulled' - same as vanilla, but sound files have been pulled from the website, and 'source' column is replaced with 'sound'.## Dataset structure### Data instance\n\nAn example of an entry from the dataset is given below:### Data fields\n\nEach dataset entry therefore consists of the following fields:\n\n- 'header' - main category of the text;\n- 'subheader' - secondary category of the text (none in some cases);\n- 'text' - text said by the champion or description of sound made by the champion;\n- 'source' - link to the audio file (only 'vanilla' configuration);\n- 'champion' - name of the champion in lowercase;\n- 'quote' - binary field displaying whether corresponding text contains quote or not (only 'annotated' configuration);\n- 'sound' - audio data for the entry (only 'pulled' configuration)."
]
|
5c58be069dd510b9a72f8215694515b0006735ad | # Dataset Card for "task_prediction_train3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | yuvalkirstain/task_prediction_train3 | [
"region:us"
]
| 2023-10-31T19:33:13+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "path", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "task_name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 659890949, "num_examples": 5663600}, {"name": "validation", "num_bytes": 7823929, "num_examples": 60002}, {"name": "test", "num_bytes": 153998, "num_examples": 2057}], "download_size": 148209849, "dataset_size": 667868876}} | 2023-10-31T19:33:36+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "task_prediction_train3"
More Information needed | [
"# Dataset Card for \"task_prediction_train3\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"task_prediction_train3\"\n\nMore Information needed"
]
| [
6,
20
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"task_prediction_train3\"\n\nMore Information needed"
]
|
826a6bdd0a23c6c0aaba0fca8d7c4dbb9e01fd8c |
<h1>Dataset Card for a Black and White Sharpie Style</h1>
Dataset used to train/finetune a black and white sharpie style
Captions are generated by hand with the assistance of BLIP.
Images were hand drawn.
Text file filenames correspond to image file filenames as captions. | joshuajewell/32000-BlackSharpie | [
"task_categories:text-to-image",
"annotations_creators:human generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:n=33",
"language:en",
"license:cc0-1.0",
"region:us"
]
| 2023-10-31T20:08:08+00:00 | {"annotations_creators": ["human generated"], "language_creators": ["other"], "language": ["en"], "license": "cc0-1.0", "multilinguality": ["monolingual"], "size_categories": ["n=33"], "task_categories": ["text-to-image"], "task_ids": [], "pretty_name": "Black Sharpie", "tags": []} | 2023-10-31T20:12:40+00:00 | []
| [
"en"
]
| TAGS
#task_categories-text-to-image #annotations_creators-human generated #language_creators-other #multilinguality-monolingual #size_categories-n=33 #language-English #license-cc0-1.0 #region-us
|
<h1>Dataset Card for a Black and White Sharpie Style</h1>
Dataset used to train/finetune a black and white sharpie style
Captions are generated by hand with the assistance of BLIP.
Images were hand drawn.
Text file filenames correspond to image file filenames as captions. | []
| [
"TAGS\n#task_categories-text-to-image #annotations_creators-human generated #language_creators-other #multilinguality-monolingual #size_categories-n=33 #language-English #license-cc0-1.0 #region-us \n"
]
| [
67
]
| [
"passage: TAGS\n#task_categories-text-to-image #annotations_creators-human generated #language_creators-other #multilinguality-monolingual #size_categories-n=33 #language-English #license-cc0-1.0 #region-us \n"
]
|
b52b00b1e46c821e42e5f5e6939501b46a9e9f29 |
<h1>Dataset Card for Feng Zikai</h1>
Dataset used to train/finetune in the art style of artist Feng Zikai
<br>Captions are generated by hand with the assistance of BLIP.
Images sourced from: http://www.chinaonlinemuseum.com/gallery-feng-zikai.php
Text file filenames correspond to image file filenames as captions. | joshuajewell/FengZikai | [
"task_categories:text-to-image",
"annotations_creators:human generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:n=40",
"language:en",
"license:unknown",
"region:us"
]
| 2023-10-31T20:15:16+00:00 | {"annotations_creators": ["human generated"], "language_creators": ["other"], "language": ["en"], "license": "unknown", "multilinguality": ["monolingual"], "size_categories": ["n=40"], "task_categories": ["text-to-image"], "task_ids": [], "pretty_name": "Black Sharpie", "tags": []} | 2023-11-03T03:07:51+00:00 | []
| [
"en"
]
| TAGS
#task_categories-text-to-image #annotations_creators-human generated #language_creators-other #multilinguality-monolingual #size_categories-n=40 #language-English #license-unknown #region-us
|
<h1>Dataset Card for Feng Zikai</h1>
Dataset used to train/finetune in the art style of artist Feng Zikai
<br>Captions are generated by hand with the assistance of BLIP.
Images sourced from: URL
Text file filenames correspond to image file filenames as captions. | []
| [
"TAGS\n#task_categories-text-to-image #annotations_creators-human generated #language_creators-other #multilinguality-monolingual #size_categories-n=40 #language-English #license-unknown #region-us \n"
]
| [
66
]
| [
"passage: TAGS\n#task_categories-text-to-image #annotations_creators-human generated #language_creators-other #multilinguality-monolingual #size_categories-n=40 #language-English #license-unknown #region-us \n"
]
|
94d420ea8eaa08e4022835e6b5986bc478b732b5 | # Dataset Card for "shEMO_transcripts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | minoosh/shEMO_transcripts | [
"region:us"
]
| 2023-10-31T20:20:26+00:00 | {"dataset_info": {"features": [{"name": "transcription", "dtype": "string"}, {"name": "emotion", "dtype": {"class_label": {"names": {"0": "A", "1": "H", "2": "N", "3": "S", "4": "W", "5": "F"}}}}], "splits": [{"name": "train", "num_bytes": 255721.6, "num_examples": 2400}, {"name": "test", "num_bytes": 31965.2, "num_examples": 300}, {"name": "valid", "num_bytes": 31965.2, "num_examples": 300}], "download_size": 173563, "dataset_size": 319652.0}} | 2023-10-31T20:20:42+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "shEMO_transcripts"
More Information needed | [
"# Dataset Card for \"shEMO_transcripts\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"shEMO_transcripts\"\n\nMore Information needed"
]
| [
6,
17
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"shEMO_transcripts\"\n\nMore Information needed"
]
|
61d85260d263e771ee5506e3dd4aab7a7a233b08 | # Dataset Card for "MyPatternDataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | MaxReynolds/MyPatternDataset | [
"region:us"
]
| 2023-10-31T21:05:24+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1030683.0, "num_examples": 38}], "download_size": 1018065, "dataset_size": 1030683.0}} | 2023-11-15T23:29:59+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "MyPatternDataset"
More Information needed | [
"# Dataset Card for \"MyPatternDataset\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"MyPatternDataset\"\n\nMore Information needed"
]
| [
6,
16
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"MyPatternDataset\"\n\nMore Information needed"
]
|
b91e3bd6de3eb7edaa5592352e39703e31d4590d | # Dataset Card for "QA_Dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | rkdeva/QA_Dataset | [
"region:us"
]
| 2023-10-31T21:08:02+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 252345, "num_examples": 103}], "download_size": 112834, "dataset_size": 252345}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-10-31T21:08:06+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "QA_Dataset"
More Information needed | [
"# Dataset Card for \"QA_Dataset\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"QA_Dataset\"\n\nMore Information needed"
]
| [
6,
14
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"QA_Dataset\"\n\nMore Information needed"
]
|
5af68da7df24a6b5c488e29a377acefdc703f4f7 | # Dataset Card for "dmae_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Augusto777/dmae_test | [
"region:us"
]
| 2023-10-31T21:24:19+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "avanzada", "1": "leve", "2": "moderada", "3": "no dmae"}}}}], "splits": [{"name": "test", "num_bytes": 8640689.0, "num_examples": 20}], "download_size": 8641113, "dataset_size": 8640689.0}, "configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}]} | 2023-10-31T21:49:43+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "dmae_test"
More Information needed | [
"# Dataset Card for \"dmae_test\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"dmae_test\"\n\nMore Information needed"
]
| [
6,
15
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"dmae_test\"\n\nMore Information needed"
]
|
5d4c07796f7d5f8fbae6d78e5f52490f474b8f61 | # Dataset Card for "e0cc5f8f"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | result-kand2-sdxl-wuerst-karlo/e0cc5f8f | [
"region:us"
]
| 2023-10-31T22:47:04+00:00 | {"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 154, "num_examples": 10}], "download_size": 1307, "dataset_size": 154}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-10-31T22:47:06+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "e0cc5f8f"
More Information needed | [
"# Dataset Card for \"e0cc5f8f\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"e0cc5f8f\"\n\nMore Information needed"
]
| [
6,
17
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"e0cc5f8f\"\n\nMore Information needed"
]
|
cca76d14628e17347fcb5ae222c4dad49bec0cfa | bfolder | makvr/bsaib | [
"region:us"
]
| 2023-10-31T22:52:40+00:00 | {} | 2023-10-31T23:00:00+00:00 | []
| []
| TAGS
#region-us
| bfolder | []
| [
"TAGS\n#region-us \n"
]
| [
6
]
| [
"passage: TAGS\n#region-us \n"
]
|
f87cd83c1d636deddcc17a93132504475074e315 | # Dataset Card for P3_0.5
## Dataset Structure
### Data Instances
An example of "train" looks as follows:
```bash
{
'choices': [ "Yes", "No" ],
'input': "Given that No Weapons of Mass Destruction Found in Iraq Yet. Does it follow that Weapons of Mass Destruction Found in Iraq. Yes or no?",
'label': "No",
'dataset': "rte",
'category': "nli",
'prompt_template': "super_glue_rte_does_it_follow_that"
}
```
To check all the prompted examples, you can use the [Promptsource hosted tool](http://bigscience.huggingface.co/promptsource) and choose the `Prompted dataset viewer` mode in the left panel.
### Data Fields
The data fields are the same among all splits:
- `choices`: the choices (in natural language) available to the model
- `input`: the natural language input fed to the model
- `label`: the natural language target that the model has to generate
- `dataset`: the dataset that the data are from
- `category`: the NLP task it belongs to
- `prompt_template`: the prompt template used to form the input
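
To spot-check these fields programmatically, the split can be loaded with the `datasets` library. This is a minimal sketch: `simonycl/p3_0.5_dataset` is this repository's Hub id, and only fields named above are accessed — confirm the exact keys on your copy first.

```python
from datasets import load_dataset

# Load the train split and look at one prompted example.
dataset = load_dataset("simonycl/p3_0.5_dataset", split="train")
example = dataset[0]
print(sorted(example.keys()))  # confirm which fields are present
print(example["input"])        # natural language input fed to the model
print(example["choices"])      # the answer options in natural language
```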
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | simonycl/p3_0.5_dataset | [
"language:en",
"region:us"
]
| 2023-10-31T23:52:01+00:00 | {"language": ["en"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "choices", "sequence": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "dataset", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "prompt_template", "dtype": "string"}, {"name": "idx", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 196170501, "num_examples": 304955}, {"name": "test", "num_bytes": 20170043, "num_examples": 17255}], "download_size": 88640049, "dataset_size": 216340544}} | 2023-12-03T00:27:26+00:00 | []
| [
"en"
]
| TAGS
#language-English #region-us
| # Dataset Card for P3_0.5
## Dataset Structure
### Data Instances
An example of "train" looks as follows:
To check all the prompted examples, you can use the Promptsource hosted tool and choose the 'Prompted dataset viewer' mode in the left panel.
### Data Fields
The data fields are the same among all splits:
- 'choices': the choices (in natural language) available to the model
- 'input': the natural language input fed to the model
- 'label': the natural language target that the model has to generate
- 'dataset': the dataset that the data are from
- 'category': the NLP task it belongs to
- 'prompt_template': the prompt template used to form the input
More Information needed | [
"# Dataset Card for P3_0.5",
"## Dataset Structure",
"### Data Instances\n\nAn example of \"train\" looks as follows:\n\n\nTo check all the prompted examples, you can use the Promptsource hosted tool and choose the 'Prompted dataset viewer' mode in the left panel.",
"### Data Fields\n\nThe data fields are the same among all splits:\n- 'choices': the choices (in natural language) available to the model\n- 'input': the natural language input fed to the model\n- 'label': the natural language target that the model has to generate\n- 'dataset': the dataset that the data are from\n- 'category': the NLP task it belongs to\n- 'prompt_template': the prompt template used to form the input\n\n\nMore Information needed"
]
| [
"TAGS\n#language-English #region-us \n",
"# Dataset Card for P3_0.5",
"## Dataset Structure",
"### Data Instances\n\nAn example of \"train\" looks as follows:\n\n\nTo check all the prompted examples, you can use the Promptsource hosted tool and choose the 'Prompted dataset viewer' mode in the left panel.",
"### Data Fields\n\nThe data fields are the same among all splits:\n- 'choices': the choices (in natural language) available to the model\n- 'input': the natural language input fed to the model\n- 'label': the natural language target that the model has to generate\n- 'dataset': the dataset that the data are from\n- 'category': the NLP task it belongs to\n- 'prompt_template': the prompt template used to form the input\n\n\nMore Information needed"
]
| [
10,
9,
6,
56,
116
]
| [
"passage: TAGS\n#language-English #region-us \n# Dataset Card for P3_0.5## Dataset Structure### Data Instances\n\nAn example of \"train\" looks as follows:\n\n\nTo check all the prompted examples, you can use the Promptsource hosted tool and choose the 'Prompted dataset viewer' mode in the left panel.### Data Fields\n\nThe data fields are the same among all splits:\n- 'choices': the choices (in natural language) available to the model\n- 'input': the natural language input fed to the model\n- 'label': the natural language target that the model has to generate\n- 'dataset': the dataset that the data are from\n- 'category': the NLP task it belongs to\n- 'prompt_template': the prompt template used to form the input\n\n\nMore Information needed"
]
|
10545cf57619b14ed70e556039c71b4452e8ff92 | # Dataset Card for "synthetic_hebrew_medical_text"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | cp500/synthetic_hebrew_medical_text | [
"region:us"
]
| 2023-11-01T00:01:59+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 12496549, "num_examples": 4811}], "download_size": 5944521, "dataset_size": 12496549}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-02T00:05:12+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "synthetic_hebrew_medical_text"
More Information needed | [
"# Dataset Card for \"synthetic_hebrew_medical_text\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"synthetic_hebrew_medical_text\"\n\nMore Information needed"
]
| [
6,
21
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"synthetic_hebrew_medical_text\"\n\nMore Information needed"
]
|
ca530c3d2dfd7a7c0325213aa0633fabb8454aaf | # Dataset Card for "enamine_leadlike"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | phanvancongthanh/enamine_leadlike | [
"region:us"
]
| 2023-11-01T00:02:22+00:00 | {"dataset_info": {"features": [{"name": "smiles", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 31490993396, "num_examples": 672148662}], "download_size": 12563051169, "dataset_size": 31490993396}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-01T00:13:38+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "enamine_leadlike"
More Information needed | [
"# Dataset Card for \"enamine_leadlike\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"enamine_leadlike\"\n\nMore Information needed"
]
| [
6,
16
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"enamine_leadlike\"\n\nMore Information needed"
]
|
792d06e147c2415c8c100cc92b525cbd2db6360d | # Dataset Card for "formatted-python-code-APR"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | JoaoJunior/formatted-python-code-APR | [
"region:us"
]
| 2023-11-01T01:21:53+00:00 | {"dataset_info": {"features": [{"name": "bugged", "dtype": "string"}, {"name": "fixed", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 939814360, "num_examples": 480777}], "download_size": 204217008, "dataset_size": 939814360}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-01T01:22:18+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "formatted-python-code-APR"
More Information needed | [
"# Dataset Card for \"formatted-python-code-APR\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"formatted-python-code-APR\"\n\nMore Information needed"
]
| [
6,
20
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"formatted-python-code-APR\"\n\nMore Information needed"
]
|
9ce7e71b79fe688fcace5b7bb7761cda1168acaf | # Dataset Card for "formatted-java-preprocessed-code-APR"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | JoaoJunior/formatted-java-preprocessed-code-APR | [
"region:us"
]
| 2023-11-01T01:29:50+00:00 | {"dataset_info": {"features": [{"name": "bugged", "dtype": "string"}, {"name": "fixed", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 5356711880, "num_examples": 996209}], "download_size": 683377373, "dataset_size": 5356711880}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-01T01:30:51+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "formatted-java-preprocessed-code-APR"
More Information needed | [
"# Dataset Card for \"formatted-java-preprocessed-code-APR\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"formatted-java-preprocessed-code-APR\"\n\nMore Information needed"
]
| [
6,
23
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"formatted-java-preprocessed-code-APR\"\n\nMore Information needed"
]
|
de72d40cfb3424348d1e4c4372f4d468f2d35aac | # Dataset Card for "apt_pretrain_textbook_16k-100"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | communityai/apt_pretrain_textbook_16k-100 | [
"region:us"
]
| 2023-11-01T01:53:46+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 10168718.903313944, "num_examples": 100}], "download_size": 5120308, "dataset_size": 10168718.903313944}} | 2023-11-01T01:53:48+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "apt_pretrain_textbook_16k-100"
More Information needed | [
"# Dataset Card for \"apt_pretrain_textbook_16k-100\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"apt_pretrain_textbook_16k-100\"\n\nMore Information needed"
]
| [
6,
22
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"apt_pretrain_textbook_16k-100\"\n\nMore Information needed"
]
|
c931d059495e3793ce74f82e2a366b60b6654a69 | # AgentInstruct: Agent Instructs Large Language Models to be General Zero-Shot Reasoners
The repo for paper [Agent Instructs Large Language Models to be General Zero-Shot Reasoners](https://arxiv.org/abs/2310.03710).
<p align="center">
📃 <a href="https://arxiv.org/abs/2310.03710" target="_blank">[Paper]</a> • 💻 <a href="https://github.com/wang-research-lab/agentinstruct" target="_blank">[Github]</a> • 🤗 <a href="https://huggingface.co/datasets/WangResearchLab/AgentInstruct" target="_blank">[HuggingFace]</a> • 📌 <a href="https://nlp.wustl.edu/blog/2023-11-02-agentinstruct/" target="_blank">[Blog]</a> • 📽 <a href="http://cgraywang.github.io/files/2023-agentinstruct-slides(10min).pdf" target="_blank">[Slides]</a> • 📋 <a href="http://cgraywang.github.io/files/2023-agentinstruct-poster.pdf" target="_blank">[Poster]</a>
</p>
## AgentInstruct Instruction Dataset
The **AgentInstruct** Instruction dataset contains agent instructions for the 29 datasets used in the paper. We encourage you to use our AgentInstruct methodology detailed in the paper and code to produce more instructions and evaluate on more datasets.
We provide an example of using the instructions and producing more instructions with our AgentInstruct below. The AgentInstruct Instruction dataset we used in the code is [here](https://huggingface.co/datasets/WangResearchLab/AgentInstruct/blob/main/instructions.json).
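
If you only want to browse these instructions rather than regenerate them, the published copy can be loaded straight from the Hugging Face Hub. This is a sketch under the assumption that the `WangResearchLab/AgentInstruct` repository keeps the `agentinstruct_instruction` split declared in its dataset configuration; adjust the split name if it differs.

```python
from datasets import load_dataset

# Load the published agent instructions used for the benchmark datasets.
instructions = load_dataset("WangResearchLab/AgentInstruct", split="agentinstruct_instruction")
print(instructions)     # overview of rows and columns
print(instructions[0])  # one dataset's agent-generated instructions
```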
## Installation
Begin by cloning this repository:
```
git clone --recurse-submodules https://github.com/wang-research-lab/agentinstruct.git
```
Then, run the following to implement zero-shot AgentInstruct into the HELM submodule:
```
cd agentinstruct
bash src/agentinstruct/reasoning/helm_updates/update_helm.sh
```
Now, add the following api keys to `prod_env/credentials.conf`: `openaiApiKey` (from [here](https://openai.com/blog/openai-api)) and `bingSubscriptionKey` (from [here](https://www.microsoft.com/en-us/bing/apis/bing-web-search-api)). Use the following format:
```
openaiApiKey: [your key here]
bingSubscriptionKey: [your key here]
```
We would recommend using a [Python 3.10 docker image](https://hub.docker.com/layers/library/python/3.10/images/sha256-6eff601177b9fdfb85f383089b97468910ff59be129019b1588dc3f9ac862204?context=explore).
```
docker network create mynetwork
docker pull python:3.10
docker run --network=mynetwork -v ~/agentinstruct:/code/agentinstruct -it python:3.10 bash
```
Next, create a virtual environment:
```
cd /code/agentinstruct
python3 -m pip install virtualenv
python3 -m virtualenv -p python3.10 helm-venv
source helm-venv/bin/activate
```
Run the following to download the necessary dependencies:
```
pip install -e src/agentinstruct/reasoning/helm
pip install -r requirements.txt
```
*Note*: For running other models (vicuna-13b, llama-2-7b-chat, llama-2-13b-chat, llama-2-70b-chat), you must also follow the instructions [here](src/agentinstruct/reasoning/serve/README.md).
## Replicating Main Results
To replicate the main results on 28 datasets (excludes NewsQA for its license restrictions, see [here](src/agentinstruct/reasoning/helm_updates/src/helm/benchmark/scenarios/newsqa_scenario.py)) with a specific model (gpt-3.5-turbo, llama-2-7b-chat, llama-2-13b-chat, llama-2-70b-chat, vicuna-13b), run:
```
bash scripts/gpt-3.5-turbo.sh
bash scripts/llama-2-7b-chat.sh
bash scripts/llama-2-13b-chat.sh
bash scripts/llama-2-70b-chat.sh
bash scripts/vicuna-13b.sh
```
Results will be stored in ```benchmark_outputs/runs/{model}-agentinstruct/results.csv```.
## Customizing your Run
There are three key components of the zero-shot AgentInstruct pipeline: (1) generating agent instructions, (2) running reasoning steps with the instructions, and (3) formatting the results. In this section, we will look at each component in detail, focusing on a single dataset: AddSub. Note that nothing here is specific to AddSub, and can be applied to any dataset, or even a combination of datasets!
### Generating Agent Instructions
First, to generate the agent instructions for AddSub, run the following:
```
bash scripts/generate_agent_instructions.sh scripts/run_specs/simple-gpt-3.5-turbo.conf addsub
```
We'll create a configuration file that specifies the run configuration. As an example, we'll look at the configuration file ```scripts/run_specs/simple-gpt-3.5-turbo.conf```, which specifies the configuration of running the AddSub dataset using GPT-3.5 Turbo:
```
entries: [
{description: "addsub:model=openai/gpt-3.5-turbo-0301,max_train_instances=0,instructions=agentinstruct", priority: 1}
]
```
The agent instructions for the AddSub dataset will be saved in ```instructions/addsub/instructions.json```. The agent's input, as well as the web sources used and intermediate prompts, will be saved under ```instructions/addsub/inputs.json``` and ```instructions/addsub/metadata.json``` respectively.
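
To peek at what the agent produced without opening the files by hand, the generated JSON can be read directly. A minimal sketch — the exact layout of `instructions.json` can vary between runs, so it simply pretty-prints a preview of the object:

```python
import json

# Inspect the freshly generated agent instructions for AddSub.
with open("instructions/addsub/instructions.json") as f:
    instructions = json.load(f)

# Preview the first part; the structure may differ between runs.
print(json.dumps(instructions, indent=2, ensure_ascii=False)[:2000])
```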
### Running Reasoning Steps
We'll use the same configuration file as above. To run reasoning steps with zero-shot AgentInstruct on AddSub, run the following:
```
bash scripts/run_reasoning.sh scripts/run_specs/simple-gpt-3.5-turbo.conf addsub 1000
```
The first two parameters are identical to those above, and the third represents the number of instances to run reasoning steps on. The results will be stored in ```benchmark_outputs/runs/addsub```.
*Note*: By default, zero-shot AgentInstruct reasoning will be done using the latest set of instructions generated. To run reasoning with the instructions used in the paper, run this script before the run_reasoning command:
```
python scripts/replicate.py
```
### Format Results
To easily format the evaluation results, run:
```
python src/agentinstruct/eval/format_results.py --suite addsub
```
The evaluation results will be saved in ```benchmark_output/runs/addsub/results.csv```. To see the full text output by instance, open ```benchmark_output/runs/addsub/'addsub:model=openai_gpt-3.5-turbo-0301,max_train_instances=0,instructions=agentinstruct'/scenario_state.json``` and search for ```full_text```.
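
If you would rather not search the JSON by hand, a small helper can collect the generated texts. This sketch assumes only what is stated above — that `scenario_state.json` contains `full_text` entries — and walks the file generically rather than relying on its exact schema:

```python
import json

# Path copied from the run layout described above.
path = ("benchmark_output/runs/addsub/"
        "addsub:model=openai_gpt-3.5-turbo-0301,max_train_instances=0,instructions=agentinstruct/"
        "scenario_state.json")

def collect_full_text(node, found):
    # Recursively walk dicts/lists and gather every value stored under "full_text".
    if isinstance(node, dict):
        for key, value in node.items():
            if key == "full_text":
                found.append(value)
            else:
                collect_full_text(value, found)
    elif isinstance(node, list):
        for item in node:
            collect_full_text(item, found)
    return found

with open(path) as f:
    texts = collect_full_text(json.load(f), [])
print(f"{len(texts)} instances with full_text")
```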
*Note*: Normally, the results are formatted after all the run spec descriptions in the configuration file have been run. To see the results for a single run spec description, view:
```
benchmark_output/runs/addsub/'addsub:model=openai_gpt-3.5-turbo-0301,max_train_instances=0,instructions=agentinstruct'/stats.json
```
### All Together Now
To run the above entire AgentInstruct pipeline in one go, run:
```
bash scripts/run.sh scripts/run_specs/simple-gpt-3.5-turbo.conf addsub 1000
```
This will run all 3 steps outlined above, and store the result in ```benchmark_outputs/runs/addsub```.
## Arguments
In this section, we'll cover various important run arguments.
### Run Configuration Arguments
A run spec describes a specific dataset to run. For example, the run spec for AddSub used above is:
```
{description: "addsub:model=openai/gpt-3.5-turbo-0301,max_train_instances=0,instructions=agentinstruct", priority: 1}
```
| argument | description | options|
|----|----|----|
| `model` | Model to use for inference. | `local/vicuna-13b` <br> `local/llama-2-7b-chat` <br> `local/llama-2-13b-chat` <br> `local/llama-2-70b-chat` <br> `openai/gpt-3.5-turbo-0301` |
| `max_train_instances` | Number of few shot examples to prepend. Few Shot is not recommended. | int |
| `instructions` | Optional prompting method to use. `None` corresponds to standard zeroshot. | `agentinstruct` <br> `zeroshotcot` <br> `None` |
*Note*: Several datasets have an additional argument to specify the specific subset or task.
### Generating Agent Instructions Arguments
The main script to generate agent instructions is ```scripts/generate_agent_instructions.sh```. It takes the following 2 positional arguments:
| argument | description | options|
|----|----|----|
| 1st | Path to run spec file. | str |
| 2nd | Suite name under which to save instructions. | str |
Internally, the agent instructions are generated by first running dataset preprocessing (in ```src/agentinstruct/agent/utils/dataset_preprocessing.py```) and then running the instruction generation (in ```src/agentinstruct/agent/agent_instr_generation.py```). These are combined in ```src/agentinstruct/agent/agent_pipeline.py``` and called by ```scripts/generate_agent_instructions.sh```. GPT-4 is used as the agent LLM as in our paper.
### Running Reasoning Arguments
The main script to run reasoning is ```scripts/run_reasoning.sh```, which internally calls `helm-run`. It takes the following 4 positional arguments, as well as a placeholder for any additional argument to pass to `helm-run`:
| argument | description | options|
|----|--------------------------------------------------------------------------------------|----|
| 1st | Path to run spec file. | str |
| 2nd | Suite name under which to save outputs. | str |
| 3rd | Maximum number of instances to run. | int |
| 4th | Maximum number of threads from which to send requests. Defaults to 8 for all models. | int |
| 5th | Place holder for any additional argument to pass to `helm-run`. | str |
### Outputting Results Arguments
The main script to format the results is ```src/agentinstruct/eval/format_results.py```. It takes a single named argument:
| argument | description | options|
|----|----|----|
| --suite | Suite name under which to find outputs. | str |
## Replicating Additional Results
To replicate the zero-shot (`zeroshot`) and zero-shot CoT (`zeroshotcot`) modes, run:
```
bash scripts/run_reasoning.sh scripts/run_specs/{mode}/{model}-{mode}.conf {model}-{mode} 1000 8
python src/agentinstruct/eval/format_results.py --suite {model}-{mode}
```
where `{mode}` is `zeroshot` or `zeroshotcot` and `{model}` is `vicuna-13b`, `llama-2-7b-chat`, `llama-2-13b-chat`, `llama-2-70b-chat`, or `gpt-3.5-turbo`.
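
For instance, the zero-shot CoT replication with GPT-3.5 Turbo plugs into the template above as:

```
bash scripts/run_reasoning.sh scripts/run_specs/zeroshotcot/gpt-3.5-turbo-zeroshotcot.conf gpt-3.5-turbo-zeroshotcot 1000 8
python src/agentinstruct/eval/format_results.py --suite gpt-3.5-turbo-zeroshotcot
```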
*Note*: For standard zero-shot runs, pass `skip-expander` as the 5th positional argument.
## Citation
```bibtex
@article{crispino2023agent,
title={Agent Instructs Large Language Models to be General Zero-Shot Reasoners},
author={Crispino, Nicholas and Montgomery, Kyle and Zeng, Fankun and Song, Dawn and Wang, Chenguang},
journal={arXiv preprint arXiv:2310.03710},
year={2023}
}
``` | WangResearchLab/AgentInstruct | [
"size_categories:n<1K",
"language:en",
"arxiv:2310.03710",
"region:us"
]
| 2023-11-01T02:05:55+00:00 | {"language": ["en"], "size_categories": ["n<1K"], "configs": [{"config_name": "default", "data_files": [{"split": "agentinstruct_instruction", "path": "instructions.parquet"}]}]} | 2023-11-02T23:42:21+00:00 | [
"2310.03710"
]
| [
"en"
]
| TAGS
#size_categories-n<1K #language-English #arxiv-2310.03710 #region-us
| AgentInstruct: Agent Instructs Large Language Models to be General Zero-Shot Reasoners
======================================================================================
The repo for paper Agent Instructs Large Language Models to be General Zero-Shot Reasoners.
[[Paper]](URL target=) • [[Github]](URL target=) • [[HuggingFace]](URL target=) • [[Blog]](URL target=) • [[Slides]](URL target=) • [[Poster]](URL target=)
AgentInstruct Instruction Dataset
---------------------------------
The AgentInstruct Instruction dataset contains agent instructions for the 29 datasets used in the paper. We encourage you to use our AgentInstruct methodology detailed in the paper and code to produce more instructions and evaluate on more datasets.
We provide an example of using the instructions and producing more instructions with our AgentInstruct below. The AgentInstruct Instruction dataset we used in the code is here.
Installation
------------
Begin by cloning this repository:
Then, run the following to implement zero-shot AgentInstruct into the HELM submodule:
Now, add the following api keys to 'prod\_env/URL': 'openaiApiKey' (from here) and 'bingSubscriptionKey' (from here). Use the following format:
We would recommend using a Python 3.10 docker image.
Next, create a virtual environment:
Run the following to download the necessary dependencies:
*Note*: For running other models (vicuna-13b, llama-2-7b-chat, llama-2-13b-chat, llama-2-70b-chat), you must also follow the instructions here.
Replicating Main Results
------------------------
To replicate the main results on 28 datasets (excludes NewsQA for its license restrictions, see here) with a specific model (gpt-3.5-turbo, llama-2-7b-chat, llama-2-13b-chat, llama-2-70b-chat, vicuna-13b), run:
Results will be stored in .
Customizing your Run
--------------------
There are three key components of the zero-shot AgentInstruct pipeline: (1) generating agent instructions, (2) running reasoning steps with the instructions, and (3) formatting the results. In this section, we will look at each component in detail, focusing on a single dataset: AddSub. Note that nothing here is specific to AddSub, and can be applied to any dataset, or even a combination of datasets!
### Generating Agent Instructions
First, to generate the agent instructions for AddSub, run the following:
We'll create a configuration file that specifies the run configuration. As an example, we'll look at the configuration file , which specifies the configuration of running the AddSub dataset using GPT-3.5 Turbo:
The agent instructions for the AddSub dataset will be saved in . The agent's input, as well as the web sources used and intermediate prompts, will be saved under and respectively.
### Running Reasoning Steps
We'll use the same configuration file as above. To run reasoning steps with zero-shot AgentInstruct on AddSub, run the following:
The first two parameters are identical to those above, and the third represents the number of instances to run reasoning steps on. The results will be stored in .
*Note*: By default, zero-shot AgentInstruct reasoning will be done using the latest set of instructions generated. To run reasoning with the instructions used in the paper, run this script before the run\_reasoning command:
### Format Results
To easily format the evaluation results, run:
The evaluation results will be saved in . To see the full text output by instance, open and search for .
*Note*: Normally, the results are formatted after all the run spec descriptions in the configuration file have been run. To see the results for a single run spec description, view:
### All Together Now
To run the above entire AgentInstruct pipeline in one go, run:
This will run all 3 steps outlined above, and store the result in .
Arguments
---------
In this section, we'll cover various important run arguments.
### Run Configuration Arguments
A run spec describes a specific dataset to run. For example, the run spec for AddSub used above is:
argument: 'model', description: Model to use for inference., options: 'local/vicuna-13b'
'local/llama-2-7b-chat'
'local/llama-2-13b-chat'
'local/llama-2-70b-chat'
'openai/gpt-3.5-turbo-0301'
argument: 'max\_train\_instances', description: Number of few shot examples to prepend. Few Shot is not recommended., options: int
argument: 'instructions', description: Optional prompting method to use. 'None' corresponds to standard zeroshot., options: 'agentinstruct'
'zeroshotcot'
'None'
*Note*: Several datasets have an additional argument to specify the specific subset or task.
### Generating Agent Instructions Arguments
The main script to generate agent instructions is . It takes the following 2 positional arguments:
argument: 1st, description: Path to run spec file., options: str
argument: 2nd, description: Suite name under which to save instructions., options: str
Internally, the agent instructions are generated by first running dataset preprocessing (in ) and then running the instruction generation (in ). These are combined in and called by . GPT-4 is used as the agent LLM as in our paper.
### Running Reasoning Arguments
The main script to run reasoning is , which internally calls 'helm-run'. It takes the following 4 positional arguments, as well as a placeholder for any additional argument to pass to 'helm-run':
argument: 1st, description: Path to run spec file., options: str
argument: 2nd, description: Suite name under which to save outputs., options: str
argument: 3rd, description: Maximum number of instances to run., options: int
argument: 4th, description: Maximum number of threads from which to send requests. Defaults to 8 for all models., options: int
argument: 5th, description: Place holder for any additional argument to pass to 'helm-run'., options: str
### Outputting Results Arguments
The main script to format the results is . It takes a single named argument:
argument: --suite, description: Suite name under which to find outputs., options: str
Replicating Additional Results
------------------------------
To replicate the zero-shot ('zeroshot') and zero-shot CoT ('zeroshotcot') modes, run:
where '{mode}' is 'zeroshot' or 'zeroshotcot' and '{model}' is 'vicuna-13b', 'llama-2-7b-chat', 'llama-2-13b-chat', 'llama-2-70b-chat', or 'gpt-3.5-turbo'.
*Note*: For standard zero-shot runs, pass 'skip-expander' as the 5th positional argument.
| [
"### Generating Agent Instructions\n\n\nFirst, to generate the agent instructions for AddSub, run the following:\n\n\nWe'll create a configuration file that specifies the run configuration. As an example, we'll look at the configuration file , which specifies the configuration of running the AddSub dataset using GPT-3.5 Turbo:\n\n\nThe agent instructions for the AddSub dataset will be saved in . The agent's input, as well as the web sources used and intermediate prompts, will be saved under and respectively.",
"### Running Reasoning Steps\n\n\nWe'll use the same configuration file as above. To run reasoning steps with zero-shot AgentInstruct on AddSub, run the following:\n\n\nThe first two parameters are identical to those above, and the third represents the number of instances to run reasoning steps on. The results will be stored in .\n\n\n*Note*: By default, zero-shot AgentInstruct reasoning will be done using the latest set of instructions generated. To run reasoning with the instructions used in the paper, run this script before the run\\_reasoning command:",
"### Format Results\n\n\nTo easily format the evaluation results, run:\n\n\nThe evaluation results will be saved in . To see the full text output by instance, open and search for .\n\n\n*Note*: Normally, the results are formatted after all the run spec descriptions in the configuration file have been run. To see for a single run spec description, view:",
"### All Together Now\n\n\nTo run the above entire AgentInstruct pipeline in one go, run:\n\n\nThis will run all 3 steps outlined above, and store the result in .\n\n\nArguments\n---------\n\n\nIn this section, we'll cover various important run arguments.",
"### Run Configuration Arguments\n\n\nA run spec describes a specific dataset to run. For example, the run spec for AddSub used above is:\n\n\nargument: 'model', description: Model to use for inference., options: 'local/vicuna-13b' \n 'local/llama-2-7b-chat' \n 'local/llama-2-13b-chat' \n 'local/llama-2-70b-chat' \n 'openai/gpt-3.5-turbo-0301'\nargument: 'max\\_train\\_instances', description: Number of few shot examples to prepend. Few Shot is not recommended., options: int\nargument: 'instructions', description: Optional prompting method to use. 'None' corresponds to standard zeroshot., options: 'agentinstruct' \n 'zeroshotcot' \n 'None'\n\n\n*Note*: Several datasets have additional argument to specify the specific subset or task.",
"### Generating Agent Instructions Arguments\n\n\nThe main script to generate agent instructions is . It takes the following 2 positional arguments:\n\n\nargument: 1st, description: Path to run spec file., options: str\nargument: 2nd, description: Suite name under which to save instructions., options: str\n\n\nInternally, the agent instructions are generated by first running dataset preprocessing (in ) and then running the instruction generation (in ). These are combined in and called by . GPT-4 is used as the agent LLM as in our paper.",
"### Running Reasoning Arguments\n\n\nThe main script to run reasoning is , which internally calls 'helm-run'. It takes the following 4 positional arguments, as well as a placeholder for any additional argument to pass to 'helm-run':\n\n\nargument: 1st, description: Path to run spec file., options: str\nargument: 2nd, description: Suite name under which to save outputs., options: str\nargument: 3rd, description: Maximum number of instances to run., options: int\nargument: 4th, description: Maximum number of threads from which to send requests. Defaults to 8 for all models., options: int\nargument: 5th, description: Place holder for any additional argument to pass to 'helm-run'., options: str",
"### Outputting Results Arguments\n\n\nThe main script to format the results is . It takes a single named argument:\n\n\nargument: --suite, description: Suite name under which to find outputs., options: str\n\n\nReplicating Additional Results\n------------------------------\n\n\nTo replicate the zero-shot ('zeroshot') and zero-shot CoT ('zeroshot') modes, run:\n\n\nwhere '{mode}' is 'zeroshot' or 'zeroshotcot' and '{model}' is 'vicuna-13b', 'llama-2-7b-chat', 'llama-2-13b-chat', 'llama-2-70b-chat', or 'gpt-3.5-turbo'.\n\n\n*Note*: For standard zero-shot runs, pass 'skip-expander' as the 5th positional argument."
]
| [
"TAGS\n#size_categories-n<1K #language-English #arxiv-2310.03710 #region-us \n",
"### Generating Agent Instructions\n\n\nFirst, to generate the agent instructions for AddSub, run the following:\n\n\nWe'll create a configuration file that specifies the run configuration. As an example, we'll look at the configuration file , which specifies the configuration of running the AddSub dataset using GPT-3.5 Turbo:\n\n\nThe agent instructions for the AddSub dataset will be saved in . The agent's input, as well as the web sources used and intermediate prompts, will be saved under and respectively.",
"### Running Reasoning Steps\n\n\nWe'll use the same configuration file as above. To run reasoning steps with zero-shot AgentInstruct on AddSub, run the following:\n\n\nThe first two parameters are identical to those above, and the third represents the number of instances to run reasoning steps on. The results will be stored in .\n\n\n*Note*: By default, zero-shot AgentInstruct reasoning will be done using the latest set of instructions generated. To run reasoning with the instructions used in the paper, run this script before the run\\_reasoning command:",
"### Format Results\n\n\nTo easily format the evaluation results, run:\n\n\nThe evaluation results will be saved in . To see the full text output by instance, open and search for .\n\n\n*Note*: Normally, the results are formatted after all the run spec descriptions in the configuration file have been run. To see for a single run spec description, view:",
"### All Together Now\n\n\nTo run the above entire AgentInstruct pipeline in one go, run:\n\n\nThis will run all 3 steps outlined above, and store the result in .\n\n\nArguments\n---------\n\n\nIn this section, we'll cover various important run arguments.",
"### Run Configuration Arguments\n\n\nA run spec describes a specific dataset to run. For example, the run spec for AddSub used above is:\n\n\nargument: 'model', description: Model to use for inference., options: 'local/vicuna-13b' \n 'local/llama-2-7b-chat' \n 'local/llama-2-13b-chat' \n 'local/llama-2-70b-chat' \n 'openai/gpt-3.5-turbo-0301'\nargument: 'max\\_train\\_instances', description: Number of few shot examples to prepend. Few Shot is not recommended., options: int\nargument: 'instructions', description: Optional prompting method to use. 'None' corresponds to standard zeroshot., options: 'agentinstruct' \n 'zeroshotcot' \n 'None'\n\n\n*Note*: Several datasets have additional argument to specify the specific subset or task.",
"### Generating Agent Instructions Arguments\n\n\nThe main script to generate agent instructions is . It takes the following 2 positional arguments:\n\n\nargument: 1st, description: Path to run spec file., options: str\nargument: 2nd, description: Suite name under which to save instructions., options: str\n\n\nInternally, the agent instructions are generated by first running dataset preprocessing (in ) and then running the instruction generation (in ). These are combined in and called by . GPT-4 is used as the agent LLM as in our paper.",
"### Running Reasoning Arguments\n\n\nThe main script to run reasoning is , which internally calls 'helm-run'. It takes the following 4 positional arguments, as well as a placeholder for any additional argument to pass to 'helm-run':\n\n\nargument: 1st, description: Path to run spec file., options: str\nargument: 2nd, description: Suite name under which to save outputs., options: str\nargument: 3rd, description: Maximum number of instances to run., options: int\nargument: 4th, description: Maximum number of threads from which to send requests. Defaults to 8 for all models., options: int\nargument: 5th, description: Place holder for any additional argument to pass to 'helm-run'., options: str",
"### Outputting Results Arguments\n\n\nThe main script to format the results is . It takes a single named argument:\n\n\nargument: --suite, description: Suite name under which to find outputs., options: str\n\n\nReplicating Additional Results\n------------------------------\n\n\nTo replicate the zero-shot ('zeroshot') and zero-shot CoT ('zeroshot') modes, run:\n\n\nwhere '{mode}' is 'zeroshot' or 'zeroshotcot' and '{model}' is 'vicuna-13b', 'llama-2-7b-chat', 'llama-2-13b-chat', 'llama-2-70b-chat', or 'gpt-3.5-turbo'.\n\n\n*Note*: For standard zero-shot runs, pass 'skip-expander' as the 5th positional argument."
]
| [
29,
114,
129,
76,
59,
213,
122,
180,
185
]
| [
"passage: TAGS\n#size_categories-n<1K #language-English #arxiv-2310.03710 #region-us \n### Generating Agent Instructions\n\n\nFirst, to generate the agent instructions for AddSub, run the following:\n\n\nWe'll create a configuration file that specifies the run configuration. As an example, we'll look at the configuration file , which specifies the configuration of running the AddSub dataset using GPT-3.5 Turbo:\n\n\nThe agent instructions for the AddSub dataset will be saved in . The agent's input, as well as the web sources used and intermediate prompts, will be saved under and respectively.### Running Reasoning Steps\n\n\nWe'll use the same configuration file as above. To run reasoning steps with zero-shot AgentInstruct on AddSub, run the following:\n\n\nThe first two parameters are identical to those above, and the third represents the number of instances to run reasoning steps on. The results will be stored in .\n\n\n*Note*: By default, zero-shot AgentInstruct reasoning will be done using the latest set of instructions generated. To run reasoning with the instructions used in the paper, run this script before the run\\_reasoning command:### Format Results\n\n\nTo easily format the evaluation results, run:\n\n\nThe evaluation results will be saved in . To see the full text output by instance, open and search for .\n\n\n*Note*: Normally, the results are formatted after all the run spec descriptions in the configuration file have been run. To see for a single run spec description, view:### All Together Now\n\n\nTo run the above entire AgentInstruct pipeline in one go, run:\n\n\nThis will run all 3 steps outlined above, and store the result in .\n\n\nArguments\n---------\n\n\nIn this section, we'll cover various important run arguments."
]
|
bcc63c8e3ddf025fb1f4a6a120d05a710d92b2ae |
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | wanhao/text_image | [
"task_categories:text-classification",
"task_categories:translation",
"size_categories:1K<n<10K",
"region:us"
]
| 2023-11-01T02:22:20+00:00 | {"size_categories": ["1K<n<10K"], "task_categories": ["text-classification", "translation"]} | 2023-11-01T02:26:45+00:00 | []
| []
| TAGS
#task_categories-text-classification #task_categories-translation #size_categories-1K<n<10K #region-us
|
# Dataset Card for Dataset Name
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
]
| [
"TAGS\n#task_categories-text-classification #task_categories-translation #size_categories-1K<n<10K #region-us \n",
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
]
| [
38,
34,
4,
40,
29,
3,
4,
9,
6,
5,
7,
4,
7,
10,
9,
5,
9,
8,
10,
46,
8,
7,
10,
5
]
| [
"passage: TAGS\n#task_categories-text-classification #task_categories-translation #size_categories-1K<n<10K #region-us \n# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact"
]
|
90549e6e526d6e334ca49bee167945c2d85d108b |
We design a benchmark called OceanBench to evaluate the capabilities of LLMs for oceanography tasks.
It includes a total of 15 ocean-related tasks such as question-answering, extraction, and description.
## 🛠️ How to use OceanGPT
We provide the example and you can modify the input according to your needs.
```python
from datasets import load_dataset
dataset = load_dataset("zjunlp/OceanBench")
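# Added illustration (not from the original card): inspect what was loaded.
# Split and column names may vary, so check them before indexing.
print(dataset)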
``` | zjunlp/OceanBench | [
"size_categories:10K<n<100K",
"language:en",
"license:mit",
"Ocean",
"region:us"
]
| 2023-11-01T02:53:40+00:00 | {"language": ["en"], "license": "mit", "size_categories": ["10K<n<100K"], "pretty_name": "OceanBench", "tags": ["Ocean"]} | 2024-02-07T16:56:54+00:00 | []
| [
"en"
]
| TAGS
#size_categories-10K<n<100K #language-English #license-mit #Ocean #region-us
|
We design a benchmark called OceanBench to evaluate the capabilities of LLMs for oceanography tasks.
It includes a total of 15 ocean-related tasks such as question-answering, extraction, and description.
## ️ How to use OceanGPT
We provide the example and you can modify the input according to your needs.
| [
"## ️ How to use OceanGPT\nWe provide the example and you can modify the input according to your needs."
]
| [
"TAGS\n#size_categories-10K<n<100K #language-English #license-mit #Ocean #region-us \n",
"## ️ How to use OceanGPT\nWe provide the example and you can modify the input according to your needs."
]
| [
31,
25
]
| [
"passage: TAGS\n#size_categories-10K<n<100K #language-English #license-mit #Ocean #region-us \n## ️ How to use OceanGPT\nWe provide the example and you can modify the input according to your needs."
]
|
0987ea38eaa19b85b20dbfa120dc9638765f4bb4 | # Dataset Card for "history_book"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tinhpx2911/history_book | [
"region:us"
]
| 2023-11-01T03:55:10+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 74062733, "num_examples": 81}], "download_size": 37725495, "dataset_size": 74062733}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-01T03:55:39+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "history_book"
More Information needed | [
"# Dataset Card for \"history_book\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"history_book\"\n\nMore Information needed"
]
| [
6,
14
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"history_book\"\n\nMore Information needed"
]
|
682c634b9910024b4d90cb18b84d13757cba0b4a | # Dataset Card for "dolphin_mqa_details"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | nguyenthanhdo/dolphin_mqa_details | [
"region:us"
]
| 2023-11-01T04:02:48+00:00 | {"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 26369871.746988524, "num_examples": 15037}], "download_size": 10922205, "dataset_size": 26369871.746988524}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-01T04:08:11+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "dolphin_mqa_details"
More Information needed | [
"# Dataset Card for \"dolphin_mqa_details\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"dolphin_mqa_details\"\n\nMore Information needed"
]
| [
6,
19
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"dolphin_mqa_details\"\n\nMore Information needed"
]
|
4ab47a29be5eb5e5c0274a594f8bae33c52f372b | # Introduction
This dataset is a mirror of the GSM8K Test split. We have manually ensured the answers are correct. This dataset can serve as a reference to evaluate a model's ability to generalize to math problems.
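As an illustration only (not part of the original card), the mirror can be pulled with the `datasets` library; the split and column names below are assumptions, so inspect the loaded object before wiring it into an evaluation harness.

```python
from datasets import load_dataset

# Split and column names are assumptions -- print the object to confirm
# what the mirror actually contains.
ds = load_dataset("Skywork/mock_gsm8k_test")
print(ds)
first_split = next(iter(ds.values()))
print(first_split[0])
```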
# License Agreement
The community usage of SkyPile dataset requires Skywork Community License. The SkyPile dataset supports commercial use. If you plan to use the Skywork model or its derivatives for commercial purposes, you must abide by terms and conditions within Skywork Community License as well as Apache2.0.
# Contact Us and Citation
If you find our work helpful, please feel free to cite our paper~
```
@misc{wei2023skywork,
title={Skywork: A More Open Bilingual Foundation Model},
author={Tianwen Wei and Liang Zhao and Lichang Zhang and Bo Zhu and Lijie Wang and Haihua Yang and Biye Li and Cheng Cheng and Weiwei Lü and Rui Hu and Chenxia Li and Liu Yang and Xilin Luo and Xuejie Wu and Lunan Liu and Wenjun Cheng and Peng Cheng and Jianhao Zhang and Xiaoyu Zhang and Lei Lin and Xiaokun Wang and Yutuan Ma and Chuanhai Dong and Yanqi Sun and Yifu Chen and Yongyi Peng and Xiaojuan Liang and Shuicheng Yan and Han Fang and Yahui Zhou},
year={2023},
eprint={2310.19341},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | Skywork/mock_gsm8k_test | [
"license:other",
"arxiv:2310.19341",
"region:us"
]
| 2023-11-01T04:15:50+00:00 | {"license": "other", "license_name": "license", "license_link": "https://github.com/SkyworkAI/Skywork/blob/main/Skywork%20Community%20License.pdf"} | 2023-11-02T03:48:52+00:00 | [
"2310.19341"
]
| []
| TAGS
#license-other #arxiv-2310.19341 #region-us
| # Introduction
This dataset is a mirror of the GSM8K Test split. We have manually ensured the answers are correct. This dataset can serve as a reference to evaluate a model's ability to generalize to math problems.
# License Agreement
The community usage of SkyPile dataset requires Skywork Community License. The SkyPile dataset supports commercial use. If you plan to use the Skywork model or its derivatives for commercial purposes, you must abide by terms and conditions within Skywork Community License as well as Apache2.0.
# Contact Us and Citation
If you find our work helpful, please feel free to cite our paper~
| [
"# Introduction\nThis dataset is a mirror of the GSM8K Test split. We have manually ensured the answers are correct. This dataset can serve as a reference to evaluate a model's ability to generalize to math problems.",
"# License Agreement\nThe community usage of SkyPile dataset requires Skywork Community License. The SkyPile dataset supports commercial use. If you plan to use the Skywork model or its derivatives for commercial purposes, you must abide by terms and conditions within Skywork Community License as well as Apache2.0.",
"# Contact Us and Citation\nIf you find our work helpful, please feel free to cite our paper~"
]
| [
"TAGS\n#license-other #arxiv-2310.19341 #region-us \n",
"# Introduction\nThis dataset is a mirror of the GSM8K Test split. We have manually ensured the answers are correct. This dataset can serve as a reference to evaluate a model's ability to generalize to math problems.",
"# License Agreement\nThe community usage of SkyPile dataset requires Skywork Community License. The SkyPile dataset supports commercial use. If you plan to use the Skywork model or its derivatives for commercial purposes, you must abide by terms and conditions within Skywork Community License as well as Apache2.0.",
"# Contact Us and Citation\nIf you find our work helpful, please feel free to cite our paper~"
]
| [
21,
52,
67,
21
]
| [
"passage: TAGS\n#license-other #arxiv-2310.19341 #region-us \n# Introduction\nThis dataset is a mirror of the GSM8K Test split. We have manually ensured the answers are correct. This dataset can serve as a reference to evaluate a model's ability to generalize to math problems.# License Agreement\nThe community usage of SkyPile dataset requires Skywork Community License. The SkyPile dataset supports commercial use. If you plan to use the Skywork model or its derivatives for commercial purposes, you must abide by terms and conditions within Skywork Community License as well as Apache2.0.# Contact Us and Citation\nIf you find our work helpful, please feel free to cite our paper~"
]
|
a5c2f4ee98ad7836cbb5eb61588b2abcea36002a | # 数据介绍(Introduction)
Skywork/ChineseDomainModelingEval is an evaluation dataset for Chinese domain language-modeling capability. For multiple domains, we selected several hundred to over a thousand high-quality articles newly published between September 2023 and October 2023 and verified them manually. The test data therefore comes from sufficiently broad, high-quality sources. Because we can always pick the most recently published articles to evaluate the perplexity of different models, it is hard for a model to cheat. We will also keep evaluating models on the latest data and dynamically update each model's reported capability.
# 文件介绍(File Introduction)
- zh_finance.jsonl: evaluation data for the finance domain
- zh_game.jsonl: evaluation data for the game domain
- zh_government.jsonl: evaluation data for the government domain
- zh_movie.jsonl: evaluation data for the movie domain
- zh_tech.jsonl: evaluation data for the technology domain
- zh_general.jsonl: evaluation data for the general domain
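As an illustration only (not part of the original card), a perplexity check over one of these files could look like the sketch below; the JSON field name and the model identifier are placeholders.

```python
# Hypothetical perplexity check over one domain file; the "text" field name
# and the model id are placeholders, not taken from this card.
import json
import math

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "your-causal-lm"  # placeholder model id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

total_nll, total_tokens = 0.0, 0
with open("zh_finance.jsonl", encoding="utf-8") as f:
    for line in f:
        text = json.loads(line).get("text", "")
        enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
        with torch.no_grad():
            out = model(**enc, labels=enc["input_ids"])
        n = enc["input_ids"].numel()
        total_nll += out.loss.item() * n
        total_tokens += n

print("perplexity:", math.exp(total_nll / total_tokens))
```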
# 协议(License Agreement)
The community usage of SkyPile dataset requires Skywork Community License. The SkyPile dataset supports commercial use. If you plan to use the Skywork model or its derivatives for commercial purposes, you must abide by terms and conditions within Skywork Community License as well as Apache2.0.
# 引用(Contact Us and Citation)
If you find our work helpful, please feel free to cite our paper~
```
@misc{wei2023skywork,
title={Skywork: A More Open Bilingual Foundation Model},
author={Tianwen Wei and Liang Zhao and Lichang Zhang and Bo Zhu and Lijie Wang and Haihua Yang and Biye Li and Cheng Cheng and Weiwei Lü and Rui Hu and Chenxia Li and Liu Yang and Xilin Luo and Xuejie Wu and Lunan Liu and Wenjun Cheng and Peng Cheng and Jianhao Zhang and Xiaoyu Zhang and Lei Lin and Xiaokun Wang and Yutuan Ma and Chuanhai Dong and Yanqi Sun and Yifu Chen and Yongyi Peng and Xiaojuan Liang and Shuicheng Yan and Han Fang and Yahui Zhou},
year={2023},
eprint={2310.19341},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | Skywork/ChineseDomainModelingEval | [
"license:other",
"arxiv:2310.19341",
"region:us"
]
| 2023-11-01T04:35:36+00:00 | {"license": "other", "license_name": "license", "license_link": "https://github.com/SkyworkAI/Skywork/blob/main/Skywork%20Community%20License.pdf"} | 2023-11-02T03:51:43+00:00 | [
"2310.19341"
]
| []
| TAGS
#license-other #arxiv-2310.19341 #region-us
| # 数据介绍(Introduction)
Skywork/ChineseDomainModelingEval is an evaluation dataset for Chinese domain language-modeling capability. For multiple domains, we selected several hundred to over a thousand high-quality articles newly published between September 2023 and October 2023 and verified them manually. The test data therefore comes from sufficiently broad, high-quality sources. Because we can always pick the most recently published articles to evaluate the perplexity of different models, it is hard for a model to cheat. We will also keep evaluating models on the latest data and dynamically update each model's reported capability.
# 文件介绍(File Introduction)
- zh_finance.jsonl: evaluation data for the finance domain
- zh_game.jsonl: evaluation data for the game domain
- zh_government.jsonl: evaluation data for the government domain
- zh_movie.jsonl: evaluation data for the movie domain
- zh_tech.jsonl: evaluation data for the technology domain
- zh_general.jsonl: evaluation data for the general domain
# 协议(License Agreement)
The community usage of SkyPile dataset requires Skywork Community License. The SkyPile dataset supports commercial use. If you plan to use the Skywork model or its derivatives for commercial purposes, you must abide by terms and conditions within Skywork Community License as well as Apache2.0.
# 引用(Contact Us and Citation)
If you find our work helpful, please feel free to cite our paper~
| [
"# 数据介绍(Introduction)\nSkywork/ChineseDomainModelingEval是中文领域建模能力评测数据集,我们对多个领域筛选出2023年9月份-2023年10月份新发布的几百到上千篇高质量文章,并人工进行了核对。测试数据的来源也足够广泛,质量也高。我们可以选取当前最新的文章评测不同模型的Perplexity,模型很难作弊。并且我们会持续按照最新数据评测各个模型效果,动态更新各个模型能力。",
"# 文件介绍(File Introduction)\n\n- zh_finance.jsonl为金融领域评估数据\n- zh_game.jsonl为游戏领域评估数据\n- zh_government.jsonl为政务领域评估数据\n- zh_movie.jsonl为电影领域评估数据\n- zh_tech.jsonl为技术领域评估数据\n- zh_general.jsonl为综合领域评估数据",
"# 协议(License Agreement)\nThe community usage of SkyPile dataset requires Skywork Community License. The SkyPile dataset supports commercial use. If you plan to use the Skywork model or its derivatives for commercial purposes, you must abide by terms and conditions within Skywork Community License as well as Apache2.0.",
"# 引用(Contact Us and Citation)\nIf you find our work helpful, please feel free to cite our paper~"
]
| [
"TAGS\n#license-other #arxiv-2310.19341 #region-us \n",
"# 数据介绍(Introduction)\nSkywork/ChineseDomainModelingEval是中文领域建模能力评测数据集,我们对多个领域筛选出2023年9月份-2023年10月份新发布的几百到上千篇高质量文章,并人工进行了核对。测试数据的来源也足够广泛,质量也高。我们可以选取当前最新的文章评测不同模型的Perplexity,模型很难作弊。并且我们会持续按照最新数据评测各个模型效果,动态更新各个模型能力。",
"# 文件介绍(File Introduction)\n\n- zh_finance.jsonl为金融领域评估数据\n- zh_game.jsonl为游戏领域评估数据\n- zh_government.jsonl为政务领域评估数据\n- zh_movie.jsonl为电影领域评估数据\n- zh_tech.jsonl为技术领域评估数据\n- zh_general.jsonl为综合领域评估数据",
"# 协议(License Agreement)\nThe community usage of SkyPile dataset requires Skywork Community License. The SkyPile dataset supports commercial use. If you plan to use the Skywork model or its derivatives for commercial purposes, you must abide by terms and conditions within Skywork Community License as well as Apache2.0.",
"# 引用(Contact Us and Citation)\nIf you find our work helpful, please feel free to cite our paper~"
]
| [
21,
115,
95,
72,
25
]
| [
"passage: TAGS\n#license-other #arxiv-2310.19341 #region-us \n# 数据介绍(Introduction)\nSkywork/ChineseDomainModelingEval是中文领域建模能力评测数据集,我们对多个领域筛选出2023年9月份-2023年10月份新发布的几百到上千篇高质量文章,并人工进行了核对。测试数据的来源也足够广泛,质量也高。我们可以选取当前最新的文章评测不同模型的Perplexity,模型很难作弊。并且我们会持续按照最新数据评测各个模型效果,动态更新各个模型能力。# 文件介绍(File Introduction)\n\n- zh_finance.jsonl为金融领域评估数据\n- zh_game.jsonl为游戏领域评估数据\n- zh_government.jsonl为政务领域评估数据\n- zh_movie.jsonl为电影领域评估数据\n- zh_tech.jsonl为技术领域评估数据\n- zh_general.jsonl为综合领域评估数据# 协议(License Agreement)\nThe community usage of SkyPile dataset requires Skywork Community License. The SkyPile dataset supports commercial use. If you plan to use the Skywork model or its derivatives for commercial purposes, you must abide by terms and conditions within Skywork Community License as well as Apache2.0.# 引用(Contact Us and Citation)\nIf you find our work helpful, please feel free to cite our paper~"
]
|
d288a59ab73fec3725ca9ee96065867af4f849e9 | # Dataset Card for "empty_function_jupyter"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | liyongsea/empty_function_jupyter | [
"region:us"
]
| 2023-11-01T05:38:41+00:00 | {"dataset_info": {"features": [{"name": "path", "dtype": "string"}, {"name": "content_id", "dtype": "string"}, {"name": "detected_licenses", "sequence": "string"}, {"name": "license_type", "dtype": "string"}, {"name": "repo_name", "dtype": "string"}, {"name": "repo_url", "dtype": "string"}, {"name": "star_events_count", "dtype": "int64"}, {"name": "fork_events_count", "dtype": "int64"}, {"name": "gha_license_id", "dtype": "string"}, {"name": "gha_event_created_at", "dtype": "timestamp[us]"}, {"name": "gha_updated_at", "dtype": "timestamp[us]"}, {"name": "gha_language", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "is_generated", "dtype": "bool"}, {"name": "is_vendor", "dtype": "bool"}, {"name": "conversion_extension", "dtype": "string"}, {"name": "size", "dtype": "int64"}, {"name": "script", "dtype": "string"}, {"name": "script_size", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 654648.6506, "num_examples": 28}], "download_size": 292451, "dataset_size": 654648.6506}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-01T05:38:43+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "empty_function_jupyter"
More Information needed | [
"# Dataset Card for \"empty_function_jupyter\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"empty_function_jupyter\"\n\nMore Information needed"
]
| [
6,
18
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"empty_function_jupyter\"\n\nMore Information needed"
]
|
28a9e6d670a089f22c1f148d0359d3e763bdff56 | # Dataset Card for "water_effects_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Falah/water_effects_prompts | [
"region:us"
]
| 2023-11-01T06:05:10+00:00 | {"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 165337, "num_examples": 1000}], "download_size": 12533, "dataset_size": 165337}} | 2023-11-01T06:05:12+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "water_effects_prompts"
More Information needed | [
"# Dataset Card for \"water_effects_prompts\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"water_effects_prompts\"\n\nMore Information needed"
]
| [
6,
18
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"water_effects_prompts\"\n\nMore Information needed"
]
|
d7cdae96092969fc951f9f9c75f92b441f4749c8 | # Dataset Card for "succulents_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Falah/succulents_prompts | [
"region:us"
]
| 2023-11-01T06:08:02+00:00 | {"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 99910, "num_examples": 1000}], "download_size": 1813, "dataset_size": 99910}} | 2023-11-01T06:08:03+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "succulents_prompts"
More Information needed | [
"# Dataset Card for \"succulents_prompts\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"succulents_prompts\"\n\nMore Information needed"
]
| [
6,
18
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"succulents_prompts\"\n\nMore Information needed"
]
|
f299155bdca0d9c1f4386f8af4b861c4c8ed66b2 | # Dataset Card for "wooden_objects_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Falah/wooden_objects_prompts | [
"region:us"
]
| 2023-11-01T06:11:03+00:00 | {"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 94232, "num_examples": 1000}], "download_size": 1631, "dataset_size": 94232}} | 2023-11-01T06:11:04+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "wooden_objects_prompts"
More Information needed | [
"# Dataset Card for \"wooden_objects_prompts\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"wooden_objects_prompts\"\n\nMore Information needed"
]
| [
6,
19
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"wooden_objects_prompts\"\n\nMore Information needed"
]
|
920912e1a0938a4fc0d58e599733f2df6ac263d0 | # Dataset Card for "movie_poster_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Falah/movie_poster_prompts | [
"region:us"
]
| 2023-11-01T06:14:16+00:00 | {"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 35799, "num_examples": 1000}], "download_size": 1361, "dataset_size": 35799}} | 2023-11-01T06:14:18+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "movie_poster_prompts"
More Information needed | [
"# Dataset Card for \"movie_poster_prompts\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"movie_poster_prompts\"\n\nMore Information needed"
]
| [
6,
18
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"movie_poster_prompts\"\n\nMore Information needed"
]
|
c3e9823d36dc1b2a8bd2686b845fa43404428779 | # Dataset Card for "dataset-farma-test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | marziye-A/dataset-farma-test | [
"region:us"
]
| 2023-11-01T06:14:20+00:00 | {"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 74288845.504, "num_examples": 2006}], "download_size": 72536013, "dataset_size": 74288845.504}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-01T06:54:44+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "dataset-farma-test"
More Information needed | [
"# Dataset Card for \"dataset-farma-test\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"dataset-farma-test\"\n\nMore Information needed"
]
| [
6,
17
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"dataset-farma-test\"\n\nMore Information needed"
]
|
696d327d44ef4f1e86dca15b5fab9a9e0ac4714c | # Dataset Card for "app_icon_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Falah/app_icon_prompts | [
"region:us"
]
| 2023-11-01T06:18:00+00:00 | {"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 119295, "num_examples": 1000}], "download_size": 1739, "dataset_size": 119295}} | 2023-11-01T06:18:01+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "app_icon_prompts"
More Information needed | [
"# Dataset Card for \"app_icon_prompts\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"app_icon_prompts\"\n\nMore Information needed"
]
| [
6,
18
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"app_icon_prompts\"\n\nMore Information needed"
]
|
e77c4e20a4034f6b05d657efa1fa1476be7596da | # Dataset Card for "app_mockup_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Falah/app_mockup_prompts | [
"region:us"
]
| 2023-11-01T06:21:15+00:00 | {"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 118978, "num_examples": 1000}], "download_size": 4975, "dataset_size": 118978}} | 2023-11-01T06:21:16+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "app_mockup_prompts"
More Information needed | [
"# Dataset Card for \"app_mockup_prompts\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"app_mockup_prompts\"\n\nMore Information needed"
]
| [
6,
18
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"app_mockup_prompts\"\n\nMore Information needed"
]
|
33ea1af8775b41f3cfdb1df44917a05b87d0ebb5 | # Dataset Card for "ghibli_stills_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Falah/ghibli_stills_prompts | [
"region:us"
]
| 2023-11-01T06:25:29+00:00 | {"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 174193, "num_examples": 1000}], "download_size": 18160, "dataset_size": 174193}} | 2023-11-01T06:25:31+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "ghibli_stills_prompts"
More Information needed | [
"# Dataset Card for \"ghibli_stills_prompts\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"ghibli_stills_prompts\"\n\nMore Information needed"
]
| [
6,
19
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"ghibli_stills_prompts\"\n\nMore Information needed"
]
|
599363f5ae1714a62f1c7969a5c6f21753247976 | # Dataset Card for "sneaker_concepts_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Falah/sneaker_concepts_prompts | [
"region:us"
]
| 2023-11-01T06:33:31+00:00 | {"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 130560, "num_examples": 1000}], "download_size": 23580, "dataset_size": 130560}} | 2023-11-01T06:33:32+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "sneaker_concepts_prompts"
More Information needed | [
"# Dataset Card for \"sneaker_concepts_prompts\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"sneaker_concepts_prompts\"\n\nMore Information needed"
]
| [
6,
22
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"sneaker_concepts_prompts\"\n\nMore Information needed"
]
|
6a541fc0254cca6c1fab73c8dc82a41354c23c65 | # Dataset Card for "shEMO_speech"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | minoosh/shEMO_speech | [
"region:us"
]
| 2023-11-01T06:34:38+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "valid", "path": "data/valid-*"}]}], "dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "emotion", "dtype": {"class_label": {"names": {"0": "A", "1": "H", "2": "N", "3": "S", "4": "W", "5": "F"}}}}], "splits": [{"name": "train", "num_bytes": 856321868.0, "num_examples": 2400}, {"name": "test", "num_bytes": 100721512.0, "num_examples": 300}, {"name": "valid", "num_bytes": 105982082.0, "num_examples": 300}], "download_size": 1043899986, "dataset_size": 1063025462.0}} | 2023-11-01T06:35:49+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "shEMO_speech"
More Information needed | [
"# Dataset Card for \"shEMO_speech\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"shEMO_speech\"\n\nMore Information needed"
]
| [
6,
16
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"shEMO_speech\"\n\nMore Information needed"
]
|
0c70eec0185f66f9ace82ce8aab2877094311fb4 | # Dataset Card for "fashion_moodboards_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Falah/fashion_moodboards_prompts | [
"region:us"
]
| 2023-11-01T06:36:25+00:00 | {"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 141480, "num_examples": 1000}], "download_size": 22359, "dataset_size": 141480}} | 2023-11-19T08:59:15+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "fashion_moodboards_prompts"
More Information needed | [
"# Dataset Card for \"fashion_moodboards_prompts\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"fashion_moodboards_prompts\"\n\nMore Information needed"
]
| [
6,
19
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"fashion_moodboards_prompts\"\n\nMore Information needed"
]
|
add7c8facceec608bb465588e79fb86c7775ae6c | USAGE in Python
# load train and valid dataset
```
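# Illustrative fill-in (this block was left empty): load the train and
# validation splits with the Hugging Face `datasets` library. The split
# names are assumptions -- print the DatasetDict to confirm.
from datasets import load_dataset

ds = load_dataset("gear42/Nuscenes-QA-merge-front-image")
print(ds)                      # shows the available splits and columns
train_ds = ds["train"]
valid_ds = ds.get("validation") or ds.get("valid")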
```
# add base_folder
```
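# Illustrative fill-in (this block was left empty): prepend a local base
# folder so that image references resolve on disk. The column name
# "image_path" and the folder layout are assumptions.
import os

from datasets import load_dataset

BASE_FOLDER = "/path/to/nuscenes"   # local root of the front-camera images

ds = load_dataset("gear42/Nuscenes-QA-merge-front-image")

def add_base_folder(example):
    example["image_path"] = os.path.join(BASE_FOLDER, example["image_path"])
    return example

ds = ds.map(add_base_folder)        # applies the change to every split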
```
| gear42/Nuscenes-QA-merge-front-image | [
"task_categories:conversational",
"size_categories:10K<n<100K",
"language:en",
"region:us"
]
| 2023-11-01T06:50:44+00:00 | {"language": ["en"], "size_categories": ["10K<n<100K"], "task_categories": ["conversational"]} | 2023-11-09T03:55:39+00:00 | []
| [
"en"
]
| TAGS
#task_categories-conversational #size_categories-10K<n<100K #language-English #region-us
| USAGE in Python
# load train and valid dataset
# add base_folder
| [
"# load train and valid dataset",
"# add base_folder"
]
| [
"TAGS\n#task_categories-conversational #size_categories-10K<n<100K #language-English #region-us \n",
"# load train and valid dataset",
"# add base_folder"
]
| [
32,
7,
6
]
| [
"passage: TAGS\n#task_categories-conversational #size_categories-10K<n<100K #language-English #region-us \n# load train and valid dataset# add base_folder"
]
|
c271f41ab6b0a8ff497810a80cc57a0e9db6521a |
Dataset for “One Consensus, Diverse Expressions: Ethical Spectrum Analysis of the 'Carbon' Issue in the Global News Database” (submitted to ICA 2024) | school-knight/MFT_NEWS | [
"license:apache-2.0",
"region:us"
]
| 2023-11-01T07:10:05+00:00 | {"license": "apache-2.0"} | 2023-11-01T08:00:35+00:00 | []
| []
| TAGS
#license-apache-2.0 #region-us
|
Dataset for “One Consensus, Diverse Expressions: Ethical Spectrum Analysis of the 'Carbon' Issue in the Global News Database” (submitted to ICA 2024) | []
| [
"TAGS\n#license-apache-2.0 #region-us \n"
]
| [
14
]
| [
"passage: TAGS\n#license-apache-2.0 #region-us \n"
]
|
fac5a975a687cb50298e26602bf3e54813414047 |
# Dataset Card for kor_snli
## Licensing Information
The data is distributed under the [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) license.
## Source Data Citation Information
```
@inproceedings{snli:emnlp2015,
Author = {Bowman, Samuel R. and Angeli, Gabor and Potts, Christopher, and Manning, Christopher D.},
Booktitle = {Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
Publisher = {Association for Computational Linguistics},
Title = {A large annotated corpus for learning natural language inference},
Year = {2015}
}
``` | KETI-AIR/kor_snli | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"task_ids:multi-input-text-classification",
"size_categories:100K<n<1M",
"language:ko",
"license:cc-by-4.0",
"region:us"
]
| 2023-11-01T07:29:14+00:00 | {"language": ["ko"], "license": "cc-by-4.0", "size_categories": ["100K<n<1M"], "task_categories": ["text-classification"], "task_ids": ["natural-language-inference", "multi-input-text-classification"], "dataset_info": {"features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}, {"name": "data_index_by_user", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 85943643, "num_examples": 550152}, {"name": "validation", "num_bytes": 1631544, "num_examples": 10000}, {"name": "test", "num_bytes": 1638084, "num_examples": 10000}], "download_size": 27268480, "dataset_size": 89213271}} | 2023-11-15T01:12:23+00:00 | []
| [
"ko"
]
| TAGS
#task_categories-text-classification #task_ids-natural-language-inference #task_ids-multi-input-text-classification #size_categories-100K<n<1M #language-Korean #license-cc-by-4.0 #region-us
|
# Dataset Card for kor_snli
## Licensing Information
The data is distributed under the CC BY 4.0 license.
## Source Data Citation Information
| [
"# Dataset Card for QASC",
"## Licensing Information\n\nThe data is distributed under the CC BY 4.0 license.",
"## Source Data Citation INformation"
]
| [
"TAGS\n#task_categories-text-classification #task_ids-natural-language-inference #task_ids-multi-input-text-classification #size_categories-100K<n<1M #language-Korean #license-cc-by-4.0 #region-us \n",
"# Dataset Card for QASC",
"## Licensing Information\n\nThe data is distributed under the CC BY 4.0 license.",
"## Source Data Citation INformation"
]
| [
71,
8,
17,
8
]
| [
"passage: TAGS\n#task_categories-text-classification #task_ids-natural-language-inference #task_ids-multi-input-text-classification #size_categories-100K<n<1M #language-Korean #license-cc-by-4.0 #region-us \n# Dataset Card for QASC## Licensing Information\n\nThe data is distributed under the CC BY 4.0 license.## Source Data Citation INformation"
]
|
5c46364597939a8a576e51c913ae4aba553735f7 | # Dataset Card for "qrecc_conversational_embeddings"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | explodinggradients/qrecc_conversational_embeddings | [
"region:us"
]
| 2023-11-01T07:30:35+00:00 | {"dataset_info": {"features": [{"name": "Answer", "dtype": "string"}, {"name": "Hard_negatives", "sequence": "string"}, {"name": "Question", "dtype": "string"}, {"name": "Turn_no", "dtype": "int64"}, {"name": "Context", "sequence": "string"}, {"name": "Negatives", "sequence": "string"}, {"name": "Conversation_no", "dtype": "int64"}, {"name": "Positives", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 7094988, "num_examples": 1153}], "download_size": 1222882, "dataset_size": 7094988}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-01T07:30:40+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "qrecc_conversational_embeddings"
More Information needed | [
"# Dataset Card for \"qrecc_conversational_embeddings\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"qrecc_conversational_embeddings\"\n\nMore Information needed"
]
| [
6,
21
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"qrecc_conversational_embeddings\"\n\nMore Information needed"
]
|
f28e86af001c22bed8917fc3945591316fc8a16b | # Dataset Card for "Ultrachat-Filtered-Multiple-Conversations-Alpaca-Style"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | health360/Ultrachat-Filtered-Multiple-Conversations-Alpaca-Style | [
"region:us"
]
| 2023-11-01T07:37:43+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1201812388, "num_examples": 207865}], "download_size": 0, "dataset_size": 1201812388}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-01T07:41:41+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "Ultrachat-Filtered-Multiple-Conversations-Alpaca-Style"
More Information needed | [
"# Dataset Card for \"Ultrachat-Filtered-Multiple-Conversations-Alpaca-Style\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"Ultrachat-Filtered-Multiple-Conversations-Alpaca-Style\"\n\nMore Information needed"
]
| [
6,
31
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"Ultrachat-Filtered-Multiple-Conversations-Alpaca-Style\"\n\nMore Information needed"
]
|
abc23cb5c9e873350cb790b318de215a51579347 | # Dataset Card for "Ultrachat-Filtered-Multiple-Conversations-Alpaca-Tinyllama-Tokenized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | health360/Ultrachat-Filtered-Multiple-Conversations-Alpaca-Tinyllama-Tokenized | [
"region:us"
]
| 2023-11-01T07:45:08+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 3332012908, "num_examples": 207865}], "download_size": 1088335043, "dataset_size": 3332012908}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-01T07:51:48+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "Ultrachat-Filtered-Multiple-Conversations-Alpaca-Tinyllama-Tokenized"
More Information needed | [
"# Dataset Card for \"Ultrachat-Filtered-Multiple-Conversations-Alpaca-Tinyllama-Tokenized\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"Ultrachat-Filtered-Multiple-Conversations-Alpaca-Tinyllama-Tokenized\"\n\nMore Information needed"
]
| [
6,
38
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"Ultrachat-Filtered-Multiple-Conversations-Alpaca-Tinyllama-Tokenized\"\n\nMore Information needed"
]
|
c3c4a7c9a4e17c8ceb8bdd05c9677104596aca29 | # Dataset Card for "dataset-farma-test2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | marziye-A/dataset-farma-test2 | [
"region:us"
]
| 2023-11-01T07:47:57+00:00 | {"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 74308914.36, "num_examples": 2005}], "download_size": 72537312, "dataset_size": 74308914.36}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-01T08:05:24+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "dataset-farma-test2"
More Information needed | [
"# Dataset Card for \"dataset-farma-test2\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"dataset-farma-test2\"\n\nMore Information needed"
]
| [
6,
18
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"dataset-farma-test2\"\n\nMore Information needed"
]
|
d6f939386443ae3799a55edca8d6f31ddd28ef7d |
A Japanese-English glossary of generative AI terminology. Accuracy is not guaranteed, but keeping it in the context of a model such as GPT-4 should help it translate more cleanly. | alfredplpl/genai-terminology-en-ja | [
"size_categories:n<1K",
"language:en",
"language:ja",
"license:apache-2.0",
"region:us"
]
| 2023-11-01T08:01:31+00:00 | {"language": ["en", "ja"], "license": "apache-2.0", "size_categories": ["n<1K"]} | 2023-11-01T08:05:56+00:00 | []
| [
"en",
"ja"
]
| TAGS
#size_categories-n<1K #language-English #language-Japanese #license-apache-2.0 #region-us
|
A Japanese-English glossary of generative AI terminology. Accuracy is not guaranteed, but keeping it in the context of a model such as GPT-4 should help it translate more cleanly. | []
| [
"TAGS\n#size_categories-n<1K #language-English #language-Japanese #license-apache-2.0 #region-us \n"
]
| [
34
]
| [
"passage: TAGS\n#size_categories-n<1K #language-English #language-Japanese #license-apache-2.0 #region-us \n"
]
|
e1cf95d395a551a1eb9812d65fab01a4a04decd8 | # Dataset Card for "forest-damage"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | danielz01/forest-damage | [
"region:us"
]
| 2023-11-01T08:12:13+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "path", "dtype": "string"}, {"name": "width", "dtype": "int64"}, {"name": "height", "dtype": "int64"}, {"name": "depth", "dtype": "int64"}, {"name": "objects", "struct": [{"name": "bbox", "sequence": {"sequence": "float64"}}, {"name": "categories", "sequence": "string"}, {"name": "damage", "sequence": "string"}]}, {"name": "objects_meta", "struct": [{"name": "difficult", "sequence": "int64"}, {"name": "pose", "sequence": "string"}, {"name": "truncated", "sequence": "int64"}]}], "splits": [{"name": "train", "num_bytes": 3527542816.427, "num_examples": 1537}], "download_size": 3539168325, "dataset_size": 3527542816.427}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-01T08:37:53+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "forest-damage"
More Information needed | [
"# Dataset Card for \"forest-damage\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"forest-damage\"\n\nMore Information needed"
]
| [
6,
15
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"forest-damage\"\n\nMore Information needed"
]
|
d7c95c936bc4e938e9db192827b4fb81c568c7d6 | # Chat Fine-tuning Dataset - OpenAssistant Falcon
This dataset allows for fine-tuning chat models using '\nHuman:' AND '\nAssistant:' to wrap user messages.
It still uses <|endoftext|> as EOS and BOS token, as per Falcon.
Sample
Preparation:
1. The dataset is cloned from [TimDettmers](https://huggingface.co/datasets/timdettmers/openassistant-guanaco), which itself is a subset of the Open Assistant dataset, which you can find [here](https://huggingface.co/datasets/OpenAssistant/oasst1/tree/main). This subset of the data only contains the highest-rated paths in the conversation tree, with a total of 9,846 samples.
1. The dataset was then filtered to:
- replace instances of '### Human:' with '\nHuman:'
- replace instances of '### Assistant:' with '\nAssistant:'
- end assistant responses with <|endoftext|> (to encourage the model to emit <|endoftext|> when finished a response).
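A rough, hedged sketch of that filtering is shown below (not the script actually used to build this dataset); the `text` column name follows the guanaco subset, and the per-reply EOS handling is inferred from the description above.

```python
# Hedged sketch of the reformatting described above (not the original script).
from datasets import load_dataset

EOS = "<|endoftext|>"  # Falcon uses this token for both BOS and EOS

def reformat(example):
    text = example["text"]  # column name as in the guanaco subset (assumption)
    text = text.replace("### Human:", "\nHuman:")
    text = text.replace("### Assistant:", "\nAssistant:")
    # Close every assistant reply with EOS: a reply runs until the next
    # human turn or until the end of the sample.
    parts = text.split("\nHuman:")
    parts = [p + EOS if "\nAssistant:" in p else p for p in parts]
    return {"text": "\nHuman:".join(parts)}

ds = load_dataset("timdettmers/openassistant-guanaco").map(reformat)
```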
Details of the root dataset follow, copied from that repo:
# OpenAssistant Conversations Dataset (OASST1)
## Dataset Description
- **Homepage:** https://www.open-assistant.io/
- **Repository:** https://github.com/LAION-AI/Open-Assistant
- **Paper:** https://arxiv.org/abs/2304.07327
### Dataset Summary
In an effort to democratize research on large-scale alignment, we release OpenAssistant
Conversations (OASST1), a human-generated, human-annotated assistant-style conversation
corpus consisting of 161,443 messages in 35 different languages, annotated with 461,292
quality ratings, resulting in over 10,000 fully annotated conversation trees. The corpus
is a product of a worldwide crowd-sourcing effort involving over 13,500 volunteers.
Please refer to our [paper](https://arxiv.org/abs/2304.07327) for further details.
### Dataset Structure
This dataset contains message trees. Each message tree has an initial prompt message as the root node,
which can have multiple child messages as replies, and these child messages can have multiple replies.
All messages have a role property: this can either be "assistant" or "prompter". The roles in
conversation threads from prompt to leaf node strictly alternate between "prompter" and "assistant".
This version of the dataset contains data collected on the [open-assistant.io](https://open-assistant.io/) website until April 12 2023.
### JSON Example: Message
For readability, the following JSON examples are shown formatted with indentation on multiple lines.
Objects are stored without indentation (on single lines) in the actual jsonl files.
```json
{
"message_id": "218440fd-5317-4355-91dc-d001416df62b",
"parent_id": "13592dfb-a6f9-4748-a92c-32b34e239bb4",
"user_id": "8e95461f-5e94-4d8b-a2fb-d4717ce973e4",
"text": "It was the winter of 2035, and artificial intelligence (..)",
"role": "assistant",
"lang": "en",
"review_count": 3,
"review_result": true,
"deleted": false,
"rank": 0,
"synthetic": true,
"model_name": "oasst-sft-0_3000,max_new_tokens=400 (..)",
"labels": {
"spam": { "value": 0.0, "count": 3 },
"lang_mismatch": { "value": 0.0, "count": 3 },
"pii": { "value": 0.0, "count": 3 },
"not_appropriate": { "value": 0.0, "count": 3 },
"hate_speech": { "value": 0.0, "count": 3 },
"sexual_content": { "value": 0.0, "count": 3 },
"quality": { "value": 0.416, "count": 3 },
"toxicity": { "value": 0.16, "count": 3 },
"humor": { "value": 0.0, "count": 3 },
"creativity": { "value": 0.33, "count": 3 },
"violence": { "value": 0.16, "count": 3 }
}
}
```
### JSON Example: Conversation Tree
For readability, only a subset of the message properties is shown here.
```json
{
"message_tree_id": "14fbb664-a620-45ce-bee4-7c519b16a793",
"tree_state": "ready_for_export",
"prompt": {
"message_id": "14fbb664-a620-45ce-bee4-7c519b16a793",
"text": "Why can't we divide by 0? (..)",
"role": "prompter",
"lang": "en",
"replies": [
{
"message_id": "894d30b6-56b4-4605-a504-89dd15d4d1c8",
"text": "The reason we cannot divide by zero is because (..)",
"role": "assistant",
"lang": "en",
"replies": [
// ...
]
},
{
"message_id": "84d0913b-0fd9-4508-8ef5-205626a7039d",
"text": "The reason that the result of a division by zero is (..)",
"role": "assistant",
"lang": "en",
"replies": [
{
"message_id": "3352725e-f424-4e3b-a627-b6db831bdbaa",
"text": "Math is confusing. Like those weird Irrational (..)",
"role": "prompter",
"lang": "en",
"replies": [
{
"message_id": "f46207ca-3149-46e9-a466-9163d4ce499c",
"text": "Irrational numbers are simply numbers (..)",
"role": "assistant",
"lang": "en",
"replies": []
},
// ...
]
}
]
}
]
}
}
```
Please refer to [oasst-data](https://github.com/LAION-AI/Open-Assistant/tree/main/oasst-data) for
details about the data structure and Python code to read and write jsonl files containing oasst data objects.
If you would like to explore the dataset yourself you can find a
[`getting-started`](https://github.com/LAION-AI/Open-Assistant/blob/main/notebooks/openassistant-oasst1/getting-started.ipynb)
notebook in the `notebooks/openassistant-oasst1` folder of the [LAION-AI/Open-Assistant](https://github.com/LAION-AI/Open-Assistant)
github repository.
## Main Dataset Files
Conversation data is provided either as nested messages in trees (extension `.trees.jsonl.gz`)
or as a flat list (table) of messages (extension `.messages.jsonl.gz`).
### Ready For Export Trees
```
2023-04-12_oasst_ready.trees.jsonl.gz 10,364 trees with 88,838 total messages
2023-04-12_oasst_ready.messages.jsonl.gz 88,838 messages
```
Trees in `ready_for_export` state without spam and deleted messages including message labels.
The oasst_ready-trees file usually is sufficient for supervised fine-tuning (SFT) & reward model (RM) training.
### All Trees
```
2023-04-12_oasst_all.trees.jsonl.gz 66,497 trees with 161,443 total messages
2023-04-12_oasst_all.messages.jsonl.gz 161,443 messages
```
All trees, including those in states `prompt_lottery_waiting` (trees that consist of only one message, namely the initial prompt),
`aborted_low_grade` (trees that stopped growing because the messages had low quality), and `halted_by_moderator`.
### Supplemental Exports: Spam & Prompts
```
2023-04-12_oasst_spam.messages.jsonl.gz
```
These are messages which were deleted or have a negative review result (`"review_result": false`).
Besides low quality, a frequent reason for message deletion is a wrong language tag.
```
2023-04-12_oasst_prompts.messages.jsonl.gz
```
These are all the kept initial prompt messages with positive review result (no spam) of trees in `ready_for_export` or `prompt_lottery_waiting` state.
### Using the Huggingface Datasets
While HF datasets is ideal for tabular datasets, it is not a natural fit for nested data structures like the OpenAssistant conversation trees.
Nevertheless, we make all messages which can also be found in the file `2023-04-12_oasst_ready.trees.jsonl.gz` available in parquet as train/validation splits.
These are directly loadable by [Huggingface Datasets](https://pypi.org/project/datasets/).
To load the oasst1 train & validation splits use:
```python
from datasets import load_dataset
ds = load_dataset("OpenAssistant/oasst1")
train = ds['train'] # len(train)=84437 (95%)
val = ds['validation'] # len(val)=4401 (5%)
```
The messages appear in depth-first order of the message trees.
Full conversation trees can be reconstructed from the flat messages table by using the `parent_id`
and `message_id` properties to identify the parent-child relationship of messages. The `message_tree_id`
and `tree_state` properties (only present in flat messages files) can be used to find all messages of a message tree or to select trees by their state.
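As an illustration (not part of the original card), the parent/child links can be turned back into trees with a few lines of Python:

```python
# Sketch: rebuild conversation trees from the flat messages table using
# parent_id / message_id (illustrative only).
from collections import defaultdict
from datasets import load_dataset

msgs = load_dataset("OpenAssistant/oasst1", split="train")

children = defaultdict(list)
roots = []
for m in msgs:
    if m["parent_id"] is None:
        roots.append(m)                 # initial prompts are the tree roots
    else:
        children[m["parent_id"]].append(m)

def walk(message, depth=0):
    print("  " * depth + f"{message['role']}: {message['text'][:60]!r}")
    for child in children[message["message_id"]]:
        walk(child, depth + 1)

walk(roots[0])  # print one reconstructed tree, depth-first
```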
### Languages
OpenAssistant Conversations incorporates 35 different languages with a distribution of messages as follows:
**Languages with over 1000 messages**
- English: 71956
- Spanish: 43061
- Russian: 9089
- German: 5279
- Chinese: 4962
- French: 4251
- Thai: 3042
- Portuguese (Brazil): 2969
- Catalan: 2260
- Korean: 1553
- Ukrainian: 1352
- Italian: 1320
- Japanese: 1018
<details>
<summary><b>Languages with under 1000 messages</b></summary>
<ul>
<li>Vietnamese: 952</li>
<li>Basque: 947</li>
<li>Polish: 886</li>
<li>Hungarian: 811</li>
<li>Arabic: 666</li>
<li>Dutch: 628</li>
<li>Swedish: 512</li>
<li>Turkish: 454</li>
<li>Finnish: 386</li>
<li>Czech: 372</li>
<li>Danish: 358</li>
<li>Galician: 339</li>
<li>Hebrew: 255</li>
<li>Romanian: 200</li>
<li>Norwegian Bokmål: 133</li>
<li>Indonesian: 115</li>
<li>Bulgarian: 95</li>
<li>Bengali: 82</li>
<li>Persian: 72</li>
<li>Greek: 66</li>
<li>Esperanto: 59</li>
<li>Slovak: 19</li>
</ul>
</details>
## Contact
- Discord [Open Assistant Discord Server](https://ykilcher.com/open-assistant-discord)
- GitHub: [LAION-AI/Open-Assistant](https://github.com/LAION-AI/Open-Assistant)
- E-Mail: [[email protected]](mailto:[email protected]) | Trelis/openassistant-falcon | [
"size_categories:1K<n<10k",
"language:en",
"language:es",
"language:ru",
"language:de",
"language:pl",
"language:th",
"language:vi",
"language:sv",
"language:bn",
"language:da",
"language:he",
"language:it",
"language:fa",
"language:sk",
"language:id",
"language:nb",
"language:el",
"language:nl",
"language:hu",
"language:eu",
"language:zh",
"language:eo",
"language:ja",
"language:ca",
"language:cs",
"language:bg",
"language:fi",
"language:pt",
"language:tr",
"language:ro",
"language:ar",
"language:uk",
"language:gl",
"language:fr",
"language:ko",
"license:apache-2.0",
"human-feedback",
"llama-2",
"arxiv:2304.07327",
"region:us"
]
| 2023-11-01T08:38:05+00:00 | {"language": ["en", "es", "ru", "de", "pl", "th", "vi", "sv", "bn", "da", "he", "it", "fa", "sk", "id", "nb", "el", "nl", "hu", "eu", "zh", "eo", "ja", "ca", "cs", "bg", "fi", "pt", "tr", "ro", "ar", "uk", "gl", "fr", "ko"], "license": "apache-2.0", "size_categories": ["1K<n<10k"], "pretty_name": "Filtered OpenAssistant Conversations", "tags": ["human-feedback", "llama-2"]} | 2023-11-01T08:46:17+00:00 | [
"2304.07327"
]
| [
"en",
"es",
"ru",
"de",
"pl",
"th",
"vi",
"sv",
"bn",
"da",
"he",
"it",
"fa",
"sk",
"id",
"nb",
"el",
"nl",
"hu",
"eu",
"zh",
"eo",
"ja",
"ca",
"cs",
"bg",
"fi",
"pt",
"tr",
"ro",
"ar",
"uk",
"gl",
"fr",
"ko"
]
| TAGS
#size_categories-1K<n<10k #language-English #language-Spanish #language-Russian #language-German #language-Polish #language-Thai #language-Vietnamese #language-Swedish #language-Bengali #language-Danish #language-Hebrew #language-Italian #language-Persian #language-Slovak #language-Indonesian #language-Norwegian Bokmål #language-Modern Greek (1453-) #language-Dutch #language-Hungarian #language-Basque #language-Chinese #language-Esperanto #language-Japanese #language-Catalan #language-Czech #language-Bulgarian #language-Finnish #language-Portuguese #language-Turkish #language-Romanian #language-Arabic #language-Ukrainian #language-Galician #language-French #language-Korean #license-apache-2.0 #human-feedback #llama-2 #arxiv-2304.07327 #region-us
| # Chat Fine-tuning Dataset - OpenAssistant Falcon
This dataset allows for fine-tuning chat models using '\nHuman:' AND '\nAssistant:' to wrap user messages.
It still uses <|endoftext|> as EOS and BOS token, as per Falcon.
Sample
Preparation:
1. The dataset is cloned from TimDettmers, which itself is a subset of the Open Assistant dataset, which you can find here. This subset of the data only contains the highest-rated paths in the conversation tree, with a total of 9,846 samples.
1. The dataset was then filtered to:
- replace instances of '### Human:' with '\nHuman:'
- replace instances of '### Assistant:' with '\nAssistant:'
- end assistant responses with <|endoftext|> (to encourage the model to emit <|endoftext|> when finished a response).
Details of the root dataset follow, copied from that repo:
# OpenAssistant Conversations Dataset (OASST1)
## Dataset Description
- Homepage: URL
- Repository: URL
- Paper: URL
### Dataset Summary
In an effort to democratize research on large-scale alignment, we release OpenAssistant
Conversations (OASST1), a human-generated, human-annotated assistant-style conversation
corpus consisting of 161,443 messages in 35 different languages, annotated with 461,292
quality ratings, resulting in over 10,000 fully annotated conversation trees. The corpus
is a product of a worldwide crowd-sourcing effort involving over 13,500 volunteers.
Please refer to our paper for further details.
### Dataset Structure
This dataset contains message trees. Each message tree has an initial prompt message as the root node,
which can have multiple child messages as replies, and these child messages can have multiple replies.
All messages have a role property: this can either be "assistant" or "prompter". The roles in
conversation threads from prompt to leaf node strictly alternate between "prompter" and "assistant".
This version of the dataset contains data collected on the URL website until April 12 2023.
### JSON Example: Message
For readability, the following JSON examples are shown formatted with indentation on multiple lines.
Objects are stored without indentation (on single lines) in the actual jsonl files.
### JSON Example: Conversation Tree
For readability, only a subset of the message properties is shown here.
Please refer to oasst-data for
details about the data structure and Python code to read and write jsonl files containing oasst data objects.
If you would like to explore the dataset yourself you can find a
'getting-started'
notebook in the 'notebooks/openassistant-oasst1' folder of the LAION-AI/Open-Assistant
github repository.
## Main Dataset Files
Conversation data is provided either as nested messages in trees (extension '.URL')
or as a flat list (table) of messages (extension '.URL').
### Ready For Export Trees
Trees in 'ready_for_export' state without spam and deleted messages including message labels.
The oasst_ready-trees file usually is sufficient for supervised fine-tuning (SFT) & reward model (RM) training.
### All Trees
All trees, including those in states 'prompt_lottery_waiting' (trees that consist of only one message, namely the initial prompt),
'aborted_low_grade' (trees that stopped growing because the messages had low quality), and 'halted_by_moderator'.
### Supplemental Exports: Spam & Prompts
These are messages which were deleted or have a negative review result ('"review_result": false').
Besides low quality, a frequent reason for message deletion is a wrong language tag.
These are all the kept initial prompt messages with positive review result (no spam) of trees in 'ready_for_export' or 'prompt_lottery_waiting' state.
### Using the Huggingface Datasets
While HF datasets is ideal for tabular datasets, it is not a natural fit for nested data structures like the OpenAssistant conversation trees.
Nevertheless, we make all messages which can also be found in the file '2023-04-12_oasst_ready.URL' available in parquet as train/validation splits.
These are directly loadable by Huggingface Datasets.
To load the oasst1 train & validation splits use:
The messages appear in depth-first order of the message trees.
Full conversation trees can be reconstructed from the flat messages table by using the 'parent_id'
and 'message_id' properties to identify the parent-child relationship of messages. The 'message_tree_id'
and 'tree_state' properties (only present in flat messages files) can be used to find all messages of a message tree or to select trees by their state.
### Languages
OpenAssistant Conversations incorporates 35 different languages with a distribution of messages as follows:
Languages with over 1000 messages
- English: 71956
- Spanish: 43061
- Russian: 9089
- German: 5279
- Chinese: 4962
- French: 4251
- Thai: 3042
- Portuguese (Brazil): 2969
- Catalan: 2260
- Korean: 1553
- Ukrainian: 1352
- Italian: 1320
- Japanese: 1018
<details>
<summary><b>Languages with under 1000 messages</b></summary>
<ul>
<li>Vietnamese: 952</li>
<li>Basque: 947</li>
<li>Polish: 886</li>
<li>Hungarian: 811</li>
<li>Arabic: 666</li>
<li>Dutch: 628</li>
<li>Swedish: 512</li>
<li>Turkish: 454</li>
<li>Finnish: 386</li>
<li>Czech: 372</li>
<li>Danish: 358</li>
<li>Galician: 339</li>
<li>Hebrew: 255</li>
<li>Romanian: 200</li>
<li>Norwegian Bokmål: 133</li>
<li>Indonesian: 115</li>
<li>Bulgarian: 95</li>
<li>Bengali: 82</li>
<li>Persian: 72</li>
<li>Greek: 66</li>
<li>Esperanto: 59</li>
<li>Slovak: 19</li>
</ul>
</details>
## Contact
- Discord Open Assistant Discord Server
- GitHub: LAION-AI/Open-Assistant
- E-Mail: open-assistant@URL | [
"# Chat Fine-tuning Dataset - OpenAssistant Falcon\nThis dataset allows for fine-tuning chat models using '\\Human:' AND '\\nAssistant:' to wrap user messages.\n\nIt still uses <|endoftext|> as EOS and BOS token, as per Falcon.\n\nSample \n\nPreparation:\n\n1. The dataset is cloned from TimDettmers, which itself is a subset of the Open Assistant dataset, which you can find here. This subset of the data only contains the highest-rated paths in the conversation tree, with a total of 9,846 samples.\n1. The dataset was then filtered to:\n - replace instances of '### Human:' with '\\nHuman:'\n - replace instances of '### Assistant:' with '\\nAssistant:'\n - end assistant responses with <|endoftext|> (to encourage the model to emit <|endoftext|> when finished a response).\n\nDetails of the root dataset follow, copied from that repo:",
"# OpenAssistant Conversations Dataset (OASST1)",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL",
"### Dataset Summary\n\nIn an effort to democratize research on large-scale alignment, we release OpenAssistant \nConversations (OASST1), a human-generated, human-annotated assistant-style conversation \ncorpus consisting of 161,443 messages in 35 different languages, annotated with 461,292 \nquality ratings, resulting in over 10,000 fully annotated conversation trees. The corpus \nis a product of a worldwide crowd-sourcing effort involving over 13,500 volunteers.\n\nPlease refer to our paper for further details.",
"### Dataset Structure\n\nThis dataset contains message trees. Each message tree has an initial prompt message as the root node, \nwhich can have multiple child messages as replies, and these child messages can have multiple replies. \n\nAll messages have a role property: this can either be \"assistant\" or \"prompter\". The roles in \nconversation threads from prompt to leaf node strictly alternate between \"prompter\" and \"assistant\".\n\nThis version of the dataset contains data collected on the URL website until April 12 2023.",
"### JSON Example: Message\n\nFor readability, the following JSON examples are shown formatted with indentation on multiple lines.\nObjects are stored without indentation (on single lines) in the actual jsonl files.",
"### JSON Example: Conversation Tree\n\nFor readability, only a subset of the message properties is shown here.\n\n\n\nPlease refer to oasst-data for\ndetails about the data structure and Python code to read and write jsonl files containing oasst data objects.\n\nIf you would like to explore the dataset yourself you can find a \n'getting-started' \nnotebook in the 'notebooks/openassistant-oasst1' folder of the LAION-AI/Open-Assistant\ngithub repository.",
"## Main Dataset Files\n\nConversation data is provided either as nested messages in trees (extension '.URL') \nor as a flat list (table) of messages (extension '.URL').",
"### Ready For Export Trees\n\n\nTrees in 'ready_for_export' state without spam and deleted messages including message labels.\nThe oasst_ready-trees file usually is sufficient for supervised fine-tuning (SFT) & reward model (RM) training.",
"### All Trees\n\nAll trees, including those in states 'prompt_lottery_waiting' (trees that consist of only one message, namely the initial prompt),\n'aborted_low_grade' (trees that stopped growing because the messages had low quality), and 'halted_by_moderator'.",
"### Supplemental Exports: Spam & Prompts\n\nThese are messages which were deleted or have a negative review result ('\"review_result\": false').\nBesides low quality, a frequent reason for message deletion is a wrong language tag.\n\n\nThese are all the kept initial prompt messages with positive review result (no spam) of trees in 'ready_for_export' or 'prompt_lottery_waiting' state.",
"### Using the Huggingface Datasets\n\nWhile HF datasets is ideal for tabular datasets, it is not a natural fit for nested data structures like the OpenAssistant conversation trees.\nNevertheless, we make all messages which can also be found in the file '2023-04-12_oasst_ready.URL' available in parquet as train/validation splits. \nThese are directly loadable by Huggingface Datasets.\n\nTo load the oasst1 train & validation splits use:\n\n\n\nThe messages appear in depth-first order of the message trees.\n\nFull conversation trees can be reconstructed from the flat messages table by using the 'parent_id' \nand 'message_id' properties to identify the parent-child relationship of messages. The 'message_tree_id' \nand 'tree_state' properties (only present in flat messages files) can be used to find all messages of a message tree or to select trees by their state.",
"### Languages\n\nOpenAssistant Conversations incorporates 35 different languages with a distribution of messages as follows:\n\nLanguages with over 1000 messages\n- English: 71956\n- Spanish: 43061\n- Russian: 9089\n- German: 5279\n- Chinese: 4962\n- French: 4251\n- Thai: 3042\n- Portuguese (Brazil): 2969\n- Catalan: 2260\n- Korean: 1553\n- Ukrainian: 1352\n- Italian: 1320\n- Japanese: 1018\n\n<details>\n <summary><b>Languages with under 1000 messages</b></summary>\n <ul>\n <li>Vietnamese: 952</li>\n <li>Basque: 947</li>\n <li>Polish: 886</li>\n <li>Hungarian: 811</li>\n <li>Arabic: 666</li>\n <li>Dutch: 628</li>\n <li>Swedish: 512</li>\n <li>Turkish: 454</li>\n <li>Finnish: 386</li>\n <li>Czech: 372</li>\n <li>Danish: 358</li>\n <li>Galician: 339</li>\n <li>Hebrew: 255</li>\n <li>Romanian: 200</li>\n <li>Norwegian Bokmål: 133</li>\n <li>Indonesian: 115</li>\n <li>Bulgarian: 95</li>\n <li>Bengali: 82</li>\n <li>Persian: 72</li>\n <li>Greek: 66</li>\n <li>Esperanto: 59</li>\n <li>Slovak: 19</li>\n </ul>\n</details>",
"## Contact\n\n- Discord Open Assistant Discord Server\n- GitHub: LAION-AI/Open-Assistant\n- E-Mail: open-assistant@URL"
]
| [
"TAGS\n#size_categories-1K<n<10k #language-English #language-Spanish #language-Russian #language-German #language-Polish #language-Thai #language-Vietnamese #language-Swedish #language-Bengali #language-Danish #language-Hebrew #language-Italian #language-Persian #language-Slovak #language-Indonesian #language-Norwegian Bokmål #language-Modern Greek (1453-) #language-Dutch #language-Hungarian #language-Basque #language-Chinese #language-Esperanto #language-Japanese #language-Catalan #language-Czech #language-Bulgarian #language-Finnish #language-Portuguese #language-Turkish #language-Romanian #language-Arabic #language-Ukrainian #language-Galician #language-French #language-Korean #license-apache-2.0 #human-feedback #llama-2 #arxiv-2304.07327 #region-us \n",
"# Chat Fine-tuning Dataset - OpenAssistant Falcon\nThis dataset allows for fine-tuning chat models using '\\Human:' AND '\\nAssistant:' to wrap user messages.\n\nIt still uses <|endoftext|> as EOS and BOS token, as per Falcon.\n\nSample \n\nPreparation:\n\n1. The dataset is cloned from TimDettmers, which itself is a subset of the Open Assistant dataset, which you can find here. This subset of the data only contains the highest-rated paths in the conversation tree, with a total of 9,846 samples.\n1. The dataset was then filtered to:\n - replace instances of '### Human:' with '\\nHuman:'\n - replace instances of '### Assistant:' with '\\nAssistant:'\n - end assistant responses with <|endoftext|> (to encourage the model to emit <|endoftext|> when finished a response).\n\nDetails of the root dataset follow, copied from that repo:",
"# OpenAssistant Conversations Dataset (OASST1)",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL",
"### Dataset Summary\n\nIn an effort to democratize research on large-scale alignment, we release OpenAssistant \nConversations (OASST1), a human-generated, human-annotated assistant-style conversation \ncorpus consisting of 161,443 messages in 35 different languages, annotated with 461,292 \nquality ratings, resulting in over 10,000 fully annotated conversation trees. The corpus \nis a product of a worldwide crowd-sourcing effort involving over 13,500 volunteers.\n\nPlease refer to our paper for further details.",
"### Dataset Structure\n\nThis dataset contains message trees. Each message tree has an initial prompt message as the root node, \nwhich can have multiple child messages as replies, and these child messages can have multiple replies. \n\nAll messages have a role property: this can either be \"assistant\" or \"prompter\". The roles in \nconversation threads from prompt to leaf node strictly alternate between \"prompter\" and \"assistant\".\n\nThis version of the dataset contains data collected on the URL website until April 12 2023.",
"### JSON Example: Message\n\nFor readability, the following JSON examples are shown formatted with indentation on multiple lines.\nObjects are stored without indentation (on single lines) in the actual jsonl files.",
"### JSON Example: Conversation Tree\n\nFor readability, only a subset of the message properties is shown here.\n\n\n\nPlease refer to oasst-data for\ndetails about the data structure and Python code to read and write jsonl files containing oasst data objects.\n\nIf you would like to explore the dataset yourself you can find a \n'getting-started' \nnotebook in the 'notebooks/openassistant-oasst1' folder of the LAION-AI/Open-Assistant\ngithub repository.",
"## Main Dataset Files\n\nConversation data is provided either as nested messages in trees (extension '.URL') \nor as a flat list (table) of messages (extension '.URL').",
"### Ready For Export Trees\n\n\nTrees in 'ready_for_export' state without spam and deleted messages including message labels.\nThe oasst_ready-trees file usually is sufficient for supervised fine-tuning (SFT) & reward model (RM) training.",
"### All Trees\n\nAll trees, including those in states 'prompt_lottery_waiting' (trees that consist of only one message, namely the initial prompt),\n'aborted_low_grade' (trees that stopped growing because the messages had low quality), and 'halted_by_moderator'.",
"### Supplemental Exports: Spam & Prompts\n\nThese are messages which were deleted or have a negative review result ('\"review_result\": false').\nBesides low quality, a frequent reason for message deletion is a wrong language tag.\n\n\nThese are all the kept initial prompt messages with positive review result (no spam) of trees in 'ready_for_export' or 'prompt_lottery_waiting' state.",
"### Using the Huggingface Datasets\n\nWhile HF datasets is ideal for tabular datasets, it is not a natural fit for nested data structures like the OpenAssistant conversation trees.\nNevertheless, we make all messages which can also be found in the file '2023-04-12_oasst_ready.URL' available in parquet as train/validation splits. \nThese are directly loadable by Huggingface Datasets.\n\nTo load the oasst1 train & validation splits use:\n\n\n\nThe messages appear in depth-first order of the message trees.\n\nFull conversation trees can be reconstructed from the flat messages table by using the 'parent_id' \nand 'message_id' properties to identify the parent-child relationship of messages. The 'message_tree_id' \nand 'tree_state' properties (only present in flat messages files) can be used to find all messages of a message tree or to select trees by their state.",
"### Languages\n\nOpenAssistant Conversations incorporates 35 different languages with a distribution of messages as follows:\n\nLanguages with over 1000 messages\n- English: 71956\n- Spanish: 43061\n- Russian: 9089\n- German: 5279\n- Chinese: 4962\n- French: 4251\n- Thai: 3042\n- Portuguese (Brazil): 2969\n- Catalan: 2260\n- Korean: 1553\n- Ukrainian: 1352\n- Italian: 1320\n- Japanese: 1018\n\n<details>\n <summary><b>Languages with under 1000 messages</b></summary>\n <ul>\n <li>Vietnamese: 952</li>\n <li>Basque: 947</li>\n <li>Polish: 886</li>\n <li>Hungarian: 811</li>\n <li>Arabic: 666</li>\n <li>Dutch: 628</li>\n <li>Swedish: 512</li>\n <li>Turkish: 454</li>\n <li>Finnish: 386</li>\n <li>Czech: 372</li>\n <li>Danish: 358</li>\n <li>Galician: 339</li>\n <li>Hebrew: 255</li>\n <li>Romanian: 200</li>\n <li>Norwegian Bokmål: 133</li>\n <li>Indonesian: 115</li>\n <li>Bulgarian: 95</li>\n <li>Bengali: 82</li>\n <li>Persian: 72</li>\n <li>Greek: 66</li>\n <li>Esperanto: 59</li>\n <li>Slovak: 19</li>\n </ul>\n</details>",
"## Contact\n\n- Discord Open Assistant Discord Server\n- GitHub: LAION-AI/Open-Assistant\n- E-Mail: open-assistant@URL"
]
| [
239,
228,
15,
18,
120,
120,
51,
117,
46,
66,
74,
99,
221,
381,
36
]
| [
"passage: TAGS\n#size_categories-1K<n<10k #language-English #language-Spanish #language-Russian #language-German #language-Polish #language-Thai #language-Vietnamese #language-Swedish #language-Bengali #language-Danish #language-Hebrew #language-Italian #language-Persian #language-Slovak #language-Indonesian #language-Norwegian Bokmål #language-Modern Greek (1453-) #language-Dutch #language-Hungarian #language-Basque #language-Chinese #language-Esperanto #language-Japanese #language-Catalan #language-Czech #language-Bulgarian #language-Finnish #language-Portuguese #language-Turkish #language-Romanian #language-Arabic #language-Ukrainian #language-Galician #language-French #language-Korean #license-apache-2.0 #human-feedback #llama-2 #arxiv-2304.07327 #region-us \n# Chat Fine-tuning Dataset - OpenAssistant Falcon\nThis dataset allows for fine-tuning chat models using '\\Human:' AND '\\nAssistant:' to wrap user messages.\n\nIt still uses <|endoftext|> as EOS and BOS token, as per Falcon.\n\nSample \n\nPreparation:\n\n1. The dataset is cloned from TimDettmers, which itself is a subset of the Open Assistant dataset, which you can find here. This subset of the data only contains the highest-rated paths in the conversation tree, with a total of 9,846 samples.\n1. The dataset was then filtered to:\n - replace instances of '### Human:' with '\\nHuman:'\n - replace instances of '### Assistant:' with '\\nAssistant:'\n - end assistant responses with <|endoftext|> (to encourage the model to emit <|endoftext|> when finished a response).\n\nDetails of the root dataset follow, copied from that repo:# OpenAssistant Conversations Dataset (OASST1)## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL",
"passage: ### Dataset Summary\n\nIn an effort to democratize research on large-scale alignment, we release OpenAssistant \nConversations (OASST1), a human-generated, human-annotated assistant-style conversation \ncorpus consisting of 161,443 messages in 35 different languages, annotated with 461,292 \nquality ratings, resulting in over 10,000 fully annotated conversation trees. The corpus \nis a product of a worldwide crowd-sourcing effort involving over 13,500 volunteers.\n\nPlease refer to our paper for further details.### Dataset Structure\n\nThis dataset contains message trees. Each message tree has an initial prompt message as the root node, \nwhich can have multiple child messages as replies, and these child messages can have multiple replies. \n\nAll messages have a role property: this can either be \"assistant\" or \"prompter\". The roles in \nconversation threads from prompt to leaf node strictly alternate between \"prompter\" and \"assistant\".\n\nThis version of the dataset contains data collected on the URL website until April 12 2023.### JSON Example: Message\n\nFor readability, the following JSON examples are shown formatted with indentation on multiple lines.\nObjects are stored without indentation (on single lines) in the actual jsonl files.### JSON Example: Conversation Tree\n\nFor readability, only a subset of the message properties is shown here.\n\n\n\nPlease refer to oasst-data for\ndetails about the data structure and Python code to read and write jsonl files containing oasst data objects.\n\nIf you would like to explore the dataset yourself you can find a \n'getting-started' \nnotebook in the 'notebooks/openassistant-oasst1' folder of the LAION-AI/Open-Assistant\ngithub repository.## Main Dataset Files\n\nConversation data is provided either as nested messages in trees (extension '.URL') \nor as a flat list (table) of messages (extension '.URL').### Ready For Export Trees\n\n\nTrees in 'ready_for_export' state without spam and deleted messages including message labels.\nThe oasst_ready-trees file usually is sufficient for supervised fine-tuning (SFT) & reward model (RM) training.### All Trees\n\nAll trees, including those in states 'prompt_lottery_waiting' (trees that consist of only one message, namely the initial prompt),\n'aborted_low_grade' (trees that stopped growing because the messages had low quality), and 'halted_by_moderator'.",
"passage: ### Supplemental Exports: Spam & Prompts\n\nThese are messages which were deleted or have a negative review result ('\"review_result\": false').\nBesides low quality, a frequent reason for message deletion is a wrong language tag.\n\n\nThese are all the kept initial prompt messages with positive review result (no spam) of trees in 'ready_for_export' or 'prompt_lottery_waiting' state.### Using the Huggingface Datasets\n\nWhile HF datasets is ideal for tabular datasets, it is not a natural fit for nested data structures like the OpenAssistant conversation trees.\nNevertheless, we make all messages which can also be found in the file '2023-04-12_oasst_ready.URL' available in parquet as train/validation splits. \nThese are directly loadable by Huggingface Datasets.\n\nTo load the oasst1 train & validation splits use:\n\n\n\nThe messages appear in depth-first order of the message trees.\n\nFull conversation trees can be reconstructed from the flat messages table by using the 'parent_id' \nand 'message_id' properties to identify the parent-child relationship of messages. The 'message_tree_id' \nand 'tree_state' properties (only present in flat messages files) can be used to find all messages of a message tree or to select trees by their state."
]
|
c155836fff455d659374aa7ca3586be897e5d698 | # Dataset Card for "github_classification"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Ujan/github_classification | [
"region:us"
]
| 2023-11-01T08:43:19+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "names", "dtype": "string"}, {"name": "readmes", "dtype": "string"}, {"name": "topics", "dtype": "string"}, {"name": "labels", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 51303107.05622984, "num_examples": 10414}, {"name": "validation", "num_bytes": 6414119.971885082, "num_examples": 1302}, {"name": "test", "num_bytes": 6414119.971885082, "num_examples": 1302}], "download_size": 29047991, "dataset_size": 64131347.00000001}} | 2023-11-01T08:43:54+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "github_classification"
More Information needed | [
"# Dataset Card for \"github_classification\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"github_classification\"\n\nMore Information needed"
]
| [
6,
15
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"github_classification\"\n\nMore Information needed"
]
|
0b9c1d620b62e55fde6552bb0e7dc40a2e7268d4 | # Dataset Card for "minetest-screenshots1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
image size: 64x64 | CatUkraine/minetest-screenshots1 | [
"minetest",
"image generation",
"region:us"
]
| 2023-11-01T09:32:30+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 122904, "num_examples": 81}], "download_size": 88945, "dataset_size": 122904}, "tags": ["minetest", "image generation"]} | 2023-11-01T09:37:06+00:00 | []
| []
| TAGS
#minetest #image generation #region-us
| # Dataset Card for "minetest-screenshots1"
More Information needed
image size: 64x64 | [
"# Dataset Card for \"minetest-screenshots1\"\n\nMore Information needed\n\nimage size: 64x64"
]
| [
"TAGS\n#minetest #image generation #region-us \n",
"# Dataset Card for \"minetest-screenshots1\"\n\nMore Information needed\n\nimage size: 64x64"
]
| [
12,
23
]
| [
"passage: TAGS\n#minetest #image generation #region-us \n# Dataset Card for \"minetest-screenshots1\"\n\nMore Information needed\n\nimage size: 64x64"
]
|
72c9a46ffc9cf6d32a44c9513644f486982029b8 | # Dataset Card for "lmsys-lite"
This dataset is a Lite version of lmsys/lmsys-chat-1m. It contains only English-language conversations and is filtered to the following models (a brief loading sketch follows the list):
- `gpt-3.5-turbo`
- `gpt-4`
- `llama-2-13b-chat`
- `llama-2-7b-chat`
- `mpt-30b-chat`
- `mpt-7b-chat`
- `palm-2`
- `vicuna-13b`
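As a rough loading sketch (the split name and the column names `conversation` and `llama_2_prompt_style` come from the dataset_info metadata below; the `"user"`/`"assistant"` role values are an assumption carried over from the parent lmsys-chat-1m dataset):

```python
from datasets import load_dataset

# Stream the train split and inspect one conversation.
ds = load_dataset("erfanzar/lmsys-lite", split="train", streaming=True)
sample = next(iter(ds))

# Each turn is a dict with "role" (assumed "user" / "assistant") and "content".
for turn in sample["conversation"]:
    print(f'{turn["role"]}: {turn["content"][:80]}')

# A pre-rendered Llama-2 style prompt is also stored per sample.
print(sample["llama_2_prompt_style"][:200])
```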
| erfanzar/lmsys-lite | [
"region:us"
]
| 2023-11-01T09:32:31+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "conversation_id", "dtype": "string"}, {"name": "openai_moderation", "list": [{"name": "categories", "struct": [{"name": "harassment", "dtype": "bool"}, {"name": "harassment/threatening", "dtype": "bool"}, {"name": "hate", "dtype": "bool"}, {"name": "hate/threatening", "dtype": "bool"}, {"name": "self-harm", "dtype": "bool"}, {"name": "self-harm/instructions", "dtype": "bool"}, {"name": "self-harm/intent", "dtype": "bool"}, {"name": "sexual", "dtype": "bool"}, {"name": "sexual/minors", "dtype": "bool"}, {"name": "violence", "dtype": "bool"}, {"name": "violence/graphic", "dtype": "bool"}]}, {"name": "category_scores", "struct": [{"name": "harassment", "dtype": "float64"}, {"name": "harassment/threatening", "dtype": "float64"}, {"name": "hate", "dtype": "float64"}, {"name": "hate/threatening", "dtype": "float64"}, {"name": "self-harm", "dtype": "float64"}, {"name": "self-harm/instructions", "dtype": "float64"}, {"name": "self-harm/intent", "dtype": "float64"}, {"name": "sexual", "dtype": "float64"}, {"name": "sexual/minors", "dtype": "float64"}, {"name": "violence", "dtype": "float64"}, {"name": "violence/graphic", "dtype": "float64"}]}, {"name": "flagged", "dtype": "bool"}]}, {"name": "conversation", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "list_conversation", "sequence": "string"}, {"name": "llama_2_prompt_style", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3447659164, "num_examples": 437224}], "download_size": 1688182571, "dataset_size": 3447659164}} | 2023-11-01T10:05:42+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "lmsys-lite"
This dataset is Lite Version of lmsys/lmsys-chat-1m and contains only english language and these models are filtered
- 'gpt-3.5-turbo'
- 'gpt-4'
- 'llama-2-13b-chat'
- 'llama-2-7b-chat'
- 'mpt-30b-chat'
- 'mpt-7b-chat'
- 'palm-2'
- 'vicuna-13b'
| [
"# Dataset Card for \"lmsys-lite\"\n\nThis dataset is Lite Version of lmsys/lmsys-chat-1m and contains only english language and these models are filtered\n\n- 'gpt-3.5-turbo'\n- 'gpt-4'\n- 'llama-2-13b-chat'\n- 'llama-2-7b-chat'\n- 'mpt-30b-chat'\n- 'mpt-7b-chat'\n- 'palm-2'\n- 'vicuna-13b'"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"lmsys-lite\"\n\nThis dataset is Lite Version of lmsys/lmsys-chat-1m and contains only english language and these models are filtered\n\n- 'gpt-3.5-turbo'\n- 'gpt-4'\n- 'llama-2-13b-chat'\n- 'llama-2-7b-chat'\n- 'mpt-30b-chat'\n- 'mpt-7b-chat'\n- 'palm-2'\n- 'vicuna-13b'"
]
| [
6,
107
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"lmsys-lite\"\n\nThis dataset is Lite Version of lmsys/lmsys-chat-1m and contains only english language and these models are filtered\n\n- 'gpt-3.5-turbo'\n- 'gpt-4'\n- 'llama-2-13b-chat'\n- 'llama-2-7b-chat'\n- 'mpt-30b-chat'\n- 'mpt-7b-chat'\n- 'palm-2'\n- 'vicuna-13b'"
]
|
a16d3495958b08679b8df59ba4d1ea9d2d9d1a1e | # Dataset Card for RAG-Instruct-Benchmark-Tester
### Dataset Summary
This is an updated benchmarking test dataset for "retrieval augmented generation" (RAG) use cases in the enterprise, especially for financial services and legal. This test dataset includes 200 questions with context passages pulled from common 'retrieval scenarios', e.g., financial news, earnings releases,
contracts, invoices, technical articles, general news and short texts.
The questions are segmented into several categories for benchmarking evaluation:
-- **Core Q&A Evaluation** (Samples 0-99) - 100 samples - fact-based 'core' questions- used to assign a score between 0-100 based on correct responses.
-- **Not Found Classification** (Samples 100-119) - 20 samples - in each sample, the context passage does not contain a direct answer to the question, and the objective is to evaluate whether the model correctly identifies as "Not Found" or attempts to answer using information in the context.
-- **Boolean - Yes/No** (Samples 120-139) - 20 samples - each sample is a Yes/No question.
-- **Basic Math** (Samples 140-159) - 20 samples - these are "every day" math questions - basic increments, decrements, percentages, multiplications, sorting, and ranking with amounts and times.
-- **Complex Q&A** (Samples 160-179) - 20 samples - tests several distinct 'complex q&a' skills - multiple-choice, financial table reading, multi-part extractions, causal, and logical selections.
-- **Summary** (Sample 180-199) - 20 samples - tests long-form and short-form summarization.
### Representative Questions
-- What are the payment terms?
-- What was the improvement in operating income year-to-year?
-- According to the CFO, what led to the increase in cloud revenue?
-- Who owns the intellectual property?
-- What is the notice period to terminate for convenience?
-- How often will the Board review the salary?
-- What section of the agreement defines the governing law?
-- How many jobs were predicted by economists?
-- How many shares were repurchased in second quarter?
-- What was the percentage increase in data center revenue compared to the first quarter?
-- When will the next dividend be paid?
-- Is the expected gross margin greater than 70%?
-- What is the amount of expected non-GAAP operating expense?
-- What did economists expect for the trade surplus amount?
-- What is Bank of Americas' rating on Snowflake?
-- Has the S&P increased over the last year? (Yes/No)
-- Is Barclay's raising its price target on KHC? (Yes/No)
-- Were third quarter sales more than 5 billion euros? (Yes/No)
-- If automotive revenue increases by $100 million in third quarter, what will be the amount? (Math)
-- If the rent increased by 50%, what is the new rental price? (Math)
-- Which stock index increased by the most points yesterday? (Math)
-- Why did the intraday reversal occur? (Complex)
-- What is a list of the top 3 financial highlights for the quarter? (Summary)
-- What is a list of the top 5 summary points? (Summary)
-- What is a summary of the CEO's statement in 15 words or less? (Summary)
-- What are the key terms of the invoice? (Summary)
### Languages
English
## Dataset Structure
200 JSONL samples with 6 keys - "query" | "context" | "answer" | "category" | "tokens" | "sample_number"
Note: this dataset includes elements from test_dataset_0.1 and test_dataset2_financial- and is intended to replace them for benchmarking evaluations.
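A minimal usage sketch, assuming a single `train` split (the split name is an assumption) and the six keys listed above; the category slices follow the sample-number ranges described in the Dataset Summary:

```python
from datasets import load_dataset

# Load the 200-sample benchmark and slice it into evaluation categories by
# sample_number (assumed to be an integer; cast defensively just in case).
ds = load_dataset("llmware/rag_instruct_benchmark_tester", split="train")

core_qa    = ds.filter(lambda s: 0   <= int(s["sample_number"]) <= 99)
not_found  = ds.filter(lambda s: 100 <= int(s["sample_number"]) <= 119)
yes_no     = ds.filter(lambda s: 120 <= int(s["sample_number"]) <= 139)
basic_math = ds.filter(lambda s: 140 <= int(s["sample_number"]) <= 159)
complex_qa = ds.filter(lambda s: 160 <= int(s["sample_number"]) <= 179)
summaries  = ds.filter(lambda s: 180 <= int(s["sample_number"]) <= 199)

# Each sample carries the query, the retrieval context, and the gold answer.
example = core_qa[0]
print(example["query"])
print(example["context"][:200])
print(example["answer"])
```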
### Personal and Sensitive Information
The dataset samples were written bespoke for this objective, derived from publicly-available sources and/or originally-written samples.
## Dataset Card Contact
Darren Oberst & llmware team
Please reach out anytime if you are interested in more information about this project. | llmware/rag_instruct_benchmark_tester | [
"license:apache-2.0",
"financial services",
"retrieval augmented generation",
"RAG",
"q&a instruct",
"region:us"
]
| 2023-11-01T09:34:23+00:00 | {"license": "apache-2.0", "pretty_name": "RAG Instruct Benchmarking Test Dataset", "tags": ["financial services", "retrieval augmented generation", "RAG", "q&a instruct"]} | 2023-11-04T09:54:25+00:00 | []
| []
| TAGS
#license-apache-2.0 #financial services #retrieval augmented generation #RAG #q&a instruct #region-us
| # Dataset Card for RAG-Instruct-Benchmark-Tester
### Dataset Summary
This is an updated benchmarking test dataset for "retrieval augmented generation" (RAG) use cases in the enterprise, especially for financial services, and legal. This test dataset includes 200 questions with context passages pulled from common 'retrieval scenarios', e.g., financial news, earnings releases,
contracts, invoices, technical articles, general news and short texts.
The questions are segmented into several categories for benchmarking evaluation:
-- Core Q&A Evaluation (Samples 0-99) - 100 samples - fact-based 'core' questions- used to assign a score between 0-100 based on correct responses.
-- Not Found Classification (Samples 100-119) - 20 samples - in each sample, the context passage does not contain a direct answer to the question, and the objective is to evaluate whether the model correctly identifies as "Not Found" or attempts to answer using information in the context.
-- Boolean - Yes/No (Samples 120-139) - 20 samples - each sample is a Yes/No question.
-- Basic Math (Samples 140-159) - 20 samples - these are "every day" math questions - basic increments, decrements, percentages, multiplications, sorting, and ranking with amounts and times.
-- Complex Q&A (Samples 160-179) - 20 samples - tests several distinct 'complex q&a' skills - multiple-choice, financial table reading, multi-part extractions, causal, and logical selections.
-- Summary (Sample 180-199) - 20 samples - tests long-form and short-form summarization.
### Representative Questions
-- What are the payment terms?
-- What was the improvement in operating income year-to-year?
-- According to the CFO, what led to the increase in cloud revenue?
-- Who owns the intellectual property?
-- What is the notice period to terminate for convenience?
-- How often will the Board review the salary?
-- What section of the agreement defines the governing law?
-- How many jobs were predicted by economists?
-- How many shares were repurchased in second quarter?
-- What was the percentage increase in data center revenue compared to the first quarter?
-- When will the next dividend be paid?
-- Is the expected gross margin greater than 70%?
-- What is the amount of expected non-GAAP operating expense?
-- What did economists expect for the trade surplus amount?
-- What is Bank of Americas' rating on Snowflake?
-- Has the S&P increased over the last year? (Yes/No)
-- Is Barclay's raising its price target on KHC? (Yes/No)
-- Were third quarter sales more than 5 billion euros? (Yes/No)
-- If automotive revenue increases by $100 million in third quarter, what will be the amount? (Math)
-- If the rent increased by 50%, what is the new rental price? (Math)
-- Which stock index increased by the most points yesterday? (Math)
-- Why did the intraday reversal occur? (Complex)
-- What is a list of the top 3 financial highlights for the quarter? (Summary)
-- What is a list of the top 5 summary points? (Summary)
-- What is a summary of the CEO's statement in 15 words or less? (Summary)
-- What are the key terms of the invoice? (Summary)
### Languages
English
## Dataset Structure
200 JSONL samples with 6 keys - "query" | "context" | "answer" | "category" | "tokens" | "sample_number"
Note: this dataset includes elements from test_dataset_0.1 and test_dataset2_financial- and is intended to replace them for benchmarking evaluations.
### Personal and Sensitive Information
The dataset samples were written bespoke for this objective, derived from publicly-available sources and/or originally-written samples.
## Dataset Card Contact
Darren Oberst & llmware team
Please reach out anytime if you are interested in more information about this project. | [
"# Dataset Card for RAG-Instruct-Benchmark-Tester",
"### Dataset Summary\n\nThis is an updated benchmarking test dataset for \"retrieval augmented generation\" (RAG) use cases in the enterprise, especially for financial services, and legal. This test dataset includes 200 questions with context passages pulled from common 'retrieval scenarios', e.g., financial news, earnings releases,\ncontracts, invoices, technical articles, general news and short texts. \n\nThe questions are segmented into several categories for benchmarking evaluation: \n\n-- Core Q&A Evaluation (Samples 0-99) - 100 samples - fact-based 'core' questions- used to assign a score between 0-100 based on correct responses. \n\n-- Not Found Classification (Samples 100-119) - 20 samples - in each sample, the context passage does not contain a direct answer to the question, and the objective is to evaluate whether the model correctly identifies as \"Not Found\" or attempts to answer using information in the context. \n\n-- Boolean - Yes/No (Samples 120-139) - 20 samples - each sample is a Yes/No question. \n\n-- Basic Math (Samples 140-159) - 20 samples - these are \"every day\" math questions - basic increments, decrements, percentages, multiplications, sorting, and ranking with amounts and times. \n\n-- Complex Q&A (Samples 160-179) - 20 samples - tests several distinct 'complex q&a' skills - multiple-choice, financial table reading, multi-part extractions, causal, and logical selections. \n\n-- Summary (Sample 180-199) - 20 samples - tests long-form and short-form summarization.",
"### Representative Questions \n\n-- What are the payment terms? \n-- What was the improvement in operating income year-to-year? \n-- According to the CFO, what led to the increase in cloud revenue? \n-- Who owns the intellectual property? \n-- What is the notice period to terminate for convenience? \n-- How often will the Board review the salary? \n-- What section of the agreement defines the governing law? \n-- How many jobs were predicted by economists? \n-- How many shares were repurchased in second quarter? \n-- What was the percentage increase in data center revenue compared to the first quarter? \n-- When will the next dividend be paid? \n-- Is the expected gross margin greater than 70%? \n-- What is the amount of expected non-GAAP operating expense? \n-- What did economists expect for the trade surplus amount? \n-- What is Bank of Americas' rating on Snowflake? \n-- Has the S&P increased over the last year? (Yes/No) \n-- Is Barclay's raising its price target on KHC? (Yes/No) \n-- Were third quarter sales more than 5 billion euros? (Yes/No) \n-- If automotive revenue increases by $100 million in third quarter, what will be the amount? (Math) \n-- If the rent increased by 50%, what is the new rental price? (Math) \n-- Which stock index increased by the most points yesterday? (Math) \n-- Why did the intraday reversal occur? (Complex) \n-- What is a list of the top 3 financial highlights for the quarter? (Summary) \n-- What is a list of the top 5 summary points? (Summary) \n-- What is a summary of the CEO's statement in 15 words or less? (Summary) \n-- What are the key terms of the invoice? (Summary)",
"### Languages\n\nEnglish",
"## Dataset Structure\n\n200 JSONL samples with 6 keys - \"query\" | \"context\" | \"answer\" | \"category\" | \"tokens\" | \"sample_number\" \n\nNote: this dataset includes elements from test_dataset_0.1 and test_dataset2_financial- and is intended to replace them for benchmarking evaluations.",
"### Personal and Sensitive Information\n\nThe dataset samples were written bespoke for this objective, derived from publicly-available sources and/or originally-written samples.",
"## Dataset Card Contact\n\nDarren Oberst & llmware team\n\nPlease reach out anytime if you are interested in more information about this project."
]
| [
"TAGS\n#license-apache-2.0 #financial services #retrieval augmented generation #RAG #q&a instruct #region-us \n",
"# Dataset Card for RAG-Instruct-Benchmark-Tester",
"### Dataset Summary\n\nThis is an updated benchmarking test dataset for \"retrieval augmented generation\" (RAG) use cases in the enterprise, especially for financial services, and legal. This test dataset includes 200 questions with context passages pulled from common 'retrieval scenarios', e.g., financial news, earnings releases,\ncontracts, invoices, technical articles, general news and short texts. \n\nThe questions are segmented into several categories for benchmarking evaluation: \n\n-- Core Q&A Evaluation (Samples 0-99) - 100 samples - fact-based 'core' questions- used to assign a score between 0-100 based on correct responses. \n\n-- Not Found Classification (Samples 100-119) - 20 samples - in each sample, the context passage does not contain a direct answer to the question, and the objective is to evaluate whether the model correctly identifies as \"Not Found\" or attempts to answer using information in the context. \n\n-- Boolean - Yes/No (Samples 120-139) - 20 samples - each sample is a Yes/No question. \n\n-- Basic Math (Samples 140-159) - 20 samples - these are \"every day\" math questions - basic increments, decrements, percentages, multiplications, sorting, and ranking with amounts and times. \n\n-- Complex Q&A (Samples 160-179) - 20 samples - tests several distinct 'complex q&a' skills - multiple-choice, financial table reading, multi-part extractions, causal, and logical selections. \n\n-- Summary (Sample 180-199) - 20 samples - tests long-form and short-form summarization.",
"### Representative Questions \n\n-- What are the payment terms? \n-- What was the improvement in operating income year-to-year? \n-- According to the CFO, what led to the increase in cloud revenue? \n-- Who owns the intellectual property? \n-- What is the notice period to terminate for convenience? \n-- How often will the Board review the salary? \n-- What section of the agreement defines the governing law? \n-- How many jobs were predicted by economists? \n-- How many shares were repurchased in second quarter? \n-- What was the percentage increase in data center revenue compared to the first quarter? \n-- When will the next dividend be paid? \n-- Is the expected gross margin greater than 70%? \n-- What is the amount of expected non-GAAP operating expense? \n-- What did economists expect for the trade surplus amount? \n-- What is Bank of Americas' rating on Snowflake? \n-- Has the S&P increased over the last year? (Yes/No) \n-- Is Barclay's raising its price target on KHC? (Yes/No) \n-- Were third quarter sales more than 5 billion euros? (Yes/No) \n-- If automotive revenue increases by $100 million in third quarter, what will be the amount? (Math) \n-- If the rent increased by 50%, what is the new rental price? (Math) \n-- Which stock index increased by the most points yesterday? (Math) \n-- Why did the intraday reversal occur? (Complex) \n-- What is a list of the top 3 financial highlights for the quarter? (Summary) \n-- What is a list of the top 5 summary points? (Summary) \n-- What is a summary of the CEO's statement in 15 words or less? (Summary) \n-- What are the key terms of the invoice? (Summary)",
"### Languages\n\nEnglish",
"## Dataset Structure\n\n200 JSONL samples with 6 keys - \"query\" | \"context\" | \"answer\" | \"category\" | \"tokens\" | \"sample_number\" \n\nNote: this dataset includes elements from test_dataset_0.1 and test_dataset2_financial- and is intended to replace them for benchmarking evaluations.",
"### Personal and Sensitive Information\n\nThe dataset samples were written bespoke for this objective, derived from publicly-available sources and/or originally-written samples.",
"## Dataset Card Contact\n\nDarren Oberst & llmware team\n\nPlease reach out anytime if you are interested in more information about this project."
]
| [
35,
17,
368,
395,
5,
91,
43,
30
]
| [
"passage: TAGS\n#license-apache-2.0 #financial services #retrieval augmented generation #RAG #q&a instruct #region-us \n# Dataset Card for RAG-Instruct-Benchmark-Tester### Dataset Summary\n\nThis is an updated benchmarking test dataset for \"retrieval augmented generation\" (RAG) use cases in the enterprise, especially for financial services, and legal. This test dataset includes 200 questions with context passages pulled from common 'retrieval scenarios', e.g., financial news, earnings releases,\ncontracts, invoices, technical articles, general news and short texts. \n\nThe questions are segmented into several categories for benchmarking evaluation: \n\n-- Core Q&A Evaluation (Samples 0-99) - 100 samples - fact-based 'core' questions- used to assign a score between 0-100 based on correct responses. \n\n-- Not Found Classification (Samples 100-119) - 20 samples - in each sample, the context passage does not contain a direct answer to the question, and the objective is to evaluate whether the model correctly identifies as \"Not Found\" or attempts to answer using information in the context. \n\n-- Boolean - Yes/No (Samples 120-139) - 20 samples - each sample is a Yes/No question. \n\n-- Basic Math (Samples 140-159) - 20 samples - these are \"every day\" math questions - basic increments, decrements, percentages, multiplications, sorting, and ranking with amounts and times. \n\n-- Complex Q&A (Samples 160-179) - 20 samples - tests several distinct 'complex q&a' skills - multiple-choice, financial table reading, multi-part extractions, causal, and logical selections. \n\n-- Summary (Sample 180-199) - 20 samples - tests long-form and short-form summarization."
]
|
8319bd5552523e758e1d69ac744184da68d724be | # Dataset Card for "dataset-farma-test3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | marziye-A/dataset-farma-test3 | [
"region:us"
]
| 2023-11-01T09:51:51+00:00 | {"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 74308913.54, "num_examples": 2005}], "download_size": 72537312, "dataset_size": 74308913.54}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-01T10:15:26+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "dataset-farma-test3"
More Information needed | [
"# Dataset Card for \"dataset-farma-test3\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"dataset-farma-test3\"\n\nMore Information needed"
]
| [
6,
18
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"dataset-farma-test3\"\n\nMore Information needed"
]
|
e29456386d8e2135d818289a5df8748463961a2a | # Dataset Card for "iban_speech_corpus"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Additional Information](#additional-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** The original dataset is found on [Sarah Juan's github link](https://github.com/sarahjuan/iban)
- **Paper:** "Using Resources from a closely-Related language to develop ASR for a very under-resourced Language: A case study for Iban"
### Dataset Summary
This Iban speech corpus is used for training an Automatic Speech Recognition (ASR) model. The dataset contains the audio files (wav files) with their corresponding transcriptions.
For other resources such as the pronunciation dictionary and the Iban language model, please refer to the original dataset repository [here](https://github.com/sarahjuan/iban).
### How to use
The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the `load_dataset` function.
```python
from datasets import load_dataset
dataset = load_dataset("meisin123/iban_speech_corpus", split="train")
```
## Dataset Structure
### Data Instances
```
{'audio': {'path': 'ibf_001_001.wav',
'array': array([ 5.72814941e-01, 5.49011230e-01, -1.82495117e-02, ...,
-2.31628418e-02, -1.26342773e-02, -3.05175781e-05]),
'sampling_rate': 16000},
'transcription': 'pukul sepuluh malam'}
```
### Data Fields
- audio: A dictionary containing the audio filename, the decoded audio array, and the sampling rate.
- transcription: the transcription of the audio file.
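For illustration, here is a minimal preprocessing sketch built only on the `audio` and `transcription` fields above; the explicit 16 kHz cast simply pins the sampling rate the corpus already uses:

```python
from datasets import load_dataset, Audio

# Make the 16 kHz sampling rate explicit and walk a few samples.
dataset = load_dataset("meisin123/iban_speech_corpus", split="train")
dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))

for sample in dataset.select(range(3)):
    waveform = sample["audio"]["array"]      # decoded float waveform
    rate = sample["audio"]["sampling_rate"]  # 16000
    text = sample["transcription"]
    print(f"{len(waveform) / rate:.2f}s: {text}")
```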
## Dataset Creation
- Iban Data collected by Sarah Samson Juan and Laurent Besacier
### Source Data
The audio files are news data provided by a local radio station in Sarawak, Malaysia.
## Additional Information
### Citation Information
Details on the corpora and the experiments on Iban ASR can be found in the following list of publications. The original authors would appreciate it if you cite them if you intend to publish.
```
@inproceedings{Juan14,
Author = {Sarah Samson Juan and Laurent Besacier and Solange Rossato},
Booktitle = {Proceedings of Workshop for Spoken Language Technology for Under-resourced (SLTU)},
Month = {May},
Title = {Semi-supervised G2P bootstrapping and its application to ASR for a very under-resourced language: Iban},
Year = {2014}}
@inproceedings{Juan2015,
Title = {Using Resources from a closely-Related language to develop ASR for a very under-resourced Language: A case study for Iban},
Author = {Sarah Samson Juan and Laurent Besacier and Benjamin Lecouteux and Mohamed Dyab},
Booktitle = {Proceedings of INTERSPEECH},
Year = {2015},
Address = {Dresden, Germany},
Month = {September}}
```
### Contributions
Thanks to [meisin](https://github.com/meisin) for adding this dataset.
| meisin123/iban_speech_corpus | [
"region:us"
]
| 2023-11-01T10:12:03+00:00 | {"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "transcription", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1014986154.58, "num_examples": 3132}], "download_size": 981436514, "dataset_size": 1014986154.58}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-02T04:39:07+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "iban_speech_corpus"
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- How to use
- Dataset Structure
- Data Instances
- Data Fields
- Dataset Creation
- Source Data
- Additional Information
- Citation Information
- Contributions
## Dataset Description
- Repository: The original dataset is found on Sarah Juan's github link
- Paper: "Using Resources from a closely-Related language to develop ASR for a very under-resourced Language: A case study for Iban"
### Dataset Summary
This Iban speech corpus is used for training of a Automatic Speech Recognition (ASR) model. This dataset contains the audio files (wav files) with its corresponding transcription.
For other resources such as pronunciation dictionary and Iban language model, please refer to the original dataset respository here.
### How to use
The 'datasets' library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the 'load_dataset' function.
## Dataset Structure
### Data Instances
### Data Fields
- audio: A dictionary containing the audio filename, the decoded audio array, and the sampling rate.
- transcription: the transcription of the audio file.
## Dataset Creation
- Iban Data collected by Sarah Samson Juan and Laurent Besacier
### Source Data
The audio files are news data provided by a local radio station in Sarawak, Malaysia.
## Additional Information
Details on the corpora and the experiments on iban ASR can be found in the following list of publication. The original authors appreciate if you cite them if you intend to publish.
### Contributions
Thanks to meisin for adding this dataset.
| [
"# Dataset Card for \"iban_speech_corpus\"",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - How to use\n- Dataset Structure\n - Data Instances\n - Data Fields\n- Dataset Creation\n - Source Data\n- Additional Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Repository: The original dataset is found on Sarah Juan's github link\n- Paper: \"Using Resources from a closely-Related language to develop ASR for a very under-resourced Language: A case study for Iban\"",
"### Dataset Summary\nThis Iban speech corpus is used for training of a Automatic Speech Recognition (ASR) model. This dataset contains the audio files (wav files) with its corresponding transcription.\n\nFor other resources such as pronunciation dictionary and Iban language model, please refer to the original dataset respository here.",
"### How to use\nThe 'datasets' library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the 'load_dataset' function.",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n- audio: A dictionary containing the audio filename, the decoded audio array, and the sampling rate.\n\n- transcription: the transcription of the audio file.",
"## Dataset Creation\n- Iban Data collected by Sarah Samson Juan and Laurent Besacier",
"### Source Data\nThe audio files are news data provided by a local radio station in Sarawak, Malaysia.",
"## Additional Information\n\n\nDetails on the corpora and the experiments on iban ASR can be found in the following list of publication. The original authors appreciate if you cite them if you intend to publish.",
"### Contributions\n\nThanks to meisin for adding this dataset."
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"iban_speech_corpus\"",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - How to use\n- Dataset Structure\n - Data Instances\n - Data Fields\n- Dataset Creation\n - Source Data\n- Additional Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Repository: The original dataset is found on Sarah Juan's github link\n- Paper: \"Using Resources from a closely-Related language to develop ASR for a very under-resourced Language: A case study for Iban\"",
"### Dataset Summary\nThis Iban speech corpus is used for training of a Automatic Speech Recognition (ASR) model. This dataset contains the audio files (wav files) with its corresponding transcription.\n\nFor other resources such as pronunciation dictionary and Iban language model, please refer to the original dataset respository here.",
"### How to use\nThe 'datasets' library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the 'load_dataset' function.",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n- audio: A dictionary containing the audio filename, the decoded audio array, and the sampling rate.\n\n- transcription: the transcription of the audio file.",
"## Dataset Creation\n- Iban Data collected by Sarah Samson Juan and Laurent Besacier",
"### Source Data\nThe audio files are news data provided by a local radio station in Sarawak, Malaysia.",
"## Additional Information\n\n\nDetails on the corpora and the experiments on iban ASR can be found in the following list of publication. The original authors appreciate if you cite them if you intend to publish.",
"### Contributions\n\nThanks to meisin for adding this dataset."
]
| [
6,
15,
59,
61,
77,
58,
6,
6,
46,
21,
21,
43,
15
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"iban_speech_corpus\"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - How to use\n- Dataset Structure\n - Data Instances\n - Data Fields\n- Dataset Creation\n - Source Data\n- Additional Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Repository: The original dataset is found on Sarah Juan's github link\n- Paper: \"Using Resources from a closely-Related language to develop ASR for a very under-resourced Language: A case study for Iban\"### Dataset Summary\nThis Iban speech corpus is used for training of a Automatic Speech Recognition (ASR) model. This dataset contains the audio files (wav files) with its corresponding transcription.\n\nFor other resources such as pronunciation dictionary and Iban language model, please refer to the original dataset respository here.### How to use\nThe 'datasets' library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the 'load_dataset' function.## Dataset Structure### Data Instances### Data Fields\n\n- audio: A dictionary containing the audio filename, the decoded audio array, and the sampling rate.\n\n- transcription: the transcription of the audio file.## Dataset Creation\n- Iban Data collected by Sarah Samson Juan and Laurent Besacier### Source Data\nThe audio files are news data provided by a local radio station in Sarawak, Malaysia.## Additional Information\n\n\nDetails on the corpora and the experiments on iban ASR can be found in the following list of publication. The original authors appreciate if you cite them if you intend to publish.### Contributions\n\nThanks to meisin for adding this dataset."
]
|