| column | type | min length | max length |
|---|---|---|---|
| sha | string | 40 | 40 |
| text | string | 1 | 13.4M |
| id | string | 2 | 117 |
| tags | list | 1 | 7.91k |
| created_at | string | 25 | 25 |
| metadata | string | 2 | 875k |
| last_modified | string | 25 | 25 |
| arxiv | list | 0 | 25 |
| languages | list | 0 | 7.91k |
| tags_str | string | 17 | 159k |
| text_str | string | 1 | 447k |
| text_lists | list | 0 | 352 |
| processed_texts | list | 1 | 353 |
| tokens_length | list | 1 | 353 |
| input_texts | list | 1 | 40 |
6576c50d887169c6a0e4c9d5b9580892fd2bda4f
# Dataset Card for "my_dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
316usman/my_dataset
[ "region:us" ]
2023-10-28T12:59:51+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 31, "num_examples": 1}], "download_size": 1349, "dataset_size": 31}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-28T12:59:53+00:00
[]
[]
TAGS #region-us
# Dataset Card for "my_dataset" More Information needed
[ "# Dataset Card for \"my_dataset\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"my_dataset\"\n\nMore Information needed" ]
[ 6, 14 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"my_dataset\"\n\nMore Information needed" ]
e5c834d171014b38be28010526fc888ebc500cf2
## About COPAL-ID COPAL-ID is an Indonesian causal commonsense reasoning dataset that captures local nuances. It provides a more natural portrayal of day-to-day causal reasoning within the Indonesian (especially Jakartan) cultural sphere. Professionally written and validated from scratch by natives, COPAL-ID is more fluent and free from awkward phrases, unlike the translated XCOPA-ID. COPAL-ID is a test set only, intended to be used as a benchmark. For more details, please see [our paper](https://arxiv.org/abs/2311.01012). ### Local Nuances Categories Our dataset consists of three subcategories: local-term, culture, and language reasoning. - Local-term captures common knowledge for Indonesians that is most likely unknown or uncommon for non-natives, e.g., local foods, public figures, abbreviations, and other local concepts. - Culture captures norms used in Indonesia. - Language captures reasoning about the language itself, for example, local idioms, figures of speech, as well as ambiguous words. Specifically, the distribution of COPAL-ID across these categories is: ### Colloquial vs Standard Indonesian In daily scenarios, almost no one in Indonesia uses purely formal Indonesian. Yet, many NLP datasets use formal Indonesian. This causes a domain mismatch with real-world settings. To accommodate this, COPAL-ID is written in two variations: Standard Indonesian and Colloquial Indonesian. If you use COPAL-ID to benchmark your model, we suggest testing on both variants. Generally, colloquial Indonesian is harder for models to handle. ## How to Use ```py from datasets import load_dataset copal_id_dataset = load_dataset('haryoaw/COPAL', 'id', split='test') copal_id_colloquial_dataset = load_dataset('haryoaw/COPAL', 'id', split='test_colloquial') ``` ## Data Collection and Human Performance COPAL-ID was created through a rigorous data collection pipeline. Each example is written and checked by natives accustomed to Jakartan culture. Lastly, we ran a human benchmark across native Jakartans, who achieved an average accuracy of ~95% on both the formal and colloquial Indonesian variants, indicating that this dataset is trivially easy for those familiar with the culture and local nuances of Indonesia, especially Jakarta. For more details, please see our paper. ## Limitation Indonesia is a vast country with more than 700 languages and a rich culture. Therefore, it is impossible to pinpoint a singular culture. Our dataset is specifically designed to capture Jakarta's (the capital's) local nuances. Expanding to different local nuances and languages across Indonesia is left for future work. ## Cite Our Work ``` @article{wibowo2023copal, title={COPAL-ID: Indonesian Language Reasoning with Local Culture and Nuances}, author={Wibowo, Haryo Akbarianto and Fuadi, Erland Hilman and Nityasya, Made Nindyatama and Prasojo, Radityo Eko and Aji, Alham Fikri}, journal={arXiv preprint arXiv:2311.01012}, year={2023} } ```
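To make the two-variant benchmarking suggested above concrete, here is a minimal evaluation sketch. The COPA-style column names (`choice1`, `choice2`, `label`) and the `predict` function are illustrative assumptions; check the actual CSV headers before relying on them.

```py
from datasets import load_dataset

def predict(example):
    # Placeholder: replace with your model's choice between choice1 (0) and choice2 (1).
    return 0

def evaluate(split):
    # Column names follow the COPA convention and are assumptions; verify against the CSV headers.
    ds = load_dataset("haryoaw/COPAL", "id", split=split)
    correct = sum(int(predict(ex) == ex["label"]) for ex in ds)
    return correct / len(ds)

for split in ("test", "test_colloquial"):
    print(split, "accuracy:", round(evaluate(split), 3))
```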
haryoaw/COPAL
[ "task_categories:multiple-choice", "size_categories:n<1K", "language:id", "license:cc-by-sa-4.0", "arxiv:2311.01012", "region:us" ]
2023-10-28T13:35:55+00:00
{"language": ["id"], "license": "cc-by-sa-4.0", "size_categories": ["n<1K"], "task_categories": ["multiple-choice"], "configs": [{"config_name": "id", "data_files": [{"split": "test", "path": "test_copal.csv"}, {"split": "test_colloquial", "path": "test_copal_colloquial.csv"}]}]}
2023-12-10T08:37:55+00:00
[ "2311.01012" ]
[ "id" ]
TAGS #task_categories-multiple-choice #size_categories-n<1K #language-Indonesian #license-cc-by-sa-4.0 #arxiv-2311.01012 #region-us
## About COPAL-ID COPAL-ID is an Indonesian causal commonsense reasoning dataset that captures local nuances. It provides a more natural portrayal of day-to-day causal reasoning within the Indonesian (especially Jakartan) cultural sphere. Professionally written and validatid from scratch by natives, COPAL-ID is more fluent and free from awkward phrases, unlike the translated XCOPA-ID. COPAL-ID is a test set only, intended to be used as a benchmark. For more details, please see our paper. ### Local Nuances Categories Our dataset consists of 3 subcategories: local-term, culture, and language reasoning. - Local-term captures common knowledge for Indonesians that is most likely unknown or uncommon for non-natives, e.g., local foods, public figures, abbreviations, and other local concepts. - Culture captures norms used in Indonesia. - Language captures the reasoning for the language itself, for example, local idioms, figures of speech, as well as ambiguous words. Specifically, the distribution of COPAL-ID across these categories is: ### Colloquial vs Standard Indonesian In daily scenarios, almost no one in Indonesia uses purely formal Indonesian. Yet, many NLP datasets use formal Indonesian. This surely causes a domain mismatch with real-case settings. To accommodate this, COPAL-ID is written in two variations: Standard Indonesian and Colloquial Indonesian. If you use COPAL-ID to benchmark your model, we suggest testing on both variants. Generally, colloquial Indonesian is harder for models to handle. ## How to Use ## Data Collection and Human Performance COPAL-ID was created through a rigorous data collection pipeline. Each example is written and checked by natives accustomed to Jakartan culture. Lastly, we have run a human benchmark performance test across native Jakartans, in which they achieved an average accuracy of ~95% in both formal and colloquial Indonesian variants, noting that this dataset is trivially easy for those familiar with the culture and local nuances of Indonesia, especially in Jakarta. For more details, please see our paper. ## Limitation Indonesia is a vast country with over 700+ languages and rich in culture. Therefore, it is impossible to pinpoint a singular culture. Our dataset is specifically designed to capture Jakarta's (the capital) local nuances. Expanding to different local nuances and languages across Indonesia is a future work. ## Cite Our Work
[ "## About COPAL-ID\n\nCOPAL-ID is an Indonesian causal commonsense reasoning dataset that captures local nuances. It provides a more natural portrayal of day-to-day causal reasoning within the Indonesian (especially Jakartan) cultural sphere. Professionally written and validatid from scratch by natives, COPAL-ID is more fluent and free from awkward phrases, unlike the translated XCOPA-ID.\n\nCOPAL-ID is a test set only, intended to be used as a benchmark.\nFor more details, please see our paper.", "### Local Nuances Categories\nOur dataset consists of 3 subcategories: local-term, culture, and language reasoning.\n - Local-term captures common knowledge for Indonesians that is most likely unknown or uncommon for non-natives, e.g., local foods, public figures, abbreviations, and other local concepts.\n - Culture captures norms used in Indonesia.\n - Language captures the reasoning for the language itself, for example, local idioms, figures of speech, as well as ambiguous words.\nSpecifically, the distribution of COPAL-ID across these categories is:", "### Colloquial vs Standard Indonesian\nIn daily scenarios, almost no one in Indonesia uses purely formal Indonesian. Yet, many NLP datasets use formal Indonesian. This surely causes a domain mismatch with real-case settings. To accommodate this, COPAL-ID is written in two variations: Standard Indonesian and Colloquial Indonesian. If you use COPAL-ID to benchmark your model, we suggest testing on both variants. Generally, colloquial Indonesian is harder for models to handle.", "## How to Use", "## Data Collection and Human Performance\n\nCOPAL-ID was created through a rigorous data collection pipeline. Each example is written and checked by natives accustomed to Jakartan culture. Lastly, we have run a human benchmark performance test across native Jakartans, in which they achieved an average accuracy of ~95% in both formal and colloquial Indonesian variants, noting that this dataset is trivially easy for those familiar with the culture and local nuances of Indonesia, especially in Jakarta.\n\nFor more details, please see our paper.", "## Limitation\n\nIndonesia is a vast country with over 700+ languages and rich in culture. Therefore, it is impossible to pinpoint a singular culture. Our dataset is specifically designed to capture Jakarta's (the capital) local nuances. Expanding to different local nuances and languages across Indonesia is a future work.", "## Cite Our Work" ]
[ "TAGS\n#task_categories-multiple-choice #size_categories-n<1K #language-Indonesian #license-cc-by-sa-4.0 #arxiv-2311.01012 #region-us \n", "## About COPAL-ID\n\nCOPAL-ID is an Indonesian causal commonsense reasoning dataset that captures local nuances. It provides a more natural portrayal of day-to-day causal reasoning within the Indonesian (especially Jakartan) cultural sphere. Professionally written and validatid from scratch by natives, COPAL-ID is more fluent and free from awkward phrases, unlike the translated XCOPA-ID.\n\nCOPAL-ID is a test set only, intended to be used as a benchmark.\nFor more details, please see our paper.", "### Local Nuances Categories\nOur dataset consists of 3 subcategories: local-term, culture, and language reasoning.\n - Local-term captures common knowledge for Indonesians that is most likely unknown or uncommon for non-natives, e.g., local foods, public figures, abbreviations, and other local concepts.\n - Culture captures norms used in Indonesia.\n - Language captures the reasoning for the language itself, for example, local idioms, figures of speech, as well as ambiguous words.\nSpecifically, the distribution of COPAL-ID across these categories is:", "### Colloquial vs Standard Indonesian\nIn daily scenarios, almost no one in Indonesia uses purely formal Indonesian. Yet, many NLP datasets use formal Indonesian. This surely causes a domain mismatch with real-case settings. To accommodate this, COPAL-ID is written in two variations: Standard Indonesian and Colloquial Indonesian. If you use COPAL-ID to benchmark your model, we suggest testing on both variants. Generally, colloquial Indonesian is harder for models to handle.", "## How to Use", "## Data Collection and Human Performance\n\nCOPAL-ID was created through a rigorous data collection pipeline. Each example is written and checked by natives accustomed to Jakartan culture. Lastly, we have run a human benchmark performance test across native Jakartans, in which they achieved an average accuracy of ~95% in both formal and colloquial Indonesian variants, noting that this dataset is trivially easy for those familiar with the culture and local nuances of Indonesia, especially in Jakarta.\n\nFor more details, please see our paper.", "## Limitation\n\nIndonesia is a vast country with over 700+ languages and rich in culture. Therefore, it is impossible to pinpoint a singular culture. Our dataset is specifically designed to capture Jakarta's (the capital) local nuances. Expanding to different local nuances and languages across Indonesia is a future work.", "## Cite Our Work" ]
[ 53, 130, 138, 116, 4, 117, 68, 5 ]
[ "passage: TAGS\n#task_categories-multiple-choice #size_categories-n<1K #language-Indonesian #license-cc-by-sa-4.0 #arxiv-2311.01012 #region-us \n## About COPAL-ID\n\nCOPAL-ID is an Indonesian causal commonsense reasoning dataset that captures local nuances. It provides a more natural portrayal of day-to-day causal reasoning within the Indonesian (especially Jakartan) cultural sphere. Professionally written and validatid from scratch by natives, COPAL-ID is more fluent and free from awkward phrases, unlike the translated XCOPA-ID.\n\nCOPAL-ID is a test set only, intended to be used as a benchmark.\nFor more details, please see our paper.### Local Nuances Categories\nOur dataset consists of 3 subcategories: local-term, culture, and language reasoning.\n - Local-term captures common knowledge for Indonesians that is most likely unknown or uncommon for non-natives, e.g., local foods, public figures, abbreviations, and other local concepts.\n - Culture captures norms used in Indonesia.\n - Language captures the reasoning for the language itself, for example, local idioms, figures of speech, as well as ambiguous words.\nSpecifically, the distribution of COPAL-ID across these categories is:### Colloquial vs Standard Indonesian\nIn daily scenarios, almost no one in Indonesia uses purely formal Indonesian. Yet, many NLP datasets use formal Indonesian. This surely causes a domain mismatch with real-case settings. To accommodate this, COPAL-ID is written in two variations: Standard Indonesian and Colloquial Indonesian. If you use COPAL-ID to benchmark your model, we suggest testing on both variants. Generally, colloquial Indonesian is harder for models to handle.## How to Use" ]
da134ff5f07f50f8295b676cd50fb78647eb68bd
# Dataset Card for "soict_train_dataset_all" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
thanhduycao/soict_train_dataset_all
[ "region:us" ]
2023-10-28T14:18:36+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "audio", "struct": [{"name": "array", "sequence": "float64"}, {"name": "path", "dtype": "string"}, {"name": "sampling_rate", "dtype": "int64"}]}, {"name": "sentence_norm", "dtype": "string"}, {"name": "wer", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 4845993689.0, "num_examples": 9498}, {"name": "test", "num_bytes": 491716, "num_examples": 1}], "download_size": 1887544391, "dataset_size": 4846485405.0}}
2023-10-28T14:20:27+00:00
[]
[]
TAGS #region-us
# Dataset Card for "soict_train_dataset_all" More Information needed
[ "# Dataset Card for \"soict_train_dataset_all\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"soict_train_dataset_all\"\n\nMore Information needed" ]
[ 6, 21 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"soict_train_dataset_all\"\n\nMore Information needed" ]
4c850c0e3becaac5f95b9a2fbad408cb653ceeae
# Dataset Card for "ds_rplanpy_floorplan_to_color" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ekuhn/ds_rplanpy_floorplan_to_color
[ "region:us" ]
2023-10-28T14:29:01+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "num_rooms", "dtype": "int64"}, {"name": "img", "struct": [{"name": "bytes", "dtype": "binary"}, {"name": "path", "dtype": "null"}]}], "splits": [{"name": "train", "num_bytes": 43577557, "num_examples": 36850}, {"name": "val", "num_bytes": 10892608, "num_examples": 9213}], "download_size": 28743292, "dataset_size": 54470165}}
2023-10-28T14:29:16+00:00
[]
[]
TAGS #region-us
# Dataset Card for "ds_rplanpy_floorplan_to_color" More Information needed
[ "# Dataset Card for \"ds_rplanpy_floorplan_to_color\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"ds_rplanpy_floorplan_to_color\"\n\nMore Information needed" ]
[ 6, 23 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"ds_rplanpy_floorplan_to_color\"\n\nMore Information needed" ]
26466e25f602e37e2245fa1c8ee6b9fe9ea316b0
# Dataset Card for "test_es" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
gianma/test_es
[ "region:us" ]
2023-10-28T14:37:33+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "prompt_idx", "dtype": "int64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 4333219, "num_examples": 234}, {"name": "test", "num_bytes": 491267, "num_examples": 27}], "download_size": 2261791, "dataset_size": 4824486}}
2023-10-28T14:37:38+00:00
[]
[]
TAGS #region-us
# Dataset Card for "test_es" More Information needed
[ "# Dataset Card for \"test_es\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"test_es\"\n\nMore Information needed" ]
[ 6, 13 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"test_es\"\n\nMore Information needed" ]
7baa19484718e0103f1d4e1ccf3033ea53c2118a
# Dataset Card for "Instruct-Recharts-v2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Brandoko/Instruct-Recharts-v2
[ "region:us" ]
2023-10-28T14:44:20+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1453192, "num_examples": 623}], "download_size": 409363, "dataset_size": 1453192}}
2023-10-28T14:44:21+00:00
[]
[]
TAGS #region-us
# Dataset Card for "Instruct-Recharts-v2" More Information needed
[ "# Dataset Card for \"Instruct-Recharts-v2\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"Instruct-Recharts-v2\"\n\nMore Information needed" ]
[ 6, 19 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"Instruct-Recharts-v2\"\n\nMore Information needed" ]
f43cd2ee0bdea6f82512b135b663eb2aca94d2ba
# Hi-ToM Dataset This is the dataset for the paper "Hi-ToM: A Benchmark for Evaluating Higher-Order Theory of Mind Reasoning in Large Language Models". <img src=media/Picture1.png height=430> ### The `Hi-ToM_data` folder Contains ToMh data consisting of story-question pairs and the corresponding answers. The names of subfolder branches have the following meanings: - `Tell` / `No_Tell`: whether or not the stories contain communications among agents. - `MC` / `CoT`: the prompting style. `MC` corresponds to Vanilla Prompting (VP) in the paper, while `CoT` stands for Chain-of-Thought Prompting (CoTP). - `length_n`: the story length, i.e. the number of chapters in a story. From 1 to 3. - `sample_n`: the numbering of different sample stories. - `order_n`: the ToM order of the question. From 0 to 4. ### The `Hi-ToM_prompt` folder Contains prompt files that can be directly input to API. The data in it are almost the same as `Hi-ToM_data`, except that answers are eliminated. ### Generate new data and prompts Run the script `generate_tomh.sh`.
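Since the repository ships raw folders rather than a configured `datasets` loader, one way to browse it is to download a snapshot and walk the directory tree described above. This is a minimal sketch; the assumption that the prompt files are plain text readable as UTF-8 should be checked against the actual snapshot.

```py
import os
from huggingface_hub import snapshot_download

# Download a local copy of the dataset repository.
local_dir = snapshot_download(repo_id="Hi-ToM/Hi-ToM_Dataset", repo_type="dataset")

# Walk the Hi-ToM_prompt tree (Tell/No_Tell -> MC/CoT -> length_n -> sample_n ...).
prompt_root = os.path.join(local_dir, "Hi-ToM_prompt")
for dirpath, _, filenames in os.walk(prompt_root):
    for name in sorted(filenames):
        path = os.path.join(dirpath, name)
        with open(path, encoding="utf-8") as f:  # assumes plain-text prompt files
            prompt = f.read()
        print(path, "->", len(prompt), "characters")
```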
Hi-ToM/Hi-ToM_Dataset
[ "region:us" ]
2023-10-28T14:48:30+00:00
{}
2023-10-29T04:32:30+00:00
[]
[]
TAGS #region-us
# Hi-ToM Dataset This is the dataset for the paper "Hi-ToM: A Benchmark for Evaluating Higher-Order Theory of Mind Reasoning in Large Language Models". <img src=media/URL height=430> ### The 'Hi-ToM_data' folder Contains ToMh data consisting of story-question pairs and the corresponding answers. The names of subfolder branches have the following meanings: - 'Tell' / 'No_Tell': whether or not the stories contain communications among agents. - 'MC' / 'CoT': the prompting style. 'MC' corresponds to Vanilla Prompting (VP) in the paper, while 'CoT' stands for Chain-of-Thought Prompting (CoTP). - 'length_n': the story length, i.e. the number of chapters in a story. From 1 to 3. - 'sample_n': the numbering of different sample stories. - 'order_n': the ToM order of the question. From 0 to 4. ### The 'Hi-ToM_prompt' folder Contains prompt files that can be directly input to API. The data in it are almost the same as 'Hi-ToM_data', except that answers are eliminated. ### Generate new data and prompts Run the script 'generate_tomh.sh'.
[ "# Hi-ToM Dataset\n\nThis is the dataset for the paper \"Hi-ToM: A Benchmark for Evaluating Higher-Order Theory of Mind Reasoning in Large Language Models\".\n\n<img src=media/URL height=430>", "### The 'Hi-ToM_data' folder\n\nContains ToMh data consisting of story-question pairs and the corresponding answers.\nThe names of subfolder branches have the following meanings:\n\n- 'Tell' / 'No_Tell': whether or not the stories contain communications among agents.\n- 'MC' / 'CoT': the prompting style. 'MC' corresponds to Vanilla Prompting (VP) in the paper, while 'CoT' stands for Chain-of-Thought Prompting (CoTP).\n- 'length_n': the story length, i.e. the number of chapters in a story. From 1 to 3.\n- 'sample_n': the numbering of different sample stories.\n- 'order_n': the ToM order of the question. From 0 to 4.", "### The 'Hi-ToM_prompt' folder\n\nContains prompt files that can be directly input to API.\nThe data in it are almost the same as 'Hi-ToM_data', except that answers are eliminated.", "### Generate new data and prompts\n\nRun the script 'generate_tomh.sh'." ]
[ "TAGS\n#region-us \n", "# Hi-ToM Dataset\n\nThis is the dataset for the paper \"Hi-ToM: A Benchmark for Evaluating Higher-Order Theory of Mind Reasoning in Large Language Models\".\n\n<img src=media/URL height=430>", "### The 'Hi-ToM_data' folder\n\nContains ToMh data consisting of story-question pairs and the corresponding answers.\nThe names of subfolder branches have the following meanings:\n\n- 'Tell' / 'No_Tell': whether or not the stories contain communications among agents.\n- 'MC' / 'CoT': the prompting style. 'MC' corresponds to Vanilla Prompting (VP) in the paper, while 'CoT' stands for Chain-of-Thought Prompting (CoTP).\n- 'length_n': the story length, i.e. the number of chapters in a story. From 1 to 3.\n- 'sample_n': the numbering of different sample stories.\n- 'order_n': the ToM order of the question. From 0 to 4.", "### The 'Hi-ToM_prompt' folder\n\nContains prompt files that can be directly input to API.\nThe data in it are almost the same as 'Hi-ToM_data', except that answers are eliminated.", "### Generate new data and prompts\n\nRun the script 'generate_tomh.sh'." ]
[ 6, 58, 192, 52, 22 ]
[ "passage: TAGS\n#region-us \n# Hi-ToM Dataset\n\nThis is the dataset for the paper \"Hi-ToM: A Benchmark for Evaluating Higher-Order Theory of Mind Reasoning in Large Language Models\".\n\n<img src=media/URL height=430>### The 'Hi-ToM_data' folder\n\nContains ToMh data consisting of story-question pairs and the corresponding answers.\nThe names of subfolder branches have the following meanings:\n\n- 'Tell' / 'No_Tell': whether or not the stories contain communications among agents.\n- 'MC' / 'CoT': the prompting style. 'MC' corresponds to Vanilla Prompting (VP) in the paper, while 'CoT' stands for Chain-of-Thought Prompting (CoTP).\n- 'length_n': the story length, i.e. the number of chapters in a story. From 1 to 3.\n- 'sample_n': the numbering of different sample stories.\n- 'order_n': the ToM order of the question. From 0 to 4.### The 'Hi-ToM_prompt' folder\n\nContains prompt files that can be directly input to API.\nThe data in it are almost the same as 'Hi-ToM_data', except that answers are eliminated.### Generate new data and prompts\n\nRun the script 'generate_tomh.sh'." ]
f3ae5d8ac3aadf975b25439b7398543da945c4da
# Dataset Card for "go_emotions" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
akkasi/go_emotions
[ "region:us" ]
2023-10-28T15:02:44+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "labels", "sequence": "float64"}, {"name": "label2idx", "dtype": "string"}, {"name": "idx2label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 210169067, "num_examples": 168980}, {"name": "test", "num_bytes": 52552436, "num_examples": 42245}], "download_size": 13348134, "dataset_size": 262721503}}
2023-10-28T15:02:47+00:00
[]
[]
TAGS #region-us
# Dataset Card for "go_emotions" More Information needed
[ "# Dataset Card for \"go_emotions\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"go_emotions\"\n\nMore Information needed" ]
[ 6, 14 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"go_emotions\"\n\nMore Information needed" ]
0f2efd1166e497cf2a5eb77b3091e8341425d6b5
# ChiMed-VL Dataset ## ChiMed-VL-Alignment dataset ## ChiMed-VL-Alignment consists of 580,014 image-text pairs, each falling into one of two categories: context information for an image or a description of an image. The context category contains 167M tokens, with a median text length of 435 (Q1: 211, Q3: 757). Descriptions, which are more concise and image-specific, consist of inline descriptions and captions; they comprise 63M tokens, with a median length of 59 (Q1: 45, Q3: 83). ## ChiMed-VL-Instruction dataset ## ChiMed-VL-Instruction comprises 469,441 question-answer pairs. Within this subset, the questions contain 10M tokens with a median length of 20 (Q1: 16, Q3: 25), posing concise inquiries reflective of medical queries. The answers contain 13M tokens with a slightly longer median length of 22 (Q1: 12, Q3: 34), providing clear, direct, and informative responses.
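The length statistics quoted above (token totals plus median with Q1/Q3) can be reproduced for any text collection along the following lines; the whitespace tokenizer and the placeholder `texts` list are illustrative assumptions, not the tokenizer used by the dataset authors.

```py
import numpy as np

def length_stats(texts):
    # Token counts per text; whitespace splitting is a stand-in for a real tokenizer.
    lengths = [len(t.split()) for t in texts]
    q1, median, q3 = np.percentile(lengths, [25, 50, 75])
    return {"total_tokens": int(sum(lengths)), "Q1": float(q1), "median": float(median), "Q3": float(q3)}

texts = ["example context passage ...", "another image description ..."]  # placeholder corpus
print(length_stats(texts))
```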
williamliu/ChiMed-VL
[ "region:us" ]
2023-10-28T15:37:05+00:00
{}
2023-12-01T12:37:17+00:00
[]
[]
TAGS #region-us
# ChiMed-VL Dataset ## ChiMed-VL-Alignment dataset ## ChiMed-VL-Alignment consists of 580,014 image-text couplings, each pair falling into one of two categories: context information of an image or descriptions of an image. The context category contains 167M tokens, presenting a median text length of 435 (Q1: 211, Q3: 757). Conversely, descriptions, more concise and image-specific, contain inline descriptions and captions. They comprise 63M tokens, with median lengths settling at 59 (Q1: 45, Q3: 83). ## ChiMed-VL-Instruction dataset ## ChiMed-VL-Instruction comprises 469,441 question-answer pairs. Within this subset, the questions section contains 10M tokens with a median length of 20 (Q1: 16, Q3: 25), posing a concise inquiry reflective of medical queries. The answers consist of 13M tokens with a median length slightly longer at 22 (Q1: 12, Q3: 34), providing clear, direct, and informative responses.
[ "# ChiMed-VL Dataset", "## ChiMed-VL-Alignment dataset ##\n\nChiMed-VL-Alignment consists of 580,014 image-text couplings, each pair falling into one of two categories: context information of an image or descriptions of an image. The context category contains 167M tokens, presenting a median text length of 435 (Q1: 211, Q3: 757). Conversely, descriptions, more concise and image-specific, contain inline descriptions and captions. They comprise 63M tokens, with median lengths settling at 59 (Q1: 45, Q3: 83).", "## ChiMed-VL-Instruction dataset ##\n\nChiMed-VL-Instruction comprises 469,441 question-answer pairs. Within this subset, the questions section contains 10M tokens with a median length of 20 (Q1: 16, Q3: 25), posing a concise inquiry reflective of medical queries. The answers consist of 13M tokens with a median length slightly longer at 22 (Q1: 12, Q3: 34), providing clear, direct, and informative responses." ]
[ "TAGS\n#region-us \n", "# ChiMed-VL Dataset", "## ChiMed-VL-Alignment dataset ##\n\nChiMed-VL-Alignment consists of 580,014 image-text couplings, each pair falling into one of two categories: context information of an image or descriptions of an image. The context category contains 167M tokens, presenting a median text length of 435 (Q1: 211, Q3: 757). Conversely, descriptions, more concise and image-specific, contain inline descriptions and captions. They comprise 63M tokens, with median lengths settling at 59 (Q1: 45, Q3: 83).", "## ChiMed-VL-Instruction dataset ##\n\nChiMed-VL-Instruction comprises 469,441 question-answer pairs. Within this subset, the questions section contains 10M tokens with a median length of 20 (Q1: 16, Q3: 25), posing a concise inquiry reflective of medical queries. The answers consist of 13M tokens with a median length slightly longer at 22 (Q1: 12, Q3: 34), providing clear, direct, and informative responses." ]
[ 6, 7, 140, 119 ]
[ "passage: TAGS\n#region-us \n# ChiMed-VL Dataset## ChiMed-VL-Alignment dataset ##\n\nChiMed-VL-Alignment consists of 580,014 image-text couplings, each pair falling into one of two categories: context information of an image or descriptions of an image. The context category contains 167M tokens, presenting a median text length of 435 (Q1: 211, Q3: 757). Conversely, descriptions, more concise and image-specific, contain inline descriptions and captions. They comprise 63M tokens, with median lengths settling at 59 (Q1: 45, Q3: 83).## ChiMed-VL-Instruction dataset ##\n\nChiMed-VL-Instruction comprises 469,441 question-answer pairs. Within this subset, the questions section contains 10M tokens with a median length of 20 (Q1: 16, Q3: 25), posing a concise inquiry reflective of medical queries. The answers consist of 13M tokens with a median length slightly longer at 22 (Q1: 12, Q3: 34), providing clear, direct, and informative responses." ]
90a2b53a7d25362227d09b35309a0fe972107221
# Dataset Card for "hebrew-holy-DS-BenIishHay" ## Overview This dataset was created from [Hebrew Wikisource (ויקיטקסט)](https://he.wikisource.org/wiki/%D7%A2%D7%9E%D7%95%D7%93_%D7%A8%D7%90%D7%A9%D7%99). It contains all the halachot from the book 'Ben Ish Hai', in Hebrew. ## Dataset Structure The dataset is structured with the following columns: - **Year:** Marker indicating the year of the text, representing a part of the book. - **Parasha:** Marker indicating the parasha of the text, resembling an episode in the book. - **Number:** Marker indicating the number of the text, often representing part of an episode, typically consisting of 2 paragraphs. - **Text:** The actual text content from the book. Please use it *only* to sanctify the name of G-d in the world! Thanks
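A minimal loading sketch, assuming the default config declared in the repository metadata loads as configured:

```py
from datasets import load_dataset

# Single train split with columns: year, parasha, number, text.
ds = load_dataset("AvishayDev/hebrew-BenIshHai", split="train")

example = ds[0]
print(example["year"], example["parasha"], example["number"])
print(example["text"][:200])
```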
AvishayDev/hebrew-BenIshHai
[ "task_categories:question-answering", "task_categories:text-generation", "language:he", "region:us" ]
2023-10-28T16:19:27+00:00
{"language": ["he"], "task_categories": ["question-answering", "text-generation"], "dataset_info": {"features": [{"name": "year", "dtype": "string"}, {"name": "parasha", "dtype": "string"}, {"name": "number", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2035374, "num_examples": 1992}], "download_size": 948349, "dataset_size": 2035374}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-13T20:18:31+00:00
[]
[ "he" ]
TAGS #task_categories-question-answering #task_categories-text-generation #language-Hebrew #region-us
# Dataset Card for "hebrew-holy-DS-BenIishHay" ## Overview This dataset was created with ויקיטקסט. It contain all the Halaha's from the book 'Ben Ish Hai' in hebrew. ## Dataset Structure The dataset is structured with the following columns: - Year: Sign indicating the year of the text, representing a part of the book. - Parasha: Sign marking the parasha of the text, resembling an episode in the book. - Number: Sign marking the number of the text, often representing a part of an episode, typically consisting of 2 paragraphs. - Text: The actual text content from the book. Plase use it *only* to sanctify the name of G-d in the world! Thanks
[ "# Dataset Card for \"hebrew-holy-DS-BenIishHay\"", "## Overview\n\nThis dataset was created with ויקיטקסט.\nIt contain all the Halaha's from the book 'Ben Ish Hai' in hebrew.", "## Dataset Structure\n\nThe dataset is structured with the following columns:\n\n- Year: Sign indicating the year of the text, representing a part of the book.\n- Parasha: Sign marking the parasha of the text, resembling an episode in the book.\n- Number: Sign marking the number of the text, often representing a part of an episode, typically consisting of 2 paragraphs.\n- Text: The actual text content from the book.\n\n\nPlase use it *only* to sanctify the name of G-d in the world! Thanks" ]
[ "TAGS\n#task_categories-question-answering #task_categories-text-generation #language-Hebrew #region-us \n", "# Dataset Card for \"hebrew-holy-DS-BenIishHay\"", "## Overview\n\nThis dataset was created with ויקיטקסט.\nIt contain all the Halaha's from the book 'Ben Ish Hai' in hebrew.", "## Dataset Structure\n\nThe dataset is structured with the following columns:\n\n- Year: Sign indicating the year of the text, representing a part of the book.\n- Parasha: Sign marking the parasha of the text, resembling an episode in the book.\n- Number: Sign marking the number of the text, often representing a part of an episode, typically consisting of 2 paragraphs.\n- Text: The actual text content from the book.\n\n\nPlase use it *only* to sanctify the name of G-d in the world! Thanks" ]
[ 34, 19, 33, 124 ]
[ "passage: TAGS\n#task_categories-question-answering #task_categories-text-generation #language-Hebrew #region-us \n# Dataset Card for \"hebrew-holy-DS-BenIishHay\"## Overview\n\nThis dataset was created with ויקיטקסט.\nIt contain all the Halaha's from the book 'Ben Ish Hai' in hebrew.## Dataset Structure\n\nThe dataset is structured with the following columns:\n\n- Year: Sign indicating the year of the text, representing a part of the book.\n- Parasha: Sign marking the parasha of the text, resembling an episode in the book.\n- Number: Sign marking the number of the text, often representing a part of an episode, typically consisting of 2 paragraphs.\n- Text: The actual text content from the book.\n\n\nPlase use it *only* to sanctify the name of G-d in the world! Thanks" ]
4a113b7ebc414c78ac45fa8b672958c0e83cb7e5
# Dataset Card for "AurumnPegasus" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
AurumnPegasus/AurumnPegasus
[ "region:us" ]
2023-10-28T16:29:23+00:00
{"dataset_info": {"features": [{"name": "context", "sequence": "string"}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 132102296, "num_examples": 2649}], "download_size": 26192269, "dataset_size": 132102296}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-28T16:43:57+00:00
[]
[]
TAGS #region-us
# Dataset Card for "AurumnPegasus" More Information needed
[ "# Dataset Card for \"AurumnPegasus\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"AurumnPegasus\"\n\nMore Information needed" ]
[ 6, 16 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"AurumnPegasus\"\n\nMore Information needed" ]
d8201389969ebe960fb45c85c8e7d6f8758c036b
# Dataset Card for "eurlexsum_ita_cleaned_8192_86" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
gianma/eurlexsum_ita_cleaned_8192_86
[ "region:us" ]
2023-10-28T17:15:40+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "reference", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "tokenized_len_total", "dtype": "int64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 4297809, "num_examples": 233}, {"name": "validation", "num_bytes": 246276, "num_examples": 14}, {"name": "test", "num_bytes": 217013, "num_examples": 13}], "download_size": 2253956, "dataset_size": 4761098}}
2023-10-28T17:16:14+00:00
[]
[]
TAGS #region-us
# Dataset Card for "eurlexsum_ita_cleaned_8192_86" More Information needed
[ "# Dataset Card for \"eurlexsum_ita_cleaned_8192_86\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"eurlexsum_ita_cleaned_8192_86\"\n\nMore Information needed" ]
[ 6, 24 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"eurlexsum_ita_cleaned_8192_86\"\n\nMore Information needed" ]
0dadc0d35ca6c11d4ad4b1023a773296608a9814
# Dataset Card for "dutch_social" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
akkasi/dutch_social
[ "region:us" ]
2023-10-28T17:21:45+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "labels", "sequence": "float64"}, {"name": "label2idx", "dtype": "string"}, {"name": "idx2label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 196538058, "num_examples": 162805}, {"name": "test", "num_bytes": 65499632, "num_examples": 54268}], "download_size": 24975837, "dataset_size": 262037690}}
2023-10-28T17:21:48+00:00
[]
[]
TAGS #region-us
# Dataset Card for "dutch_social" More Information needed
[ "# Dataset Card for \"dutch_social\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"dutch_social\"\n\nMore Information needed" ]
[ 6, 14 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"dutch_social\"\n\nMore Information needed" ]
ebdc14beb1c46c121ebcbbe4b7ca816ac505b0d9
# Dataset Card for "EnglishNLPDataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
akkasi/EnglishNLPDataset
[ "region:us" ]
2023-10-28T17:27:25+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "labels", "sequence": "float64"}, {"name": "label2idx", "dtype": "string"}, {"name": "idx2label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 16432106, "num_examples": 80616}, {"name": "validation", "num_bytes": 2421791, "num_examples": 10000}, {"name": "test", "num_bytes": 2456653, "num_examples": 10000}], "download_size": 5458653, "dataset_size": 21310550}}
2023-10-28T17:27:28+00:00
[]
[]
TAGS #region-us
# Dataset Card for "EnglishNLPDataset" More Information needed
[ "# Dataset Card for \"EnglishNLPDataset\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"EnglishNLPDataset\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"EnglishNLPDataset\"\n\nMore Information needed" ]
d578933a6fe533cf9fec7375bbe8af24aeab97cd
# Dataset Card for "166c9db0" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
result-kand2-sdxl-wuerst-karlo/166c9db0
[ "region:us" ]
2023-10-28T17:38:14+00:00
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 185, "num_examples": 10}], "download_size": 1392, "dataset_size": 185}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-28T17:38:15+00:00
[]
[]
TAGS #region-us
# Dataset Card for "166c9db0" More Information needed
[ "# Dataset Card for \"166c9db0\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"166c9db0\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"166c9db0\"\n\nMore Information needed" ]
ffb585f6dfc433c456f71f1eedd682567d8da4d0
# Dataset Card for "new_nlu_tts3" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
quocanh34/new_nlu_tts3
[ "region:us" ]
2023-10-28T17:49:03+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "audio", "struct": [{"name": "array", "sequence": "float32"}, {"name": "path", "dtype": "string"}, {"name": "sampling_rate", "dtype": "int64"}]}, {"name": "pred_str", "dtype": "string"}, {"name": "pred_str_norm", "dtype": "string"}, {"name": "intent", "dtype": "string"}, {"name": "entities", "list": [{"name": "filler", "dtype": "string"}, {"name": "type", "dtype": "string"}]}, {"name": "file", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 568309188, "num_examples": 2139}], "download_size": 462242612, "dataset_size": 568309188}}
2023-10-28T17:50:34+00:00
[]
[]
TAGS #region-us
# Dataset Card for "new_nlu_tts3" More Information needed
[ "# Dataset Card for \"new_nlu_tts3\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"new_nlu_tts3\"\n\nMore Information needed" ]
[ 6, 18 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"new_nlu_tts3\"\n\nMore Information needed" ]
ae51db69d5f731d915725207c1591dfe8d0acfea
Formatted with a prompt template. Modified from this dataset https://huggingface.co/datasets/Nan-Do/reason_code-search-net-python
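The exact prompt template is not shown here; one way to see it is to stream a couple of rows and print the `prompt`/`answer` columns declared in the metadata. A minimal sketch:

```py
from datasets import load_dataset

# Stream a few rows to inspect how the prompt template was applied.
ds = load_dataset("imessam/Python_code_assistant_with_prompt", split="train", streaming=True)
for i, row in enumerate(ds):
    print("--- prompt ---")
    print(row["prompt"][:500])
    print("--- answer ---")
    print(row["answer"][:500])
    if i >= 1:
        break
```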
imessam/Python_code_assistant_with_prompt
[ "license:apache-2.0", "region:us" ]
2023-10-28T17:51:38+00:00
{"license": "apache-2.0", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 650929658, "num_examples": 429059}], "download_size": 111027031, "dataset_size": 650929658}}
2023-10-28T19:07:49+00:00
[]
[]
TAGS #license-apache-2.0 #region-us
Formatted with a prompt template. Modified from this dataset URL
[]
[ "TAGS\n#license-apache-2.0 #region-us \n" ]
[ 14 ]
[ "passage: TAGS\n#license-apache-2.0 #region-us \n" ]
db49e1e5655f7ec48587839f62138745a7e43de7
# Dataset Card for "top_12_com_validacao" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ricardosantoss/top_12_com_validacao
[ "region:us" ]
2023-10-28T18:04:53+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "Nota Clinica", "dtype": "string"}, {"name": "Rotulos_1", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 1059135, "num_examples": 1023}, {"name": "test", "num_bytes": 216746, "num_examples": 200}, {"name": "validation", "num_bytes": 224956, "num_examples": 200}], "download_size": 458849, "dataset_size": 1500837}}
2023-10-31T11:35:56+00:00
[]
[]
TAGS #region-us
# Dataset Card for "top_12_com_validacao" More Information needed
[ "# Dataset Card for \"top_12_com_validacao\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"top_12_com_validacao\"\n\nMore Information needed" ]
[ 6, 19 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"top_12_com_validacao\"\n\nMore Information needed" ]
c612474887a18b9895f78996dd108ffb0458e1fb
# Dataset Card for "ecthr_cases_new" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
akkasi/ecthr_cases
[ "region:us" ]
2023-10-28T18:17:26+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "labels", "sequence": "float64"}, {"name": "label2idx", "dtype": "string"}, {"name": "idx2label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 97454695, "num_examples": 9000}, {"name": "test", "num_bytes": 12748596, "num_examples": 1000}, {"name": "validation", "num_bytes": 11852434, "num_examples": 1000}], "download_size": 52374911, "dataset_size": 122055725}}
2023-10-28T18:17:30+00:00
[]
[]
TAGS #region-us
# Dataset Card for "ecthr_cases_new" More Information needed
[ "# Dataset Card for \"ecthr_cases_new\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"ecthr_cases_new\"\n\nMore Information needed" ]
[ 6, 17 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"ecthr_cases_new\"\n\nMore Information needed" ]
1fbaab93798a38414e3f62e83dfa86bdb5e223be
# Dataset Card for "daily_dialog_new" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
akkasi/daily_dialog
[ "region:us" ]
2023-10-28T18:25:19+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "labels", "sequence": "float64"}, {"name": "label2idx", "dtype": "string"}, {"name": "idx2label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8497779, "num_examples": 11118}, {"name": "validation", "num_bytes": 777616, "num_examples": 1000}, {"name": "test", "num_bytes": 765768, "num_examples": 1000}], "download_size": 3969298, "dataset_size": 10041163}}
2023-10-28T18:25:24+00:00
[]
[]
TAGS #region-us
# Dataset Card for "daily_dialog_new" More Information needed
[ "# Dataset Card for \"daily_dialog_new\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"daily_dialog_new\"\n\nMore Information needed" ]
[ 6, 16 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"daily_dialog_new\"\n\nMore Information needed" ]
6b1ab29fd2e6c9f7999ff7426830c28994a65a2e
Just a repost of the upstream CodeAlpaca 20k dataset, with empty ("") records elided.
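For reference, eliding empty-string records from the upstream could look roughly like the sketch below. The upstream repo id (`sahil2801/CodeAlpaca-20k`) and the reading that a blank value in any field counts as an empty record are assumptions, not a description of how this repost was actually produced.

```py
from datasets import load_dataset

# Hypothetical upstream id; substitute the actual source repository.
upstream = load_dataset("sahil2801/CodeAlpaca-20k", split="train")

# Drop rows where any field is an empty string "".
no_blanks = upstream.filter(lambda ex: all(str(v).strip() != "" for v in ex.values()))
print(len(upstream), "->", len(no_blanks), "records after eliding blanks")
```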
PsiPi/CodeAlpaca_20k_NoBlanks
[ "task_categories:text-generation", "size_categories:10K<n<100K", "license:cc-by-4.0", "code", "region:us" ]
2023-10-28T18:33:39+00:00
{"license": "cc-by-4.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-generation"], "tags": ["code"]}
2023-10-29T06:06:30+00:00
[]
[]
TAGS #task_categories-text-generation #size_categories-10K<n<100K #license-cc-by-4.0 #code #region-us
Just a repost of the upstream with "" records elided
[]
[ "TAGS\n#task_categories-text-generation #size_categories-10K<n<100K #license-cc-by-4.0 #code #region-us \n" ]
[ 40 ]
[ "passage: TAGS\n#task_categories-text-generation #size_categories-10K<n<100K #license-cc-by-4.0 #code #region-us \n" ]
39d2f91be1d86b285776e538660702c84cecade4
# Dataset Card for "Text-SQL-Ethereum_tokentale" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
tokentale1/Text-SQL-Ethereum_tokentale
[ "region:us" ]
2023-10-28T18:34:46+00:00
{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 404053473, "num_examples": 291757}], "download_size": 0, "dataset_size": 404053473}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-28T19:10:16+00:00
[]
[]
TAGS #region-us
# Dataset Card for "Text-SQL-Ethereum_tokentale" More Information needed
[ "# Dataset Card for \"Text-SQL-Ethereum_tokentale\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"Text-SQL-Ethereum_tokentale\"\n\nMore Information needed" ]
[ 6, 21 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"Text-SQL-Ethereum_tokentale\"\n\nMore Information needed" ]
4b6073ce586ac9223e9d74b473100c8eeab9c2f7
# Dataset Card for "xed_en_fi_new" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
akkasi/xed_en_fi
[ "region:us" ]
2023-10-28T18:40:22+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "labels", "sequence": "float64"}, {"name": "label2idx", "dtype": "string"}, {"name": "idx2label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5184988, "num_examples": 14022}, {"name": "test", "num_bytes": 1298121, "num_examples": 3506}], "download_size": 603616, "dataset_size": 6483109}}
2023-10-28T18:40:24+00:00
[]
[]
TAGS #region-us
# Dataset Card for "xed_en_fi_new" More Information needed
[ "# Dataset Card for \"xed_en_fi_new\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"xed_en_fi_new\"\n\nMore Information needed" ]
[ 6, 18 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"xed_en_fi_new\"\n\nMore Information needed" ]
ec3ba46ba2fff13ef7a97b2f73784a33e3fad260
# Dataset Card for "sem_eval_2018_new" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
akkasi/sem_eval_2018
[ "region:us" ]
2023-10-28T18:50:32+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "labels", "sequence": "float64"}, {"name": "label2idx", "dtype": "string"}, {"name": "idx2label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3419309, "num_examples": 6838}, {"name": "test", "num_bytes": 1628220, "num_examples": 3259}, {"name": "validation", "num_bytes": 442769, "num_examples": 886}], "download_size": 907175, "dataset_size": 5490298}}
2023-10-28T18:50:34+00:00
[]
[]
TAGS #region-us
# Dataset Card for "sem_eval_2018_new" More Information needed
[ "# Dataset Card for \"sem_eval_2018_new\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"sem_eval_2018_new\"\n\nMore Information needed" ]
[ 6, 18 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"sem_eval_2018_new\"\n\nMore Information needed" ]
5a5bd2b877e1a9bfd0799da6c7e35478fee3b9d8
## OpenOrca-Ko-v2 1. NIV // ~1,500 examples 2. FLAN // ~9,000 examples 3. T0 // ~6,000 examples 4. CoT // ~2,000 examples > Dataset composition - Manually corrected items (v2) 1. Fixed answers that were left in English (e.g., Nick -> 닉, Lucky -> 운이 좋음, ...) 2. Removed the KoCoT dataset. 3. Fixed some answers such as Yes, True, and False > Post-processing work ## Translation Using DeepL Pro API. Thanks. --- > Below is the original dataset card ## Table of Contents - [Dataset Summary](#dataset-summary) - [Dataset Attribution](#dataset-attribution) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Dataset Use](#dataset-use) - [Use Cases](#use-cases) - [Usage Caveats](#usage-caveats) - [Getting Started](#getting-started) <p><h1>🐋 The OpenOrca Dataset! 🐋</h1></p> ![OpenOrca Logo](https://huggingface.co/datasets/Open-Orca/OpenOrca/resolve/main/OpenOrcaLogo.png "OpenOrca Logo") <a name="dataset-announcement"></a> We are thrilled to announce the release of the OpenOrca dataset! This rich collection of augmented FLAN data aligns, as best as possible, with the distributions outlined in the [Orca paper](https://arxiv.org/abs/2306.02707). It has been instrumental in generating high-performing model checkpoints and serves as a valuable resource for all NLP researchers and developers! # Official Models ## OpenOrca-Platypus2-13B Our [latest release](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B), the first 13B model to score higher than LLaMA1-65B on the HuggingFace Leaderboard! Released in partnership with Platypus. ## LlongOrca 7B & 13B * Our [first 7B release](https://huggingface.co/Open-Orca/LlongOrca-7B-16k), trained on top of LLongMA2 to achieve 16,000 tokens context. #1 long context 7B model at release time, with >99% of the overall #1 model's performance. * [LlongOrca-13B-16k](https://huggingface.co/Open-Orca/LlongOrca-13B-16k), trained on top of LLongMA2. #1 long context 13B model at release time, with >97% of the overall #1 model's performance. ## OpenOrcaxOpenChat-Preview2-13B Our [second model](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B), highlighting that we've surpassed the performance reported in the Orca paper. Was #1 at release time, now surpassed by our own OpenOrca-Platypus2-13B. Released in partnership with OpenChat. ## OpenOrca-Preview1-13B [OpenOrca-Preview1-13B](https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B) This model was trained in less than a day, for <$200, with <10% of our data. At release, it beat the current state-of-the-art models on BigBench-Hard and AGIEval. Achieves ~60% of the improvements reported in the Orca paper. <a name="dataset-summary"></a> # Dataset Summary The OpenOrca dataset is a collection of augmented [FLAN Collection data](https://arxiv.org/abs/2301.13688). Currently ~1M GPT-4 completions, and ~3.2M GPT-3.5 completions. It is tabularized in alignment with the distributions presented in the ORCA paper and currently represents a partial completion of the full intended dataset, with ongoing generation to expand its scope. The data is primarily used for training and evaluation in the field of natural language processing.
<a name="dataset-attribution"></a> # Dataset Attribution We would like to give special recognition to the following contributors for their significant efforts and dedication: Teknium WingLian/Caseus Eric Hartford NanoBit Pankaj Winddude Rohan http://AlignmentLab.ai: Autometa Entropi AtlasUnified NeverendingToast NanoBit WingLian/Caseus Also of course, as always, TheBloke, for being the backbone of the whole community. Many thanks to NanoBit and Caseus, makers of [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl), for lending us their expertise on the platform that developed and trained manticore, minotaur, and many others! We are welcoming sponsors or collaborators to help us build these models to the scale they deserve. Please reach out via our socials: http://Alignmentlab.ai https://discord.gg/n9hXaBPWxx Want to visualize our full dataset? Check out our [Nomic Atlas Map](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2). [<img src="https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B/resolve/main/OpenOrca%20Nomic%20Atlas.png" alt="Atlas Nomic Dataset Map" width="400" height="400" />](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2) <a name="supported-tasks-and-leaderboards"></a> # Supported Tasks and Leaderboards This dataset supports a range of tasks including language modeling, text generation, and text augmentation. It has been instrumental in the generation of multiple high-performing model checkpoints which have exhibited exceptional performance in our unit testing. Further information on leaderboards will be updated as they become available. <a name="languages"></a> # Languages The language of the data is primarily English. <a name="dataset-structure"></a> # Dataset Structure <a name="data-instances"></a> ## Data Instances A data instance in this dataset represents entries from the FLAN collection which have been augmented by submitting the listed question to either GPT-4 or GPT-3.5. The response is then entered into the response field. <a name="data-fields"></a> ## Data Fields The fields are: 1) 'id', a unique numbered identifier which includes one of 'niv', 't0', 'cot', or 'flan' to represent which source FLAN Collection submix the 'question' is sourced from. 2) 'system_prompt', representing the System Prompt presented to the GPT-3.5 or GPT-4 API for the datapoint 3) 'question', representing a question entry as provided by the FLAN Collection 4) 'response', a response to that question received from a query to either GPT-3.5 or GPT-4. <a name="data-splits"></a> ## Data Splits The data is unsplit. <a name="dataset-creation"></a> # Dataset Creation <a name="curation-rationale"></a> ## Curation Rationale The dataset was created to provide a source of augmented text data for researchers and developers. The datapoints are intended primarily to provide an enhancement of the core FLAN Collection data which relies upon the detailed step by step reasoning capabilities of GPT-3.5 and GPT-4. This "reasoning trace" augmentation has demonstrated exceptional results, allowing a LLaMA-13B model trained with this data to rival or beat GPT-3.5 on broad sets of hard reasoning tasks which all models below 100B parameters had previously performed dramatically worse on. 
<a name="source-data"></a> ## Source Data The data is generated using techniques in alignment with the distributions outlined in the Orca paper, except as noted below: 1) There is not enough CoT data in the FLAN Collection to generate 150K zero-shot entries, as the paper purports to use. We suspect this portion was either undocumented or misrepresented. We have used the ~75K points available. 2) We used the pre-generated FLAN Collection datasets hosted on HuggingFace under conceptofmind, e.g. [conceptofmind/flan2021](https://huggingface.co/datasets/conceptofmind/flan2021_submix_original). These are referenced by the [official FLAN Collection repo](https://github.com/google-research/FLAN/tree/main/flan/v2) as the preferred data source. However, these are a subset of the full FLAN Collection data, and have less than the required entries for the flan2021 and t0 submixes, by ~1.25M and 200k respectively. Combined, this gave us ~1.5M fewer datapoints than in the original Orca paper. Completing the set is an ongoing work. <a name="dataset-use"></a> # Dataset Use <a name="use-cases"></a> ## Use Cases The dataset can be used for tasks related to language understanding, natural language processing, machine learning model training, and model performance evaluation. <a name="usage-caveats"></a> ## Usage Caveats Given that this is a work-in-progress dataset, it is recommended to regularly check for updates and improvements. Further, the data should be used in accordance with the guidelines and recommendations outlined in the Orca paper. <a name="getting-started"></a> ## Getting Started This dataset is organized such that it can be naively loaded via Hugging Face datasets library. We recommend using streaming due to the large size of the files. Regular updates and data generation progress can be monitored through the OpenOrca repository on Hugging Face. # Citation ```bibtex @misc{OpenOrca, title = {OpenOrca: An Open Dataset of GPT Augmented FLAN Reasoning Traces}, author = {Wing Lian and Bleys Goodson and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium"}, year = {2023}, publisher = {HuggingFace}, journal = {HuggingFace repository}, howpublished = {\url{https://https://huggingface.co/Open-Orca/OpenOrca}, } ``` ```bibtex @misc{mukherjee2023orca, title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4}, author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah}, year={2023}, eprint={2306.02707}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ```bibtex @misc{longpre2023flan, title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning}, author={Shayne Longpre and Le Hou and Tu Vu and Albert Webson and Hyung Won Chung and Yi Tay and Denny Zhou and Quoc V. 
Le and Barret Zoph and Jason Wei and Adam Roberts}, year={2023}, eprint={2301.13688}, archivePrefix={arXiv}, primaryClass={cs.AI} } ``` ```bibtex @misc{touvron2023llama, title={Llama 2: Open Foundation and Fine-Tuned Chat Models}, author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom}, year={2023}, eprint= arXiv 2307.09288 } @software{touvron2023llama, title={LLaMA: Open and Efficient Foundation Language Models}, author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume}, journal={arXiv preprint arXiv:2302.13971}, year={2023} } ```
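As a companion to the Getting Started section above, which recommends streaming because of the file sizes, the following is a hedged sketch of streaming a handful of records from the upstream Open-Orca/OpenOrca repository; the split name and the field names follow the Data Fields section and are assumptions insofar as the hosted files may change.

```python
from itertools import islice
from datasets import load_dataset

# Streaming avoids downloading the full dataset before reading anything.
stream = load_dataset("Open-Orca/OpenOrca", split="train", streaming=True)

for row in islice(stream, 3):
    # Field names per the Data Fields section: id, system_prompt, question, response.
    print(row["id"], "|", row["question"][:80], "->", row["response"][:80])
```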
kyujinpy/OpenOrca-ko-v2
[ "license:cc-by-nc-4.0", "arxiv:2306.02707", "arxiv:2301.13688", "region:us" ]
2023-10-28T18:52:37+00:00
{"license": "cc-by-nc-4.0", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "instruction", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 41592589, "num_examples": 19468}], "download_size": 21611641, "dataset_size": 41592589}}
2023-10-28T18:58:34+00:00
[ "2306.02707", "2301.13688" ]
[]
TAGS #license-cc-by-nc-4.0 #arxiv-2306.02707 #arxiv-2301.13688 #region-us
## OpenOrca-Ko-v2 1. NIV // 약 1500개 2. FLAN // 약 9000개 3. T0 // 약 6000개 4. CoT // 약 2000개 > Dataset 구성 - 수작업으로 고친 내용(v2) 1. 영어로 된 답변 수정. (Ex. Nick -> 닉, Lucky -> 운이 좋음, ...) 2. KoCoT 데이터셋 제거. 3. Yes, True, False 등등 일부 답변 수정 > Post-processing 작업 내용 ## Translation Using DeepL Pro API. Thanks. --- >Below is original dataset card ## Table of Contents - Dataset Summary - Dataset Attribution - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Dataset Use - Use Cases - Usage Caveats - Getting Started <p><h1> The OpenOrca Dataset! </h1></p> !OpenOrca Logo <a name="dataset-announcement"></a> We are thrilled to announce the release of the OpenOrca dataset! This rich collection of augmented FLAN data aligns, as best as possible, with the distributions outlined in the Orca paper. It has been instrumental in generating high-performing model checkpoints and serves as a valuable resource for all NLP researchers and developers! # Official Models ## OpenOrca-Platypus2-13B Our latest release, the first 13B model to score higher than LLaMA1-65B on the HuggingFace Leaderboard! Released in partnership with Platypus. ## LlongOrca 7B & 13B * Our first 7B release, trained on top of LLongMA2 to achieve 16,000 tokens context. #1 long context 7B model at release time, with >99% of the overall #1 model's performance. * LlongOrca-13B-16k, trained on top of LLongMA2. #1 long context 13B model at release time, with >97% of the overall #1 model's performance. ## OpenOrcaxOpenChat-Preview2-13B Our second model, highlighting that we've surpassed the performance reported in the Orca paper. Was #1 at release time, now surpassed by our own OpenOrca-Platypus2-13B. Released in partnership with OpenChat. ## OpenOrca-Preview1-13B OpenOrca-Preview1-13B This model was trained in less than a day, for <$200, with <10% of our data. At release, it beat the current state of the art models on BigBench-Hard and AGIEval. Achieves ~60% of the improvements reported in the Orca paper. <a name="dataset-summary"></a> # Dataset Summary The OpenOrca dataset is a collection of augmented FLAN Collection data. Currently ~1M GPT-4 completions, and ~3.2M GPT-3.5 completions. It is tabularized in alignment with the distributions presented in the ORCA paper and currently represents a partial completion of the full intended dataset, with ongoing generation to expand its scope. The data is primarily used for training and evaluation in the field of natural language processing. <a name="dataset-attribution"></a> # Dataset Attribution We would like to give special recognition to the following contributors for their significant efforts and dedication: Teknium WingLian/Caseus Eric Hartford NanoBit Pankaj Winddude Rohan URL: Autometa Entropi AtlasUnified NeverendingToast NanoBit WingLian/Caseus Also of course, as always, TheBloke, for being the backbone of the whole community. Many thanks to NanoBit and Caseus, makers of Axolotl, for lending us their expertise on the platform that developed and trained manticore, minotaur, and many others! We are welcoming sponsors or collaborators to help us build these models to the scale they deserve. Please reach out via our socials: URL URL Want to visualize our full dataset? Check out our Nomic Atlas Map. 
<img src="URL alt="Atlas Nomic Dataset Map" width="400" height="400" /> <a name="supported-tasks-and-leaderboards"></a> # Supported Tasks and Leaderboards This dataset supports a range of tasks including language modeling, text generation, and text augmentation. It has been instrumental in the generation of multiple high-performing model checkpoints which have exhibited exceptional performance in our unit testing. Further information on leaderboards will be updated as they become available. <a name="languages"></a> # Languages The language of the data is primarily English. <a name="dataset-structure"></a> # Dataset Structure <a name="data-instances"></a> ## Data Instances A data instance in this dataset represents entries from the FLAN collection which have been augmented by submitting the listed question to either GPT-4 or GPT-3.5. The response is then entered into the response field. <a name="data-fields"></a> ## Data Fields The fields are: 1) 'id', a unique numbered identifier which includes one of 'niv', 't0', 'cot', or 'flan' to represent which source FLAN Collection submix the 'question' is sourced from. 2) 'system_prompt', representing the System Prompt presented to the GPT-3.5 or GPT-4 API for the datapoint 3) 'question', representing a question entry as provided by the FLAN Collection 4) 'response', a response to that question received from a query to either GPT-3.5 or GPT-4. <a name="data-splits"></a> ## Data Splits The data is unsplit. <a name="dataset-creation"></a> # Dataset Creation <a name="curation-rationale"></a> ## Curation Rationale The dataset was created to provide a source of augmented text data for researchers and developers. The datapoints are intended primarily to provide an enhancement of the core FLAN Collection data which relies upon the detailed step by step reasoning capabilities of GPT-3.5 and GPT-4. This "reasoning trace" augmentation has demonstrated exceptional results, allowing a LLaMA-13B model trained with this data to rival or beat GPT-3.5 on broad sets of hard reasoning tasks which all models below 100B parameters had previously performed dramatically worse on. <a name="source-data"></a> ## Source Data The data is generated using techniques in alignment with the distributions outlined in the Orca paper, except as noted below: 1) There is not enough CoT data in the FLAN Collection to generate 150K zero-shot entries, as the paper purports to use. We suspect this portion was either undocumented or misrepresented. We have used the ~75K points available. 2) We used the pre-generated FLAN Collection datasets hosted on HuggingFace under conceptofmind, e.g. conceptofmind/flan2021. These are referenced by the official FLAN Collection repo as the preferred data source. However, these are a subset of the full FLAN Collection data, and have less than the required entries for the flan2021 and t0 submixes, by ~1.25M and 200k respectively. Combined, this gave us ~1.5M fewer datapoints than in the original Orca paper. Completing the set is an ongoing work. <a name="dataset-use"></a> # Dataset Use <a name="use-cases"></a> ## Use Cases The dataset can be used for tasks related to language understanding, natural language processing, machine learning model training, and model performance evaluation. <a name="usage-caveats"></a> ## Usage Caveats Given that this is a work-in-progress dataset, it is recommended to regularly check for updates and improvements. Further, the data should be used in accordance with the guidelines and recommendations outlined in the Orca paper. 
<a name="getting-started"></a> ## Getting Started This dataset is organized such that it can be naively loaded via Hugging Face datasets library. We recommend using streaming due to the large size of the files. Regular updates and data generation progress can be monitored through the OpenOrca repository on Hugging Face.
[ "## OpenOrca-Ko-v2 \n1. NIV // 약 1500개\n2. FLAN // 약 9000개\n3. T0 // 약 6000개\n4. CoT // 약 2000개\n> Dataset 구성 \n \n- 수작업으로 고친 내용(v2) \n1. 영어로 된 답변 수정. (Ex. Nick -> 닉, Lucky -> 운이 좋음, ...) \n2. KoCoT 데이터셋 제거. \n3. Yes, True, False 등등 일부 답변 수정 \n> Post-processing 작업 내용", "## Translation\nUsing DeepL Pro API. Thanks.\n\n---\n>Below is original dataset card", "## Table of Contents\n- Dataset Summary\n- Dataset Attribution\n- Supported Tasks and Leaderboards\n- Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n- Dataset Use\n - Use Cases\n - Usage Caveats\n - Getting Started\n\n\n<p><h1> The OpenOrca Dataset! </h1></p>\n\n!OpenOrca Logo\n\n<a name=\"dataset-announcement\"></a>\n\nWe are thrilled to announce the release of the OpenOrca dataset!\nThis rich collection of augmented FLAN data aligns, as best as possible, with the distributions outlined in the Orca paper.\nIt has been instrumental in generating high-performing model checkpoints and serves as a valuable resource for all NLP researchers and developers!", "# Official Models", "## OpenOrca-Platypus2-13B\n\nOur latest release, the first 13B model to score higher than LLaMA1-65B on the HuggingFace Leaderboard!\nReleased in partnership with Platypus.", "## LlongOrca 7B & 13B\n\n* Our first 7B release, trained on top of LLongMA2 to achieve 16,000 tokens context. #1 long context 7B model at release time, with >99% of the overall #1 model's performance.\n* LlongOrca-13B-16k, trained on top of LLongMA2. #1 long context 13B model at release time, with >97% of the overall #1 model's performance.", "## OpenOrcaxOpenChat-Preview2-13B\n\nOur second model, highlighting that we've surpassed the performance reported in the Orca paper.\nWas #1 at release time, now surpassed by our own OpenOrca-Platypus2-13B.\nReleased in partnership with OpenChat.", "## OpenOrca-Preview1-13B\n\nOpenOrca-Preview1-13B\nThis model was trained in less than a day, for <$200, with <10% of our data.\nAt release, it beat the current state of the art models on BigBench-Hard and AGIEval. Achieves ~60% of the improvements reported in the Orca paper.\n\n<a name=\"dataset-summary\"></a>", "# Dataset Summary\n\nThe OpenOrca dataset is a collection of augmented FLAN Collection data.\nCurrently ~1M GPT-4 completions, and ~3.2M GPT-3.5 completions.\nIt is tabularized in alignment with the distributions presented in the ORCA paper and currently represents a partial completion of the full intended dataset, with ongoing generation to expand its scope.\nThe data is primarily used for training and evaluation in the field of natural language processing.\n\n<a name=\"dataset-attribution\"></a>", "# Dataset Attribution\n\nWe would like to give special recognition to the following contributors for their significant efforts and dedication:\n \n\n Teknium \n WingLian/Caseus\n Eric Hartford\n NanoBit\n Pankaj\n Winddude\n Rohan\n\n URL:\n Autometa\n Entropi\n AtlasUnified\n NeverendingToast\n NanoBit\n WingLian/Caseus\n\nAlso of course, as always, TheBloke, for being the backbone of the whole community.\n\nMany thanks to NanoBit and Caseus, makers of Axolotl, for lending us their expertise on the platform that developed and trained manticore, minotaur, and many others! \n\nWe are welcoming sponsors or collaborators to help us build these models to the scale they deserve. Please reach out via our socials:\nURL URL\n\nWant to visualize our full dataset? 
Check out our Nomic Atlas Map.\n <img src=\"URL alt=\"Atlas Nomic Dataset Map\" width=\"400\" height=\"400\" />\n\n\n<a name=\"supported-tasks-and-leaderboards\"></a>", "# Supported Tasks and Leaderboards\n\nThis dataset supports a range of tasks including language modeling, text generation, and text augmentation.\nIt has been instrumental in the generation of multiple high-performing model checkpoints which have exhibited exceptional performance in our unit testing.\nFurther information on leaderboards will be updated as they become available.\n\n<a name=\"languages\"></a>", "# Languages\n\nThe language of the data is primarily English.\n\n<a name=\"dataset-structure\"></a>", "# Dataset Structure\n\n<a name=\"data-instances\"></a>", "## Data Instances\n\nA data instance in this dataset represents entries from the FLAN collection which have been augmented by submitting the listed question to either GPT-4 or GPT-3.5.\nThe response is then entered into the response field.\n\n<a name=\"data-fields\"></a>", "## Data Fields\n\nThe fields are:\n1) 'id', a unique numbered identifier which includes one of 'niv', 't0', 'cot', or 'flan' to represent which source FLAN Collection submix the 'question' is sourced from.\n2) 'system_prompt', representing the System Prompt presented to the GPT-3.5 or GPT-4 API for the datapoint\n3) 'question', representing a question entry as provided by the FLAN Collection\n4) 'response', a response to that question received from a query to either GPT-3.5 or GPT-4.\n\n<a name=\"data-splits\"></a>", "## Data Splits\n\nThe data is unsplit.\n\n<a name=\"dataset-creation\"></a>", "# Dataset Creation\n\n<a name=\"curation-rationale\"></a>", "## Curation Rationale\n\nThe dataset was created to provide a source of augmented text data for researchers and developers.\nThe datapoints are intended primarily to provide an enhancement of the core FLAN Collection data which relies upon the detailed step by step reasoning capabilities of GPT-3.5 and GPT-4.\nThis \"reasoning trace\" augmentation has demonstrated exceptional results, allowing a LLaMA-13B model trained with this data to rival or beat GPT-3.5 on broad sets of hard reasoning tasks which all models below 100B parameters had previously performed dramatically worse on.\n\n<a name=\"source-data\"></a>", "## Source Data\n\nThe data is generated using techniques in alignment with the distributions outlined in the Orca paper, except as noted below:\n\n1) There is not enough CoT data in the FLAN Collection to generate 150K zero-shot entries, as the paper purports to use.\n We suspect this portion was either undocumented or misrepresented. We have used the ~75K points available.\n2) We used the pre-generated FLAN Collection datasets hosted on HuggingFace under conceptofmind, e.g. conceptofmind/flan2021.\n These are referenced by the official FLAN Collection repo as the preferred data source.\n However, these are a subset of the full FLAN Collection data, and have less than the required entries for the flan2021 and t0 submixes, by ~1.25M and 200k respectively.\n\nCombined, this gave us ~1.5M fewer datapoints than in the original Orca paper. 
Completing the set is an ongoing work.\n\n<a name=\"dataset-use\"></a>", "# Dataset Use\n\n<a name=\"use-cases\"></a>", "## Use Cases\n\nThe dataset can be used for tasks related to language understanding, natural language processing, machine learning model training, and model performance evaluation.\n\n<a name=\"usage-caveats\"></a>", "## Usage Caveats\n\nGiven that this is a work-in-progress dataset, it is recommended to regularly check for updates and improvements.\nFurther, the data should be used in accordance with the guidelines and recommendations outlined in the Orca paper.\n\n<a name=\"getting-started\"></a>", "## Getting Started\n\nThis dataset is organized such that it can be naively loaded via Hugging Face datasets library.\nWe recommend using streaming due to the large size of the files.\nRegular updates and data generation progress can be monitored through the OpenOrca repository on Hugging Face." ]
[ "TAGS\n#license-cc-by-nc-4.0 #arxiv-2306.02707 #arxiv-2301.13688 #region-us \n", "## OpenOrca-Ko-v2 \n1. NIV // 약 1500개\n2. FLAN // 약 9000개\n3. T0 // 약 6000개\n4. CoT // 약 2000개\n> Dataset 구성 \n \n- 수작업으로 고친 내용(v2) \n1. 영어로 된 답변 수정. (Ex. Nick -> 닉, Lucky -> 운이 좋음, ...) \n2. KoCoT 데이터셋 제거. \n3. Yes, True, False 등등 일부 답변 수정 \n> Post-processing 작업 내용", "## Translation\nUsing DeepL Pro API. Thanks.\n\n---\n>Below is original dataset card", "## Table of Contents\n- Dataset Summary\n- Dataset Attribution\n- Supported Tasks and Leaderboards\n- Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n- Dataset Use\n - Use Cases\n - Usage Caveats\n - Getting Started\n\n\n<p><h1> The OpenOrca Dataset! </h1></p>\n\n!OpenOrca Logo\n\n<a name=\"dataset-announcement\"></a>\n\nWe are thrilled to announce the release of the OpenOrca dataset!\nThis rich collection of augmented FLAN data aligns, as best as possible, with the distributions outlined in the Orca paper.\nIt has been instrumental in generating high-performing model checkpoints and serves as a valuable resource for all NLP researchers and developers!", "# Official Models", "## OpenOrca-Platypus2-13B\n\nOur latest release, the first 13B model to score higher than LLaMA1-65B on the HuggingFace Leaderboard!\nReleased in partnership with Platypus.", "## LlongOrca 7B & 13B\n\n* Our first 7B release, trained on top of LLongMA2 to achieve 16,000 tokens context. #1 long context 7B model at release time, with >99% of the overall #1 model's performance.\n* LlongOrca-13B-16k, trained on top of LLongMA2. #1 long context 13B model at release time, with >97% of the overall #1 model's performance.", "## OpenOrcaxOpenChat-Preview2-13B\n\nOur second model, highlighting that we've surpassed the performance reported in the Orca paper.\nWas #1 at release time, now surpassed by our own OpenOrca-Platypus2-13B.\nReleased in partnership with OpenChat.", "## OpenOrca-Preview1-13B\n\nOpenOrca-Preview1-13B\nThis model was trained in less than a day, for <$200, with <10% of our data.\nAt release, it beat the current state of the art models on BigBench-Hard and AGIEval. Achieves ~60% of the improvements reported in the Orca paper.\n\n<a name=\"dataset-summary\"></a>", "# Dataset Summary\n\nThe OpenOrca dataset is a collection of augmented FLAN Collection data.\nCurrently ~1M GPT-4 completions, and ~3.2M GPT-3.5 completions.\nIt is tabularized in alignment with the distributions presented in the ORCA paper and currently represents a partial completion of the full intended dataset, with ongoing generation to expand its scope.\nThe data is primarily used for training and evaluation in the field of natural language processing.\n\n<a name=\"dataset-attribution\"></a>", "# Dataset Attribution\n\nWe would like to give special recognition to the following contributors for their significant efforts and dedication:\n \n\n Teknium \n WingLian/Caseus\n Eric Hartford\n NanoBit\n Pankaj\n Winddude\n Rohan\n\n URL:\n Autometa\n Entropi\n AtlasUnified\n NeverendingToast\n NanoBit\n WingLian/Caseus\n\nAlso of course, as always, TheBloke, for being the backbone of the whole community.\n\nMany thanks to NanoBit and Caseus, makers of Axolotl, for lending us their expertise on the platform that developed and trained manticore, minotaur, and many others! \n\nWe are welcoming sponsors or collaborators to help us build these models to the scale they deserve. 
Please reach out via our socials:\nURL URL\n\nWant to visualize our full dataset? Check out our Nomic Atlas Map.\n <img src=\"URL alt=\"Atlas Nomic Dataset Map\" width=\"400\" height=\"400\" />\n\n\n<a name=\"supported-tasks-and-leaderboards\"></a>", "# Supported Tasks and Leaderboards\n\nThis dataset supports a range of tasks including language modeling, text generation, and text augmentation.\nIt has been instrumental in the generation of multiple high-performing model checkpoints which have exhibited exceptional performance in our unit testing.\nFurther information on leaderboards will be updated as they become available.\n\n<a name=\"languages\"></a>", "# Languages\n\nThe language of the data is primarily English.\n\n<a name=\"dataset-structure\"></a>", "# Dataset Structure\n\n<a name=\"data-instances\"></a>", "## Data Instances\n\nA data instance in this dataset represents entries from the FLAN collection which have been augmented by submitting the listed question to either GPT-4 or GPT-3.5.\nThe response is then entered into the response field.\n\n<a name=\"data-fields\"></a>", "## Data Fields\n\nThe fields are:\n1) 'id', a unique numbered identifier which includes one of 'niv', 't0', 'cot', or 'flan' to represent which source FLAN Collection submix the 'question' is sourced from.\n2) 'system_prompt', representing the System Prompt presented to the GPT-3.5 or GPT-4 API for the datapoint\n3) 'question', representing a question entry as provided by the FLAN Collection\n4) 'response', a response to that question received from a query to either GPT-3.5 or GPT-4.\n\n<a name=\"data-splits\"></a>", "## Data Splits\n\nThe data is unsplit.\n\n<a name=\"dataset-creation\"></a>", "# Dataset Creation\n\n<a name=\"curation-rationale\"></a>", "## Curation Rationale\n\nThe dataset was created to provide a source of augmented text data for researchers and developers.\nThe datapoints are intended primarily to provide an enhancement of the core FLAN Collection data which relies upon the detailed step by step reasoning capabilities of GPT-3.5 and GPT-4.\nThis \"reasoning trace\" augmentation has demonstrated exceptional results, allowing a LLaMA-13B model trained with this data to rival or beat GPT-3.5 on broad sets of hard reasoning tasks which all models below 100B parameters had previously performed dramatically worse on.\n\n<a name=\"source-data\"></a>", "## Source Data\n\nThe data is generated using techniques in alignment with the distributions outlined in the Orca paper, except as noted below:\n\n1) There is not enough CoT data in the FLAN Collection to generate 150K zero-shot entries, as the paper purports to use.\n We suspect this portion was either undocumented or misrepresented. We have used the ~75K points available.\n2) We used the pre-generated FLAN Collection datasets hosted on HuggingFace under conceptofmind, e.g. conceptofmind/flan2021.\n These are referenced by the official FLAN Collection repo as the preferred data source.\n However, these are a subset of the full FLAN Collection data, and have less than the required entries for the flan2021 and t0 submixes, by ~1.25M and 200k respectively.\n\nCombined, this gave us ~1.5M fewer datapoints than in the original Orca paper. 
Completing the set is an ongoing work.\n\n<a name=\"dataset-use\"></a>", "# Dataset Use\n\n<a name=\"use-cases\"></a>", "## Use Cases\n\nThe dataset can be used for tasks related to language understanding, natural language processing, machine learning model training, and model performance evaluation.\n\n<a name=\"usage-caveats\"></a>", "## Usage Caveats\n\nGiven that this is a work-in-progress dataset, it is recommended to regularly check for updates and improvements.\nFurther, the data should be used in accordance with the guidelines and recommendations outlined in the Orca paper.\n\n<a name=\"getting-started\"></a>", "## Getting Started\n\nThis dataset is organized such that it can be naively loaded via Hugging Face datasets library.\nWe recommend using streaming due to the large size of the files.\nRegular updates and data generation progress can be monitored through the OpenOrca repository on Hugging Face." ]
[ 34, 104, 20, 199, 4, 48, 98, 67, 95, 122, 233, 86, 25, 19, 67, 153, 24, 18, 146, 235, 16, 46, 70, 66 ]
[ "passage: TAGS\n#license-cc-by-nc-4.0 #arxiv-2306.02707 #arxiv-2301.13688 #region-us \n## OpenOrca-Ko-v2 \n1. NIV // 약 1500개\n2. FLAN // 약 9000개\n3. T0 // 약 6000개\n4. CoT // 약 2000개\n> Dataset 구성 \n \n- 수작업으로 고친 내용(v2) \n1. 영어로 된 답변 수정. (Ex. Nick -> 닉, Lucky -> 운이 좋음, ...) \n2. KoCoT 데이터셋 제거. \n3. Yes, True, False 등등 일부 답변 수정 \n> Post-processing 작업 내용## Translation\nUsing DeepL Pro API. Thanks.\n\n---\n>Below is original dataset card## Table of Contents\n- Dataset Summary\n- Dataset Attribution\n- Supported Tasks and Leaderboards\n- Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n- Dataset Use\n - Use Cases\n - Usage Caveats\n - Getting Started\n\n\n<p><h1> The OpenOrca Dataset! </h1></p>\n\n!OpenOrca Logo\n\n<a name=\"dataset-announcement\"></a>\n\nWe are thrilled to announce the release of the OpenOrca dataset!\nThis rich collection of augmented FLAN data aligns, as best as possible, with the distributions outlined in the Orca paper.\nIt has been instrumental in generating high-performing model checkpoints and serves as a valuable resource for all NLP researchers and developers!# Official Models## OpenOrca-Platypus2-13B\n\nOur latest release, the first 13B model to score higher than LLaMA1-65B on the HuggingFace Leaderboard!\nReleased in partnership with Platypus.## LlongOrca 7B & 13B\n\n* Our first 7B release, trained on top of LLongMA2 to achieve 16,000 tokens context. #1 long context 7B model at release time, with >99% of the overall #1 model's performance.\n* LlongOrca-13B-16k, trained on top of LLongMA2. #1 long context 13B model at release time, with >97% of the overall #1 model's performance.", "passage: ## OpenOrcaxOpenChat-Preview2-13B\n\nOur second model, highlighting that we've surpassed the performance reported in the Orca paper.\nWas #1 at release time, now surpassed by our own OpenOrca-Platypus2-13B.\nReleased in partnership with OpenChat.## OpenOrca-Preview1-13B\n\nOpenOrca-Preview1-13B\nThis model was trained in less than a day, for <$200, with <10% of our data.\nAt release, it beat the current state of the art models on BigBench-Hard and AGIEval. Achieves ~60% of the improvements reported in the Orca paper.\n\n<a name=\"dataset-summary\"></a># Dataset Summary\n\nThe OpenOrca dataset is a collection of augmented FLAN Collection data.\nCurrently ~1M GPT-4 completions, and ~3.2M GPT-3.5 completions.\nIt is tabularized in alignment with the distributions presented in the ORCA paper and currently represents a partial completion of the full intended dataset, with ongoing generation to expand its scope.\nThe data is primarily used for training and evaluation in the field of natural language processing.\n\n<a name=\"dataset-attribution\"></a># Dataset Attribution\n\nWe would like to give special recognition to the following contributors for their significant efforts and dedication:\n \n\n Teknium \n WingLian/Caseus\n Eric Hartford\n NanoBit\n Pankaj\n Winddude\n Rohan\n\n URL:\n Autometa\n Entropi\n AtlasUnified\n NeverendingToast\n NanoBit\n WingLian/Caseus\n\nAlso of course, as always, TheBloke, for being the backbone of the whole community.\n\nMany thanks to NanoBit and Caseus, makers of Axolotl, for lending us their expertise on the platform that developed and trained manticore, minotaur, and many others! \n\nWe are welcoming sponsors or collaborators to help us build these models to the scale they deserve. 
Please reach out via our socials:\nURL URL\n\nWant to visualize our full dataset? Check out our Nomic Atlas Map.\n <img src=\"URL alt=\"Atlas Nomic Dataset Map\" width=\"400\" height=\"400\" />\n\n\n<a name=\"supported-tasks-and-leaderboards\"></a>", "passage: # Supported Tasks and Leaderboards\n\nThis dataset supports a range of tasks including language modeling, text generation, and text augmentation.\nIt has been instrumental in the generation of multiple high-performing model checkpoints which have exhibited exceptional performance in our unit testing.\nFurther information on leaderboards will be updated as they become available.\n\n<a name=\"languages\"></a># Languages\n\nThe language of the data is primarily English.\n\n<a name=\"dataset-structure\"></a># Dataset Structure\n\n<a name=\"data-instances\"></a>## Data Instances\n\nA data instance in this dataset represents entries from the FLAN collection which have been augmented by submitting the listed question to either GPT-4 or GPT-3.5.\nThe response is then entered into the response field.\n\n<a name=\"data-fields\"></a>## Data Fields\n\nThe fields are:\n1) 'id', a unique numbered identifier which includes one of 'niv', 't0', 'cot', or 'flan' to represent which source FLAN Collection submix the 'question' is sourced from.\n2) 'system_prompt', representing the System Prompt presented to the GPT-3.5 or GPT-4 API for the datapoint\n3) 'question', representing a question entry as provided by the FLAN Collection\n4) 'response', a response to that question received from a query to either GPT-3.5 or GPT-4.\n\n<a name=\"data-splits\"></a>## Data Splits\n\nThe data is unsplit.\n\n<a name=\"dataset-creation\"></a># Dataset Creation\n\n<a name=\"curation-rationale\"></a>## Curation Rationale\n\nThe dataset was created to provide a source of augmented text data for researchers and developers.\nThe datapoints are intended primarily to provide an enhancement of the core FLAN Collection data which relies upon the detailed step by step reasoning capabilities of GPT-3.5 and GPT-4.\nThis \"reasoning trace\" augmentation has demonstrated exceptional results, allowing a LLaMA-13B model trained with this data to rival or beat GPT-3.5 on broad sets of hard reasoning tasks which all models below 100B parameters had previously performed dramatically worse on.\n\n<a name=\"source-data\"></a>" ]
746f70b0a68f635b07821d74f8eaf2de4a938580
# Dataset Card for "metooma_new" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
akkasi/metooma
[ "region:us" ]
2023-10-28T18:54:14+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "TweetId", "dtype": "string"}, {"name": "labels", "sequence": "float64"}, {"name": "label2idx", "dtype": "string"}, {"name": "idx2label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2991750, "num_examples": 7978}, {"name": "test", "num_bytes": 748125, "num_examples": 1995}], "download_size": 195958, "dataset_size": 3739875}}
2023-10-28T18:54:16+00:00
[]
[]
TAGS #region-us
# Dataset Card for "metooma_new" More Information needed
[ "# Dataset Card for \"metooma_new\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"metooma_new\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"metooma_new\"\n\nMore Information needed" ]
49226f073283ba2003e9c5597529e798ace69fff
# Dataset Card for "clmet_new" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
akkasi/clmet
[ "region:us" ]
2023-10-28T18:58:13+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "labels", "sequence": "float64"}, {"name": "label2idx", "dtype": "string"}, {"name": "idx2label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 149061943, "num_examples": 266}, {"name": "test", "num_bytes": 50034891, "num_examples": 67}], "download_size": 117110210, "dataset_size": 199096834}}
2023-10-28T18:58:19+00:00
[]
[]
TAGS #region-us
# Dataset Card for "clmet_new" More Information needed
[ "# Dataset Card for \"clmet_new\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"clmet_new\"\n\nMore Information needed" ]
[ 6, 14 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"clmet_new\"\n\nMore Information needed" ]
4cd556e9a4e494d016d153a04526080d79617f30
# AutoTrain Dataset for project: hstv-cc-help_v01 ## Dataset Description This dataset has been automatically processed by AutoTrain for project hstv-cc-help_v01. ### Languages The BCP-47 code for the dataset's language is en. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "text": "Product Name", "feat_\u200b\u200bHuggimalz\u200b Unicorn Soft Plush Toy": null, "target": 4, "feat_\u00a329.99": null, "feat_What products do you offer?": null, "feat_We offer a wide range of products including the Power XL Vortex PRO - 4L Digital Air Fryer, Drew&Cole Adoro Pizza Oven, Nutribullet Smart Touch Blender Combo, SmartAir BOOST Radiator Fan, and many more.": null, "feat_Ollyball \u2013 The Ultimate Indoor Play Ball": "Nutribullet 600 Series Starter Kit", "feat_Now you can play ball in the house - Hit it, kick it, colour it in Ollyball is perfect for full-speed indoors without breaking windows or leaving a nasty bruise The 30cm super lightweight inflatable ball, with special KrunchCOR construction, absorbs the impact from full-speed hits and kicks.": null, "feat_SAVE \u00a310": null, "feat_As low as \u00a317.99": "\u00a359.99", "feat_https://www.highstreettv.com/media/catalog/product/cache/f158af82292ec3d0638e111a17ec7f2d/o/l/ollyball_web_images_cd333_72dpi_02_3.jpg": null, "feat_Happy Nappers - Disco Dolphin - Medium (ages 3 to 6)": null, "feat_5.0 Stars-Reviews 2 ": null }, { "text": "Product Name", "feat_\u200b\u200bHuggimalz\u200b Unicorn Soft Plush Toy": "Like New - Nutribullet 1200 Series", "target": 1, "feat_\u00a329.99": "\u00a3119.99", "feat_What products do you offer?": null, "feat_We offer a wide range of products including the Power XL Vortex PRO - 4L Digital Air Fryer, Drew&Cole Adoro Pizza Oven, Nutribullet Smart Touch Blender Combo, SmartAir BOOST Radiator Fan, and many more.": null, "feat_Ollyball \u2013 The Ultimate Indoor Play Ball": null, "feat_Now you can play ball in the house - Hit it, kick it, colour it in Ollyball is perfect for full-speed indoors without breaking windows or leaving a nasty bruise The 30cm super lightweight inflatable ball, with special KrunchCOR construction, absorbs the impact from full-speed hits and kicks.": null, "feat_SAVE \u00a310": null, "feat_As low as \u00a317.99": null, "feat_https://www.highstreettv.com/media/catalog/product/cache/f158af82292ec3d0638e111a17ec7f2d/o/l/ollyball_web_images_cd333_72dpi_02_3.jpg": null, "feat_Happy Nappers - Disco Dolphin - Medium (ages 3 to 6)": null, "feat_5.0 Stars-Reviews 2 ": null } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "text": "Value(dtype='string', id=None)", "feat_\u200b\u200bHuggimalz\u200b Unicorn Soft Plush Toy": "Value(dtype='string', id=None)", "target": "ClassLabel(names=[' Stars-Reviews', 'Before Price', 'Description', 'Discount', 'Final Price', 'Product Photo', 'Response:'], id=None)", "feat_\u00a329.99": "Value(dtype='string', id=None)", "feat_What products do you offer?": "Value(dtype='string', id=None)", "feat_We offer a wide range of products including the Power XL Vortex PRO - 4L Digital Air Fryer, Drew&Cole Adoro Pizza Oven, Nutribullet Smart Touch Blender Combo, SmartAir BOOST Radiator Fan, and many more.": "Value(dtype='string', id=None)", "feat_Ollyball \u2013 The Ultimate Indoor Play Ball": "Value(dtype='string', id=None)", "feat_Now you can play ball in the house - Hit it, kick it, colour it in Ollyball is perfect for full-speed indoors without breaking windows or leaving a nasty bruise 
The 30cm super lightweight inflatable ball, with special KrunchCOR construction, absorbs the impact from full-speed hits and kicks.": "Value(dtype='string', id=None)", "feat_SAVE \u00a310": "Value(dtype='string', id=None)", "feat_As low as \u00a317.99": "Value(dtype='string', id=None)", "feat_https://www.highstreettv.com/media/catalog/product/cache/f158af82292ec3d0638e111a17ec7f2d/o/l/ollyball_web_images_cd333_72dpi_02_3.jpg": "Value(dtype='string', id=None)", "feat_Happy Nappers - Disco Dolphin - Medium (ages 3 to 6)": "Value(dtype='string', id=None)", "feat_5.0 Stars-Reviews 2 ": "Value(dtype='string', id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follow: | Split name | Num samples | | ------------ | ------------------- | | train | 2786 | | valid | 699 |
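To connect the field listing above with actual use, here is a hedged sketch of reading the ClassLabel target back as its string name; the repository id is taken from this record and the train/valid split names from the table above, both of which are assumptions about the hosted configuration.

```python
from datasets import load_dataset

# Sketch only: the repo id and split names are assumptions based on this card.
data = load_dataset("trip2fun/autotrain-data-hstv-cc-help_v01")

target_feature = data["train"].features["target"]  # ClassLabel with seven names
sample = data["train"][0]
print(sample["text"], "->", target_feature.int2str(sample["target"]))
```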
trip2fun/autotrain-data-hstv-cc-help_v01
[ "task_categories:text-classification", "language:en", "region:us" ]
2023-10-28T18:59:13+00:00
{"language": ["en"], "task_categories": ["text-classification"]}
2023-10-28T19:44:48+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #language-English #region-us
AutoTrain Dataset for project: hstv-cc-help\_v01 ================================================ Dataset Description ------------------- This dataset has been automatically processed by AutoTrain for project hstv-cc-help\_v01. ### Languages The BCP-47 code for the dataset's language is en. Dataset Structure ----------------- ### Data Instances A sample from this dataset looks as follows: ### Dataset Fields The dataset has the following fields (also called "features"): ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follow:
[ "### Languages\n\n\nThe BCP-47 code for the dataset's language is en.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA sample from this dataset looks as follows:", "### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):", "### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:" ]
[ "TAGS\n#task_categories-text-classification #language-English #region-us \n", "### Languages\n\n\nThe BCP-47 code for the dataset's language is en.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA sample from this dataset looks as follows:", "### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):", "### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:" ]
[ 21, 26, 17, 23, 27 ]
[ "passage: TAGS\n#task_categories-text-classification #language-English #region-us \n### Languages\n\n\nThe BCP-47 code for the dataset's language is en.\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nA sample from this dataset looks as follows:### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:" ]
6bf6cb6ddc94b8b76239b3e620226ba81ef71cb4
# Dataset Card for "ethos_new" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
akkasi/ethos
[ "region:us" ]
2023-10-28T18:59:50+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "labels", "sequence": "float64"}, {"name": "label2idx", "dtype": "string"}, {"name": "idx2label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 165667, "num_examples": 346}, {"name": "test", "num_bytes": 46805, "num_examples": 87}], "download_size": 46734, "dataset_size": 212472}}
2023-10-28T18:59:52+00:00
[]
[]
TAGS #region-us
# Dataset Card for "ethos_new" More Information needed
[ "# Dataset Card for \"ethos_new\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"ethos_new\"\n\nMore Information needed" ]
[ 6, 14 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"ethos_new\"\n\nMore Information needed" ]
c6257ce61ec0096d6e722433802efd27583e89a1
Proofsteps data from Proof-Pile-2 includes proofsteps for Lean and Isabelle ```python from datasets import load_dataset ds = load_dataset( "xu3kev/proof-pile-2-proofsteps" ) ds DatasetDict({ lean_proofsteps: Dataset({ features: ['text', 'meta'], num_rows: 3432 }) isa_proofsteps: Dataset({ features: ['text', 'meta'], num_rows: 260726 }) }) ``` Quoting from appendix of [LLEMMA: AN OPEN LANGUAGE MODEL FOR MATHEMATICS](https://arxiv.org/pdf/2310.10631.pdf) ``` B.1.2 LEAN PROOFSTEPS We extract a dataset of (tactic state, next tactic) pairs from Mathlib 4 (mathlib Community, 2020) using the lean-training-data (Morrison, 2023) tool. We use Mathlib 4 commit c779bd5, which was created on August 20th 2023. B.1.3 ISABELLE PROOFSTEPS We construct a dataset of Isabelle proofs, building upon the PISA dataset Jiang et al. (2021). Isabelle Proofsteps comprises proofs from the Archive of Formal Proofs and Isabelle Standard Library, scraped with PISA Jiang et al. (2021). Each entry in the dataset includes the theorem statement, the proof states and the proof steps, separated by specific tags. To maintain the integrity of evaluations using the PISA test set, we decontaminate Isabelle Proofsteps by removing theorems whose names overlap with those in the PISA test set. Although this approach results in a strict filtering – removing more than 10,000 theorems although there are only 3600 in the PISA test set – we consider it acceptable in order to mitigate data contamination. After filtering, Isabelle Proofsteps contains 251,000 theorems. ```
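Building on the loading snippet above, here is a small hedged sketch of inspecting one Lean proofstep record; the split and column names are the ones reported in the DatasetDict output, while the comments about their contents are assumptions, since the card does not spell out the exact layout of text and meta.

```python
from datasets import load_dataset

# Split and column names follow the DatasetDict shown above; the content of
# "text" and "meta" is assumed, not documented, in this card.
ds = load_dataset("xu3kev/proof-pile-2-proofsteps")

lean = ds["lean_proofsteps"]
print(len(lean))             # expected 3432 rows per the card
first = lean[0]
print(first["meta"])         # provenance for the (tactic state, next tactic) pair
print(first["text"][:200])   # start of the proofstep text
```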
xu3kev/proof-pile-2-proofsteps
[ "arxiv:2310.10631", "region:us" ]
2023-10-28T19:03:34+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "lean_proofsteps", "path": "lean_proofsteps/*.parquet"}, {"split": "isa_proofsteps", "path": "isa_proofsteps/*.parquet"}]}]}
2023-10-28T20:45:17+00:00
[ "2310.10631" ]
[]
TAGS #arxiv-2310.10631 #region-us
Proofsteps data from Proof-Pile-2 includes proofsteps for Lean and Isabelle. Quoting from the appendix of LLEMMA: AN OPEN LANGUAGE MODEL FOR MATHEMATICS.
[]
[ "TAGS\n#arxiv-2310.10631 #region-us \n" ]
[ 15 ]
[ "passage: TAGS\n#arxiv-2310.10631 #region-us \n" ]
d58e645970ef28426f2ac25577772c0bd716aa0f
# Dataset Card for "new_nlu_tts3_with_correction" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
quocanh34/new_nlu_tts3_with_correction
[ "region:us" ]
2023-10-28T19:24:35+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "audio", "struct": [{"name": "array", "sequence": "float32"}, {"name": "path", "dtype": "string"}, {"name": "sampling_rate", "dtype": "int64"}]}, {"name": "pred_str", "dtype": "string"}, {"name": "pred_str_norm", "dtype": "string"}, {"name": "intent", "dtype": "string"}, {"name": "entities", "list": [{"name": "filler", "dtype": "string"}, {"name": "type", "dtype": "string"}]}, {"name": "file", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 568308476, "num_examples": 2139}], "download_size": 462240620, "dataset_size": 568308476}}
2023-10-28T19:26:11+00:00
[]
[]
TAGS #region-us
# Dataset Card for "new_nlu_tts3_with_correction" More Information needed
[ "# Dataset Card for \"new_nlu_tts3_with_correction\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"new_nlu_tts3_with_correction\"\n\nMore Information needed" ]
[ 6, 23 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"new_nlu_tts3_with_correction\"\n\nMore Information needed" ]
ebe1429f9d2165225ec5bbfc4ac6d0730b4e893e
# Dataset Card for "little_face64x64" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
22Plaruno/little_face64x64
[ "region:us" ]
2023-10-28T19:33:02+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 151477080.0, "num_examples": 70000}], "download_size": 161591941, "dataset_size": 151477080.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-28T19:33:28+00:00
[]
[]
TAGS #region-us
# Dataset Card for "little_face64x64" More Information needed
[ "# Dataset Card for \"little_face64x64\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"little_face64x64\"\n\nMore Information needed" ]
[ 6, 17 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"little_face64x64\"\n\nMore Information needed" ]
300502224a168ec937b7ec9484536e03d09b9096
# Dataset Card for Evaluation run of HuggingFaceH4/starchat-alpha ## Dataset Description - **Homepage:** - **Repository:** https://huggingface.co/HuggingFaceH4/starchat-alpha - **Paper:** - **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** [email protected] ### Dataset Summary Dataset automatically created during the evaluation run of model [HuggingFaceH4/starchat-alpha](https://huggingface.co/HuggingFaceH4/starchat-alpha) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 3 configuration, each one coresponding to one of the evaluated task. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results. An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_HuggingFaceH4__starchat-alpha", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-10-28T20:45:27.557635](https://huggingface.co/datasets/open-llm-leaderboard/details_HuggingFaceH4__starchat-alpha/blob/main/results_2023-10-28T20-45-27.557635.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval): ```python { "all": { "em": 0.003460570469798658, "em_stderr": 0.0006013962884271187, "f1": 0.07069001677852364, "f1_stderr": 0.0015946422775582956, "acc": 0.28758422975957604, "acc_stderr": 0.009108733644571125 }, "harness|drop|3": { "em": 0.003460570469798658, "em_stderr": 0.0006013962884271187, "f1": 0.07069001677852364, "f1_stderr": 0.0015946422775582956 }, "harness|gsm8k|5": { "acc": 0.024260803639120546, "acc_stderr": 0.004238007900001399 }, "harness|winogrande|5": { "acc": 0.5509076558800315, "acc_stderr": 0.01397945938914085 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
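Alongside the per-task loading example in the card, the aggregated numbers can be pulled from the "results" configuration; this is a hedged sketch whose config and split names come from this card's configuration list, with the row layout assumed to mirror the Latest results JSON above.

```python
from datasets import load_dataset

# Config name "results" and split "latest" are taken from this card's config list.
results = load_dataset(
    "open-llm-leaderboard/details_HuggingFaceH4__starchat-alpha",
    "results",
    split="latest",
)
print(results[0])  # aggregated metrics, expected to mirror the JSON shown above
```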
open-llm-leaderboard/details_HuggingFaceH4__starchat-alpha
[ "region:us" ]
2023-10-28T19:45:31+00:00
{"pretty_name": "Evaluation run of HuggingFaceH4/starchat-alpha", "dataset_summary": "Dataset automatically created during the evaluation run of model [HuggingFaceH4/starchat-alpha](https://huggingface.co/HuggingFaceH4/starchat-alpha) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_HuggingFaceH4__starchat-alpha\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-28T20:45:27.557635](https://huggingface.co/datasets/open-llm-leaderboard/details_HuggingFaceH4__starchat-alpha/blob/main/results_2023-10-28T20-45-27.557635.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.003460570469798658,\n \"em_stderr\": 0.0006013962884271187,\n \"f1\": 0.07069001677852364,\n \"f1_stderr\": 0.0015946422775582956,\n \"acc\": 0.28758422975957604,\n \"acc_stderr\": 0.009108733644571125\n },\n \"harness|drop|3\": {\n \"em\": 0.003460570469798658,\n \"em_stderr\": 0.0006013962884271187,\n \"f1\": 0.07069001677852364,\n \"f1_stderr\": 0.0015946422775582956\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.024260803639120546,\n \"acc_stderr\": 0.004238007900001399\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.5509076558800315,\n \"acc_stderr\": 0.01397945938914085\n }\n}\n```", "repo_url": "https://huggingface.co/HuggingFaceH4/starchat-alpha", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_28T20_45_27.557635", "path": ["**/details_harness|drop|3_2023-10-28T20-45-27.557635.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-28T20-45-27.557635.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_28T20_45_27.557635", "path": ["**/details_harness|gsm8k|5_2023-10-28T20-45-27.557635.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-28T20-45-27.557635.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_28T20_45_27.557635", "path": ["**/details_harness|winogrande|5_2023-10-28T20-45-27.557635.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-28T20-45-27.557635.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_28T20_45_27.557635", "path": ["results_2023-10-28T20-45-27.557635.parquet"]}, {"split": "latest", "path": ["results_2023-10-28T20-45-27.557635.parquet"]}]}]}
2023-10-28T19:45:38+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of HuggingFaceH4/starchat-alpha ## Dataset Description - Homepage: - Repository: URL - Paper: - Leaderboard: URL - Point of Contact: clementine@URL ### Dataset Summary Dataset automatically created during the evaluation run of model HuggingFaceH4/starchat-alpha on the Open LLM Leaderboard. The dataset is composed of 3 configuration, each one coresponding to one of the evaluated task. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results. An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-10-28T20:45:27.557635(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval): ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
[ "# Dataset Card for Evaluation run of HuggingFaceH4/starchat-alpha", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model HuggingFaceH4/starchat-alpha on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-28T20:45:27.557635(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of HuggingFaceH4/starchat-alpha", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model HuggingFaceH4/starchat-alpha on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-28T20:45:27.557635(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 6, 20, 31, 168, 66, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of HuggingFaceH4/starchat-alpha## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model HuggingFaceH4/starchat-alpha on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-28T20:45:27.557635(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions" ]
82c1498e1c9a8d356c32ff9830cbbe2d351e2940
# Dataset Card for "dolly-llama2-1k" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
AnanyaAJ/dolly-llama2-1k
[ "region:us" ]
2023-10-28T19:50:32+00:00
{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1734805, "num_examples": 1000}], "download_size": 1056790, "dataset_size": 1734805}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-28T19:50:33+00:00
[]
[]
TAGS #region-us
# Dataset Card for "dolly-llama2-1k" More Information needed
[ "# Dataset Card for \"dolly-llama2-1k\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"dolly-llama2-1k\"\n\nMore Information needed" ]
[ 6, 17 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"dolly-llama2-1k\"\n\nMore Information needed" ]
595789d3bc9620948d4e524c1031f0b4ff245525
# Dataset Card for "Tapal-66-TEXT" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
alihuzezy/Tapal-66-TEXT
[ "region:us" ]
2023-10-28T19:51:40+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 12266, "num_examples": 58}], "download_size": 7905, "dataset_size": 12266}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-28T19:57:17+00:00
[]
[]
TAGS #region-us
# Dataset Card for "Tapal-66-TEXT" More Information needed
[ "# Dataset Card for \"Tapal-66-TEXT\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"Tapal-66-TEXT\"\n\nMore Information needed" ]
[ 6, 17 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"Tapal-66-TEXT\"\n\nMore Information needed" ]
0860f1330487e8c1fc876eaf452a40e81d1b3474
# Dataset Card for "MFA_tweet_topics" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Bsbell21/MFA_tweet_topics
[ "region:us" ]
2023-10-28T19:52:48+00:00
{"dataset_info": {"features": [{"name": "tweet", "dtype": "string"}, {"name": "topics", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 21732, "num_examples": 121}], "download_size": 18513, "dataset_size": 21732}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-28T19:52:50+00:00
[]
[]
TAGS #region-us
# Dataset Card for "MFA_tweet_topics" More Information needed
[ "# Dataset Card for \"MFA_tweet_topics\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"MFA_tweet_topics\"\n\nMore Information needed" ]
[ 6, 17 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"MFA_tweet_topics\"\n\nMore Information needed" ]
f560de0450ae18d48911dccd14db032ad9760f71
# Dataset Card for "877f2204" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
result-kand2-sdxl-wuerst-karlo/877f2204
[ "region:us" ]
2023-10-28T20:09:40+00:00
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 185, "num_examples": 10}], "download_size": 1372, "dataset_size": 185}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-28T20:09:41+00:00
[]
[]
TAGS #region-us
# Dataset Card for "877f2204" More Information needed
[ "# Dataset Card for \"877f2204\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"877f2204\"\n\nMore Information needed" ]
[ 6, 14 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"877f2204\"\n\nMore Information needed" ]
1323e7212446bd8ba37d667c839b8379f73392e4
# Dataset Card for "hf_objdet_test" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
salma-remyx/hf_objdet_test
[ "region:us" ]
2023-10-28T20:10:22+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "objects", "struct": [{"name": "bbox", "sequence": {"sequence": "float64"}}, {"name": "categories", "sequence": "int64"}]}], "splits": [{"name": "train", "num_bytes": 7545187.0, "num_examples": 16}], "download_size": 7548342, "dataset_size": 7545187.0}}
2023-10-29T23:11:36+00:00
[]
[]
TAGS #region-us
# Dataset Card for "hf_objdet_test" More Information needed
[ "# Dataset Card for \"hf_objdet_test\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"hf_objdet_test\"\n\nMore Information needed" ]
[ 6, 18 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"hf_objdet_test\"\n\nMore Information needed" ]
5d76449d13a9266ba4540f8024e7b363404f89e1
# Dataset Card for cxllin/economics This dataset aims to represent knowledge within the realm of economics ## Dataset Details Featuring Macro, Micro, and Math textbooks ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. 
## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
cxllin/economics
[ "region:us" ]
2023-10-28T20:49:43+00:00
{}
2023-10-28T21:27:36+00:00
[]
[]
TAGS #region-us
# Dataset Card for cxllin/economics This dataset aims to represent knowledge within the realm of economics ## Dataset Details Featuring Macro, Micro, and Math textbooks ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for cxllin/economics\n\nThis dataset aims to represent knowledge within the realm of economics", "## Dataset Details\n\nFeaturing Macro, Micro, and Math texbooks", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for cxllin/economics\n\nThis dataset aims to represent knowledge within the realm of economics", "## Dataset Details\n\nFeaturing Macro, Micro, and Math texbooks", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 6, 25, 17, 40, 29, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for cxllin/economics\n\nThis dataset aims to represent knowledge within the realm of economics## Dataset Details\n\nFeaturing Macro, Micro, and Math texbooks### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact" ]
e0247e9b492c9c89bc154e2be29f58332e217509
# Dataset Card for Dataset Name <!-- Provide a quick summary of the dataset. --> This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. 
--> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
umangapatel123/mashq
[ "region:us" ]
2023-10-28T21:11:40+00:00
{}
2023-10-28T21:50:56+00:00
[]
[]
TAGS #region-us
# Dataset Card for Dataset Name This dataset card aims to be a base template for new datasets. It has been generated using this raw template. ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 6, 34, 4, 40, 29, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact" ]
dc9935dd4cd15594300586668b100ee2f5ad7d87
# Dataset Card for "acl-arc-revised" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ialvarenga/acl-arc-revised
[ "region:us" ]
2023-10-28T21:40:16+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "eval", "path": "data/eval-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "intent", "dtype": {"class_label": {"names": {"0": "0", "1": "1", "2": "2", "3": "3", "4": "4", "5": "5"}}}}], "splits": [{"name": "train", "num_bytes": 358284.1064718163, "num_examples": 1532}, {"name": "test", "num_bytes": 44902.44676409186, "num_examples": 192}, {"name": "eval", "num_bytes": 44902.44676409186, "num_examples": 192}], "download_size": 231094, "dataset_size": 448089.0}}
2023-10-28T21:40:25+00:00
[]
[]
TAGS #region-us
# Dataset Card for "acl-arc-revised" More Information needed
[ "# Dataset Card for \"acl-arc-revised\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"acl-arc-revised\"\n\nMore Information needed" ]
[ 6, 18 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"acl-arc-revised\"\n\nMore Information needed" ]
f270ec1509c6f35717d7b109ffc7a6b8929187ca
# Dataset Card for "wiki_20220301_en_nltk_uncased_phrases_clean" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
kinianlo/wiki_20220301_en_nltk_uncased_phrases_clean
[ "region:us" ]
2023-10-28T21:42:16+00:00
{"dataset_info": {"features": [{"name": "phrase_id", "dtype": "uint32"}, {"name": "adj_id", "dtype": "uint32"}, {"name": "noun_id", "dtype": "uint32"}, {"name": "count", "dtype": "uint64"}], "splits": [{"name": "train", "num_bytes": 67986800, "num_examples": 3399340}], "download_size": 41983842, "dataset_size": 67986800}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-28T21:42:23+00:00
[]
[]
TAGS #region-us
# Dataset Card for "wiki_20220301_en_nltk_uncased_phrases_clean" More Information needed
[ "# Dataset Card for \"wiki_20220301_en_nltk_uncased_phrases_clean\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"wiki_20220301_en_nltk_uncased_phrases_clean\"\n\nMore Information needed" ]
[ 6, 31 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"wiki_20220301_en_nltk_uncased_phrases_clean\"\n\nMore Information needed" ]
594ad4e1170e9640cb58e1e8b57bade02785864c
This dataset provides the complete ExpertMedQA dataset along with the responses generated by BooksMed, highlighting the dataset's diversity and complexity and giving a comprehensive overview of its questions. ExpertMedQA is a novel benchmark of open-ended, expert-level clinical questions that require not only an understanding of the most recent clinical literature but also an analysis of the strength of the evidence presented. From current treatment guidelines to open-ended discussions that call for knowledge and analysis grounded in current clinical research studies, the dataset covers a wide range of topics.
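A minimal loading sketch for this benchmark; the repo id comes from this record, but the split name and file layout below are assumptions, since the card does not document them:

```python
from datasets import load_dataset

# Hypothetical usage: the split name and automatic file detection are assumed,
# not confirmed by the card.
expertmedqa = load_dataset("satwant/ExpertMedQA", split="train")
print(expertmedqa[0])  # field names are undocumented; inspect the first record
```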
satwant/ExpertMedQA
[ "license:cc-by-nc-4.0", "region:us" ]
2023-10-28T21:43:33+00:00
{"license": "cc-by-nc-4.0"}
2023-10-28T21:44:57+00:00
[]
[]
TAGS #license-cc-by-nc-4.0 #region-us
This dataset provides the complete ExpertMedQA dataset along with the responses generated by BooksMed, highlighting the dataset's diversity and complexity and giving a comprehensive overview of its questions. ExpertMedQA is a novel benchmark of open-ended, expert-level clinical questions that require not only an understanding of the most recent clinical literature but also an analysis of the strength of the evidence presented. From current treatment guidelines to open-ended discussions that call for knowledge and analysis grounded in current clinical research studies, the dataset covers a wide range of topics.
[]
[ "TAGS\n#license-cc-by-nc-4.0 #region-us \n" ]
[ 17 ]
[ "passage: TAGS\n#license-cc-by-nc-4.0 #region-us \n" ]
488af01ae09608c531538d9ce31a053f6d625a61
# Dataset Card for "apples-dataset-v1" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
josedonoso/apples-dataset-v1
[ "region:us" ]
2023-10-28T22:35:50+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2704421.0, "num_examples": 192}, {"name": "test", "num_bytes": 646648.0, "num_examples": 48}], "download_size": 3236890, "dataset_size": 3351069.0}}
2023-10-28T22:35:52+00:00
[]
[]
TAGS #region-us
# Dataset Card for "apples-dataset-v1" More Information needed
[ "# Dataset Card for \"apples-dataset-v1\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"apples-dataset-v1\"\n\nMore Information needed" ]
[ 6, 18 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"apples-dataset-v1\"\n\nMore Information needed" ]
8ee9f70f23825ac09e59b0e1201c98e1cbaf29c9
# Dataset Card for "old_nlu_new_asr_v1" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
quocanh34/old_nlu_new_asr_v1
[ "region:us" ]
2023-10-28T22:38:40+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "audio", "struct": [{"name": "array", "sequence": "float32"}, {"name": "path", "dtype": "string"}, {"name": "sampling_rate", "dtype": "int64"}]}, {"name": "pred_str", "dtype": "string"}, {"name": "pred_str_norm", "dtype": "string"}, {"name": "intent", "dtype": "string"}, {"name": "entities", "list": [{"name": "filler", "dtype": "string"}, {"name": "type", "dtype": "string"}]}, {"name": "file", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 568314186, "num_examples": 2139}], "download_size": 462244745, "dataset_size": 568314186}}
2023-10-28T22:39:11+00:00
[]
[]
TAGS #region-us
# Dataset Card for "old_nlu_new_asr_v1" More Information needed
[ "# Dataset Card for \"old_nlu_new_asr_v1\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"old_nlu_new_asr_v1\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"old_nlu_new_asr_v1\"\n\nMore Information needed" ]
4c94a602b2e7cae3885c2f22440a3fdd1f0231bb
# Dataset Card for "cve_train_main" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
venkat-srinivasan-nexusflow/cve_train_prompt_change_only
[ "region:us" ]
2023-10-29T02:43:34+00:00
{"dataset_info": {"features": [{"name": "Input", "dtype": "string"}, {"name": "Output", "dtype": "string"}, {"name": "Cot", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 396691, "num_examples": 302}], "download_size": 119758, "dataset_size": 396691}}
2023-10-29T04:22:49+00:00
[]
[]
TAGS #region-us
# Dataset Card for "cve_train_main" More Information needed
[ "# Dataset Card for \"cve_train_main\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"cve_train_main\"\n\nMore Information needed" ]
[ 6, 17 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"cve_train_main\"\n\nMore Information needed" ]
40a88f6af0b879c564f940d433ee56dabca45c2e
# Dataset Card for "chest-xray" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
theophilusijiebor1/chest-xray
[ "region:us" ]
2023-10-29T03:07:44+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "NORMAL", "1": "PNEUMONIA"}}}}], "splits": [{"name": "train", "num_bytes": 3186635036.504, "num_examples": 5216}, {"name": "validation", "num_bytes": 3030633.0, "num_examples": 16}, {"name": "test", "num_bytes": 79062317.0, "num_examples": 624}], "download_size": 1230487171, "dataset_size": 3268727986.504}}
2023-10-29T03:08:57+00:00
[]
[]
TAGS #region-us
# Dataset Card for "chest-xray" More Information needed
[ "# Dataset Card for \"chest-xray\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"chest-xray\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"chest-xray\"\n\nMore Information needed" ]
5b5fa11c80464bca501710c6cf39758982c4b805
This is a Japanese version of the [FED dataset](http://shikib.com/fed_data.json), translated with the Google Cloud Translate API v2. Because the translation is machine-generated, some dimensions may not be properly consistent with the annotations. Take care when choosing which dimensions to use.
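Based on the configuration declared in this record's metadata (a single default config backed by fed_data.json with one train split), loading should look roughly like this; the field names inside each record are not documented here:

```python
from datasets import load_dataset

# Config and split follow this record's metadata; inspect the first example to
# see the (undocumented) field names.
fed_ja = load_dataset("yubo0306/fed_ja", split="train")
print(fed_ja[0])
```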
yubo0306/fed_ja
[ "task_categories:conversational", "language:ja", "license:unknown", "region:us" ]
2023-10-29T03:55:00+00:00
{"language": ["ja"], "license": "unknown", "task_categories": ["conversational"], "pretty_name": "fed_ja", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "fed_data.json"}]}]}
2023-10-29T04:26:57+00:00
[]
[ "ja" ]
TAGS #task_categories-conversational #language-Japanese #license-unknown #region-us
This is a Japanese version of the FED dataset, translated with the Google Cloud Translate API v2. Because the translation is machine-generated, some dimensions may not be properly consistent with the annotations. Take care when choosing which dimensions to use.
[]
[ "TAGS\n#task_categories-conversational #language-Japanese #license-unknown #region-us \n" ]
[ 29 ]
[ "passage: TAGS\n#task_categories-conversational #language-Japanese #license-unknown #region-us \n" ]
c1dd321866cb427fca66b629bda0c440cdedb0c4
100 Pascal Q&A pairs; 60% of them include an input string of some kind.
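A minimal, hypothetical loading sketch; beyond the repo id, everything below (automatic file detection, the train split, the column names) is an assumption, since the card does not document the layout:

```python
from datasets import load_dataset

# Hypothetical: assumes the repo's data files can be auto-detected by `datasets`.
pascal_qa = load_dataset("PsiPi/PascalQnA100", split="train")
print(pascal_qa[0])  # column names are undocumented; inspect the first record
```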
PsiPi/PascalQnA100
[ "task_categories:text-generation", "size_categories:n<1K", "language:en", "license:cc-by-4.0", "code", "region:us" ]
2023-10-29T04:04:23+00:00
{"language": ["en"], "license": "cc-by-4.0", "size_categories": ["n<1K"], "task_categories": ["text-generation"], "pretty_name": "pascal100", "tags": ["code"]}
2023-10-29T05:52:25+00:00
[]
[ "en" ]
TAGS #task_categories-text-generation #size_categories-n<1K #language-English #license-cc-by-4.0 #code #region-us
100 Pascal Q&A pairs; 60% of them include an input string of some kind.
[]
[ "TAGS\n#task_categories-text-generation #size_categories-n<1K #language-English #license-cc-by-4.0 #code #region-us \n" ]
[ 42 ]
[ "passage: TAGS\n#task_categories-text-generation #size_categories-n<1K #language-English #license-cc-by-4.0 #code #region-us \n" ]
0dfc5049770a124e4cd6f56c0e7205330235b480
# Dataset Card for "ig_rewarding_db_v3" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
toilaluan/ig_rewarding_db_v3
[ "region:us" ]
2023-10-29T04:04:43+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "topic", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "request_id", "dtype": "int64"}, {"name": "model_type", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 402950672.8, "num_examples": 9600}], "download_size": 577053325, "dataset_size": 402950672.8}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-29T04:05:22+00:00
[]
[]
TAGS #region-us
# Dataset Card for "ig_rewarding_db_v3" More Information needed
[ "# Dataset Card for \"ig_rewarding_db_v3\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"ig_rewarding_db_v3\"\n\nMore Information needed" ]
[ 6, 20 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"ig_rewarding_db_v3\"\n\nMore Information needed" ]
3c72989086ae32e56c704c0e96b984ad1f301c94
# Dataset Card for "t2i_reward_v3" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
toilaluan/t2i_reward_v3
[ "region:us" ]
2023-10-29T04:36:00+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "model_type", "dtype": "string"}, {"name": "request_id", "dtype": "int64"}, {"name": "topic", "dtype": "string"}, {"name": "reward", "dtype": "float64"}, {"name": "individual_rewards", "struct": [{"name": "image_rewarder", "dtype": "float64"}, {"name": "hps_v2_rewarder", "dtype": "float64"}]}], "splits": [{"name": "train", "num_bytes": 205400, "num_examples": 2400}], "download_size": 49480, "dataset_size": 205400}}
2023-10-29T04:36:02+00:00
[]
[]
TAGS #region-us
# Dataset Card for "t2i_reward_v3" More Information needed
[ "# Dataset Card for \"t2i_reward_v3\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"t2i_reward_v3\"\n\nMore Information needed" ]
[ 6, 19 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"t2i_reward_v3\"\n\nMore Information needed" ]
9cab84910b0eb5a7cd8c301d4bfd46145795c303
# Dataset Card for AudioDataset-15 ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) ## Dataset Description ### Dataset Summary The dataset in question is an audio dataset consisting of recordings in the Urdu language. It has been sourced from Mozilla's Common Voice, a publicly available voice dataset that relies on the contributions of volunteers from various parts of the world. The primary purpose of this dataset is to support the development of voice applications by providing a valuable resource for training machine learning models. The dataset's intended use is to facilitate voice-to-text conversion in the Urdu language. By utilizing this dataset, researchers, developers, and anyone interested in voice technology can train models that accurately convert spoken Urdu words into written text. This can have significant applications in various domains, such as speech recognition, transcription services, language learning tools, and more. ### Languages The dataset consists of audio recordings in the Urdu language. Urdu is a language primarily spoken in Pakistan and parts of India. It is one of the 22 officially recognized languages in India and is also widely spoken by the Pakistani diaspora around the world. The dataset is primarily focused on spoken Urdu, which encompasses a wide range of topics and genres. It is important to note that the dataset's content may vary, covering conversations, speeches, interviews, narratives, and other forms of vocal communication in the Urdu language. ## Dataset Structure ### Data Instances { "client_id": "0c9690e5a2d1bb3ce418954a2b70acae53153708f6c3a21c9e8fe7e3912d97ba805ace5091772c8d4e16dc07fc906ca4956335b87821c244eee8129a15fcb0cf", "file_name": "data/test/common_voice_ur_26641307.mp3", "transcription": "تو ان کے حلاج مدلوں کا کیا حال ہے؟", "up_votes": 2, "down_votes": 0, "age": "twenties", "gender": "female", "accent": "", "locale": "ur", "segment": "" } ### Data Fields <li>client_id: A unique identifier for the client or contributor who provided the recording. (Data Type: String)</li> <li>file_name: The file name or path of the audio file. (Data Type: String)</li> <li>transcription: The transcription of the spoken content in the Urdu language. (Data Type: String)</li> <li>up_votes: The number of upvotes received for the recording. (Data Type: Integer)</li> <li>down_votes: The number of downvotes received for the recording. (Data Type: Integer)</li> <li>age: The age group of the speaker. (Data Type: String)</li> <li>gender: The gender of the speaker. (Data Type: String)</li> <li>accent: The accent of the speaker, if applicable. (Data Type: String)</li> <li>locale: The locale or language code, which is "ur" for Urdu in this case. (Data Type: String)</li> <li>segment: Additional segment information, if available. (Data Type: String)</li> ### Data Splits The dataset is divided into three splits: train, test, and validation. The training set is used to train the model, the validation set is used to tune hyperparameters and evaluate model performance during training, and the test set is used to evaluate the final model's performance after training. | | train | validation | test | |-------------------------|------:|-----------:|-----: | | Amount | 5324 | 42418 | 4031 |
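A minimal loading sketch that follows the split layout and feature names declared in this record's metadata; audio decoding additionally assumes the usual audio dependencies (e.g. soundfile) are installed:

```python
from datasets import load_dataset

# Splits and feature names follow this record's metadata.
urdu_train = load_dataset("HowMannyMore/urdu-audiodataset", split="train")

sample = urdu_train[0]
print(sample["transcription"])           # Urdu transcription of the utterance
print(sample["audio"]["sampling_rate"])  # decoded audio: array, path, sampling_rate
```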
HowMannyMore/urdu-audiodataset
[ "task_categories:conversational", "task_categories:translation", "language:ur", "code", "region:us" ]
2023-10-29T04:57:11+00:00
{"language": ["ur"], "task_categories": ["conversational", "translation"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "client_id", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accents", "dtype": "string"}, {"name": "variant", "dtype": "float64"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 133629462.356, "num_examples": 5324}, {"name": "validation", "num_bytes": 1039373547.526, "num_examples": 42418}, {"name": "test", "num_bytes": 107435663.014, "num_examples": 4031}], "download_size": 1266451644, "dataset_size": 1280438672.896}, "tags": ["code"]}
2023-10-29T06:30:46+00:00
[]
[ "ur" ]
TAGS #task_categories-conversational #task_categories-translation #language-Urdu #code #region-us
Dataset Card for AudioDataset-15 ================================ Table of Contents ----------------- * Table of Contents * Dataset Description + Dataset Summary + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits Dataset Description ------------------- ### Dataset Summary The dataset in question is an audio dataset consisting of recordings in the Urdu language. It has been sourced from Mozilla's Common Voice, a publicly available voice dataset that relies on the contributions of volunteers from various parts of the world. The primary purpose of this dataset is to support the development of voice applications by providing a valuable resource for training machine learning models. The dataset's intended use is to facilitate voice-to-text conversion in the Urdu language. By utilizing this dataset, researchers, developers, and anyone interested in voice technology can train models that accurately convert spoken Urdu words into written text. This can have significant applications in various domains, such as speech recognition, transcription services, language learning tools, and more. ### Languages The dataset consists of audio recordings in the Urdu language. Urdu is a language primarily spoken in Pakistan and parts of India. It is one of the 22 officially recognized languages in India and is also widely spoken by the Pakistani diaspora around the world. The dataset is primarily focused on spoken Urdu, which encompasses a wide range of topics and genres. It is important to note that the dataset's content may vary, covering conversations, speeches, interviews, narratives, and other forms of vocal communication in the Urdu language. Dataset Structure ----------------- ### Data Instances { "client\_id": "0c9690e5a2d1bb3ce418954a2b70acae53153708f6c3a21c9e8fe7e3912d97ba805ace5091772c8d4e16dc07fc906ca4956335b87821c244eee8129a15fcb0cf", "file\_name": "data/test/common\_voice\_ur\_26641307.mp3", "transcription": "تو ان کے حلاج مدلوں کا کیا حال ہے؟", "up\_votes": 2, "down\_votes": 0, "age": "twenties", "gender": "female", "accent": "", "locale": "ur", "segment": "" } ### Data Fields - client\_id: A unique identifier for the client or contributor who provided the recording. (Data Type: String) - file\_name: The file name or path of the audio file. (Data Type: String) - transcription: The transcription of the spoken content in the Urdu language. (Data Type: String) - up\_votes: The number of upvotes received for the recording. (Data Type: Integer) - down\_votes: The number of downvotes received for the recording. (Data Type: Integer) - age: The age group of the speaker. (Data Type: String) - gender: The gender of the speaker. (Data Type: String) - accent: The accent of the speaker, if applicable. (Data Type: String) - locale: The locale or language code, which is "ur" for Urdu in this case. (Data Type: String) - segment: Additional segment information, if available. (Data Type: String) ### Data Splits The dataset is divided into three splits: train, test, and validation. The training set is used to train the model, the validation set is used to tune hyperparameters and evaluate model performance during training, and the test set is used to evaluate the final model's performance after training.
[ "### Dataset Summary\n\n\nThe dataset in question is an audio dataset consisting of recordings in the Urdu language. It has been sourced from Mozilla's Common Voice, a publicly available voice dataset that relies on the contributions of volunteers from various parts of the world. The primary purpose of this dataset is to support the development of voice applications by providing a valuable resource for training machine learning models.\n\n\nThe dataset's intended use is to facilitate voice-to-text conversion in the Urdu language. By utilizing this dataset, researchers, developers, and anyone interested in voice technology can train models that accurately convert spoken Urdu words into written text. This can have significant applications in various domains, such as speech recognition, transcription services, language learning tools, and more.", "### Languages\n\n\nThe dataset consists of audio recordings in the Urdu language. Urdu is a language primarily spoken in Pakistan and parts of India. It is one of the 22 officially recognized languages in India and is also widely spoken by the Pakistani diaspora around the world.\n\n\nThe dataset is primarily focused on spoken Urdu, which encompasses a wide range of topics and genres. It is important to note that the dataset's content may vary, covering conversations, speeches, interviews, narratives, and other forms of vocal communication in the Urdu language.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\n{\n\"client\\_id\": \"0c9690e5a2d1bb3ce418954a2b70acae53153708f6c3a21c9e8fe7e3912d97ba805ace5091772c8d4e16dc07fc906ca4956335b87821c244eee8129a15fcb0cf\",\n\"file\\_name\": \"data/test/common\\_voice\\_ur\\_26641307.mp3\",\n\"transcription\": \"تو ان کے حلاج مدلوں کا کیا حال ہے؟\",\n\"up\\_votes\": 2,\n\"down\\_votes\": 0,\n\"age\": \"twenties\",\n\"gender\": \"female\",\n\"accent\": \"\",\n\"locale\": \"ur\",\n\"segment\": \"\"\n}", "### Data Fields\n\n\n- client\\_id: A unique identifier for the client or contributor who provided the recording. (Data Type: String)\n\n- file\\_name: The file name or path of the audio file. (Data Type: String)\n\n- transcription: The transcription of the spoken content in the Urdu language. (Data Type: String)\n\n- up\\_votes: The number of upvotes received for the recording. (Data Type: Integer)\n\n- down\\_votes: The number of downvotes received for the recording. (Data Type: Integer)\n\n- age: The age group of the speaker. (Data Type: String)\n\n- gender: The gender of the speaker. (Data Type: String)\n\n- accent: The accent of the speaker, if applicable. (Data Type: String)\n\n- locale: The locale or language code, which is \"ur\" for Urdu in this case. (Data Type: String)\n\n- segment: Additional segment information, if available. (Data Type: String)", "### Data Splits\n\n\nThe dataset is divided into three splits: train, test, and validation. The training set is used to train the model, the validation set is used to tune hyperparameters and evaluate model performance during training, and the test set is used to evaluate the final model's performance after training." ]
[ "TAGS\n#task_categories-conversational #task_categories-translation #language-Urdu #code #region-us \n", "### Dataset Summary\n\n\nThe dataset in question is an audio dataset consisting of recordings in the Urdu language. It has been sourced from Mozilla's Common Voice, a publicly available voice dataset that relies on the contributions of volunteers from various parts of the world. The primary purpose of this dataset is to support the development of voice applications by providing a valuable resource for training machine learning models.\n\n\nThe dataset's intended use is to facilitate voice-to-text conversion in the Urdu language. By utilizing this dataset, researchers, developers, and anyone interested in voice technology can train models that accurately convert spoken Urdu words into written text. This can have significant applications in various domains, such as speech recognition, transcription services, language learning tools, and more.", "### Languages\n\n\nThe dataset consists of audio recordings in the Urdu language. Urdu is a language primarily spoken in Pakistan and parts of India. It is one of the 22 officially recognized languages in India and is also widely spoken by the Pakistani diaspora around the world.\n\n\nThe dataset is primarily focused on spoken Urdu, which encompasses a wide range of topics and genres. It is important to note that the dataset's content may vary, covering conversations, speeches, interviews, narratives, and other forms of vocal communication in the Urdu language.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\n{\n\"client\\_id\": \"0c9690e5a2d1bb3ce418954a2b70acae53153708f6c3a21c9e8fe7e3912d97ba805ace5091772c8d4e16dc07fc906ca4956335b87821c244eee8129a15fcb0cf\",\n\"file\\_name\": \"data/test/common\\_voice\\_ur\\_26641307.mp3\",\n\"transcription\": \"تو ان کے حلاج مدلوں کا کیا حال ہے؟\",\n\"up\\_votes\": 2,\n\"down\\_votes\": 0,\n\"age\": \"twenties\",\n\"gender\": \"female\",\n\"accent\": \"\",\n\"locale\": \"ur\",\n\"segment\": \"\"\n}", "### Data Fields\n\n\n- client\\_id: A unique identifier for the client or contributor who provided the recording. (Data Type: String)\n\n- file\\_name: The file name or path of the audio file. (Data Type: String)\n\n- transcription: The transcription of the spoken content in the Urdu language. (Data Type: String)\n\n- up\\_votes: The number of upvotes received for the recording. (Data Type: Integer)\n\n- down\\_votes: The number of downvotes received for the recording. (Data Type: Integer)\n\n- age: The age group of the speaker. (Data Type: String)\n\n- gender: The gender of the speaker. (Data Type: String)\n\n- accent: The accent of the speaker, if applicable. (Data Type: String)\n\n- locale: The locale or language code, which is \"ur\" for Urdu in this case. (Data Type: String)\n\n- segment: Additional segment information, if available. (Data Type: String)", "### Data Splits\n\n\nThe dataset is divided into three splits: train, test, and validation. The training set is used to train the model, the validation set is used to tune hyperparameters and evaluate model performance during training, and the test set is used to evaluate the final model's performance after training." ]
[ 32, 170, 140, 206, 227, 72 ]
[ "passage: TAGS\n#task_categories-conversational #task_categories-translation #language-Urdu #code #region-us \n### Dataset Summary\n\n\nThe dataset in question is an audio dataset consisting of recordings in the Urdu language. It has been sourced from Mozilla's Common Voice, a publicly available voice dataset that relies on the contributions of volunteers from various parts of the world. The primary purpose of this dataset is to support the development of voice applications by providing a valuable resource for training machine learning models.\n\n\nThe dataset's intended use is to facilitate voice-to-text conversion in the Urdu language. By utilizing this dataset, researchers, developers, and anyone interested in voice technology can train models that accurately convert spoken Urdu words into written text. This can have significant applications in various domains, such as speech recognition, transcription services, language learning tools, and more.### Languages\n\n\nThe dataset consists of audio recordings in the Urdu language. Urdu is a language primarily spoken in Pakistan and parts of India. It is one of the 22 officially recognized languages in India and is also widely spoken by the Pakistani diaspora around the world.\n\n\nThe dataset is primarily focused on spoken Urdu, which encompasses a wide range of topics and genres. It is important to note that the dataset's content may vary, covering conversations, speeches, interviews, narratives, and other forms of vocal communication in the Urdu language.\n\n\nDataset Structure\n-----------------" ]
977c60eac836e5d00f6119baf2a56f8e7e8260f7
# Dataset Card for "sql-create-context-5000rows" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Naveengo/sql-create-context-5000rows
[ "region:us" ]
2023-10-29T05:18:20+00:00
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "context", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1104644.8706364457, "num_examples": 5000}], "download_size": 548687, "dataset_size": 1104644.8706364457}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-29T05:18:24+00:00
[]
[]
TAGS #region-us
# Dataset Card for "sql-create-context-5000rows" More Information needed
[ "# Dataset Card for \"sql-create-context-5000rows\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"sql-create-context-5000rows\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"sql-create-context-5000rows\"\n\nMore Information needed" ]
d74bbf6403c377c86e37e99862b70cafea756c6c
# Topic: Using gpt4 api to solve the following question A {x} * {y} grid map with {(x)*(y)} blue dots(with x from 0 to {x-1} and y from 0 to {y-1}) representing coordinates and lines connecting them. Each adjacent coordinate is connected by a blue straight line, either horizontally or vertically, and the distance of 1. The grid starts from (0,0) at the bottom left and goes up to {x-1} * {y-1} at the top right. Obstacles on the grid are represented by the absence of blue dots at specific coordinates, as well as the missing adjacent blue lines connecting them. The coordinates with obstacles are:(obs_coordinates) with x and y from 0 to {x-1}, corresponding to the coordinate system. Thus, I need to find the shortest path from (0,0)to {x-1} * {y-1} to solve this problem you are required 1. moving each step either horizontally or vertically and avoiding obstacles by distance of 1. 2. don't recap the problem, please get straight to solve it 3. you will get the history of this problem which you did before 4. only need text-based path coordinates results " # Step 1: Generate random obstacles 1. Grid size: 3*3 to 11*11, Percentage of obstacles: 0%, 5%, 10%, 15%, 20%, 25%, Number of groups: 10 for each percentage # Step 2: Use Dijkstra and Astar to find the shortest path # Step 3: Use gpt4 api to find the shortest path # Step 4: Format the gpt4 output: complete output and summary output # Step 5: Compare Dijkstra/Astar and gpt4 answer to evaluate if the gpt4 answer is the shortest path * we cannot tell the no-path answer by code, you can check it manually 1. Length should be equal to the length of the Dijkstra path 2. Bypass the obstacles 3. Start coordinate and end coordinate should be (0,0) and (x-1,y-1) 4. Each step should be 1
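For reference, the baseline in Steps 2 and 5 is a standard shortest-path search. Below is a minimal sketch — not the code used to build this dataset; the function name and the example obstacles are invented for illustration — of a breadth-first search over the grid, which is equivalent to Dijkstra when every move costs exactly 1:

```python
from collections import deque

def shortest_grid_path(x, y, obstacles):
    """Shortest path on an x*y grid from (0, 0) to (x-1, y-1), avoiding obstacles.

    BFS is equivalent to Dijkstra here because every move costs exactly 1.
    Returns the list of coordinates on a shortest path, or None when no path exists.
    """
    blocked = set(obstacles)
    start, goal = (0, 0), (x - 1, y - 1)
    if start in blocked or goal in blocked:
        return None
    prev = {start: None}                 # also serves as the visited set
    queue = deque([start])
    while queue:
        cx, cy = queue.popleft()
        if (cx, cy) == goal:             # reconstruct the path back to the start
            path, node = [], goal
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nx, ny in ((cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)):
            if 0 <= nx < x and 0 <= ny < y and (nx, ny) not in blocked and (nx, ny) not in prev:
                prev[(nx, ny)] = (cx, cy)
                queue.append((nx, ny))
    return None

# Hypothetical example: a 4x4 grid with obstacles at (1, 1) and (2, 2)
print(shortest_grid_path(4, 4, [(1, 1), (2, 2)]))
```

The Step 5 checks can then be run against the returned path: compare its length to this baseline, confirm no visited cell is an obstacle, check both endpoints, and verify that each consecutive pair of coordinates differs by exactly one step.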
XinyaoHu/gpt4-answer-to-GridPath-2d
[ "region:us" ]
2023-10-29T05:21:03+00:00
{}
2023-11-22T21:04:20+00:00
[]
[]
TAGS #region-us
# Topic: Using gpt4 api to solve the following question A {x} * {y} grid map with {(x)*(y)} blue dots(with x from 0 to {x-1} and y from 0 to {y-1}) representing coordinates and lines connecting them. Each adjacent coordinate is connected by a blue straight line, either horizontally or vertically, and the distance of 1. The grid starts from (0,0) at the bottom left and goes up to {x-1} * {y-1} at the top right. Obstacles on the grid are represented by the absence of blue dots at specific coordinates, as well as the missing adjacent blue lines connecting them. The coordinates with obstacles are:(obs_coordinates) with x and y from 0 to {x-1}, corresponding to the coordinate system. Thus, I need to find the shortest path from (0,0)to {x-1} * {y-1} to solve this problem you are required 1. moving each step either horizontally or vertically and avoiding obstacles by distance of 1. 2. don't recap the problem, please get straight to solve it 3. you will get the history of this problem which you did before 4. only need text-based path coordinates results " # Step 1: Generate random obstacles 1. Grid size: 3*3 to 11*11,Percentage of obstacles: 0%, 5%, 10%, 15%, 20%, 25%, Number of groups: 10 for each percentage # Step 2: Use Dijkstra and Astar to find the shortest path # Step 3: Use gpt4 api to find the shortest path # Step 4: Format the gpt4 output: complete output and summary output # Step 5: Compare Dijkstra/astar and gpt4 answer to evaluate if gpt4 answer is the shortest path * we cannot tell the no path answer by code, you can check it manually 1. length should be equal to the length of Dijkstra 2. Bypass the obstacles 3. Start coordinate and end coordinate should be (0,0) and (x-1,y-1) 3. Each step should be 1
[ "# Topic: Using gpt4 api to solve the following question\n\nA {x} * {y} grid map with {(x)*(y)} blue dots(with x from 0 to {x-1} and y from 0 to {y-1}) representing coordinates and lines connecting them. Each adjacent coordinate is connected by a blue straight line, either horizontally or vertically, and the distance of 1. The grid starts from (0,0) at the bottom left and goes up to {x-1} * {y-1} at the top right. Obstacles on the grid are represented by the absence of blue dots at specific coordinates, as well as the missing adjacent blue lines connecting them. The coordinates with obstacles are:(obs_coordinates) with x and y from 0 to {x-1}, corresponding to the coordinate system. Thus, I need to find the shortest path from (0,0)to {x-1} * {y-1} to solve this problem you are required 1. moving each step either horizontally or vertically and avoiding obstacles by distance of 1. 2. don't recap the problem, please get straight to solve it 3. you will get the history of this problem which you did before 4. only need text-based path coordinates results \"", "# Step 1: Generate random obstacles\n 1. Grid size: 3*3 to 11*11,Percentage of obstacles: 0%, 5%, 10%, 15%, 20%, 25%, Number of groups: 10 for each percentage", "# Step 2: Use Dijkstra and Astar to find the shortest path", "# Step 3: Use gpt4 api to find the shortest path", "# Step 4: Format the gpt4 output: complete output and summary output", "# Step 5: Compare Dijkstra/astar and gpt4 answer to evaluate if gpt4 answer is the shortest path\n* we cannot tell the no path answer by code, you can check it manually\n 1. length should be equal to the length of Dijkstra \n 2. Bypass the obstacles\n 3. Start coordinate and end coordinate should be (0,0) and (x-1,y-1)\n 3. Each step should be 1" ]
[ "TAGS\n#region-us \n", "# Topic: Using gpt4 api to solve the following question\n\nA {x} * {y} grid map with {(x)*(y)} blue dots(with x from 0 to {x-1} and y from 0 to {y-1}) representing coordinates and lines connecting them. Each adjacent coordinate is connected by a blue straight line, either horizontally or vertically, and the distance of 1. The grid starts from (0,0) at the bottom left and goes up to {x-1} * {y-1} at the top right. Obstacles on the grid are represented by the absence of blue dots at specific coordinates, as well as the missing adjacent blue lines connecting them. The coordinates with obstacles are:(obs_coordinates) with x and y from 0 to {x-1}, corresponding to the coordinate system. Thus, I need to find the shortest path from (0,0)to {x-1} * {y-1} to solve this problem you are required 1. moving each step either horizontally or vertically and avoiding obstacles by distance of 1. 2. don't recap the problem, please get straight to solve it 3. you will get the history of this problem which you did before 4. only need text-based path coordinates results \"", "# Step 1: Generate random obstacles\n 1. Grid size: 3*3 to 11*11,Percentage of obstacles: 0%, 5%, 10%, 15%, 20%, 25%, Number of groups: 10 for each percentage", "# Step 2: Use Dijkstra and Astar to find the shortest path", "# Step 3: Use gpt4 api to find the shortest path", "# Step 4: Format the gpt4 output: complete output and summary output", "# Step 5: Compare Dijkstra/astar and gpt4 answer to evaluate if gpt4 answer is the shortest path\n* we cannot tell the no path answer by code, you can check it manually\n 1. length should be equal to the length of Dijkstra \n 2. Bypass the obstacles\n 3. Start coordinate and end coordinate should be (0,0) and (x-1,y-1)\n 3. Each step should be 1" ]
[ 6, 286, 50, 15, 14, 15, 90 ]
[ "passage: TAGS\n#region-us \n# Topic: Using gpt4 api to solve the following question\n\nA {x} * {y} grid map with {(x)*(y)} blue dots(with x from 0 to {x-1} and y from 0 to {y-1}) representing coordinates and lines connecting them. Each adjacent coordinate is connected by a blue straight line, either horizontally or vertically, and the distance of 1. The grid starts from (0,0) at the bottom left and goes up to {x-1} * {y-1} at the top right. Obstacles on the grid are represented by the absence of blue dots at specific coordinates, as well as the missing adjacent blue lines connecting them. The coordinates with obstacles are:(obs_coordinates) with x and y from 0 to {x-1}, corresponding to the coordinate system. Thus, I need to find the shortest path from (0,0)to {x-1} * {y-1} to solve this problem you are required 1. moving each step either horizontally or vertically and avoiding obstacles by distance of 1. 2. don't recap the problem, please get straight to solve it 3. you will get the history of this problem which you did before 4. only need text-based path coordinates results \"# Step 1: Generate random obstacles\n 1. Grid size: 3*3 to 11*11,Percentage of obstacles: 0%, 5%, 10%, 15%, 20%, 25%, Number of groups: 10 for each percentage# Step 2: Use Dijkstra and Astar to find the shortest path# Step 3: Use gpt4 api to find the shortest path# Step 4: Format the gpt4 output: complete output and summary output# Step 5: Compare Dijkstra/astar and gpt4 answer to evaluate if gpt4 answer is the shortest path\n* we cannot tell the no path answer by code, you can check it manually\n 1. length should be equal to the length of Dijkstra \n 2. Bypass the obstacles\n 3. Start coordinate and end coordinate should be (0,0) and (x-1,y-1)\n 3. Each step should be 1" ]
b2a7b1ea1b53cf75bc06abb6047fdbed6b8e2406
# Dataset Card for "Ultrachat-Multiple-Conversations-Alpaca-Style" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
health360/Ultrachat-Multiple-Conversations-Alpaca-Style
[ "region:us" ]
2023-10-29T06:02:56+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9289363199, "num_examples": 1468352}], "download_size": 4593179681, "dataset_size": 9289363199}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-29T06:08:22+00:00
[]
[]
TAGS #region-us
# Dataset Card for "Ultrachat-Multiple-Conversations-Alpaca-Style" More Information needed
[ "# Dataset Card for \"Ultrachat-Multiple-Conversations-Alpaca-Style\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"Ultrachat-Multiple-Conversations-Alpaca-Style\"\n\nMore Information needed" ]
[ 6, 27 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"Ultrachat-Multiple-Conversations-Alpaca-Style\"\n\nMore Information needed" ]
6319fda2c9ae6ea91a5c039ffaaae5603b6639ae
# Dataset Card for "CPTDS-3" 1. CPTDS-3 dataset is made up of coding problem questions from multiple coding websites. 2. The 'DS' in the name stands for data structures and the '3' indicates that the questions belong to 3 mutually exclusive categories 3. The dataset was prepared for the research work names [Stacking of Hyperparameter Tuned Models for Tagging Coding Problems](https://arxiv.org/abs/2306.10077#:~:text=In%20this%20work%2C%20we%20propose,models%20developed%20for%20this%20work.) ## Languages The dataset consists of questions only in English ## Dataset Structure ### Data Instances For each instance, there is a string for the question, a string for the class label. ``` {'question': 'Andrew love sea that s height summer season decide beach take sunbe sunbatheThe beach rectangular field n row m column some cell beach free road stone shop nonmovable object some adjacent cell sunbed locate horizontally verticallyAndrew hope sunbe that s bad luck long free place that s Andrew ask help find free place sunbe Andrews sunbe place adjacent cell if adjacent free cell order free place sunbe disturb tourist you follow action come sunbe cause p unit discomfort owner lift sunbe side rotate 90 degree one half sunbe remain cell half sunbe free cell at time way sunbe rotation Rotation sunbe 90 degree cell 1 2 come sunbe cause q unit discomfort owner shift sunbe long cell one half sunbe place free cell Shift sunbe cell right in moment sunbe occupie adjacent free cell you sunbe timehelp Andrew free space sunbe cause minimum possible number unit discomfort tourist detect impossible', 'label': 1} ``` The average token count for the articles and the highlights are provided below: | Feature | Mean Token Count | | ---------- | ---------------- | | Question | 94.02 | ### Data Fields - `question`: a string containing the question of the coding problem - `label` : a string containing the tag of the question ### Data Splits The CPTDS-3 dataset has just 1 split: _train_. Below is the statistics for the dataset. | Dataset Split | Number of Instances in Split | | ------------- | -------------------------------- | | Train | 3012 | [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
SkAndMl/CPTDS-3
[ "task_categories:text-classification", "language:en", "arxiv:2306.10077", "doi:10.57967/hf/1284", "region:us" ]
2023-10-29T06:03:43+00:00
{"language": ["en"], "task_categories": ["text-classification"], "pretty_name": "cptds-3", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "array", "1": "graph", "2": "string"}}}}], "splits": [{"name": "train", "num_bytes": 1836512, "num_examples": 3012}], "download_size": 874048, "dataset_size": 1836512}}
2023-10-29T06:23:31+00:00
[ "2306.10077" ]
[ "en" ]
TAGS #task_categories-text-classification #language-English #arxiv-2306.10077 #doi-10.57967/hf/1284 #region-us
Dataset Card for "CPTDS-3" ========================== 1. CPTDS-3 dataset is made up of coding problem questions from multiple coding websites. 2. The 'DS' in the name stands for data structures and the '3' indicates that the questions belong to 3 mutually exclusive categories 3. The dataset was prepared for the research work names Stacking of Hyperparameter Tuned Models for Tagging Coding Problems Languages --------- The dataset consists of questions only in English Dataset Structure ----------------- ### Data Instances For each instance, there is a string for the question, a string for the class label. The average token count for the articles and the highlights are provided below: ### Data Fields * 'question': a string containing the question of the coding problem * 'label' : a string containing the tag of the question ### Data Splits The CPTDS-3 dataset has just 1 split: *train*. Below is the statistics for the dataset. More Information needed
[ "### Data Instances\n\n\nFor each instance, there is a string for the question, a string for the class label.\n\n\nThe average token count for the articles and the highlights are provided below:", "### Data Fields\n\n\n* 'question': a string containing the question of the coding problem\n* 'label' : a string containing the tag of the question", "### Data Splits\n\n\nThe CPTDS-3 dataset has just 1 split: *train*. Below is the statistics for the dataset.\n\n\n\nMore Information needed" ]
[ "TAGS\n#task_categories-text-classification #language-English #arxiv-2306.10077 #doi-10.57967/hf/1284 #region-us \n", "### Data Instances\n\n\nFor each instance, there is a string for the question, a string for the class label.\n\n\nThe average token count for the articles and the highlights are provided below:", "### Data Fields\n\n\n* 'question': a string containing the question of the coding problem\n* 'label' : a string containing the tag of the question", "### Data Splits\n\n\nThe CPTDS-3 dataset has just 1 split: *train*. Below is the statistics for the dataset.\n\n\n\nMore Information needed" ]
[ 41, 41, 36, 36 ]
[ "passage: TAGS\n#task_categories-text-classification #language-English #arxiv-2306.10077 #doi-10.57967/hf/1284 #region-us \n### Data Instances\n\n\nFor each instance, there is a string for the question, a string for the class label.\n\n\nThe average token count for the articles and the highlights are provided below:### Data Fields\n\n\n* 'question': a string containing the question of the coding problem\n* 'label' : a string containing the tag of the question### Data Splits\n\n\nThe CPTDS-3 dataset has just 1 split: *train*. Below is the statistics for the dataset.\n\n\n\nMore Information needed" ]
17294bcca10b4daba3055f480986f52e754095a0
# Special Note We have open-sourced a preliminary version of our dataset. However, please note that the experimental version of the dataset, which includes additional images, is currently undergoing review by our school's ethics committee. We will update the repository with the latest version of the dataset as soon as possible. Thank you for your understanding. # Dataset Card for Dataset Name <!-- Provide a quick summary of the dataset. --> vrpbench is a benchmark dataset designed for visual referring prompting. The dataset includes original images and their variants annotated with specific referring prompts. The original images are sourced from (1) [Mathvista](https://huggingface.co/datasets/AI4Math/MathVista) and (2) examples we manually crafted. The variants are manually labeled and recorded by the creators. Each image is accompanied by a question that has been created and verified by humans. ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** Zonkey LEE - **Funded by [optional]:** HKUST CSE - **Shared by [optional]:** SKYWF - **Language(s) (NLP):** EN - **License:** cc-by-sa-4.0 <!-- ### Dataset Sources [optional] --> <!-- Provide the basic links for the dataset. --> <!-- - **Repository:** [More Information Needed] --> <!-- - **Paper [optional]:** [More Information Needed] --> <!-- - **Demo [optional]:** [More Information Needed] --> ## License The new contributions to our dataset are distributed under the [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/) license, including - The creation of our dataset; - The filtering and cleaning of source datasets; - The standard formalization of instances for evaluation purposes; - The annotations of metadata. The copyright of the images and the questions belongs to the original authors; the copyright of newly introduced images and all the questions belongs to Zonkey LEE. Alongside this license, the following conditions apply: - **Purpose:** The dataset was primarily designed for use as a test set. - **Commercial Use:** The dataset can be used commercially as a test set, but using it as a training set is prohibited. By accessing or using this dataset, you acknowledge and agree to abide by these terms in conjunction with the [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/) license. ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Data Downloading All the data examples are in the *test* split. - **test**: 2,145 examples for standard evaluation. Notably, the answer labels for test will NOT be publicly released.
You can download this dataset by the following command (make sure that you have installed [Huggingface Datasets](https://huggingface.co/docs/datasets/quickstart)): ```python from datasets import load_dataset dataset = load_dataset("mujif/VisualReferPrompt") ``` Here are some examples of how to access the downloaded dataset: ```python # print the first example on the test set print(dataset["test"][0]) print(dataset["test"][0]['qid']) # print the problem id print(dataset["test"][0]['category']) # print the question category print(dataset["test"][0]['ori_image']) # print the original image path print(dataset["test"][0]['question']) # print the query text print(dataset["test"][0]['gt_answer']) # print the answer print(dataset["test"][0]['img_size']) # print the image size print(dataset["test"][0]['vis_ref_type']) # print the visual referring type print(dataset["test"][0]['details']) # print the details dataset["test"][0]['image'] # display the image ``` ## Dataset Creation ### Data Source The **VisualReferPrompt** dataset is derived from the newly collected dataset MathVista, which contains three datasets: IQTest, FunctionQA, and Paper, as well as 28 other source datasets. All these source datasets have been preprocessed and labeled for evaluation purposes. ### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> Notably, to avoid personal information and follow the rules of current LMMs, **we do not include any portrait images**. ### Automatic Evaluation 🔔 To automatically evaluate a model on the dataset, please refer to our GitHub repository [here](). If you use the **VisualReferPrompt** dataset in your work, please kindly cite the paper using this BibTeX: Our paper will soon be published, please wait.
mujif/VisualReferPrompt
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:visual-question-answering", "language_creators:expert-generated", "language_creators:found", "size_categories:1K<n<10K", "language:en", "license:cc-by-sa-4.0", "region:us" ]
2023-10-29T06:04:17+00:00
{"language_creators": ["expert-generated", "found"], "language": ["en"], "license": "cc-by-sa-4.0", "size_categories": ["1K<n<10K"], "task_categories": ["multiple-choice", "question-answering", "visual-question-answering"], "configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "qid", "dtype": "int64"}, {"name": "category", "dtype": "string"}, {"name": "ori_image", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "gt_answer", "dtype": "string"}, {"name": "img_size", "dtype": "string"}, {"name": "vis_ref_type", "dtype": "string"}, {"name": "details", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 86532947.615, "num_examples": 2145}], "download_size": 90509102, "dataset_size": 86532947.615}}
2023-12-18T04:34:15+00:00
[]
[ "en" ]
TAGS #task_categories-multiple-choice #task_categories-question-answering #task_categories-visual-question-answering #language_creators-expert-generated #language_creators-found #size_categories-1K<n<10K #language-English #license-cc-by-sa-4.0 #region-us
# Special Note We have open-sourced a preliminary version of our dataset. However, please note that the experimental version of the dataset, which includes additional images, is currently undergoing review by our school's ethics committee. We will update the repository with the latest version of the dataset as soon as possible. Thank you for your understanding. # Dataset Card for Dataset Name vrpbench is a benchmark dataset designed for visual referring prompting. The dataset includes original images and their variants annotated with specific referring prompts. The original images are sourced from (1). Mathvista (2). We manually craft some examples. The variants are manually labeled and recorded by the creators. Each image is accompanied by a question that has been created and verified by humans. ## Dataset Details ### Dataset Description - Curated by: Zonkey LEE - Funded by [optional]: HKUST CSE - Shared by [optional]: SKYWF - Language(s) (NLP): EN - License: cc-by-4.0 ## License The new contributions to our dataset are distributed under the CC BY-SA 4.0 license, including - The creation of our dataset; - The filtering and cleaning of source datasets; - The standard formalization of instances for evaluation purposes; - The annotations of metadata. The copyright of the images and the questions belongs to the original authors, The copyright of newly introduced images, and all the questions belong to Zonkey LEE. Alongside this license, the following conditions apply: - Purpose: The dataset was primarily designed for use as a test set. - Commercial Use: The dataset can be used commercially as a test set, but using it as a training set is prohibited. By accessing or using this dataset, you acknowledge and agree to abide by these terms in conjunction with the CC BY-SA 4.0 license. ## Uses ### Data Downloading All the data examples were in *test* dataset. - test: 2,145 examples for standard evaluation. Notably, the answer labels for test will NOT be publicly released. You can download this dataset by the following command (make sure that you have installed Huggingface Datasets): Here are some examples of how to access the downloaded dataset: ## Dataset Creation ### Data Source The VisualReferPrompt dataset is derived from newly collected dataset MathVista, which contains three datasets: IQTest, FunctionQA, and Paper, as well as 28 other source datasets. All these source datasets have been preprocessed and labeled for evaluation purposes. ### Personal and Sensitive Information Notably, to aviod personal information and follow the rules of current LMMs, we We do not include any portrait images. ### Automatic Evaluation To automatically evaluate a model on the dataset, please refer to our GitHub repository [here](). If you use the VisualReferPrompt dataset in your work, please kindly cite the paper using this BibTeX: Our paper will soon be published, please wait.
[ "# Special Note\n\nWe have open-sourced a preliminary version of our dataset. \nHowever, please note that the experimental version of the dataset, which includes additional images, \nis currently undergoing review by our school's ethics committee. \nWe will update the repository with the latest version of the dataset as soon as possible. Thank you for your understanding.", "# Dataset Card for Dataset Name\n\n\n\n\nvrpbench is a benchmark dataset designed for visual referring prompting. \nThe dataset includes original images and their variants annotated with specific referring prompts. \nThe original images are sourced from \n\n(1). Mathvista \n(2). We manually craft some examples.\n\nThe variants are manually labeled and recorded by the creators. \nEach image is accompanied by a question that has been created and verified by humans.", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: Zonkey LEE\n- Funded by [optional]: HKUST CSE \n- Shared by [optional]: SKYWF\n- Language(s) (NLP): EN\n- License: cc-by-4.0", "## License\n\nThe new contributions to our dataset are distributed under the CC BY-SA 4.0 license, including\n\n- The creation of our dataset;\n- The filtering and cleaning of source datasets;\n- The standard formalization of instances for evaluation purposes;\n- The annotations of metadata.\n\nThe copyright of the images and the questions belongs to the original authors, \nThe copyright of newly introduced images, and all the questions belong to Zonkey LEE.\n\nAlongside this license, the following conditions apply:\n\n- Purpose: The dataset was primarily designed for use as a test set.\n- Commercial Use: The dataset can be used commercially as a test set, but using it as a training set is prohibited. By accessing or using this dataset, you acknowledge and agree to abide by these terms in conjunction with the CC BY-SA 4.0 license.", "## Uses", "### Data Downloading\n\nAll the data examples were in *test* dataset.\n\n- test: 2,145 examples for standard evaluation. Notably, the answer labels for test will NOT be publicly released.\n\n\nYou can download this dataset by the following command (make sure that you have installed Huggingface Datasets):\n\n\n\nHere are some examples of how to access the downloaded dataset:", "## Dataset Creation", "### Data Source\n\nThe VisualReferPrompt dataset is derived from newly collected dataset MathVista, which contains three datasets: IQTest, FunctionQA, and Paper, as well as 28 other source datasets. All these source datasets have been preprocessed and labeled for evaluation purposes.", "### Personal and Sensitive Information\n\n\n\nNotably, to aviod personal information and follow the rules of current LMMs, we We do not include any portrait images.", "### Automatic Evaluation\n\n To automatically evaluate a model on the dataset, please refer to our GitHub repository [here]().\n\n\nIf you use the VisualReferPrompt dataset in your work, please kindly cite the paper using this BibTeX:\n\nOur paper will soon be published, please wait." ]
[ "TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-visual-question-answering #language_creators-expert-generated #language_creators-found #size_categories-1K<n<10K #language-English #license-cc-by-sa-4.0 #region-us \n", "# Special Note\n\nWe have open-sourced a preliminary version of our dataset. \nHowever, please note that the experimental version of the dataset, which includes additional images, \nis currently undergoing review by our school's ethics committee. \nWe will update the repository with the latest version of the dataset as soon as possible. Thank you for your understanding.", "# Dataset Card for Dataset Name\n\n\n\n\nvrpbench is a benchmark dataset designed for visual referring prompting. \nThe dataset includes original images and their variants annotated with specific referring prompts. \nThe original images are sourced from \n\n(1). Mathvista \n(2). We manually craft some examples.\n\nThe variants are manually labeled and recorded by the creators. \nEach image is accompanied by a question that has been created and verified by humans.", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: Zonkey LEE\n- Funded by [optional]: HKUST CSE \n- Shared by [optional]: SKYWF\n- Language(s) (NLP): EN\n- License: cc-by-4.0", "## License\n\nThe new contributions to our dataset are distributed under the CC BY-SA 4.0 license, including\n\n- The creation of our dataset;\n- The filtering and cleaning of source datasets;\n- The standard formalization of instances for evaluation purposes;\n- The annotations of metadata.\n\nThe copyright of the images and the questions belongs to the original authors, \nThe copyright of newly introduced images, and all the questions belong to Zonkey LEE.\n\nAlongside this license, the following conditions apply:\n\n- Purpose: The dataset was primarily designed for use as a test set.\n- Commercial Use: The dataset can be used commercially as a test set, but using it as a training set is prohibited. By accessing or using this dataset, you acknowledge and agree to abide by these terms in conjunction with the CC BY-SA 4.0 license.", "## Uses", "### Data Downloading\n\nAll the data examples were in *test* dataset.\n\n- test: 2,145 examples for standard evaluation. Notably, the answer labels for test will NOT be publicly released.\n\n\nYou can download this dataset by the following command (make sure that you have installed Huggingface Datasets):\n\n\n\nHere are some examples of how to access the downloaded dataset:", "## Dataset Creation", "### Data Source\n\nThe VisualReferPrompt dataset is derived from newly collected dataset MathVista, which contains three datasets: IQTest, FunctionQA, and Paper, as well as 28 other source datasets. All these source datasets have been preprocessed and labeled for evaluation purposes.", "### Personal and Sensitive Information\n\n\n\nNotably, to aviod personal information and follow the rules of current LMMs, we We do not include any portrait images.", "### Automatic Evaluation\n\n To automatically evaluate a model on the dataset, please refer to our GitHub repository [here]().\n\n\nIf you use the VisualReferPrompt dataset in your work, please kindly cite the paper using this BibTeX:\n\nOur paper will soon be published, please wait." ]
[ 91, 78, 102, 4, 58, 194, 3, 85, 5, 74, 36, 71 ]
[ "passage: TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-visual-question-answering #language_creators-expert-generated #language_creators-found #size_categories-1K<n<10K #language-English #license-cc-by-sa-4.0 #region-us \n# Special Note\n\nWe have open-sourced a preliminary version of our dataset. \nHowever, please note that the experimental version of the dataset, which includes additional images, \nis currently undergoing review by our school's ethics committee. \nWe will update the repository with the latest version of the dataset as soon as possible. Thank you for your understanding.# Dataset Card for Dataset Name\n\n\n\n\nvrpbench is a benchmark dataset designed for visual referring prompting. \nThe dataset includes original images and their variants annotated with specific referring prompts. \nThe original images are sourced from \n\n(1). Mathvista \n(2). We manually craft some examples.\n\nThe variants are manually labeled and recorded by the creators. \nEach image is accompanied by a question that has been created and verified by humans.## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: Zonkey LEE\n- Funded by [optional]: HKUST CSE \n- Shared by [optional]: SKYWF\n- Language(s) (NLP): EN\n- License: cc-by-4.0" ]
bb97555057f723256e7e916f18ab143bedfae45c
# Dataset Card for "legal-articles-filtered" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
aghilrs/legal-articles-filtered
[ "region:us" ]
2023-10-29T06:50:02+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 21112222.54595364, "num_examples": 13981}], "download_size": 8825148, "dataset_size": 21112222.54595364}}
2023-10-29T06:50:03+00:00
[]
[]
TAGS #region-us
# Dataset Card for "legal-articles-filtered" More Information needed
[ "# Dataset Card for \"legal-articles-filtered\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"legal-articles-filtered\"\n\nMore Information needed" ]
[ 6, 17 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"legal-articles-filtered\"\n\nMore Information needed" ]
32debbd8f82b39612c569b6308092b1b45e7d8c1
# Dataset Card for "Ultrachat-Multiple-Conversations-Alpaca-Tinyllama-Tokenized" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
health360/Ultrachat-Multiple-Conversations-Alpaca-Tinyllama-Tokenized
[ "region:us" ]
2023-10-29T06:53:41+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 24337034495, "num_examples": 1468352}], "download_size": 8063172866, "dataset_size": 24337034495}}
2023-10-29T08:06:07+00:00
[]
[]
TAGS #region-us
# Dataset Card for "Ultrachat-Multiple-Conversations-Alpaca-Tinyllama-Tokenized" More Information needed
[ "# Dataset Card for \"Ultrachat-Multiple-Conversations-Alpaca-Tinyllama-Tokenized\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"Ultrachat-Multiple-Conversations-Alpaca-Tinyllama-Tokenized\"\n\nMore Information needed" ]
[ 6, 34 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"Ultrachat-Multiple-Conversations-Alpaca-Tinyllama-Tokenized\"\n\nMore Information needed" ]
5e4d165745b702a20c377e715daef4275572ac37
# Dataset Card for "fleurs-hi-en-ST" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) This is a dataset for speech to text translation of hindi to english. dataset used to build this was fleurs & flores
yashtiwari/fleurs-hi-en-ST
[ "region:us" ]
2023-10-29T06:54:39+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "hindi", "dtype": "string"}, {"name": "english", "dtype": "string"}, {"name": "audio", "struct": [{"name": "array", "sequence": "float64"}, {"name": "path", "dtype": "string"}, {"name": "sampling_rate", "dtype": "int64"}]}], "splits": [{"name": "train", "num_bytes": 1286250983, "num_examples": 876}], "download_size": 824653765, "dataset_size": 1286250983}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-29T07:06:51+00:00
[]
[]
TAGS #region-us
# Dataset Card for "fleurs-hi-en-ST" More Information needed This is a dataset for speech to text translation of hindi to english. dataset used to build this was fleurs & flores
[ "# Dataset Card for \"fleurs-hi-en-ST\"\n\nMore Information needed\n\nThis is a dataset for speech to text translation of hindi to english. dataset used to build this was fleurs & flores" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"fleurs-hi-en-ST\"\n\nMore Information needed\n\nThis is a dataset for speech to text translation of hindi to english. dataset used to build this was fleurs & flores" ]
[ 6, 44 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"fleurs-hi-en-ST\"\n\nMore Information needed\n\nThis is a dataset for speech to text translation of hindi to english. dataset used to build this was fleurs & flores" ]
91144f7345da697fbbb67d84339a73490e065c95
# Dataset Card for "movie_posters-100k-torchvision" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
skvarre/movie_posters-100k-torchvision
[ "region:us" ]
2023-10-29T07:04:02+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "sequence": {"sequence": {"sequence": "float32"}}}, {"name": "title", "dtype": "string"}, {"name": "genres", "list": [{"name": "id", "dtype": "int64"}, {"name": "name", "dtype": "string"}]}, {"name": "overview", "dtype": "string"}, {"name": "popularity", "dtype": "float64"}, {"name": "release_date", "dtype": "string"}, {"name": "budget", "dtype": "int64"}, {"name": "revenue", "dtype": "int64"}, {"name": "tagline", "dtype": "string"}, {"name": "original_language", "dtype": "string"}, {"name": "runtime", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 28368086498, "num_examples": 95300}], "download_size": 26503296080, "dataset_size": 28368086498}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-29T07:27:08+00:00
[]
[]
TAGS #region-us
# Dataset Card for "movie_posters-100k-torchvision" More Information needed
[ "# Dataset Card for \"movie_posters-100k-torchvision\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"movie_posters-100k-torchvision\"\n\nMore Information needed" ]
[ 6, 20 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"movie_posters-100k-torchvision\"\n\nMore Information needed" ]
f80533b44f89592158bfe3380f484b532c0b6883
# Dataset Card for "race_random_prompts" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Falah/race_random_prompts
[ "region:us" ]
2023-10-29T07:04:23+00:00
{"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 121894, "num_examples": 1000}], "download_size": 17634, "dataset_size": 121894}}
2023-10-29T07:04:25+00:00
[]
[]
TAGS #region-us
# Dataset Card for "race_random_prompts" More Information needed
[ "# Dataset Card for \"race_random_prompts\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"race_random_prompts\"\n\nMore Information needed" ]
[ 6, 18 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"race_random_prompts\"\n\nMore Information needed" ]
a3273d8dceaf5256502bd22eb0cddbd92f266d94
# Dataset Card for Dataset Name <!-- Provide a quick summary of the dataset. --> This dataset is for generating Python docs from methods. It is formatted from semeru/code-code-galeras-code-completion-from-docstring-3k-deduped ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** semeru/code-code-galeras-code-completion-from-docstring-3k-deduped - **Language(s) (NLP):** Python - **License:** MIT ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** semeru/code-code-galeras-code-completion-from-docstring-3k-deduped ## Uses <!-- Address questions around how the dataset is intended to be used. --> (use however you like) ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> Code - Method Doc - Documentation for the method Lang - Language Prompt - Prompt field for training
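A loading sketch might look like the following; note that the column names (`Code`, `Doc`, `Lang`, `Prompt`) and the single `train` split are assumptions based on the structure description above, not confirmed against the hosted files:

```python
from datasets import load_dataset

# Column names and split are assumptions from the card's structure description.
ds = load_dataset("ASHu2/docs-python-v1", split="train")

sample = ds[0]
print(sample["Code"])    # the Python method
print(sample["Doc"])     # its documentation
print(sample["Prompt"])  # prompt field used for training
```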
ASHu2/docs-python-v1
[ "task_categories:feature-extraction", "task_categories:text2text-generation", "size_categories:1K<n<10K", "language:en", "license:mit", "code", "region:us" ]
2023-10-29T07:21:36+00:00
{"language": ["en"], "license": "mit", "size_categories": ["1K<n<10K"], "task_categories": ["feature-extraction", "text2text-generation"], "pretty_name": "python-method-doc-generation", "tags": ["code"]}
2023-10-29T07:33:21+00:00
[]
[ "en" ]
TAGS #task_categories-feature-extraction #task_categories-text2text-generation #size_categories-1K<n<10K #language-English #license-mit #code #region-us
# Dataset Card for Dataset Name This dataset card aims to be a base template for creating python docs from methods. This is formatted from semeru/code-code-galeras-code-completion-from-docstring-3k-deduped ### Dataset Description - Curated by: semeru/code-code-galeras-code-completion-from-docstring-3k-deduped - Language(s) (NLP): Python - License: ### Dataset Sources [optional] - Repository: semeru/code-code-galeras-code-completion-from-docstring-3k-deduped ## Uses (use however you like) ## Dataset Structure Code - Method Doc - Documentation for the method Lang - Language Prompt - Prompt field for training
[ "# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for creating python docs from methods. This is formatted from semeru/code-code-galeras-code-completion-from-docstring-3k-deduped", "### Dataset Description\n\n\n\n\n\n- Curated by: semeru/code-code-galeras-code-completion-from-docstring-3k-deduped\n- Language(s) (NLP): Python\n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: semeru/code-code-galeras-code-completion-from-docstring-3k-deduped", "## Uses\n\n\n(use however you like)", "## Dataset Structure\n\n\n\nCode - Method\nDoc - Documentation for the method\nLang - Language\nPrompt - Prompt field for training" ]
[ "TAGS\n#task_categories-feature-extraction #task_categories-text2text-generation #size_categories-1K<n<10K #language-English #license-mit #code #region-us \n", "# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for creating python docs from methods. This is formatted from semeru/code-code-galeras-code-completion-from-docstring-3k-deduped", "### Dataset Description\n\n\n\n\n\n- Curated by: semeru/code-code-galeras-code-completion-from-docstring-3k-deduped\n- Language(s) (NLP): Python\n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: semeru/code-code-galeras-code-completion-from-docstring-3k-deduped", "## Uses\n\n\n(use however you like)", "## Dataset Structure\n\n\n\nCode - Method\nDoc - Documentation for the method\nLang - Language\nPrompt - Prompt field for training" ]
[ 54, 58, 49, 41, 9, 29 ]
[ "passage: TAGS\n#task_categories-feature-extraction #task_categories-text2text-generation #size_categories-1K<n<10K #language-English #license-mit #code #region-us \n# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for creating python docs from methods. This is formatted from semeru/code-code-galeras-code-completion-from-docstring-3k-deduped### Dataset Description\n\n\n\n\n\n- Curated by: semeru/code-code-galeras-code-completion-from-docstring-3k-deduped\n- Language(s) (NLP): Python\n- License:### Dataset Sources [optional]\n\n\n\n- Repository: semeru/code-code-galeras-code-completion-from-docstring-3k-deduped## Uses\n\n\n(use however you like)## Dataset Structure\n\n\n\nCode - Method\nDoc - Documentation for the method\nLang - Language\nPrompt - Prompt field for training" ]
7b92e4c62f88f71fb0c8906488ebf8dd6a5a47c6
# Dataset Card for "amazon-product-data-filter" ## Dataset Description - **Homepage:** [τenai.io - AI Consulting](https://www.tenai.io/) - **Point of Contact:** [Iftach Arbel](mailto:[email protected]) ### Dataset Summary The Amazon Product Dataset contains product listing data from the Amazon US website. It can be used for various NLP and classification tasks, such as text generation, product type classification, attribute extraction, image recognition and more. ### Languages The text in the dataset is in English. ## Dataset Structure ### Data Instances Each data point provides product information, such as ASIN (Amazon Standard Identification Number), title, feature-bullets, and more. ### Data Fields - `asin`: Amazon Standard Identification Number. - `category`: The product category. This field represents the search-string used to obtain the listing, it is not the product category as appears on Amazon.com. - `img_url`: Main image URL from the product page. - `title`: Product title, as appears on the product page. - `feature-bullets`: Product feature-bullets list, as they appear on the product page. - `tech_data`: Product technical data (material, style, etc.), as they appear on the product page. Structured as a list of tuples, where the first element is a feature (e.g. material) and the second element is a value (e.g. plastic). - `labels`: A processed instance of `feature-bullets` field. The original feature-bullets were aligned to form a standard structure with a capitalized prefix, remove emojis, etc. Finally, the list items were concatenated to a single string with a `\n` seperator. - `tech_process`: A processed instance of `tech_data` field. The original tech data was filtered and transformed from a `(key, value)` structure to a natural language text. ### Data Splits The dataset was randomly split into train (70%), validation (20%), test (10%). Since the main usage is text-generation, the train split is to be used for fine-tuning or as a few-shot context. The validation split can be used for tracking perplexity during fine-tuning. The test split should be used to generate text and inspect quality of results. ## Dataset Creation ### Curation Rationale This dataset was built to provide high-quality data in the e-commerce domain, and fine-tuning LLMs for specific tasks. Raw, unstractured data was collected from Amazom.com, parsed, processed, and filtered using various techniques (annotations, rule-based, models). ### Source Data #### Initial Data Collection and Normalization The data was obtained by collected raw HTML data from Amazom.com. ### Annotations The dataset does not contain any additional annotations. ### Personal and Sensitive Information There is no personal information in the dataset. ## Considerations for Using the Data ### Social Impact of Dataset To the best of our knowledge, there is no social impact for this dataset. The data is highly technical, and usage for product text-generation or classification does not pose a risk. ### Other Known Limitations The quality of product listings may vary, and may not be accurate. ## Additional Information ### Dataset Curators The dataset was collected and curated by [Iftach Arbel](mailto:[email protected]). ### Licensing Information The dataset is available under the [Creative Commons NonCommercial (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode). 
### Citation Information ``` @misc{amazon_product_filter, author = {Iftach Arbel}, title = {Amazon Product Dataset Filtered}, year = {2023}, publisher = {Huggingface}, journal = {Huggingface dataset}, howpublished = {\url{https://huggingface.co/datasets/iarbel/amazon-product-data-filter}}, } ```
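A minimal loading sketch using the standard `datasets` API (split names and columns follow the dataset configuration in this record):

```python
from datasets import load_dataset

ds = load_dataset("iarbel/amazon-product-data-filter")
print({split: ds[split].num_rows for split in ds})  # train / validation / test

item = ds["train"][0]
print(item["title"])
print(item["labels"])        # normalized feature bullets joined with "\n"
print(item["tech_process"])  # tech data rewritten as natural-language text
```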
iarbel/amazon-product-data-filter
[ "task_categories:text-generation", "size_categories:1K<n<10K", "language:en", "license:cc-by-nc-4.0", "region:us" ]
2023-10-29T07:30:06+00:00
{"language": ["en"], "license": "cc-by-nc-4.0", "size_categories": ["1K<n<10K"], "task_categories": ["text-generation"], "dataset_info": {"features": [{"name": "asin", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "img_url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "feature-bullets", "sequence": "string"}, {"name": "tech_data", "sequence": {"sequence": "string"}}, {"name": "labels", "dtype": "string"}, {"name": "tech_process", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2686223, "num_examples": 716}, {"name": "validation", "num_bytes": 763820, "num_examples": 204}, {"name": "test", "num_bytes": 390684, "num_examples": 103}], "download_size": 2162385, "dataset_size": 3840727}}
2023-11-12T16:59:36+00:00
[]
[ "en" ]
TAGS #task_categories-text-generation #size_categories-1K<n<10K #language-English #license-cc-by-nc-4.0 #region-us
# Dataset Card for "amazon-product-data-filter" ## Dataset Description - Homepage: τenai.io - AI Consulting - Point of Contact: Iftach Arbel ### Dataset Summary The Amazon Product Dataset contains product listing data from the Amazon US website. It can be used for various NLP and classification tasks, such as text generation, product type classification, attribute extraction, image recognition and more. ### Languages The text in the dataset is in English. ## Dataset Structure ### Data Instances Each data point provides product information, such as ASIN (Amazon Standard Identification Number), title, feature-bullets, and more. ### Data Fields - 'asin': Amazon Standard Identification Number. - 'category': The product category. This field represents the search-string used to obtain the listing, it is not the product category as appears on URL. - 'img_url': Main image URL from the product page. - 'title': Product title, as appears on the product page. - 'feature-bullets': Product feature-bullets list, as they appear on the product page. - 'tech_data': Product technical data (material, style, etc.), as they appear on the product page. Structured as a list of tuples, where the first element is a feature (e.g. material) and the second element is a value (e.g. plastic). - 'labels': A processed instance of 'feature-bullets' field. The original feature-bullets were aligned to form a standard structure with a capitalized prefix, remove emojis, etc. Finally, the list items were concatenated to a single string with a '\n' seperator. - 'tech_process': A processed instance of 'tech_data' field. The original tech data was filtered and transformed from a '(key, value)' structure to a natural language text. ### Data Splits The dataset was randomly split into train (70%), validation (20%), test (10%). Since the main usage is text-generation, the train split is to be used for fine-tuning or as a few-shot context. The validation split can be used for tracking perplexity during fine-tuning. The test split should be used to generate text and inspect quality of results. ## Dataset Creation ### Curation Rationale This dataset was built to provide high-quality data in the e-commerce domain, and fine-tuning LLMs for specific tasks. Raw, unstractured data was collected from URL, parsed, processed, and filtered using various techniques (annotations, rule-based, models). ### Source Data #### Initial Data Collection and Normalization The data was obtained by collected raw HTML data from URL. ### Annotations The dataset does not contain any additional annotations. ### Personal and Sensitive Information There is no personal information in the dataset. ## Considerations for Using the Data ### Social Impact of Dataset To the best of our knowledge, there is no social impact for this dataset. The data is highly technical, and usage for product text-generation or classification does not pose a risk. ### Other Known Limitations The quality of product listings may vary, and may not be accurate. ## Additional Information ### Dataset Curators The dataset was collected and curated by Iftach Arbel. ### Licensing Information The dataset is available under the Creative Commons NonCommercial (CC BY-NC 4.0).
[ "# Dataset Card for \"amazon-product-data-filter\"", "## Dataset Description\n\n- Homepage: τenai.io - AI Consulting\n- Point of Contact: Iftach Arbel", "### Dataset Summary\n\nThe Amazon Product Dataset contains product listing data from the Amazon US website. It can be used for various NLP and classification tasks, such as text generation, product type classification, attribute extraction, image recognition and more.", "### Languages\n\nThe text in the dataset is in English.", "## Dataset Structure", "### Data Instances\n\nEach data point provides product information, such as ASIN (Amazon Standard Identification Number), title, feature-bullets, and more.", "### Data Fields\n\n- 'asin': Amazon Standard Identification Number.\n- 'category': The product category. This field represents the search-string used to obtain the listing, it is not the product category as appears on URL.\n- 'img_url': Main image URL from the product page.\n- 'title': Product title, as appears on the product page.\n- 'feature-bullets': Product feature-bullets list, as they appear on the product page.\n- 'tech_data': Product technical data (material, style, etc.), as they appear on the product page. Structured as a list of tuples, where the first element is a feature (e.g. material) and the second element is a value (e.g. plastic).\n- 'labels': A processed instance of 'feature-bullets' field. The original feature-bullets were aligned to form a standard structure with a capitalized prefix, remove emojis, etc. Finally, the list items were concatenated to a single string with a '\\n' seperator.\n- 'tech_process': A processed instance of 'tech_data' field. The original tech data was filtered and transformed from a '(key, value)' structure to a natural language text.", "### Data Splits\nThe dataset was randomly split into train (70%), validation (20%), test (10%). Since the main usage is text-generation, the train split is to be used for fine-tuning or as a few-shot context. The validation split can be used for tracking perplexity during fine-tuning. The test split should be used to generate text and inspect quality of results.", "## Dataset Creation", "### Curation Rationale\n\nThis dataset was built to provide high-quality data in the e-commerce domain, and fine-tuning LLMs for specific tasks. Raw, unstractured data was collected from URL, parsed, processed, and filtered using various techniques (annotations, rule-based, models).", "### Source Data", "#### Initial Data Collection and Normalization\n\nThe data was obtained by collected raw HTML data from URL.", "### Annotations\n\nThe dataset does not contain any additional annotations.", "### Personal and Sensitive Information\n\nThere is no personal information in the dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nTo the best of our knowledge, there is no social impact for this dataset. The data is highly technical, and usage for product text-generation or classification does not pose a risk.", "### Other Known Limitations\n\nThe quality of product listings may vary, and may not be accurate.", "## Additional Information", "### Dataset Curators\n\nThe dataset was collected and curated by Iftach Arbel.", "### Licensing Information\n\nThe dataset is available under the Creative Commons NonCommercial (CC BY-NC 4.0)." ]
[ "TAGS\n#task_categories-text-generation #size_categories-1K<n<10K #language-English #license-cc-by-nc-4.0 #region-us \n", "# Dataset Card for \"amazon-product-data-filter\"", "## Dataset Description\n\n- Homepage: τenai.io - AI Consulting\n- Point of Contact: Iftach Arbel", "### Dataset Summary\n\nThe Amazon Product Dataset contains product listing data from the Amazon US website. It can be used for various NLP and classification tasks, such as text generation, product type classification, attribute extraction, image recognition and more.", "### Languages\n\nThe text in the dataset is in English.", "## Dataset Structure", "### Data Instances\n\nEach data point provides product information, such as ASIN (Amazon Standard Identification Number), title, feature-bullets, and more.", "### Data Fields\n\n- 'asin': Amazon Standard Identification Number.\n- 'category': The product category. This field represents the search-string used to obtain the listing, it is not the product category as appears on URL.\n- 'img_url': Main image URL from the product page.\n- 'title': Product title, as appears on the product page.\n- 'feature-bullets': Product feature-bullets list, as they appear on the product page.\n- 'tech_data': Product technical data (material, style, etc.), as they appear on the product page. Structured as a list of tuples, where the first element is a feature (e.g. material) and the second element is a value (e.g. plastic).\n- 'labels': A processed instance of 'feature-bullets' field. The original feature-bullets were aligned to form a standard structure with a capitalized prefix, remove emojis, etc. Finally, the list items were concatenated to a single string with a '\\n' seperator.\n- 'tech_process': A processed instance of 'tech_data' field. The original tech data was filtered and transformed from a '(key, value)' structure to a natural language text.", "### Data Splits\nThe dataset was randomly split into train (70%), validation (20%), test (10%). Since the main usage is text-generation, the train split is to be used for fine-tuning or as a few-shot context. The validation split can be used for tracking perplexity during fine-tuning. The test split should be used to generate text and inspect quality of results.", "## Dataset Creation", "### Curation Rationale\n\nThis dataset was built to provide high-quality data in the e-commerce domain, and fine-tuning LLMs for specific tasks. Raw, unstractured data was collected from URL, parsed, processed, and filtered using various techniques (annotations, rule-based, models).", "### Source Data", "#### Initial Data Collection and Normalization\n\nThe data was obtained by collected raw HTML data from URL.", "### Annotations\n\nThe dataset does not contain any additional annotations.", "### Personal and Sensitive Information\n\nThere is no personal information in the dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nTo the best of our knowledge, there is no social impact for this dataset. The data is highly technical, and usage for product text-generation or classification does not pose a risk.", "### Other Known Limitations\n\nThe quality of product listings may vary, and may not be accurate.", "## Additional Information", "### Dataset Curators\n\nThe dataset was collected and curated by Iftach Arbel.", "### Licensing Information\n\nThe dataset is available under the Creative Commons NonCommercial (CC BY-NC 4.0)." ]
[ 44, 15, 24, 57, 14, 6, 34, 287, 90, 5, 74, 4, 24, 17, 18, 8, 47, 23, 5, 21, 26 ]
[ "passage: TAGS\n#task_categories-text-generation #size_categories-1K<n<10K #language-English #license-cc-by-nc-4.0 #region-us \n# Dataset Card for \"amazon-product-data-filter\"## Dataset Description\n\n- Homepage: τenai.io - AI Consulting\n- Point of Contact: Iftach Arbel### Dataset Summary\n\nThe Amazon Product Dataset contains product listing data from the Amazon US website. It can be used for various NLP and classification tasks, such as text generation, product type classification, attribute extraction, image recognition and more.### Languages\n\nThe text in the dataset is in English.## Dataset Structure### Data Instances\n\nEach data point provides product information, such as ASIN (Amazon Standard Identification Number), title, feature-bullets, and more.### Data Fields\n\n- 'asin': Amazon Standard Identification Number.\n- 'category': The product category. This field represents the search-string used to obtain the listing, it is not the product category as appears on URL.\n- 'img_url': Main image URL from the product page.\n- 'title': Product title, as appears on the product page.\n- 'feature-bullets': Product feature-bullets list, as they appear on the product page.\n- 'tech_data': Product technical data (material, style, etc.), as they appear on the product page. Structured as a list of tuples, where the first element is a feature (e.g. material) and the second element is a value (e.g. plastic).\n- 'labels': A processed instance of 'feature-bullets' field. The original feature-bullets were aligned to form a standard structure with a capitalized prefix, remove emojis, etc. Finally, the list items were concatenated to a single string with a '\\n' seperator.\n- 'tech_process': A processed instance of 'tech_data' field. The original tech data was filtered and transformed from a '(key, value)' structure to a natural language text." ]
c9456cef9bd695c84434ef3242787de3611d9d60
This dataset is a machine-translated version of [databricks-dolly-15k.jsonl](https://huggingface.co/datasets/databricks/databricks-dolly-15k) into Azerbaijani. The translated dataset contains 8k records.

-----
# Summary
`databricks-dolly-15k` is an open source dataset of instruction-following records generated by thousands of Databricks employees in several of the behavioral categories outlined in the [InstructGPT](https://arxiv.org/abs/2203.02155) paper, including brainstorming, classification, closed QA, generation, information extraction, open QA, and summarization.

This dataset can be used for any purpose, whether academic or commercial, under the terms of the [Creative Commons Attribution-ShareAlike 3.0 Unported License](https://creativecommons.org/licenses/by-sa/3.0/legalcode).

Supported Tasks:
- Training LLMs
- Synthetic Data Generation
- Data Augmentation

Languages: Azerbaijani (machine-translated from the English original)
Version: 1.0
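For quick experimentation, the translated records can be pulled directly with the `datasets` library. The snippet below is a minimal sketch, not an official loader: the split name `train` is an assumption, and the column layout is expected to follow the original Dolly schema but should be checked on first load.

```python
# Minimal sketch: load the Azerbaijani Dolly translation from the Hub.
# Assumptions: the default config exposes a "train" split; column names
# follow the original databricks-dolly-15k schema.
from datasets import load_dataset

ds = load_dataset("w95/databricks-dolly-15k-az", split="train")

print(ds)      # size and column names
print(ds[0])   # first translated instruction-following record
```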
w95/databricks-dolly-15k-az
[ "task_categories:question-answering", "task_categories:summarization", "size_categories:1K<n<10K", "language:az", "license:cc-by-sa-3.0", "arxiv:2203.02155", "region:us" ]
2023-10-29T07:43:06+00:00
{"language": ["az"], "license": "cc-by-sa-3.0", "size_categories": ["1K<n<10K"], "task_categories": ["question-answering", "summarization"]}
2023-10-29T07:51:38+00:00
[ "2203.02155" ]
[ "az" ]
TAGS #task_categories-question-answering #task_categories-summarization #size_categories-1K<n<10K #language-Azerbaijani #license-cc-by-sa-3.0 #arxiv-2203.02155 #region-us
This dataset is a machine-translated version of URL into Azerbaijani. Dataset size is 8k. ----- # Summary 'databricks-dolly-15k' is an open source dataset of instruction-following records generated by thousands of Databricks employees in several of the behavioral categories outlined in the InstructGPT paper, including brainstorming, classification, closed QA, generation, information extraction, open QA, and summarization. This dataset can be used for any purpose, whether academic or commercial, under the terms of the Creative Commons Attribution-ShareAlike 3.0 Unported License. Supported Tasks: - Training LLMs - Synthetic Data Generation - Data Augmentation Languages: English Version: 1.0
[ "# Summary\n'databricks-dolly-15k' is an open source dataset of instruction-following records generated by thousands of Databricks employees in several \nof the behavioral categories outlined in the InstructGPT paper, including brainstorming, classification, \nclosed QA, generation, information extraction, open QA, and summarization.\n\nThis dataset can be used for any purpose, whether academic or commercial, under the terms of the \nCreative Commons Attribution-ShareAlike 3.0 Unported License.\n\nSupported Tasks: \n- Training LLMs\n- Synthetic Data Generation\n- Data Augmentation\n \nLanguages: English\nVersion: 1.0" ]
[ "TAGS\n#task_categories-question-answering #task_categories-summarization #size_categories-1K<n<10K #language-Azerbaijani #license-cc-by-sa-3.0 #arxiv-2203.02155 #region-us \n", "# Summary\n'databricks-dolly-15k' is an open source dataset of instruction-following records generated by thousands of Databricks employees in several \nof the behavioral categories outlined in the InstructGPT paper, including brainstorming, classification, \nclosed QA, generation, information extraction, open QA, and summarization.\n\nThis dataset can be used for any purpose, whether academic or commercial, under the terms of the \nCreative Commons Attribution-ShareAlike 3.0 Unported License.\n\nSupported Tasks: \n- Training LLMs\n- Synthetic Data Generation\n- Data Augmentation\n \nLanguages: English\nVersion: 1.0" ]
[ 66, 138 ]
[ "passage: TAGS\n#task_categories-question-answering #task_categories-summarization #size_categories-1K<n<10K #language-Azerbaijani #license-cc-by-sa-3.0 #arxiv-2203.02155 #region-us \n# Summary\n'databricks-dolly-15k' is an open source dataset of instruction-following records generated by thousands of Databricks employees in several \nof the behavioral categories outlined in the InstructGPT paper, including brainstorming, classification, \nclosed QA, generation, information extraction, open QA, and summarization.\n\nThis dataset can be used for any purpose, whether academic or commercial, under the terms of the \nCreative Commons Attribution-ShareAlike 3.0 Unported License.\n\nSupported Tasks: \n- Training LLMs\n- Synthetic Data Generation\n- Data Augmentation\n \nLanguages: English\nVersion: 1.0" ]
3998498d9cc4a317f47a5165d891955a84f30398
# Turkish TinyStories Large

### License: CDLA-Sharing-1.0

This is a Turkish translation of the stories from the [roneneldan/TinyStories](https://huggingface.co/datasets/roneneldan/TinyStories) dataset.
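Because the corpus can be sizable, a streaming read is a convenient way to peek at a few stories without downloading everything. This is a hedged sketch: the `train` split name and the text column layout are assumptions, not guarantees from this card.

```python
# Minimal sketch: stream a few translated stories without a full download.
# Assumptions: a "train" split exists; inspect each example's keys to find
# the story text column.
from datasets import load_dataset
from itertools import islice

stream = load_dataset("SoAp9035/Turkish_TinyStories", split="train", streaming=True)

for example in islice(stream, 3):
    print(example)
```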
SoAp9035/Turkish_TinyStories
[ "language:tr", "license:cdla-sharing-1.0", "region:us" ]
2023-10-29T07:45:44+00:00
{"language": ["tr"], "license": "cdla-sharing-1.0"}
2023-10-29T07:46:26+00:00
[]
[ "tr" ]
TAGS #language-Turkish #license-cdla-sharing-1.0 #region-us
# Turkish TinyStories Large ### License: CDLA-Sharing-1.0 This is a translated version of the stories from roneneldan/TinyStories dataset.
[ "# Turkish TinyStories Large", "### License: CDLA-Sharing-1.0\n\nThis is a translated version of the stories from roneneldan/TinyStories dataset." ]
[ "TAGS\n#language-Turkish #license-cdla-sharing-1.0 #region-us \n", "# Turkish TinyStories Large", "### License: CDLA-Sharing-1.0\n\nThis is a translated version of the stories from roneneldan/TinyStories dataset." ]
[ 23, 9, 35 ]
[ "passage: TAGS\n#language-Turkish #license-cdla-sharing-1.0 #region-us \n# Turkish TinyStories Large### License: CDLA-Sharing-1.0\n\nThis is a translated version of the stories from roneneldan/TinyStories dataset." ]
aadf52c7897a0b99ebcc0b5887952e43c10991ab
# Dataset Card for "amazon-product-data-filter"

## Dataset Description

- **Homepage:** [τenai.io - AI Consulting](https://www.tenai.io/)
- **Point of Contact:** [Iftach Arbel](mailto:[email protected])

### Dataset Summary

The Amazon Product Dataset contains product listing data from the Amazon US website. It can be used for various NLP and classification tasks, such as text generation, product type classification, attribute extraction, image recognition and more.

**NOTICE:** This is a sample of the full [Amazon Product Dataset](https://huggingface.co/datasets/iarbel/amazon-product-data-filter), which contains 1K examples. Follow the link to gain access to the full dataset.

### Languages

The text in the dataset is in English.

## Dataset Structure

### Data Instances

Each data point provides product information, such as ASIN (Amazon Standard Identification Number), title, feature-bullets, and more.

### Data Fields

- `asin`: Amazon Standard Identification Number.
- `category`: The product category. This field represents the search-string used to obtain the listing; it is not the product category as it appears on Amazon.com.
- `img_url`: Main image URL from the product page.
- `title`: Product title, as it appears on the product page.
- `feature-bullets`: Product feature-bullets list, as they appear on the product page.
- `tech_data`: Product technical data (material, style, etc.), as they appear on the product page. Structured as a list of tuples, where the first element is a feature (e.g. material) and the second element is a value (e.g. plastic).
- `labels`: A processed instance of the `feature-bullets` field. The original feature-bullets were aligned to form a standard structure with a capitalized prefix, emojis removed, etc. Finally, the list items were concatenated into a single string with a `\n` separator.
- `tech_process`: A processed instance of the `tech_data` field. The original tech data was filtered and transformed from a `(key, value)` structure to natural language text.

### Data Splits

The sample dataset has 20 train examples. For the full dataset click [here](https://huggingface.co/datasets/iarbel/amazon-product-data-filter).

## Dataset Creation

### Curation Rationale

This dataset was built to provide high-quality data in the e-commerce domain, and to support fine-tuning LLMs for specific tasks. Raw, unstructured data was collected from Amazon.com, parsed, processed, and filtered using various techniques (annotations, rule-based, models).

### Source Data

#### Initial Data Collection and Normalization

The data was obtained by collecting raw HTML data from Amazon.com.

### Annotations

The dataset does not contain any additional annotations.

### Personal and Sensitive Information

There is no personal information in the dataset.

## Considerations for Using the Data

### Social Impact of Dataset

To the best of our knowledge, there is no social impact for this dataset. The data is highly technical, and usage for product text-generation or classification does not pose a risk.

### Other Known Limitations

The quality of product listings may vary, and may not be accurate.

## Additional Information

### Dataset Curators

The dataset was collected and curated by [Iftach Arbel](mailto:[email protected]).

### Licensing Information

The dataset is available under the [Creative Commons NonCommercial (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode).
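### Example Usage

To make the field descriptions above concrete, here is a minimal usage sketch. It assumes the 20-example `train` split described in this card and relies only on the documented fields; treat it as illustrative rather than an official API.

```python
# Minimal sketch: inspect the documented fields of the sample split.
from datasets import load_dataset

ds = load_dataset("iarbel/amazon-product-data-sample", split="train")
example = ds[0]

print(example["title"])

# `labels` is the processed feature-bullets joined with "\n"; split it back into bullets.
bullets = example["labels"].split("\n")
print(f"{len(bullets)} feature bullets")

# `tech_data` is a list of (feature, value) pairs, e.g. ("material", "plastic").
for feature, value in example["tech_data"]:
    print(f"{feature}: {value}")
```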
### Citation Information ``` @misc{amazon_product_filter, author = {Iftach Arbel}, title = {Amazon Product Dataset Sample}, year = {2023}, publisher = {Huggingface}, journal = {Huggingface dataset}, howpublished = {https://huggingface.co/datasets/iarbel/amazon-product-data-sample}, } ```
iarbel/amazon-product-data-sample
[ "task_categories:text-generation", "size_categories:n<1K", "language:en", "license:cc-by-nc-4.0", "region:us" ]
2023-10-29T07:48:21+00:00
{"language": ["en"], "license": "cc-by-nc-4.0", "size_categories": ["n<1K"], "task_categories": ["text-generation"], "dataset_info": {"features": [{"name": "asin", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "img_url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "feature-bullets", "sequence": "string"}, {"name": "tech_data", "sequence": {"sequence": "string"}}, {"name": "labels", "dtype": "string"}, {"name": "tech_process", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 75797, "num_examples": 20}], "download_size": 62474, "dataset_size": 75797}}
2023-10-29T08:03:20+00:00
[]
[ "en" ]
TAGS #task_categories-text-generation #size_categories-n<1K #language-English #license-cc-by-nc-4.0 #region-us
# Dataset Card for "amazon-product-data-filter" ## Dataset Description - Homepage: τenai.io - AI Consulting - Point of Contact: Iftach Arbel ### Dataset Summary The Amazon Product Dataset contains product listing data from the Amazon US website. It can be used for various NLP and classification tasks, such as text generation, product type classification, attribute extraction, image recognition and more. NOTICE: This is a sample of the full Amazon Product Dataset, which contains 1K examples. Follow the link to gain access to the full dataset. ### Languages The text in the dataset is in English. ## Dataset Structure ### Data Instances Each data point provides product information, such as ASIN (Amazon Standard Identification Number), title, feature-bullets, and more. ### Data Fields - 'asin': Amazon Standard Identification Number. - 'category': The product category. This field represents the search-string used to obtain the listing, it is not the product category as appears on URL. - 'img_url': Main image URL from the product page. - 'title': Product title, as appears on the product page. - 'feature-bullets': Product feature-bullets list, as they appear on the product page. - 'tech_data': Product technical data (material, style, etc.), as they appear on the product page. Structured as a list of tuples, where the first element is a feature (e.g. material) and the second element is a value (e.g. plastic). - 'labels': A processed instance of 'feature-bullets' field. The original feature-bullets were aligned to form a standard structure with a capitalized prefix, remove emojis, etc. Finally, the list items were concatenated to a single string with a '\n' seperator. - 'tech_process': A processed instance of 'tech_data' field. The original tech data was filtered and transformed from a '(key, value)' structure to a natural language text. ### Data Splits The sample dataset has 20 train examples. For the full dataset cilck here. ## Dataset Creation ### Curation Rationale This dataset was built to provide high-quality data in the e-commerce domain, and fine-tuning LLMs for specific tasks. Raw, unstractured data was collected from URL, parsed, processed, and filtered using various techniques (annotations, rule-based, models). ### Source Data #### Initial Data Collection and Normalization The data was obtained by collected raw HTML data from URL. ### Annotations The dataset does not contain any additional annotations. ### Personal and Sensitive Information There is no personal information in the dataset. ## Considerations for Using the Data ### Social Impact of Dataset To the best of our knowledge, there is no social impact for this dataset. The data is highly technical, and usage for product text-generation or classification does not pose a risk. ### Other Known Limitations The quality of product listings may vary, and may not be accurate. ## Additional Information ### Dataset Curators The dataset was collected and curated by Iftach Arbel. ### Licensing Information The dataset is available under the Creative Commons NonCommercial (CC BY-NC 4.0).
[ "# Dataset Card for \"amazon-product-data-filter\"", "## Dataset Description\n\n- Homepage: τenai.io - AI Consulting\n- Point of Contact: Iftach Arbel", "### Dataset Summary\n\nThe Amazon Product Dataset contains product listing data from the Amazon US website. It can be used for various NLP and classification tasks, such as text generation, product type classification, attribute extraction, image recognition and more. \n\nNOTICE: This is a sample of the full Amazon Product Dataset, which contains 1K examples. Follow the link to gain access to the full dataset.", "### Languages\n\nThe text in the dataset is in English.", "## Dataset Structure", "### Data Instances\n\nEach data point provides product information, such as ASIN (Amazon Standard Identification Number), title, feature-bullets, and more.", "### Data Fields\n\n- 'asin': Amazon Standard Identification Number.\n- 'category': The product category. This field represents the search-string used to obtain the listing, it is not the product category as appears on URL.\n- 'img_url': Main image URL from the product page.\n- 'title': Product title, as appears on the product page.\n- 'feature-bullets': Product feature-bullets list, as they appear on the product page.\n- 'tech_data': Product technical data (material, style, etc.), as they appear on the product page. Structured as a list of tuples, where the first element is a feature (e.g. material) and the second element is a value (e.g. plastic).\n- 'labels': A processed instance of 'feature-bullets' field. The original feature-bullets were aligned to form a standard structure with a capitalized prefix, remove emojis, etc. Finally, the list items were concatenated to a single string with a '\\n' seperator.\n- 'tech_process': A processed instance of 'tech_data' field. The original tech data was filtered and transformed from a '(key, value)' structure to a natural language text.", "### Data Splits\n\nThe sample dataset has 20 train examples. For the full dataset cilck here.", "## Dataset Creation", "### Curation Rationale\n\nThis dataset was built to provide high-quality data in the e-commerce domain, and fine-tuning LLMs for specific tasks. Raw, unstractured data was collected from URL, parsed, processed, and filtered using various techniques (annotations, rule-based, models).", "### Source Data", "#### Initial Data Collection and Normalization\n\nThe data was obtained by collected raw HTML data from URL.", "### Annotations\n\nThe dataset does not contain any additional annotations.", "### Personal and Sensitive Information\n\nThere is no personal information in the dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nTo the best of our knowledge, there is no social impact for this dataset. The data is highly technical, and usage for product text-generation or classification does not pose a risk.", "### Other Known Limitations\n\nThe quality of product listings may vary, and may not be accurate.", "## Additional Information", "### Dataset Curators\n\nThe dataset was collected and curated by Iftach Arbel.", "### Licensing Information\n\nThe dataset is available under the Creative Commons NonCommercial (CC BY-NC 4.0)." ]
[ "TAGS\n#task_categories-text-generation #size_categories-n<1K #language-English #license-cc-by-nc-4.0 #region-us \n", "# Dataset Card for \"amazon-product-data-filter\"", "## Dataset Description\n\n- Homepage: τenai.io - AI Consulting\n- Point of Contact: Iftach Arbel", "### Dataset Summary\n\nThe Amazon Product Dataset contains product listing data from the Amazon US website. It can be used for various NLP and classification tasks, such as text generation, product type classification, attribute extraction, image recognition and more. \n\nNOTICE: This is a sample of the full Amazon Product Dataset, which contains 1K examples. Follow the link to gain access to the full dataset.", "### Languages\n\nThe text in the dataset is in English.", "## Dataset Structure", "### Data Instances\n\nEach data point provides product information, such as ASIN (Amazon Standard Identification Number), title, feature-bullets, and more.", "### Data Fields\n\n- 'asin': Amazon Standard Identification Number.\n- 'category': The product category. This field represents the search-string used to obtain the listing, it is not the product category as appears on URL.\n- 'img_url': Main image URL from the product page.\n- 'title': Product title, as appears on the product page.\n- 'feature-bullets': Product feature-bullets list, as they appear on the product page.\n- 'tech_data': Product technical data (material, style, etc.), as they appear on the product page. Structured as a list of tuples, where the first element is a feature (e.g. material) and the second element is a value (e.g. plastic).\n- 'labels': A processed instance of 'feature-bullets' field. The original feature-bullets were aligned to form a standard structure with a capitalized prefix, remove emojis, etc. Finally, the list items were concatenated to a single string with a '\\n' seperator.\n- 'tech_process': A processed instance of 'tech_data' field. The original tech data was filtered and transformed from a '(key, value)' structure to a natural language text.", "### Data Splits\n\nThe sample dataset has 20 train examples. For the full dataset cilck here.", "## Dataset Creation", "### Curation Rationale\n\nThis dataset was built to provide high-quality data in the e-commerce domain, and fine-tuning LLMs for specific tasks. Raw, unstractured data was collected from URL, parsed, processed, and filtered using various techniques (annotations, rule-based, models).", "### Source Data", "#### Initial Data Collection and Normalization\n\nThe data was obtained by collected raw HTML data from URL.", "### Annotations\n\nThe dataset does not contain any additional annotations.", "### Personal and Sensitive Information\n\nThere is no personal information in the dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nTo the best of our knowledge, there is no social impact for this dataset. The data is highly technical, and usage for product text-generation or classification does not pose a risk.", "### Other Known Limitations\n\nThe quality of product listings may vary, and may not be accurate.", "## Additional Information", "### Dataset Curators\n\nThe dataset was collected and curated by Iftach Arbel.", "### Licensing Information\n\nThe dataset is available under the Creative Commons NonCommercial (CC BY-NC 4.0)." ]
[ 42, 15, 24, 92, 14, 6, 34, 287, 25, 5, 74, 4, 24, 17, 18, 8, 47, 23, 5, 21, 26 ]
[ "passage: TAGS\n#task_categories-text-generation #size_categories-n<1K #language-English #license-cc-by-nc-4.0 #region-us \n# Dataset Card for \"amazon-product-data-filter\"## Dataset Description\n\n- Homepage: τenai.io - AI Consulting\n- Point of Contact: Iftach Arbel### Dataset Summary\n\nThe Amazon Product Dataset contains product listing data from the Amazon US website. It can be used for various NLP and classification tasks, such as text generation, product type classification, attribute extraction, image recognition and more. \n\nNOTICE: This is a sample of the full Amazon Product Dataset, which contains 1K examples. Follow the link to gain access to the full dataset.### Languages\n\nThe text in the dataset is in English.## Dataset Structure### Data Instances\n\nEach data point provides product information, such as ASIN (Amazon Standard Identification Number), title, feature-bullets, and more." ]
b8226144445350883327df41957943f49402505c
# Pushing the Limits of Pre-training for Time Series Forecasting in the CloudOps Domain [Paper](https://arxiv.org/abs/2310.05063) | [Code](https://github.com/SalesforceAIResearch/pretrain-time-series-cloudops) Datasets accompanying the paper "Pushing the Limits of Pre-training for Time Series Forecasting in the CloudOps Domain". ## Quick Start ### azure_vm_traces_2017 ```python from datasets import load_dataset dataset = load_dataset('Salesforce/cloudops_tsf', 'azure_vm_traces_2017') print(dataset) DatasetDict({ train_test: Dataset({ features: ['start', 'target', 'item_id', 'feat_static_cat', 'feat_static_real', 'past_feat_dynamic_real'], num_rows: 17568 }) pretrain: Dataset({ features: ['start', 'target', 'item_id', 'feat_static_cat', 'feat_static_real', 'past_feat_dynamic_real'], num_rows: 159472 }) }) ``` ### borg_cluster_data_2011 ```python dataset = load_dataset('Salesforce/cloudops_tsf', 'borg_cluster_data_2011') print(dataset) DatasetDict({ train_test: Dataset({ features: ['start', 'target', 'item_id', 'feat_static_cat', 'past_feat_dynamic_real'], num_rows: 11117 }) pretrain: Dataset({ features: ['start', 'target', 'item_id', 'feat_static_cat', 'past_feat_dynamic_real'], num_rows: 143386 }) }) ``` ### alibaba_cluster_trace_2018 ```python dataset = load_dataset('Salesforce/cloudops_tsf', 'alibaba_cluster_trace_2018') print(dataset) DatasetDict({ train_test: Dataset({ features: ['start', 'target', 'item_id', 'feat_static_cat', 'past_feat_dynamic_real'], num_rows: 6048 }) pretrain: Dataset({ features: ['start', 'target', 'item_id', 'feat_static_cat', 'past_feat_dynamic_real'], num_rows: 58409 }) }) ``` ## Dataset Config ```python from datasets import load_dataset_builder config = load_dataset_builder('Salesforce/cloudops_tsf', 'azure_vm_traces_2017').config print(config) CloudOpsTSFConfig( name='azure_vm_traces_2017', version=1.0.0, data_dir=None, data_files=None, description='', prediction_length=48, freq='5T', stride=48, univariate=True, multivariate=False, optional_fields=( 'feat_static_cat', 'feat_static_real', 'past_feat_dynamic_real' ), rolling_evaluations=12, test_split_date=Period('2016-12-13 15:55', '5T'), _feat_static_cat_cardinalities={ 'pretrain': ( ('vm_id', 177040), ('subscription_id', 5514), ('deployment_id', 15208), ('vm_category', 3) ), 'train_test': ( ('vm_id', 17568), ('subscription_id', 2713), ('deployment_id', 3255), ('vm_category', 3) ) }, target_dim=1, feat_static_real_dim=3, past_feat_dynamic_real_dim=2 ) ``` ```test_split_date``` is provided to achieve the same train-test split as given in the paper. This is essentially the date/time of ```rolling_evaluations * prediction_length``` time steps before the last time step in the dataset. Note that the pre-training dataset includes the test region, and thus should also be filtered before usage. ## Acknowledgements The datasets were processed from the following original sources. Please cite the original sources if you use the datasets. * Azure VM Traces 2017 * Bianchini. Resource central: Understanding and predicting workloads for improved resource management in large cloud platforms. In Proceedings of the 26th Symposium on Operating Systems Principles, pp. 153–167, 2017. * https://github.com/Azure/AzurePublicDataset * Borg Cluster Data 2011 * John Wilkes. More Google cluster data. Google research blog, November 2011. Posted at http://googleresearch.blogspot.com/2011/11/more-google-cluster-data.html. 
* https://github.com/google/cluster-data * Alibaba Cluster Trace 2018 * Jing Guo, Zihao Chang, Sa Wang, Haiyang Ding, Yihui Feng, Liang Mao, and Yungang Bao. Who limits the resource efficiency of my datacenter: An analysis of alibaba datacenter traces. In Proceedings of the International Symposium on Quality of Service, pp. 1–10, 2019. * https://github.com/alibaba/clusterdata ## Citation <pre> @article{woo2023pushing, title={Pushing the Limits of Pre-training for Time Series Forecasting in the CloudOps Domain}, author={Woo, Gerald and Liu, Chenghao and Kumar, Akshat and Sahoo, Doyen}, journal={arXiv preprint arXiv:2310.05063}, year={2023} } </pre>
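The Dataset Config section above notes that `test_split_date` marks the start of the rolling evaluation region and that the `pretrain` split overlaps it. The sketch below shows one way to truncate pre-training series at that boundary; it is not taken from the paper's released code, and it assumes the `start` field parses as a timestamp while `freq` and `test_split_date` come from the builder config exactly as shown in this card.

```python
# Minimal sketch (not the authors' code): truncate each pretrain series at
# `test_split_date` so pre-training never sees the test region.
import pandas as pd
from datasets import load_dataset, load_dataset_builder

config = load_dataset_builder("Salesforce/cloudops_tsf", "azure_vm_traces_2017").config
split_date = pd.Period(str(config.test_split_date), freq=config.freq)

pretrain = load_dataset("Salesforce/cloudops_tsf", "azure_vm_traces_2017", split="pretrain")

def truncate(entry):
    # Number of steps from `start` up to and including the split date.
    start = pd.Period(entry["start"], freq=config.freq)
    keep = split_date.ordinal - start.ordinal + 1
    entry["target"] = entry["target"][:keep] if keep > 0 else []
    return entry

pretrain_filtered = pretrain.map(truncate)
```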
Salesforce/cloudops_tsf
[ "task_categories:time-series-forecasting", "size_categories:100M<n<1B", "license:cc-by-4.0", "arxiv:2310.05063", "region:us" ]
2023-10-29T07:51:30+00:00
{"license": "cc-by-4.0", "size_categories": ["100M<n<1B"], "task_categories": ["time-series-forecasting"], "pretty_name": "cloud"}
2023-12-04T14:18:37+00:00
[ "2310.05063" ]
[]
TAGS #task_categories-time-series-forecasting #size_categories-100M<n<1B #license-cc-by-4.0 #arxiv-2310.05063 #region-us
# Pushing the Limits of Pre-training for Time Series Forecasting in the CloudOps Domain Paper | Code Datasets accompanying the paper "Pushing the Limits of Pre-training for Time Series Forecasting in the CloudOps Domain". ## Quick Start ### azure_vm_traces_2017 ### borg_cluster_data_2011 ### alibaba_cluster_trace_2018 ## Dataset Config is provided to achieve the same train-test split as given in the paper. This is essentially the date/time of time steps before the last time step in the dataset. Note that the pre-training dataset includes the test region, and thus should also be filtered before usage. ## Acknowledgements The datasets were processed from the following original sources. Please cite the original sources if you use the datasets. * Azure VM Traces 2017 * Bianchini. Resource central: Understanding and predicting workloads for improved resource management in large cloud platforms. In Proceedings of the 26th Symposium on Operating Systems Principles, pp. 153–167, 2017. * URL * Borg Cluster Data 2011 * John Wilkes. More Google cluster data. Google research blog, November 2011. Posted at URL * URL * Alibaba Cluster Trace 2018 * Jing Guo, Zihao Chang, Sa Wang, Haiyang Ding, Yihui Feng, Liang Mao, and Yungang Bao. Who limits the resource efficiency of my datacenter: An analysis of alibaba datacenter traces. In Proceedings of the International Symposium on Quality of Service, pp. 1–10, 2019. * URL <pre> @article{woo2023pushing, title={Pushing the Limits of Pre-training for Time Series Forecasting in the CloudOps Domain}, author={Woo, Gerald and Liu, Chenghao and Kumar, Akshat and Sahoo, Doyen}, journal={arXiv preprint arXiv:2310.05063}, year={2023} } </pre>
[ "# Pushing the Limits of Pre-training for Time Series Forecasting in the CloudOps Domain\n\nPaper | Code\n\nDatasets accompanying the paper \"Pushing the Limits of Pre-training for Time Series Forecasting in the CloudOps Domain\".", "## Quick Start", "### azure_vm_traces_2017", "### borg_cluster_data_2011", "### alibaba_cluster_trace_2018", "## Dataset Config\n\n is provided to achieve the same train-test split as given in the paper.\nThis is essentially the date/time of time steps before the last time step in the dataset.\nNote that the pre-training dataset includes the test region, and thus should also be filtered before usage.", "## Acknowledgements\nThe datasets were processed from the following original sources. Please cite the original sources if you use the datasets.\n* Azure VM Traces 2017\n * Bianchini. Resource central: Understanding and predicting workloads for improved resource management in large cloud platforms. In Proceedings of the 26th Symposium on Operating Systems Principles, pp. 153–167, 2017.\n * URL\n\n* Borg Cluster Data 2011\n * John Wilkes. More Google cluster data. Google research blog, November 2011. Posted at URL\n * URL\n\n* Alibaba Cluster Trace 2018\n * Jing Guo, Zihao Chang, Sa Wang, Haiyang Ding, Yihui Feng, Liang Mao, and Yungang Bao. Who limits the resource efficiency of my datacenter: An analysis of alibaba datacenter traces. In Proceedings of the International Symposium on Quality of Service, pp. 1–10, 2019.\n * URL\n\n<pre>\n@article{woo2023pushing,\n title={Pushing the Limits of Pre-training for Time Series Forecasting in the CloudOps Domain},\n author={Woo, Gerald and Liu, Chenghao and Kumar, Akshat and Sahoo, Doyen},\n journal={arXiv preprint arXiv:2310.05063},\n year={2023}\n}\n</pre>" ]
[ "TAGS\n#task_categories-time-series-forecasting #size_categories-100M<n<1B #license-cc-by-4.0 #arxiv-2310.05063 #region-us \n", "# Pushing the Limits of Pre-training for Time Series Forecasting in the CloudOps Domain\n\nPaper | Code\n\nDatasets accompanying the paper \"Pushing the Limits of Pre-training for Time Series Forecasting in the CloudOps Domain\".", "## Quick Start", "### azure_vm_traces_2017", "### borg_cluster_data_2011", "### alibaba_cluster_trace_2018", "## Dataset Config\n\n is provided to achieve the same train-test split as given in the paper.\nThis is essentially the date/time of time steps before the last time step in the dataset.\nNote that the pre-training dataset includes the test region, and thus should also be filtered before usage.", "## Acknowledgements\nThe datasets were processed from the following original sources. Please cite the original sources if you use the datasets.\n* Azure VM Traces 2017\n * Bianchini. Resource central: Understanding and predicting workloads for improved resource management in large cloud platforms. In Proceedings of the 26th Symposium on Operating Systems Principles, pp. 153–167, 2017.\n * URL\n\n* Borg Cluster Data 2011\n * John Wilkes. More Google cluster data. Google research blog, November 2011. Posted at URL\n * URL\n\n* Alibaba Cluster Trace 2018\n * Jing Guo, Zihao Chang, Sa Wang, Haiyang Ding, Yihui Feng, Liang Mao, and Yungang Bao. Who limits the resource efficiency of my datacenter: An analysis of alibaba datacenter traces. In Proceedings of the International Symposium on Quality of Service, pp. 1–10, 2019.\n * URL\n\n<pre>\n@article{woo2023pushing,\n title={Pushing the Limits of Pre-training for Time Series Forecasting in the CloudOps Domain},\n author={Woo, Gerald and Liu, Chenghao and Kumar, Akshat and Sahoo, Doyen},\n journal={arXiv preprint arXiv:2310.05063},\n year={2023}\n}\n</pre>" ]
[ 51, 57, 3, 12, 10, 12, 66, 302 ]
[ "passage: TAGS\n#task_categories-time-series-forecasting #size_categories-100M<n<1B #license-cc-by-4.0 #arxiv-2310.05063 #region-us \n# Pushing the Limits of Pre-training for Time Series Forecasting in the CloudOps Domain\n\nPaper | Code\n\nDatasets accompanying the paper \"Pushing the Limits of Pre-training for Time Series Forecasting in the CloudOps Domain\".## Quick Start### azure_vm_traces_2017### borg_cluster_data_2011### alibaba_cluster_trace_2018## Dataset Config\n\n is provided to achieve the same train-test split as given in the paper.\nThis is essentially the date/time of time steps before the last time step in the dataset.\nNote that the pre-training dataset includes the test region, and thus should also be filtered before usage." ]
7894f326465ed48dc203773aa2b030eaea30a2eb
# Dataset Card for "msmarco-corpus-en-id-parallel-sentences" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
carles-undergrad-thesis/msmarco-corpus-en-id-parallel-sentences
[ "region:us" ]
2023-10-29T08:27:41+00:00
{"dataset_info": {"features": [{"name": "text_en", "dtype": "string"}, {"name": "text_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6084997331, "num_examples": 8841823}], "download_size": 3258000585, "dataset_size": 6084997331}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-29T08:29:35+00:00
[]
[]
TAGS #region-us
# Dataset Card for "msmarco-corpus-en-id-parallel-sentences" More Information needed
[ "# Dataset Card for \"msmarco-corpus-en-id-parallel-sentences\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"msmarco-corpus-en-id-parallel-sentences\"\n\nMore Information needed" ]
[ 6, 27 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"msmarco-corpus-en-id-parallel-sentences\"\n\nMore Information needed" ]
d7d70c7049b852607c65d41485a40cc3414880a8
# Dataset Card for "cv_13_zh_tw" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Codec-SUPERB/cv_13_zh_tw
[ "region:us" ]
2023-10-29T08:28:04+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "original", "path": "data/original-*"}, {"split": "academicodec_hifi_16k_320d", "path": "data/academicodec_hifi_16k_320d-*"}, {"split": "academicodec_hifi_16k_320d_large_uni", "path": "data/academicodec_hifi_16k_320d_large_uni-*"}, {"split": "academicodec_hifi_24k_320d", "path": "data/academicodec_hifi_24k_320d-*"}, {"split": "audiodec_24k_320d", "path": "data/audiodec_24k_320d-*"}, {"split": "dac_16k", "path": "data/dac_16k-*"}, {"split": "dac_24k", "path": "data/dac_24k-*"}, {"split": "dac_44k", "path": "data/dac_44k-*"}, {"split": "encodec_24k", "path": "data/encodec_24k-*"}, {"split": "funcodec_en_libritts_16k_gr1nq32ds320", "path": "data/funcodec_en_libritts_16k_gr1nq32ds320-*"}, {"split": "funcodec_en_libritts_16k_gr8nq32ds320", "path": "data/funcodec_en_libritts_16k_gr8nq32ds320-*"}, {"split": "funcodec_en_libritts_16k_nq32ds320", "path": "data/funcodec_en_libritts_16k_nq32ds320-*"}, {"split": "funcodec_en_libritts_16k_nq32ds640", "path": "data/funcodec_en_libritts_16k_nq32ds640-*"}, {"split": "funcodec_zh_en_16k_nq32ds320", "path": "data/funcodec_zh_en_16k_nq32ds320-*"}, {"split": "funcodec_zh_en_16k_nq32ds640", "path": "data/funcodec_zh_en_16k_nq32ds640-*"}, {"split": "speech_tokenizer_16k", "path": "data/speech_tokenizer_16k-*"}]}], "dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "id", "dtype": "string"}], "splits": [{"name": "original", "num_bytes": 139589277.375, "num_examples": 4825}, {"name": "academicodec_hifi_16k_320d", "num_bytes": 613801785.0, "num_examples": 4825}, {"name": "academicodec_hifi_16k_320d_large_uni", "num_bytes": 613801785.0, "num_examples": 4825}, {"name": "academicodec_hifi_24k_320d", "num_bytes": 920971321.0, "num_examples": 4825}, {"name": "audiodec_24k_320d", "num_bytes": 923777734.0, "num_examples": 4825}, {"name": "dac_16k", "num_bytes": 615114175.35, "num_examples": 4825}, {"name": "dac_24k", "num_bytes": 922388703.35, "num_examples": 4825}, {"name": "dac_44k", "num_bytes": 1694411815.1, "num_examples": 4825}, {"name": "encodec_24k", "num_bytes": 922388741.95, "num_examples": 4825}, {"name": "funcodec_en_libritts_16k_gr1nq32ds320", "num_bytes": 614791797.8, "num_examples": 4825}, {"name": "funcodec_en_libritts_16k_gr8nq32ds320", "num_bytes": 614791797.8, "num_examples": 4825}, {"name": "funcodec_en_libritts_16k_nq32ds320", "num_bytes": 615114175.35, "num_examples": 4825}, {"name": "funcodec_en_libritts_16k_nq32ds640", "num_bytes": 615114175.35, "num_examples": 4825}, {"name": "funcodec_zh_en_16k_nq32ds320", "num_bytes": 615114175.35, "num_examples": 4825}, {"name": "funcodec_zh_en_16k_nq32ds640", "num_bytes": 615114175.35, "num_examples": 4825}, {"name": "speech_tokenizer_16k", "num_bytes": 616432761.0, "num_examples": 4825}], "download_size": 9490245091, "dataset_size": 11672718396.125}}
2023-11-14T18:41:58+00:00
[]
[]
TAGS #region-us
# Dataset Card for "cv_13_zh_tw" More Information needed
[ "# Dataset Card for \"cv_13_zh_tw\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"cv_13_zh_tw\"\n\nMore Information needed" ]
[ 6, 18 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"cv_13_zh_tw\"\n\nMore Information needed" ]
1937c2daf2d36e6831ab4392392d9a4ed7d9cefc
# Dataset Card for "msmarco-query-en-id-parallel-sentences" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
carles-undergrad-thesis/msmarco-query-en-id-parallel-sentences
[ "region:us" ]
2023-10-29T08:32:16+00:00
{"dataset_info": {"features": [{"name": "text_en", "dtype": "string"}, {"name": "text_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 39060054, "num_examples": 509919}], "download_size": 27839260, "dataset_size": 39060054}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-29T08:32:19+00:00
[]
[]
TAGS #region-us
# Dataset Card for "msmarco-query-en-id-parallel-sentences" More Information needed
[ "# Dataset Card for \"msmarco-query-en-id-parallel-sentences\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"msmarco-query-en-id-parallel-sentences\"\n\nMore Information needed" ]
[ 6, 27 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"msmarco-query-en-id-parallel-sentences\"\n\nMore Information needed" ]
a26f86af3744cd6ffd650f69a42e7b58b4465653
# Dataset Card for "llm-MIDI" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
youyu0105/llm-MIDI
[ "region:us" ]
2023-10-29T08:33:38+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 50994814, "num_examples": 14606}], "download_size": 12039871, "dataset_size": 50994814}}
2023-10-29T08:33:44+00:00
[]
[]
TAGS #region-us
# Dataset Card for "llm-MIDI" More Information needed
[ "# Dataset Card for \"llm-MIDI\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"llm-MIDI\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"llm-MIDI\"\n\nMore Information needed" ]
eca773b0773b605c301a2c3608fda0da0a279a53
# Dataset Card for "KoRAE_original_1" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Cartinoe5930/KoRAE_original
[ "region:us" ]
2023-10-29T09:16:51+00:00
{"dataset_info": {"features": [{"name": "source", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 95068407, "num_examples": 63724}], "download_size": 48931987, "dataset_size": 95068407}}
2023-10-29T09:17:03+00:00
[]
[]
TAGS #region-us
# Dataset Card for "KoRAE_original_1" More Information needed
[ "# Dataset Card for \"KoRAE_original_1\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"KoRAE_original_1\"\n\nMore Information needed" ]
[ 6, 16 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"KoRAE_original_1\"\n\nMore Information needed" ]
c0144eee206ba31f965df45948898c9afba85933
# Dataset Card for "Bird Species"

## Dataset Summary

The dataset encompasses 525 bird species with a total of 84,635 training images, 2,625 test images, and 2,625 validation images, all formatted as 224x224x3 color images in JPG format.
The dataset is sourced from Kaggle and can be found [here](https://www.kaggle.com/datasets/gpiosenka/100-bird-species).

### Update dataset

To update the dataset to the latest Kaggle version, run:

```bash
python update.py
```

To update the metadata, run:

```bash
datasets-cli test bird-species-dataset.py --save_infos --all_configs
```
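### Load dataset

A minimal loading sketch is shown below. Since the repository ships a loading script, the exact call may need the config name or `trust_remote_code=True` depending on your `datasets` version; treat these details as assumptions.

```python
# Minimal sketch: load the bird images and map integer labels to species names.
from datasets import load_dataset

ds = load_dataset("chriamue/bird-species-dataset", split="train")

example = ds[0]
label_feature = ds.features["label"]                 # ClassLabel with 525 species
print(label_feature.int2str(example["label"]))       # species name, e.g. "ABBOTTS BABBLER"
print(example["image"].size)                         # PIL image, 224x224
```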
chriamue/bird-species-dataset
[ "task_categories:image-classification", "task_ids:multi-class-image-classification", "size_categories:1K<n<10K", "language:en", "license:cc0-1.0", "biology", "region:us" ]
2023-10-29T09:20:19+00:00
{"language": ["en"], "license": "cc0-1.0", "size_categories": ["1K<n<10K"], "task_categories": ["image-classification"], "task_ids": ["multi-class-image-classification"], "pretty_name": "Bird Species", "tags": ["biology"], "dataset_info": {"config_name": "bird_species_dataset", "features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "ABBOTTS BABBLER", "1": "ABBOTTS BOOBY", "2": "ABYSSINIAN GROUND HORNBILL", "3": "AFRICAN CROWNED CRANE", "4": "AFRICAN EMERALD CUCKOO", "5": "AFRICAN FIREFINCH", "6": "AFRICAN OYSTER CATCHER", "7": "AFRICAN PIED HORNBILL", "8": "AFRICAN PYGMY GOOSE", "9": "ALBATROSS", "10": "ALBERTS TOWHEE", "11": "ALEXANDRINE PARAKEET", "12": "ALPINE CHOUGH", "13": "ALTAMIRA YELLOWTHROAT", "14": "AMERICAN AVOCET", "15": "AMERICAN BITTERN", "16": "AMERICAN COOT", "17": "AMERICAN DIPPER", "18": "AMERICAN FLAMINGO", "19": "AMERICAN GOLDFINCH", "20": "AMERICAN KESTREL", "21": "AMERICAN PIPIT", "22": "AMERICAN REDSTART", "23": "AMERICAN ROBIN", "24": "AMERICAN WIGEON", "25": "AMETHYST WOODSTAR", "26": "ANDEAN GOOSE", "27": "ANDEAN LAPWING", "28": "ANDEAN SISKIN", "29": "ANHINGA", "30": "ANIANIAU", "31": "ANNAS HUMMINGBIRD", "32": "ANTBIRD", "33": "ANTILLEAN EUPHONIA", "34": "APAPANE", "35": "APOSTLEBIRD", "36": "ARARIPE MANAKIN", "37": "ASHY STORM PETREL", "38": "ASHY THRUSHBIRD", "39": "ASIAN CRESTED IBIS", "40": "ASIAN DOLLARD BIRD", "41": "ASIAN GREEN BEE EATER", "42": "ASIAN OPENBILL STORK", "43": "AUCKLAND SHAQ", "44": "AUSTRAL CANASTERO", "45": "AUSTRALASIAN FIGBIRD", "46": "AVADAVAT", "47": "AZARAS SPINETAIL", "48": "AZURE BREASTED PITTA", "49": "AZURE JAY", "50": "AZURE TANAGER", "51": "AZURE TIT", "52": "BAIKAL TEAL", "53": "BALD EAGLE", "54": "BALD IBIS", "55": "BALI STARLING", "56": "BALTIMORE ORIOLE", "57": "BANANAQUIT", "58": "BAND TAILED GUAN", "59": "BANDED BROADBILL", "60": "BANDED PITA", "61": "BANDED STILT", "62": "BAR-TAILED GODWIT", "63": "BARN OWL", "64": "BARN SWALLOW", "65": "BARRED PUFFBIRD", "66": "BARROWS GOLDENEYE", "67": "BAY-BREASTED WARBLER", "68": "BEARDED BARBET", "69": "BEARDED BELLBIRD", "70": "BEARDED REEDLING", "71": "BELTED KINGFISHER", "72": "BIRD OF PARADISE", "73": "BLACK AND YELLOW BROADBILL", "74": "BLACK BAZA", "75": "BLACK BREASTED PUFFBIRD", "76": "BLACK COCKATO", "77": "BLACK FACED SPOONBILL", "78": "BLACK FRANCOLIN", "79": "BLACK HEADED CAIQUE", "80": "BLACK NECKED STILT", "81": "BLACK SKIMMER", "82": "BLACK SWAN", "83": "BLACK TAIL CRAKE", "84": "BLACK THROATED BUSHTIT", "85": "BLACK THROATED HUET", "86": "BLACK THROATED WARBLER", "87": "BLACK VENTED SHEARWATER", "88": "BLACK VULTURE", "89": "BLACK-CAPPED CHICKADEE", "90": "BLACK-NECKED GREBE", "91": "BLACK-THROATED SPARROW", "92": "BLACKBURNIAM WARBLER", "93": "BLONDE CRESTED WOODPECKER", "94": "BLOOD PHEASANT", "95": "BLUE COAU", "96": "BLUE DACNIS", "97": "BLUE GRAY GNATCATCHER", "98": "BLUE GROSBEAK", "99": "BLUE GROUSE", "100": "BLUE HERON", "101": "BLUE MALKOHA", "102": "BLUE THROATED PIPING GUAN", "103": "BLUE THROATED TOUCANET", "104": "BOBOLINK", "105": "BORNEAN BRISTLEHEAD", "106": "BORNEAN LEAFBIRD", "107": "BORNEAN PHEASANT", "108": "BRANDT CORMARANT", "109": "BREWERS BLACKBIRD", "110": "BROWN CREPPER", "111": "BROWN HEADED COWBIRD", "112": "BROWN NOODY", "113": "BROWN THRASHER", "114": "BUFFLEHEAD", "115": "BULWERS PHEASANT", "116": "BURCHELLS COURSER", "117": "BUSH TURKEY", "118": "CAATINGA CACHOLOTE", "119": "CABOTS TRAGOPAN", "120": "CACTUS WREN", "121": "CALIFORNIA CONDOR", "122": "CALIFORNIA GULL", "123": 
"CALIFORNIA QUAIL", "124": "CAMPO FLICKER", "125": "CANARY", "126": "CANVASBACK", "127": "CAPE GLOSSY STARLING", "128": "CAPE LONGCLAW", "129": "CAPE MAY WARBLER", "130": "CAPE ROCK THRUSH", "131": "CAPPED HERON", "132": "CAPUCHINBIRD", "133": "CARMINE BEE-EATER", "134": "CASPIAN TERN", "135": "CASSOWARY", "136": "CEDAR WAXWING", "137": "CERULEAN WARBLER", "138": "CHARA DE COLLAR", "139": "CHATTERING LORY", "140": "CHESTNET BELLIED EUPHONIA", "141": "CHESTNUT WINGED CUCKOO", "142": "CHINESE BAMBOO PARTRIDGE", "143": "CHINESE POND HERON", "144": "CHIPPING SPARROW", "145": "CHUCAO TAPACULO", "146": "CHUKAR PARTRIDGE", "147": "CINNAMON ATTILA", "148": "CINNAMON FLYCATCHER", "149": "CINNAMON TEAL", "150": "CLARKS GREBE", "151": "CLARKS NUTCRACKER", "152": "COCK OF THE ROCK", "153": "COCKATOO", "154": "COLLARED ARACARI", "155": "COLLARED CRESCENTCHEST", "156": "COMMON FIRECREST", "157": "COMMON GRACKLE", "158": "COMMON HOUSE MARTIN", "159": "COMMON IORA", "160": "COMMON LOON", "161": "COMMON POORWILL", "162": "COMMON STARLING", "163": "COPPERSMITH BARBET", "164": "COPPERY TAILED COUCAL", "165": "CRAB PLOVER", "166": "CRANE HAWK", "167": "CREAM COLORED WOODPECKER", "168": "CRESTED AUKLET", "169": "CRESTED CARACARA", "170": "CRESTED COUA", "171": "CRESTED FIREBACK", "172": "CRESTED KINGFISHER", "173": "CRESTED NUTHATCH", "174": "CRESTED OROPENDOLA", "175": "CRESTED SERPENT EAGLE", "176": "CRESTED SHRIKETIT", "177": "CRESTED WOOD PARTRIDGE", "178": "CRIMSON CHAT", "179": "CRIMSON SUNBIRD", "180": "CROW", "181": "CUBAN TODY", "182": "CUBAN TROGON", "183": "CURL CRESTED ARACURI", "184": "D-ARNAUDS BARBET", "185": "DALMATIAN PELICAN", "186": "DARJEELING WOODPECKER", "187": "DARK EYED JUNCO", "188": "DAURIAN REDSTART", "189": "DEMOISELLE CRANE", "190": "DOUBLE BARRED FINCH", "191": "DOUBLE BRESTED CORMARANT", "192": "DOUBLE EYED FIG PARROT", "193": "DOWNY WOODPECKER", "194": "DUNLIN", "195": "DUSKY LORY", "196": "DUSKY ROBIN", "197": "EARED PITA", "198": "EASTERN BLUEBIRD", "199": "EASTERN BLUEBONNET", "200": "EASTERN GOLDEN WEAVER", "201": "EASTERN MEADOWLARK", "202": "EASTERN ROSELLA", "203": "EASTERN TOWEE", "204": "EASTERN WIP POOR WILL", "205": "EASTERN YELLOW ROBIN", "206": "ECUADORIAN HILLSTAR", "207": "EGYPTIAN GOOSE", "208": "ELEGANT TROGON", "209": "ELLIOTS PHEASANT", "210": "EMERALD TANAGER", "211": "EMPEROR PENGUIN", "212": "EMU", "213": "ENGGANO MYNA", "214": "EURASIAN BULLFINCH", "215": "EURASIAN GOLDEN ORIOLE", "216": "EURASIAN MAGPIE", "217": "EUROPEAN GOLDFINCH", "218": "EUROPEAN TURTLE DOVE", "219": "EVENING GROSBEAK", "220": "FAIRY BLUEBIRD", "221": "FAIRY PENGUIN", "222": "FAIRY TERN", "223": "FAN TAILED WIDOW", "224": "FASCIATED WREN", "225": "FIERY MINIVET", "226": "FIORDLAND PENGUIN", "227": "FIRE TAILLED MYZORNIS", "228": "FLAME BOWERBIRD", "229": "FLAME TANAGER", "230": "FOREST WAGTAIL", "231": "FRIGATE", "232": "FRILL BACK PIGEON", "233": "GAMBELS QUAIL", "234": "GANG GANG COCKATOO", "235": "GILA WOODPECKER", "236": "GILDED FLICKER", "237": "GLOSSY IBIS", "238": "GO AWAY BIRD", "239": "GOLD WING WARBLER", "240": "GOLDEN BOWER BIRD", "241": "GOLDEN CHEEKED WARBLER", "242": "GOLDEN CHLOROPHONIA", "243": "GOLDEN EAGLE", "244": "GOLDEN PARAKEET", "245": "GOLDEN PHEASANT", "246": "GOLDEN PIPIT", "247": "GOULDIAN FINCH", "248": "GRANDALA", "249": "GRAY CATBIRD", "250": "GRAY KINGBIRD", "251": "GRAY PARTRIDGE", "252": "GREAT ARGUS", "253": "GREAT GRAY OWL", "254": "GREAT JACAMAR", "255": "GREAT KISKADEE", "256": "GREAT POTOO", "257": "GREAT TINAMOU", "258": "GREAT XENOPS", "259": 
"GREATER PEWEE", "260": "GREATER PRAIRIE CHICKEN", "261": "GREATOR SAGE GROUSE", "262": "GREEN BROADBILL", "263": "GREEN JAY", "264": "GREEN MAGPIE", "265": "GREEN WINGED DOVE", "266": "GREY CUCKOOSHRIKE", "267": "GREY HEADED CHACHALACA", "268": "GREY HEADED FISH EAGLE", "269": "GREY PLOVER", "270": "GROVED BILLED ANI", "271": "GUINEA TURACO", "272": "GUINEAFOWL", "273": "GURNEYS PITTA", "274": "GYRFALCON", "275": "HAMERKOP", "276": "HARLEQUIN DUCK", "277": "HARLEQUIN QUAIL", "278": "HARPY EAGLE", "279": "HAWAIIAN GOOSE", "280": "HAWFINCH", "281": "HELMET VANGA", "282": "HEPATIC TANAGER", "283": "HIMALAYAN BLUETAIL", "284": "HIMALAYAN MONAL", "285": "HOATZIN", "286": "HOODED MERGANSER", "287": "HOOPOES", "288": "HORNED GUAN", "289": "HORNED LARK", "290": "HORNED SUNGEM", "291": "HOUSE FINCH", "292": "HOUSE SPARROW", "293": "HYACINTH MACAW", "294": "IBERIAN MAGPIE", "295": "IBISBILL", "296": "IMPERIAL SHAQ", "297": "INCA TERN", "298": "INDIAN BUSTARD", "299": "INDIAN PITTA", "300": "INDIAN ROLLER", "301": "INDIAN VULTURE", "302": "INDIGO BUNTING", "303": "INDIGO FLYCATCHER", "304": "INLAND DOTTEREL", "305": "IVORY BILLED ARACARI", "306": "IVORY GULL", "307": "IWI", "308": "JABIRU", "309": "JACK SNIPE", "310": "JACOBIN PIGEON", "311": "JANDAYA PARAKEET", "312": "JAPANESE ROBIN", "313": "JAVA SPARROW", "314": "JOCOTOCO ANTPITTA", "315": "KAGU", "316": "KAKAPO", "317": "KILLDEAR", "318": "KING EIDER", "319": "KING VULTURE", "320": "KIWI", "321": "KNOB BILLED DUCK", "322": "KOOKABURRA", "323": "LARK BUNTING", "324": "LAUGHING GULL", "325": "LAZULI BUNTING", "326": "LESSER ADJUTANT", "327": "LILAC ROLLER", "328": "LIMPKIN", "329": "LITTLE AUK", "330": "LOGGERHEAD SHRIKE", "331": "LONG-EARED OWL", "332": "LOONEY BIRDS", "333": "LUCIFER HUMMINGBIRD", "334": "MAGPIE GOOSE", "335": "MALABAR HORNBILL", "336": "MALACHITE KINGFISHER", "337": "MALAGASY WHITE EYE", "338": "MALEO", "339": "MALLARD DUCK", "340": "MANDRIN DUCK", "341": "MANGROVE CUCKOO", "342": "MARABOU STORK", "343": "MASKED BOBWHITE", "344": "MASKED BOOBY", "345": "MASKED LAPWING", "346": "MCKAYS BUNTING", "347": "MERLIN", "348": "MIKADO PHEASANT", "349": "MILITARY MACAW", "350": "MOURNING DOVE", "351": "MYNA", "352": "NICOBAR PIGEON", "353": "NOISY FRIARBIRD", "354": "NORTHERN BEARDLESS TYRANNULET", "355": "NORTHERN CARDINAL", "356": "NORTHERN FLICKER", "357": "NORTHERN FULMAR", "358": "NORTHERN GANNET", "359": "NORTHERN GOSHAWK", "360": "NORTHERN JACANA", "361": "NORTHERN MOCKINGBIRD", "362": "NORTHERN PARULA", "363": "NORTHERN RED BISHOP", "364": "NORTHERN SHOVELER", "365": "OCELLATED TURKEY", "366": "OILBIRD", "367": "OKINAWA RAIL", "368": "ORANGE BREASTED TROGON", "369": "ORANGE BRESTED BUNTING", "370": "ORIENTAL BAY OWL", "371": "ORNATE HAWK EAGLE", "372": "OSPREY", "373": "OSTRICH", "374": "OVENBIRD", "375": "OYSTER CATCHER", "376": "PAINTED BUNTING", "377": "PALILA", "378": "PALM NUT VULTURE", "379": "PARADISE TANAGER", "380": "PARAKETT AUKLET", "381": "PARUS MAJOR", "382": "PATAGONIAN SIERRA FINCH", "383": "PEACOCK", "384": "PEREGRINE FALCON", "385": "PHAINOPEPLA", "386": "PHILIPPINE EAGLE", "387": "PINK ROBIN", "388": "PLUSH CRESTED JAY", "389": "POMARINE JAEGER", "390": "PUFFIN", "391": "PUNA TEAL", "392": "PURPLE FINCH", "393": "PURPLE GALLINULE", "394": "PURPLE MARTIN", "395": "PURPLE SWAMPHEN", "396": "PYGMY KINGFISHER", "397": "PYRRHULOXIA", "398": "QUETZAL", "399": "RAINBOW LORIKEET", "400": "RAZORBILL", "401": "RED BEARDED BEE EATER", "402": "RED BELLIED PITTA", "403": "RED BILLED TROPICBIRD", "404": "RED BROWED FINCH", 
"405": "RED CROSSBILL", "406": "RED FACED CORMORANT", "407": "RED FACED WARBLER", "408": "RED FODY", "409": "RED HEADED DUCK", "410": "RED HEADED WOODPECKER", "411": "RED KNOT", "412": "RED LEGGED HONEYCREEPER", "413": "RED NAPED TROGON", "414": "RED SHOULDERED HAWK", "415": "RED TAILED HAWK", "416": "RED TAILED THRUSH", "417": "RED WINGED BLACKBIRD", "418": "RED WISKERED BULBUL", "419": "REGENT BOWERBIRD", "420": "RING-NECKED PHEASANT", "421": "ROADRUNNER", "422": "ROCK DOVE", "423": "ROSE BREASTED COCKATOO", "424": "ROSE BREASTED GROSBEAK", "425": "ROSEATE SPOONBILL", "426": "ROSY FACED LOVEBIRD", "427": "ROUGH LEG BUZZARD", "428": "ROYAL FLYCATCHER", "429": "RUBY CROWNED KINGLET", "430": "RUBY THROATED HUMMINGBIRD", "431": "RUDDY SHELDUCK", "432": "RUDY KINGFISHER", "433": "RUFOUS KINGFISHER", "434": "RUFOUS TREPE", "435": "RUFUOS MOTMOT", "436": "SAMATRAN THRUSH", "437": "SAND MARTIN", "438": "SANDHILL CRANE", "439": "SATYR TRAGOPAN", "440": "SAYS PHOEBE", "441": "SCARLET CROWNED FRUIT DOVE", "442": "SCARLET FACED LIOCICHLA", "443": "SCARLET IBIS", "444": "SCARLET MACAW", "445": "SCARLET TANAGER", "446": "SHOEBILL", "447": "SHORT BILLED DOWITCHER", "448": "SMITHS LONGSPUR", "449": "SNOW GOOSE", "450": "SNOW PARTRIDGE", "451": "SNOWY EGRET", "452": "SNOWY OWL", "453": "SNOWY PLOVER", "454": "SNOWY SHEATHBILL", "455": "SORA", "456": "SPANGLED COTINGA", "457": "SPLENDID WREN", "458": "SPOON BILED SANDPIPER", "459": "SPOTTED CATBIRD", "460": "SPOTTED WHISTLING DUCK", "461": "SQUACCO HERON", "462": "SRI LANKA BLUE MAGPIE", "463": "STEAMER DUCK", "464": "STORK BILLED KINGFISHER", "465": "STRIATED CARACARA", "466": "STRIPED OWL", "467": "STRIPPED MANAKIN", "468": "STRIPPED SWALLOW", "469": "SUNBITTERN", "470": "SUPERB STARLING", "471": "SURF SCOTER", "472": "SWINHOES PHEASANT", "473": "TAILORBIRD", "474": "TAIWAN MAGPIE", "475": "TAKAHE", "476": "TASMANIAN HEN", "477": "TAWNY FROGMOUTH", "478": "TEAL DUCK", "479": "TIT MOUSE", "480": "TOUCHAN", "481": "TOWNSENDS WARBLER", "482": "TREE SWALLOW", "483": "TRICOLORED BLACKBIRD", "484": "TROPICAL KINGBIRD", "485": "TRUMPTER SWAN", "486": "TURKEY VULTURE", "487": "TURQUOISE MOTMOT", "488": "UMBRELLA BIRD", "489": "VARIED THRUSH", "490": "VEERY", "491": "VENEZUELIAN TROUPIAL", "492": "VERDIN", "493": "VERMILION FLYCATHER", "494": "VICTORIA CROWNED PIGEON", "495": "VIOLET BACKED STARLING", "496": "VIOLET CUCKOO", "497": "VIOLET GREEN SWALLOW", "498": "VIOLET TURACO", "499": "VISAYAN HORNBILL", "500": "VULTURINE GUINEAFOWL", "501": "WALL CREAPER", "502": "WATTLED CURASSOW", "503": "WATTLED LAPWING", "504": "WHIMBREL", "505": "WHITE BREASTED WATERHEN", "506": "WHITE BROWED CRAKE", "507": "WHITE CHEEKED TURACO", "508": "WHITE CRESTED HORNBILL", "509": "WHITE EARED HUMMINGBIRD", "510": "WHITE NECKED RAVEN", "511": "WHITE TAILED TROPIC", "512": "WHITE THROATED BEE EATER", "513": "WILD TURKEY", "514": "WILLOW PTARMIGAN", "515": "WILSONS BIRD OF PARADISE", "516": "WOOD DUCK", "517": "WOOD THRUSH", "518": "WOODLAND KINGFISHER", "519": "WRENTIT", "520": "YELLOW BELLIED FLOWERPECKER", "521": "YELLOW BREASTED CHAT", "522": "YELLOW CACIQUE", "523": "YELLOW HEADED BLACKBIRD", "524": "ZEBRA DOVE"}}}}], "splits": [{"name": "train", "num_bytes": 1912154520, "num_examples": 84635}, {"name": "validation", "num_bytes": 60616321, "num_examples": 2625}, {"name": "test", "num_bytes": 60965656, "num_examples": 2625}], "download_size": 1984870735, "dataset_size": 2033736497}}
2023-11-12T10:46:34+00:00
[]
[ "en" ]
TAGS #task_categories-image-classification #task_ids-multi-class-image-classification #size_categories-1K<n<10K #language-English #license-cc0-1.0 #biology #region-us
# Dataset Card for "Bird Species" ## Dataset Summary The dataset encompasses 525 bird species with a total of 84,635 training images, 2,625 test images, and 2,625 validation images, all formatted as 224x224x3 color images in jpg. The dataset is sourced from Kaggle and can be found here. ### Update dataset To update the dataset to latest kaggle version run: To update the metadata run:
[ "# Dataset Card for \"Bird Species\"", "## Dataset Summary\n\nThe dataset encompasses 525 bird species with a total of 84,635 training images, 2,625 test images, and 2,625 validation images, all formatted as 224x224x3 color images in jpg.\nThe dataset is sourced from Kaggle and can be found here.", "### Update dataset\n\nTo update the dataset to latest kaggle version run:\n\n\n\nTo update the metadata run:" ]
[ "TAGS\n#task_categories-image-classification #task_ids-multi-class-image-classification #size_categories-1K<n<10K #language-English #license-cc0-1.0 #biology #region-us \n", "# Dataset Card for \"Bird Species\"", "## Dataset Summary\n\nThe dataset encompasses 525 bird species with a total of 84,635 training images, 2,625 test images, and 2,625 validation images, all formatted as 224x224x3 color images in jpg.\nThe dataset is sourced from Kaggle and can be found here.", "### Update dataset\n\nTo update the dataset to latest kaggle version run:\n\n\n\nTo update the metadata run:" ]
[ 58, 11, 69, 24 ]
[ "passage: TAGS\n#task_categories-image-classification #task_ids-multi-class-image-classification #size_categories-1K<n<10K #language-English #license-cc0-1.0 #biology #region-us \n# Dataset Card for \"Bird Species\"## Dataset Summary\n\nThe dataset encompasses 525 bird species with a total of 84,635 training images, 2,625 test images, and 2,625 validation images, all formatted as 224x224x3 color images in jpg.\nThe dataset is sourced from Kaggle and can be found here.### Update dataset\n\nTo update the dataset to latest kaggle version run:\n\n\n\nTo update the metadata run:" ]
0bd3a078135a125ec8d4df68f4bcf4bf498f3d11
# GPT-BOOKSUM

GPT-BookSum is a **hierarchical summarization dataset** based on the story passages from the [BookSum](https://github.com/salesforce/booksum) dataset. The dataset is proposed in **Improving Pacing in Long-Form Story Planning** (EMNLP23). In the paper, we use GPT-BookSum to train a concreteness evaluator, which is further utilized to improve pacing in story outlining and generation.

The summaries are written by ChatGPT (gpt-3.5-turbo-0301); thus, we obtain a uniform style. (*We initially used BookSum's summaries, but found that different-level summaries were often written in different styles, e.g., chapter-level summaries are often bullet-point lists.*)

### Data Instances

An example looks as follows:

```json
{"level": "chapter", "turbo_len": 70, "compression ratio": 0.034, "roberta_len": 74, "sub_index": 6, "text": "Grushenka is glad to see Alyosha and sits on his knee, while Rakitin tries to join in their conversation. Grushenka mentions that she's expecting a message from her officer, and gives Rakitin champagne when he asks for it. They all have a conversation about various things including the death of Father Zossima.", "rawtext_turbo_len": 2059, "index": {"bid": "28054", "is_aggregate": true, "source": "cliffnotes", "chapter_path": "all_chapterized_books/28054-chapters/book_vii.txt", "summary_path": "finished_summaries/cliffnotes/The Brothers Karamazov/section_10_part_0.txt", "book_id": "The Brothers Karamazov.book vii.chapter i-chapter iv", "summary_id": "book vii"}}
```

- `level` can be *chapter* or *paragraph*. (If you also want *book*-level, you can directly get it from the BookSum dataset.)
- `text`, the summary.
- `turbo_len`, number of tokens in summary under ChatGPT tokenizer.
- `compression ratio`, [number of tokens in summary / number of tokens in raw texts], a smaller number means more compressed.
- `roberta_len`, number of tokens in summary under RoBERTa tokenizer.
- `sub_index`, if the raw text is longer than 4,096 tokens (the max input length of ChatGPT), we will chunk it into sub-chapters. sub_index is the index of the sub-chapter.
- `rawtext_turbo_len`, number of tokens in the raw text under ChatGPT tokenizer.
- `index`, the index of raw text in the BookSum dataset.

### Dataset Statistics

| | ***Chapter-Level*** | | | | ***Paragraph-Level*** | | | |
| --------- | -------- | --------------- | ----------- | ------------- | -------- | --------------- | --------------- | ------------- |
| **Split** | **Size** | **Summary Len** | **Raw Len** | **Raw / Sum** | **Size** | **Summary Len** | **Raw Len** | **Raw / Sum** |
| *Train* | 23,564 | 133.7 | 5450.7 | 40.77 | 162,122* | 58.6 | 71.6 | 1.22 |
| *Val* | 3,086 | 134.2 | 4607.8 | 34.34 | 58,648 | 56.6 | 63.7 | 1.13 |
| *Test* | 3,397 | 135.1 | 5440.8 | 40.27 | 59,965 | 59.5 | 76.4 | 1.28 |

Table 1: GPT-BookSum dataset statistics for chapter-level and paragraph-level summaries: number of passage summary pairs, average token count of summaries and raw texts, and the ratio of total token count in the raw texts compared to after summarizing. Training, validation, and test sets are partitioned at the book level.

*The checkpoint-1 (mentioned and used in the paper) of the paragraph-level train set is 162,122 items. Now, we've finished the full paragraph-level train set, which is 444,643 items.

### File Structure

Two folders, "chapter-" and "paragraph-," contain corresponding entries, and each of them contains separate jsonline files for train, val, and test splits.
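As a minimal loading sketch for this layout (the folder and file names below are assumptions based on the description above; adjust them to whatever the repository actually contains):

```python
import json
from pathlib import Path

# Minimal sketch for reading one split of GPT-BookSum.
# Assumptions: a "chapter" or "paragraph" folder containing train/val/test
# jsonline files; only the "text" key is taken from the documented schema.
def load_split(root: str, level: str, split: str):
    path = Path(root) / level / f"{split}.jsonl"
    with open(path, encoding="utf-8") as fh:
        return [json.loads(line) for line in fh]

chapter_train = load_split("GPT-BookSum", "chapter", "train")
print(len(chapter_train), chapter_train[0]["text"][:80])
```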
### Downstream: Build a Pairwise Dataset to Train a Concreteness Evaluator

We further use GPT-BookSum to train our concreteness evaluator M. We construct training pairs $(t_0, t_1)$ as follows:

1. Sample summaries from GPT-BookSum, which have not yet been used for training, and pair them by top mean embedding similarity using Contriever (Izacard et al., 2021).
2. With a 50% probability, truncate the longer summary to roughly the length of the shorter one. Otherwise, truncate both summaries to the same token length, randomly chosen on a log scale from 25 to 180. Sentence boundaries are respected whenever truncating.

By matching topic and length within a training pair $(t_0, t_1)$, we encourage M to focus on the actual vagueness or concreteness of the exposition. (A minimal code sketch of these construction rules is given at the end of this card.)

A workable input for model construction is "$t_0$ </s> $t_1$" using a separator token </s>. As chapter-level summaries are dramatically more compressed than paragraph-level summaries (Table 1), we label the chapter-level summary as vaguer when paired with a paragraph-level summary. The label is 0.5 if $t_0$ and $t_1$ are same-level summaries; we found including 0.5 labels to be empirically beneficial.

### Prompt Design for Summarization

The prompt design for summarization follows instructions from Super-NaturalInstructions (Wang et al., 2022). Table 2 shows the prompt.

```json
{"role": "user", "content": "Write a summary for the paragraph.\n\n"}
{"role": "user", "content": "Paragraph: {Input Raw Text}"}
{"role": "assistant", "content": "Summary: In this paragraph, the main story is as follows."}
```

Table 2: Prompt for GPT-3.5-turbo-0301 to summarize for GPT-BookSum.

Since GPT-3.5-turbo-0301 has a context window limit of 4,097 tokens, sometimes even a single chapter will exceed the limit. For such texts, we split them into sub-parts at sentence boundaries.

To avoid potential artifacts that may allow the evaluator to trivially discriminate summary-level texts, we prevent summaries from using words indicating a level of granularity, such as “chapter,” “paragraph,” etc. We also delete the titles of chapters and books in the data to mitigate the likelihood of the language model making inferences based on previously memorized knowledge.

### Citation

[TODO]
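As a rough illustration of the pair-construction rules described in the downstream section above, the sketch below assumes the Contriever-based topic matching has already selected the two summaries, uses whitespace tokens in place of the real tokenizer, and picks an arbitrary numeric convention for which side is labelled vaguer (the card does not specify one):

```python
import math
import random

def truncate_to_tokens(text, n_tokens):
    # Crude whitespace "tokens"; a real implementation would use the model tokenizer.
    words = text.split()
    if len(words) <= n_tokens:
        return text
    clipped = " ".join(words[:n_tokens])
    # Approximate the "respect sentence boundaries" rule by cutting at the last period.
    last_period = clipped.rfind(".")
    return clipped[: last_period + 1] if last_period > 0 else clipped

def make_pair(summary_a, summary_b, level_a, level_b):
    # 50%: truncate the longer summary to roughly the shorter one's length;
    # otherwise truncate both to a length drawn log-uniformly from 25 to 180 tokens.
    if random.random() < 0.5:
        target = min(len(summary_a.split()), len(summary_b.split()))
    else:
        target = int(math.exp(random.uniform(math.log(25), math.log(180))))
    t0 = truncate_to_tokens(summary_a, target)
    t1 = truncate_to_tokens(summary_b, target)
    # Chapter-level summaries count as vaguer than paragraph-level ones;
    # same-level pairs get the 0.5 label. The 0/1 convention here is an assumption.
    if level_a == level_b:
        label = 0.5
    else:
        label = 0.0 if level_a == "chapter" else 1.0
    return {"text": f"{t0} </s> {t1}", "label": label}
```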
ZachW/GPT-BookSum
[ "task_categories:summarization", "task_categories:text-generation", "task_categories:text-classification", "size_categories:100K<n<1M", "language:en", "license:mit", "story", "region:us" ]
2023-10-29T09:33:13+00:00
{"language": ["en"], "license": "mit", "size_categories": ["100K<n<1M"], "task_categories": ["summarization", "text-generation", "text-classification"], "pretty_name": "GPT-BookSum", "tags": ["story"]}
2023-10-29T14:22:23+00:00
[]
[ "en" ]
TAGS #task_categories-summarization #task_categories-text-generation #task_categories-text-classification #size_categories-100K<n<1M #language-English #license-mit #story #region-us
GPT-BOOKSUM =========== GPT-BookSum is a hierarchical summarization dataset based on the story passages from the BookSum) dataset. The dataset is proposed in Improving Pacing in Long-Form Story Planning (EMNLP23). In the paper, we use GPT-BookSum to train a concreteness evaluator, which is further utilized to improve pacing in story outlining and generation. The summaries are written by ChatGPT (gpt-3.5-turbo-0301); thus, we obtain a uniform style. (*We initially used BookSum's summaries, but found that different-level summaries were often written in different styles, e.g., chapter-level summaries are often bullet-point lists.*) ### Data Instances an example looks as follows: * 'level' can be *chapter* or *paragraph*. (If you also want *book*-level, you can directly get it from the BookSum dataset.) * 'text', the summary. * 'turbo\_len', number of tokens in summary under ChatGPT tokenizer. * 'compression ratio', [number of tokens in summary / number of tokens in raw texts], a smaller number means more compressed. * 'roberta\_len', number of tokens in summary under RoBERTa tokenizer. * 'sub\_index', if the raw text is longer than 4,096 tokens (the max input length of ChatGPT), we will chunk it into sub-chapters. sub\_index is the index of the sub-chapter. * 'rawtext\_turbo\_len', number of tokens in the raw text under ChatGPT tokenizer. * 'index', the index of raw text in the BookSum dataset. ### Dataset Statics Table 1: GPT-BookSum dataset statistics for chapter-level and paragraph-level summaries: number of passage summary pairs, average token count of summaries and raw texts, and the ratio of total token count in the raw texts compared to after summarizing. Training, validation, and test sets are partitioned at the book level. \*The checkpoint-1 (mentioned and used in the paper) of the paragraph-level train set is 162,122 items. Now, we've finished the full paragraph-level train set, which is 444,643 items. ### File Structure Two folders, "chapter-" and "paragraph-," contain corresponding entries, and each of them contains separate jsonline files for train, val, and test splits. ### Downstream: Build a Pairwise Dataset to Train a Concreteness Evaluator We further use GPT-BookSum to train our concreteness evaluator M. We construct training pairs $(t\_0, t\_1)$ as follows: 1. Sample summaries from GPT-BookSum, which have not yet been used for training, and pair them by top mean embedding similarity using Contriever (Izacard et al., 2021). 2. With a 50% probability, truncate the longer summary to roughly the length of the shorter one. Otherwise, truncate both summaries to the same token length, randomly chosen on a log scale from 25 to 180. Sentence boundaries are respected whenever truncating. By matching topic and length within a training pair $(t\_0, t\_1)$, we encourage M to focus on the actual vagueness or concreteness of the exposition. A workable input for model construction is "$t\_0$ $t\_1$" using a separator token . As chapter-level summaries are dramatically more compressed than paragraph-level summaries (Table 1), we label the chapter-level summary as vaguer when paired with a paragraph-level summary. The label is 0.5 if t0 and t1 are same-level summaries; we found including 0.5 labels to be empirically beneficial. ### Prompt Design for Summarization The prompt design for summarization follows instructions from Super-NaturalInstructions (Wang et al., 2022). Table 2 shows the prompt. Table 2: Prompt for GPT-3.5-turbo-0301 to summarize for GPT-BookSum. 
Since GPT-3.5-turbo-0301 has a context window limit of 4,097 tokens, sometimes even a single chapter will exceed the limit. For such texts, we split them into sub-parts at sentence boundaries. To avoid potential artifacts that may allow the evaluator to trivially discriminate summary-level texts, we prevent summaries from using words indicating a level of granularity, such as “chapter,” “paragraph,” etc. We also delete the titles of chapters and books in the data to mitigate the likelihood of the language model making inferences based on previously memorized knowledge. [TODO]
[ "### Data Instances\n\n\nan example looks as follows:\n\n\n* 'level' can be *chapter* or *paragraph*. (If you also want *book*-level, you can directly get it from the BookSum dataset.)\n* 'text', the summary.\n* 'turbo\\_len', number of tokens in summary under ChatGPT tokenizer.\n* 'compression ratio', [number of tokens in summary / number of tokens in raw texts], a smaller number means more compressed.\n* 'roberta\\_len', number of tokens in summary under RoBERTa tokenizer.\n* 'sub\\_index', if the raw text is longer than 4,096 tokens (the max input length of ChatGPT), we will chunk it into sub-chapters. sub\\_index is the index of the sub-chapter.\n* 'rawtext\\_turbo\\_len', number of tokens in the raw text under ChatGPT tokenizer.\n* 'index', the index of raw text in the BookSum dataset.", "### Dataset Statics\n\n\n\nTable 1: GPT-BookSum dataset statistics for chapter-level and paragraph-level summaries: number of passage summary pairs, average token count of summaries and raw texts, and the ratio of total token count in the raw texts\ncompared to after summarizing. Training, validation, and test sets are partitioned at the book level.\n\n\n\\*The checkpoint-1 (mentioned and used in the paper) of the paragraph-level train set is 162,122 items. Now, we've finished the full paragraph-level train set, which is 444,643 items.", "### File Structure\n\n\nTwo folders, \"chapter-\" and \"paragraph-,\" contain corresponding entries, and each of them contains separate jsonline files for train, val, and test splits.", "### Downstream: Build a Pairwise Dataset to Train a Concreteness Evaluator\n\n\nWe further use GPT-BookSum to train our concreteness evaluator M. We construct training pairs $(t\\_0, t\\_1)$ as follows:\n\n\n1. Sample summaries from GPT-BookSum, which have not yet been used for training, and pair them by top mean embedding similarity using Contriever (Izacard et al., 2021).\n2. With a 50% probability, truncate the longer summary to roughly the length of the shorter one. Otherwise, truncate both summaries to the same token length, randomly chosen on a log scale from 25 to 180. Sentence boundaries are respected whenever truncating.\n\n\nBy matching topic and length within a training pair $(t\\_0, t\\_1)$, we encourage M to focus on the actual vagueness or concreteness of the exposition.\n\n\nA workable input for model construction is \"$t\\_0$ $t\\_1$\" using a separator token . As chapter-level summaries are dramatically more compressed than paragraph-level summaries (Table 1), we label the chapter-level summary as vaguer when paired with a paragraph-level summary. The label is 0.5 if t0 and t1 are same-level summaries; we found including 0.5 labels to be empirically beneficial.", "### Prompt Design for Summarization\n\n\nThe prompt design for summarization follows instructions from Super-NaturalInstructions (Wang et al., 2022). Table 2 shows the prompt.\n\n\nTable 2: Prompt for GPT-3.5-turbo-0301 to summarize for GPT-BookSum.\n\n\nSince GPT-3.5-turbo-0301 has a context window limit of 4,097 tokens, sometimes even a single chapter will exceed the limit. For such texts, we split them into sub-parts at sentence boundaries.\n\n\nTo avoid potential artifacts that may allow the evaluator to trivially discriminate summary-level texts, we prevent summaries from using words indicating a level of granularity, such as “chapter,” “paragraph,” etc. 
We also delete the titles of chapters and books in the data to mitigate the likelihood of the language model making inferences based on previously memorized knowledge.\n\n\n[TODO]" ]
[ "TAGS\n#task_categories-summarization #task_categories-text-generation #task_categories-text-classification #size_categories-100K<n<1M #language-English #license-mit #story #region-us \n", "### Data Instances\n\n\nan example looks as follows:\n\n\n* 'level' can be *chapter* or *paragraph*. (If you also want *book*-level, you can directly get it from the BookSum dataset.)\n* 'text', the summary.\n* 'turbo\\_len', number of tokens in summary under ChatGPT tokenizer.\n* 'compression ratio', [number of tokens in summary / number of tokens in raw texts], a smaller number means more compressed.\n* 'roberta\\_len', number of tokens in summary under RoBERTa tokenizer.\n* 'sub\\_index', if the raw text is longer than 4,096 tokens (the max input length of ChatGPT), we will chunk it into sub-chapters. sub\\_index is the index of the sub-chapter.\n* 'rawtext\\_turbo\\_len', number of tokens in the raw text under ChatGPT tokenizer.\n* 'index', the index of raw text in the BookSum dataset.", "### Dataset Statics\n\n\n\nTable 1: GPT-BookSum dataset statistics for chapter-level and paragraph-level summaries: number of passage summary pairs, average token count of summaries and raw texts, and the ratio of total token count in the raw texts\ncompared to after summarizing. Training, validation, and test sets are partitioned at the book level.\n\n\n\\*The checkpoint-1 (mentioned and used in the paper) of the paragraph-level train set is 162,122 items. Now, we've finished the full paragraph-level train set, which is 444,643 items.", "### File Structure\n\n\nTwo folders, \"chapter-\" and \"paragraph-,\" contain corresponding entries, and each of them contains separate jsonline files for train, val, and test splits.", "### Downstream: Build a Pairwise Dataset to Train a Concreteness Evaluator\n\n\nWe further use GPT-BookSum to train our concreteness evaluator M. We construct training pairs $(t\\_0, t\\_1)$ as follows:\n\n\n1. Sample summaries from GPT-BookSum, which have not yet been used for training, and pair them by top mean embedding similarity using Contriever (Izacard et al., 2021).\n2. With a 50% probability, truncate the longer summary to roughly the length of the shorter one. Otherwise, truncate both summaries to the same token length, randomly chosen on a log scale from 25 to 180. Sentence boundaries are respected whenever truncating.\n\n\nBy matching topic and length within a training pair $(t\\_0, t\\_1)$, we encourage M to focus on the actual vagueness or concreteness of the exposition.\n\n\nA workable input for model construction is \"$t\\_0$ $t\\_1$\" using a separator token . As chapter-level summaries are dramatically more compressed than paragraph-level summaries (Table 1), we label the chapter-level summary as vaguer when paired with a paragraph-level summary. The label is 0.5 if t0 and t1 are same-level summaries; we found including 0.5 labels to be empirically beneficial.", "### Prompt Design for Summarization\n\n\nThe prompt design for summarization follows instructions from Super-NaturalInstructions (Wang et al., 2022). Table 2 shows the prompt.\n\n\nTable 2: Prompt for GPT-3.5-turbo-0301 to summarize for GPT-BookSum.\n\n\nSince GPT-3.5-turbo-0301 has a context window limit of 4,097 tokens, sometimes even a single chapter will exceed the limit. 
For such texts, we split them into sub-parts at sentence boundaries.\n\n\nTo avoid potential artifacts that may allow the evaluator to trivially discriminate summary-level texts, we prevent summaries from using words indicating a level of granularity, such as “chapter,” “paragraph,” etc. We also delete the titles of chapters and books in the data to mitigate the likelihood of the language model making inferences based on previously memorized knowledge.\n\n\n[TODO]" ]
[ 61, 246, 137, 49, 319, 216 ]
[ "passage: TAGS\n#task_categories-summarization #task_categories-text-generation #task_categories-text-classification #size_categories-100K<n<1M #language-English #license-mit #story #region-us \n### Data Instances\n\n\nan example looks as follows:\n\n\n* 'level' can be *chapter* or *paragraph*. (If you also want *book*-level, you can directly get it from the BookSum dataset.)\n* 'text', the summary.\n* 'turbo\\_len', number of tokens in summary under ChatGPT tokenizer.\n* 'compression ratio', [number of tokens in summary / number of tokens in raw texts], a smaller number means more compressed.\n* 'roberta\\_len', number of tokens in summary under RoBERTa tokenizer.\n* 'sub\\_index', if the raw text is longer than 4,096 tokens (the max input length of ChatGPT), we will chunk it into sub-chapters. sub\\_index is the index of the sub-chapter.\n* 'rawtext\\_turbo\\_len', number of tokens in the raw text under ChatGPT tokenizer.\n* 'index', the index of raw text in the BookSum dataset.### Dataset Statics\n\n\n\nTable 1: GPT-BookSum dataset statistics for chapter-level and paragraph-level summaries: number of passage summary pairs, average token count of summaries and raw texts, and the ratio of total token count in the raw texts\ncompared to after summarizing. Training, validation, and test sets are partitioned at the book level.\n\n\n\\*The checkpoint-1 (mentioned and used in the paper) of the paragraph-level train set is 162,122 items. Now, we've finished the full paragraph-level train set, which is 444,643 items.### File Structure\n\n\nTwo folders, \"chapter-\" and \"paragraph-,\" contain corresponding entries, and each of them contains separate jsonline files for train, val, and test splits." ]
01e1ab6c68be53f60ec176b52cc3170fb6f920a6
# ak-fandom-20230821-raw A dataset generated from [the dump](https://arknights.fandom.com/wiki/Special:Statistics) of [Arknights Fandom wiki](https://arknights.fandom.com/wiki/Arknights_Wiki).
isek-ai/ak-fandom-20230821-raw
[ "size_categories:10K<n<100K", "language:en", "license:cc-by-sa-4.0", "region:us" ]
2023-10-29T09:44:09+00:00
{"language": ["en"], "license": "cc-by-sa-4.0", "size_categories": ["10K<n<100K"], "pretty_name": "Arknights Fandom Wiki (Raw) 20230821", "dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 41839104, "num_examples": 10937}], "download_size": 20610229, "dataset_size": 41839104}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-29T11:15:20+00:00
[]
[ "en" ]
TAGS #size_categories-10K<n<100K #language-English #license-cc-by-sa-4.0 #region-us
# ak-fandom-20230821-raw A dataset generated from the dump of Arknights Fandom wiki.
[ "# ak-fandom-20230821-raw\n\nA dataset generated from the dump of Arknights Fandom wiki." ]
[ "TAGS\n#size_categories-10K<n<100K #language-English #license-cc-by-sa-4.0 #region-us \n", "# ak-fandom-20230821-raw\n\nA dataset generated from the dump of Arknights Fandom wiki." ]
[ 33, 27 ]
[ "passage: TAGS\n#size_categories-10K<n<100K #language-English #license-cc-by-sa-4.0 #region-us \n# ak-fandom-20230821-raw\n\nA dataset generated from the dump of Arknights Fandom wiki." ]
1496e159e95d785b6667ffca410ba4316394f551
# Dataset Card for "covid-tweet-sentiment-analyzer-roberta-latest-data" 1. **input_ids:** - `input_ids` represent the input to a natural language processing (NLP) model in the form of tokenized and numerical values. - These are the tokenized versions of the text data, where words and tokens are converted to unique numerical identifiers. - These numerical values enable the model to understand and process the text data, making it suitable for machine learning algorithms. 2. **attention_mask:** - `attention_mask` is a companion to `input_ids` and is used to indicate which parts of the input sequence should be attended to by the model and which parts should be ignored. - The attention mask is important for maintaining the structure and integrity of the input data while accommodating variations in text length. 3. **labels:** - `labels` refer to the target values that the model is trying to predict. - These are '1' for neutral, '2' for positive, and '0' for negative sentiment.
snyamson/covid-tweet-sentiment-analyzer-roberta-latest-data
[ "region:us" ]
2023-10-29T09:44:49+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "val", "path": "data/val-*"}]}], "dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 10366704, "num_examples": 7999}, {"name": "val", "num_bytes": 2592000, "num_examples": 2000}], "download_size": 575509, "dataset_size": 12958704}}
2023-10-29T09:49:31+00:00
[]
[]
TAGS #region-us
# Dataset Card for "covid-tweet-sentiment-analyzer-roberta-latest-data" 1. input_ids: - 'input_ids' represent the input to a natural language processing (NLP) model in the form of tokenized and numerical values. - These are the tokenized versions of the text data, where words and tokens are converted to unique numerical identifiers. - These numerical values enable the model to understand and process the text data, making it suitable for machine learning algorithms. 2. attention_mask: - 'attention_mask' is a companion to 'input_ids' and is used to indicate which parts of the input sequence should be attended to by the model and which parts should be ignored. - The attention mask is important for maintaining the structure and integrity of the input data while accommodating variations in text length. 3. labels: - 'labels' refer to the target values that the model is trying to predict. - These are '1' for neutral, '2' for positive, and '0' for negative sentiment.
[ "# Dataset Card for \"covid-tweet-sentiment-analyzer-roberta-latest-data\"\n\n\n1. input_ids:\n - 'input_ids' represent the input to a natural language processing (NLP) model in the form of tokenized and numerical values.\n - These are the tokenized versions of the text data, where words and tokens are converted to unique numerical identifiers.\n - These numerical values enable the model to understand and process the text data, making it suitable for machine learning algorithms.\n\n2. attention_mask:\n - 'attention_mask' is a companion to 'input_ids' and is used to indicate which parts of the input sequence should be attended to by the model and which parts should be ignored.\n - The attention mask is important for maintaining the structure and integrity of the input data while accommodating variations in text length.\n\n3. labels:\n - 'labels' refer to the target values that the model is trying to predict.\n - These are '1' for neutral, '2' for positive, and '0' for negative sentiment." ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"covid-tweet-sentiment-analyzer-roberta-latest-data\"\n\n\n1. input_ids:\n - 'input_ids' represent the input to a natural language processing (NLP) model in the form of tokenized and numerical values.\n - These are the tokenized versions of the text data, where words and tokens are converted to unique numerical identifiers.\n - These numerical values enable the model to understand and process the text data, making it suitable for machine learning algorithms.\n\n2. attention_mask:\n - 'attention_mask' is a companion to 'input_ids' and is used to indicate which parts of the input sequence should be attended to by the model and which parts should be ignored.\n - The attention mask is important for maintaining the structure and integrity of the input data while accommodating variations in text length.\n\n3. labels:\n - 'labels' refer to the target values that the model is trying to predict.\n - These are '1' for neutral, '2' for positive, and '0' for negative sentiment." ]
[ 6, 245 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"covid-tweet-sentiment-analyzer-roberta-latest-data\"\n\n\n1. input_ids:\n - 'input_ids' represent the input to a natural language processing (NLP) model in the form of tokenized and numerical values.\n - These are the tokenized versions of the text data, where words and tokens are converted to unique numerical identifiers.\n - These numerical values enable the model to understand and process the text data, making it suitable for machine learning algorithms.\n\n2. attention_mask:\n - 'attention_mask' is a companion to 'input_ids' and is used to indicate which parts of the input sequence should be attended to by the model and which parts should be ignored.\n - The attention mask is important for maintaining the structure and integrity of the input data while accommodating variations in text length.\n\n3. labels:\n - 'labels' refer to the target values that the model is trying to predict.\n - These are '1' for neutral, '2' for positive, and '0' for negative sentiment." ]
861fba3d403b9592f5c699bbf5406071306a6415
A filtered version of the open access collection of philosophy publications [PhilPapers](https://philpapers.org/), data-ready for The-Pile.

- Script: https://github.com/thoppe/The-Pile-PhilPapers
- Date: `2023-10-28`
- Total number of documents: 54,502
- Format: gzipped JSON line files (.jsonl.gz)
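A minimal sketch for iterating over the dump (the shard layout and record keys are assumptions; the card only specifies gzipped JSON-lines files):

```python
import gzip
import json
from pathlib import Path

# Iterate over every record in the gzipped JSON-lines shards.
# Assumptions: all shards sit in one directory and each line holds a JSON
# object with at least a "text" field, as is common for Pile-style dumps.
def iter_documents(data_dir: str):
    for path in sorted(Path(data_dir).glob("*.jsonl.gz")):
        with gzip.open(path, "rt", encoding="utf-8") as fh:
            for line in fh:
                yield json.loads(line)

n_docs = sum(1 for _ in iter_documents("philpapers-2023-10-28"))
print(n_docs)  # should come out to 54,502 if the layout assumption holds
```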
malteos/philpapers-2023-10-28
[ "task_categories:text-generation", "language:en", "region:us" ]
2023-10-29T10:31:30+00:00
{"language": ["en"], "task_categories": ["text-generation"]}
2023-10-29T10:38:53+00:00
[]
[ "en" ]
TAGS #task_categories-text-generation #language-English #region-us
A filtered version of the open access collection of philosophy publications PhilPapers, data-ready for The-Pile. - Script URL - Date: '2023-10-28' - Total number of documents: 54,502 - Format: gzipped JSON line files (.URL)
[]
[ "TAGS\n#task_categories-text-generation #language-English #region-us \n" ]
[ 21 ]
[ "passage: TAGS\n#task_categories-text-generation #language-English #region-us \n" ]
5d8cf5cd2e56a372ee112c2721547a5d4b7af47d
# Dataset Card for "all_pdf_dataset_1029_416data" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
li-ping/all_pdf_dataset_1029_416data
[ "region:us" ]
2023-10-29T10:32:04+00:00
{"dataset_info": {"features": [{"name": "set", "struct": [{"name": "neg", "sequence": "string"}, {"name": "pos", "sequence": "string"}, {"name": "query", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 20453375, "num_examples": 8072}], "download_size": 698908, "dataset_size": 20453375}}
2023-10-29T10:35:20+00:00
[]
[]
TAGS #region-us
# Dataset Card for "all_pdf_dataset_1029_416data" More Information needed
[ "# Dataset Card for \"all_pdf_dataset_1029_416data\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"all_pdf_dataset_1029_416data\"\n\nMore Information needed" ]
[ 6, 23 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"all_pdf_dataset_1029_416data\"\n\nMore Information needed" ]
f7f529e51d40f38a833d4bfa78640ab91514a23b
# Dataset Card for Dataset Name <!-- Provide a quick summary of the dataset. --> This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. 
--> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
piecake/mulqa
[ "region:us" ]
2023-10-29T10:36:23+00:00
{}
2023-10-29T10:57:33+00:00
[]
[]
TAGS #region-us
# Dataset Card for Dataset Name This dataset card aims to be a base template for new datasets. It has been generated using this raw template. ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 6, 34, 4, 40, 29, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact" ]
c99cbdbdff81ebaff9d2aa977a8920a94b558419
# JamALT: A Formatting-Aware Lyrics Transcription Benchmark ## Dataset description * **Project page:** https://audioshake.github.io/jam-alt/ * **Source code:** https://github.com/audioshake/alt-eval * **Paper:** https://arxiv.org/abs/2311.13987 JamALT is a revision of the [JamendoLyrics](https://github.com/f90/jamendolyrics) dataset (80 songs in 4 languages), adapted for use as an automatic lyrics transcription (ALT) benchmark. The lyrics have been revised according to the newly compiled [annotation guidelines](GUIDELINES.md), which include rules about spelling, punctuation, and formatting. The audio is identical to the JamendoLyrics dataset. However, only 79 songs are included, as one of the 20 French songs (`La_Fin_des_Temps_-_BuzzBonBon`) has been removed due to concerns about potentially harmful content. **Note:** The dataset is not time-aligned as it does not easily map to the timestamps from JamendoLyrics. To evaluate automatic lyrics alignment (ALA), please use JamendoLyrics directly. See the [project website](https://audioshake.github.io/jam-alt/) for details. ## Loading the data ```python from datasets import load_dataset dataset = load_dataset("audioshake/jam-alt")["test"] ``` A subset is defined for each language (`en`, `fr`, `de`, `es`); for example, use `load_dataset("audioshake/jam-alt", "es")` to load only the Spanish songs. By default, the dataset comes with audio. To skip loading the audio, use `with_audio=False`. To control how the audio is decoded, cast the `audio` column using `dataset.cast_column("audio", datasets.Audio(...))`. Useful arguments to `datasets.Audio()` are: - `sampling_rate` and `mono=True` to control the sampling rate and number of channels. - `decode=False` to skip decoding the audio and just get the MP3 file paths. 
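For instance (the 16 kHz mono setting below is an illustrative assumption, e.g., for Whisper-style models, not something the dataset requires):

```python
import datasets
from datasets import load_dataset

# Usage sketch for the audio-decoding options described above.
dataset = load_dataset("audioshake/jam-alt", "en")["test"]

# Decode to 16 kHz mono arrays:
decoded = dataset.cast_column("audio", datasets.Audio(sampling_rate=16_000, mono=True))
print(decoded[0]["audio"]["array"].shape)

# Or skip decoding entirely and keep just the MP3 file paths:
paths_only = dataset.cast_column("audio", datasets.Audio(decode=False))
print(paths_only[0]["audio"]["path"])
```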
## Running the benchmark The evaluation is implemented in our [`alt-eval` package](https://github.com/audioshake/alt-eval): ```python from datasets import load_dataset from alt_eval import compute_metrics dataset = load_dataset("audioshake/jam-alt", revision="v1.0.0")["test"] # transcriptions: list[str] compute_metrics(dataset["text"], transcriptions, languages=dataset["language"]) ``` For example, the following code can be used to evaluate Whisper: ```python dataset = load_dataset("audioshake/jam-alt", revision="v1.0.0")["test"] dataset = dataset.cast_column("audio", datasets.Audio(decode=False)) # Get the raw audio file, let Whisper decode it model = whisper.load_model("tiny") transcriptions = [ "\n".join(s["text"].strip() for s in model.transcribe(a["path"])["segments"]) for a in dataset["audio"] ] compute_metrics(dataset["text"], transcriptions, languages=dataset["language"]) ``` Alternatively, if you already have transcriptions, you might prefer to skip loading the audio: ```python dataset = load_dataset("audioshake/jam-alt", revision="v1.0.0", with_audio=False)["test"] ``` ## Citation When using the benchmark, please cite [our paper](https://arxiv.org/abs/2311.13987) as well as the original [JamendoLyrics paper](https://arxiv.org/abs/2306.07744): ```bibtex @misc{cifka-2023-jam-alt, author = {Ond\v{r}ej C\'ifka and Constantinos Dimitriou and {Cheng-i} Wang and Hendrik Schreiber and Luke Miner and Fabian-Robert St\"oter}, title = {{Jam-ALT}: A Formatting-Aware Lyrics Transcription Benchmark}, eprint = {arXiv:2311.13987}, year = 2023 } @inproceedings{durand-2023-contrastive, author={Durand, Simon and Stoller, Daniel and Ewert, Sebastian}, booktitle={2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, title={Contrastive Learning-Based Audio to Lyrics Alignment for Multiple Languages}, year={2023}, pages={1-5}, address={Rhodes Island, Greece}, doi={10.1109/ICASSP49357.2023.10096725} } ```
audioshake/jam-alt
[ "task_categories:automatic-speech-recognition", "multilinguality:multilingual", "language:en", "language:fr", "language:de", "language:es", "music", "lyrics", "evaluation", "benchmark", "transcription", "arxiv:2311.13987", "arxiv:2306.07744", "doi:10.57967/hf/1340", "region:us" ]
2023-10-29T11:04:32+00:00
{"language": ["en", "fr", "de", "es"], "multilinguality": ["multilingual"], "task_categories": ["automatic-speech-recognition"], "paperswithcode_id": "jam-alt", "pretty_name": "JamALT: A Formatting-Aware Lyrics Transcription Benchmark", "tags": ["music", "lyrics", "evaluation", "benchmark", "transcription"]}
2023-11-27T12:46:27+00:00
[ "2311.13987", "2306.07744" ]
[ "en", "fr", "de", "es" ]
TAGS #task_categories-automatic-speech-recognition #multilinguality-multilingual #language-English #language-French #language-German #language-Spanish #music #lyrics #evaluation #benchmark #transcription #arxiv-2311.13987 #arxiv-2306.07744 #doi-10.57967/hf/1340 #region-us
# JamALT: A Formatting-Aware Lyrics Transcription Benchmark ## Dataset description * Project page: URL * Source code: URL * Paper: URL JamALT is a revision of the JamendoLyrics dataset (80 songs in 4 languages), adapted for use as an automatic lyrics transcription (ALT) benchmark. The lyrics have been revised according to the newly compiled annotation guidelines, which include rules about spelling, punctuation, and formatting. The audio is identical to the JamendoLyrics dataset. However, only 79 songs are included, as one of the 20 French songs ('La_Fin_des_Temps_-_BuzzBonBon') has been removed due to concerns about potentially harmful content. Note: The dataset is not time-aligned as it does not easily map to the timestamps from JamendoLyrics. To evaluate automatic lyrics alignment (ALA), please use JamendoLyrics directly. See the project website for details. ## Loading the data A subset is defined for each language ('en', 'fr', 'de', 'es'); for example, use 'load_dataset("audioshake/jam-alt", "es")' to load only the Spanish songs. By default, the dataset comes with audio. To skip loading the audio, use 'with_audio=False'. To control how the audio is decoded, cast the 'audio' column using 'dataset.cast_column("audio", datasets.Audio(...))'. Useful arguments to 'datasets.Audio()' are: - 'sampling_rate' and 'mono=True' to control the sampling rate and number of channels. - 'decode=False' to skip decoding the audio and just get the MP3 file paths. ## Running the benchmark The evaluation is implemented in our 'alt-eval' package: For example, the following code can be used to evaluate Whisper: Alternatively, if you already have transcriptions, you might prefer to skip loading the audio: When using the benchmark, please cite our paper as well as the original JamendoLyrics paper:
[ "# JamALT: A Formatting-Aware Lyrics Transcription Benchmark", "## Dataset description\n\n* Project page: URL\n* Source code: URL\n* Paper: URL\n\nJamALT is a revision of the JamendoLyrics dataset (80 songs in 4 languages), adapted for use as an automatic lyrics transcription (ALT) benchmark.\n\nThe lyrics have been revised according to the newly compiled annotation guidelines, which include rules about spelling, punctuation, and formatting.\nThe audio is identical to the JamendoLyrics dataset.\nHowever, only 79 songs are included, as one of the 20 French songs ('La_Fin_des_Temps_-_BuzzBonBon') has been removed due to concerns about potentially harmful content.\n\nNote: The dataset is not time-aligned as it does not easily map to the timestamps from JamendoLyrics. To evaluate automatic lyrics alignment (ALA), please use JamendoLyrics directly.\n\nSee the project website for details.", "## Loading the data\n\n\n\nA subset is defined for each language ('en', 'fr', 'de', 'es');\nfor example, use 'load_dataset(\"audioshake/jam-alt\", \"es\")' to load only the Spanish songs.\n\nBy default, the dataset comes with audio. To skip loading the audio, use 'with_audio=False'.\nTo control how the audio is decoded, cast the 'audio' column using 'dataset.cast_column(\"audio\", datasets.Audio(...))'.\nUseful arguments to 'datasets.Audio()' are:\n- 'sampling_rate' and 'mono=True' to control the sampling rate and number of channels.\n- 'decode=False' to skip decoding the audio and just get the MP3 file paths.", "## Running the benchmark\n\nThe evaluation is implemented in our 'alt-eval' package:\n\n\nFor example, the following code can be used to evaluate Whisper:\n\nAlternatively, if you already have transcriptions, you might prefer to skip loading the audio:\n\n\nWhen using the benchmark, please cite our paper as well as the original JamendoLyrics paper:" ]
[ "TAGS\n#task_categories-automatic-speech-recognition #multilinguality-multilingual #language-English #language-French #language-German #language-Spanish #music #lyrics #evaluation #benchmark #transcription #arxiv-2311.13987 #arxiv-2306.07744 #doi-10.57967/hf/1340 #region-us \n", "# JamALT: A Formatting-Aware Lyrics Transcription Benchmark", "## Dataset description\n\n* Project page: URL\n* Source code: URL\n* Paper: URL\n\nJamALT is a revision of the JamendoLyrics dataset (80 songs in 4 languages), adapted for use as an automatic lyrics transcription (ALT) benchmark.\n\nThe lyrics have been revised according to the newly compiled annotation guidelines, which include rules about spelling, punctuation, and formatting.\nThe audio is identical to the JamendoLyrics dataset.\nHowever, only 79 songs are included, as one of the 20 French songs ('La_Fin_des_Temps_-_BuzzBonBon') has been removed due to concerns about potentially harmful content.\n\nNote: The dataset is not time-aligned as it does not easily map to the timestamps from JamendoLyrics. To evaluate automatic lyrics alignment (ALA), please use JamendoLyrics directly.\n\nSee the project website for details.", "## Loading the data\n\n\n\nA subset is defined for each language ('en', 'fr', 'de', 'es');\nfor example, use 'load_dataset(\"audioshake/jam-alt\", \"es\")' to load only the Spanish songs.\n\nBy default, the dataset comes with audio. To skip loading the audio, use 'with_audio=False'.\nTo control how the audio is decoded, cast the 'audio' column using 'dataset.cast_column(\"audio\", datasets.Audio(...))'.\nUseful arguments to 'datasets.Audio()' are:\n- 'sampling_rate' and 'mono=True' to control the sampling rate and number of channels.\n- 'decode=False' to skip decoding the audio and just get the MP3 file paths.", "## Running the benchmark\n\nThe evaluation is implemented in our 'alt-eval' package:\n\n\nFor example, the following code can be used to evaluate Whisper:\n\nAlternatively, if you already have transcriptions, you might prefer to skip loading the audio:\n\n\nWhen using the benchmark, please cite our paper as well as the original JamendoLyrics paper:" ]
[ 94, 17, 211, 203, 78 ]
[ "passage: TAGS\n#task_categories-automatic-speech-recognition #multilinguality-multilingual #language-English #language-French #language-German #language-Spanish #music #lyrics #evaluation #benchmark #transcription #arxiv-2311.13987 #arxiv-2306.07744 #doi-10.57967/hf/1340 #region-us \n# JamALT: A Formatting-Aware Lyrics Transcription Benchmark## Dataset description\n\n* Project page: URL\n* Source code: URL\n* Paper: URL\n\nJamALT is a revision of the JamendoLyrics dataset (80 songs in 4 languages), adapted for use as an automatic lyrics transcription (ALT) benchmark.\n\nThe lyrics have been revised according to the newly compiled annotation guidelines, which include rules about spelling, punctuation, and formatting.\nThe audio is identical to the JamendoLyrics dataset.\nHowever, only 79 songs are included, as one of the 20 French songs ('La_Fin_des_Temps_-_BuzzBonBon') has been removed due to concerns about potentially harmful content.\n\nNote: The dataset is not time-aligned as it does not easily map to the timestamps from JamendoLyrics. To evaluate automatic lyrics alignment (ALA), please use JamendoLyrics directly.\n\nSee the project website for details." ]
9d0262142f5017c8a3874e09208ec8bb2411db8f
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Text Classification * Model: Kirie/test-bert-base-banking77 * Dataset: banking77 * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@i got my credit card](https://huggingface.co/i got my credit card) for evaluating this model.
autoevaluate/autoeval-eval-banking77-default-b28a77-98055146974
[ "autotrain", "evaluation", "region:us" ]
2023-10-29T12:05:50+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["banking77"], "eval_info": {"task": "multi_class_classification", "model": "Kirie/test-bert-base-banking77", "metrics": [], "dataset_name": "banking77", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}}
2023-10-29T12:06:34+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Multi-class Text Classification * Model: Kirie/test-bert-base-banking77 * Dataset: banking77 * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @i got my credit card for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: Kirie/test-bert-base-banking77\n* Dataset: banking77\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @i got my credit card for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: Kirie/test-bert-base-banking77\n* Dataset: banking77\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @i got my credit card for evaluating this model." ]
[ 13, 91, 18 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: Kirie/test-bert-base-banking77\n* Dataset: banking77\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @i got my credit card for evaluating this model." ]
b1c4ed68e5d0d4d8429f380e033a3c2a25209c63
# Dataset Card for "villm" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
cideon00/villm
[ "region:us" ]
2023-10-29T12:35:29+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "tok_len", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1411182336.1899912, "num_examples": 512774}], "download_size": 328694427, "dataset_size": 1411182336.1899912}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-29T12:35:53+00:00
[]
[]
TAGS #region-us
# Dataset Card for "villm" More Information needed
[ "# Dataset Card for \"villm\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"villm\"\n\nMore Information needed" ]
[ 6, 12 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"villm\"\n\nMore Information needed" ]
39a1676300cfa93705a3211580263cc30b1d1c98
# Dataset Card for "ingredient-detection-layout-dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
raphael0202/ingredient-detection-layout-dataset
[ "region:us" ]
2023-10-29T12:49:48+00:00
{"dataset_info": {"features": [{"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-ING", "2": "I-ING"}}}}, {"name": "words", "sequence": "string"}, {"name": "bboxes", "sequence": {"sequence": "int64"}}, {"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}, {"name": "offsets", "sequence": {"sequence": "int64"}}, {"name": "meta", "struct": [{"name": "barcode", "dtype": "string"}, {"name": "image_id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "in_test_split", "dtype": "bool"}]}], "splits": [{"name": "train", "num_bytes": 2059533770.875, "num_examples": 5065}, {"name": "test", "num_bytes": 244591039.0, "num_examples": 556}], "download_size": 2271205424, "dataset_size": 2304124809.875}}
2023-11-01T16:22:36+00:00
[]
[]
TAGS #region-us
# Dataset Card for "ingredient-detection-layout-dataset" More Information needed
[ "# Dataset Card for \"ingredient-detection-layout-dataset\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"ingredient-detection-layout-dataset\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"ingredient-detection-layout-dataset\"\n\nMore Information needed" ]
4ee5df1a89a1c8f07fb063222a690e1c1d7f0867
# Dataset Card for "OpenOrca" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
SUSTech/OpenOrca
[ "region:us" ]
2023-10-29T13:06:29+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "niv_gpt4", "path": "data/niv_gpt4-*"}, {"split": "flan_gpt4", "path": "data/flan_gpt4-*"}, {"split": "t0_gpt4", "path": "data/t0_gpt4-*"}, {"split": "cot_gpt4", "path": "data/cot_gpt4-*"}, {"split": "niv_gpt35", "path": "data/niv_gpt35-*"}, {"split": "flan_gpt35", "path": "data/flan_gpt35-*"}, {"split": "t0_gpt35", "path": "data/t0_gpt35-*"}, {"split": "cot_gpt35", "path": "data/cot_gpt35-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "system_prompt", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "niv_gpt4", "num_bytes": 136902799, "num_examples": 88210}, {"name": "flan_gpt4", "num_bytes": 870570389, "num_examples": 501331}, {"name": "t0_gpt4", "num_bytes": 696675683, "num_examples": 331183}, {"name": "cot_gpt4", "num_bytes": 84381097, "num_examples": 74172}, {"name": "niv_gpt35", "num_bytes": 299906870, "num_examples": 205186}, {"name": "flan_gpt35", "num_bytes": 1531332880, "num_examples": 1147928}, {"name": "t0_gpt35", "num_bytes": 3535742489, "num_examples": 1818390}, {"name": "cot_gpt35", "num_bytes": 65787500, "num_examples": 67523}], "download_size": 4090266173, "dataset_size": 7221299707}}
2023-10-29T13:10:58+00:00
[]
[]
TAGS #region-us
# Dataset Card for "OpenOrca" More Information needed
[ "# Dataset Card for \"OpenOrca\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"OpenOrca\"\n\nMore Information needed" ]
[ 6, 13 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"OpenOrca\"\n\nMore Information needed" ]
ac4ac59697f19a29d1ba284ac3e8150d5473e534
# Dataset Card for "144daf3b" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
result-kand2-sdxl-wuerst-karlo/144daf3b
[ "region:us" ]
2023-10-29T13:49:52+00:00
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 174, "num_examples": 10}], "download_size": 1351, "dataset_size": 174}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-29T13:49:52+00:00
[]
[]
TAGS #region-us
# Dataset Card for "144daf3b" More Information needed
[ "# Dataset Card for \"144daf3b\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"144daf3b\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"144daf3b\"\n\nMore Information needed" ]
d29985e8d6ad023cd9348d89eab1afedfca14666
# Dataset Card for "6612e023" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
result-kand2-sdxl-wuerst-karlo/6612e023
[ "region:us" ]
2023-10-29T13:56:48+00:00
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 217, "num_examples": 10}], "download_size": 1380, "dataset_size": 217}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-29T13:56:48+00:00
[]
[]
TAGS #region-us
# Dataset Card for "6612e023" More Information needed
[ "# Dataset Card for \"6612e023\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"6612e023\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"6612e023\"\n\nMore Information needed" ]
43ea70feb2ee7026348c6b9767cc45bd3eec5730
# Dataset Card for "veshti-controlnet" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
stsudharsan/veshti-controlnet
[ "region:us" ]
2023-10-29T13:58:09+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "conditioning_img", "dtype": "image"}, {"name": "caption", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 14599706.0, "num_examples": 143}], "download_size": 13484309, "dataset_size": 14599706.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-29T13:58:14+00:00
[]
[]
TAGS #region-us
# Dataset Card for "veshti-controlnet" More Information needed
[ "# Dataset Card for \"veshti-controlnet\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"veshti-controlnet\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"veshti-controlnet\"\n\nMore Information needed" ]
d1824c1261f8e0e0d61f81a491ff8cbb92d24427
all v15.1 emoji with description and context: e.g. 🚶‍♂️,man walking,man walking
LT8/Emojii_v15.1
[ "license:creativeml-openrail-m", "region:us" ]
2023-10-29T14:21:13+00:00
{"license": "creativeml-openrail-m"}
2023-10-29T19:10:50+00:00
[]
[]
TAGS #license-creativeml-openrail-m #region-us
all v15.1 emoji with description and context: e.g. 🚶‍♂️,man walking,man walking
[]
[ "TAGS\n#license-creativeml-openrail-m #region-us \n" ]
[ 18 ]
[ "passage: TAGS\n#license-creativeml-openrail-m #region-us \n" ]
5410a28d93bf606666fdc0b19ae510c727591934
# aesthetic_photos_xs - 1k manually selected photos from unsplash - captioned with BLIP model large caption && SmilingWolf/wd-v1-4-convnext-tagger-v2 # repositories - https://github.com/recoilme/unsplash_dwn - https://github.com/kohya-ss/sd-scripts [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
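For readers who want to reproduce the captioning pass described above, a minimal sketch with the public BLIP-large checkpoint is shown below. The input file name, generation settings, and the merge with the SmilingWolf/wd-v1-4-convnext-tagger-v2 tags are assumptions for illustration, not the exact script behind this dataset (see the linked repositories for that).

```py
# Minimal sketch of a BLIP-large captioning pass, assuming a local "photo.jpg".
# Illustrative only; the dataset's own pipeline lives in the linked repositories.
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large")

image = Image.open("photo.jpg").convert("RGB")           # hypothetical input photo
inputs = processor(images=image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=40)        # caption length cap is an assumption
print(processor.decode(out[0], skip_special_tokens=True))
```

The WD-tagger tags would then be appended to this caption to form the final `text` column; that step is omitted here.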
recoilme/aesthetic_photos_xs
[ "size_categories:1K<n<10K", "art", "region:us" ]
2023-10-29T14:50:46+00:00
{"size_categories": ["1K<n<10K"], "pretty_name": "aesthetic photos xs", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1391150970.57, "num_examples": 1010}], "download_size": 1391377501, "dataset_size": 1391150970.57}, "tags": ["art"]}
2023-10-29T15:20:31+00:00
[]
[]
TAGS #size_categories-1K<n<10K #art #region-us
# aesthetic_photos_xs - 1k manually selected photos from unsplash - captioned with BLIP model large caption && SmilingWolf/wd-v1-4-convnext-tagger-v2 # repositories - URL - URL More Information needed
[ "# aesthetic_photos_xs\n\n - 1k manually selected photos from unsplash\n - captioned with BLIP model large caption && SmilingWolf/wd-v1-4-convnext-tagger-v2", "# repositories\n\n - URL\n - URL\n\nMore Information needed" ]
[ "TAGS\n#size_categories-1K<n<10K #art #region-us \n", "# aesthetic_photos_xs\n\n - 1k manually selected photos from unsplash\n - captioned with BLIP model large caption && SmilingWolf/wd-v1-4-convnext-tagger-v2", "# repositories\n\n - URL\n - URL\n\nMore Information needed" ]
[ 20, 49, 11 ]
[ "passage: TAGS\n#size_categories-1K<n<10K #art #region-us \n# aesthetic_photos_xs\n\n - 1k manually selected photos from unsplash\n - captioned with BLIP model large caption && SmilingWolf/wd-v1-4-convnext-tagger-v2# repositories\n\n - URL\n - URL\n\nMore Information needed" ]
3deaeb118a1164e6a3314555960ece362a7139ea
# Dataset Card for "instruct_v3_5k" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
aditijha/instruct_v3_5k
[ "region:us" ]
2023-10-29T14:55:48+00:00
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 19654811.27708441, "num_examples": 5000}], "download_size": 11429021, "dataset_size": 19654811.27708441}}
2023-10-29T14:55:50+00:00
[]
[]
TAGS #region-us
# Dataset Card for "instruct_v3_5k" More Information needed
[ "# Dataset Card for \"instruct_v3_5k\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"instruct_v3_5k\"\n\nMore Information needed" ]
[ 6, 18 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"instruct_v3_5k\"\n\nMore Information needed" ]
7ac208f6575835e0a4bc7188e3f1279cf56accce
# Dataset Card for "instruct_v3_10k" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
aditijha/instruct_v3_10k
[ "region:us" ]
2023-10-29T14:56:05+00:00
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 39309622.55416882, "num_examples": 10000}], "download_size": 23617961, "dataset_size": 39309622.55416882}}
2023-10-29T14:56:08+00:00
[]
[]
TAGS #region-us
# Dataset Card for "instruct_v3_10k" More Information needed
[ "# Dataset Card for \"instruct_v3_10k\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"instruct_v3_10k\"\n\nMore Information needed" ]
[ 6, 18 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"instruct_v3_10k\"\n\nMore Information needed" ]
8076605392760b61f4b67f45c64c44165c3ea615
# Dataset Card for "LLM" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Kishore05/LLM
[ "region:us" ]
2023-10-29T14:58:25+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "review", "dtype": "string"}, {"name": "review_length", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 11048.4, "num_examples": 9}, {"name": "validation", "num_bytes": 895, "num_examples": 1}], "download_size": 20622, "dataset_size": 11943.4}}
2023-10-29T14:58:49+00:00
[]
[]
TAGS #region-us
# Dataset Card for "LLM" More Information needed
[ "# Dataset Card for \"LLM\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"LLM\"\n\nMore Information needed" ]
[ 6, 12 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"LLM\"\n\nMore Information needed" ]
373f55fba3d877b48a86200c4944efe5fd6ecfcf
# Dataset Card for "sinhalanews" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
zaanind/sinhalanews
[ "region:us" ]
2023-10-29T15:06:53+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 394182639, "num_examples": 171436}], "download_size": 147153975, "dataset_size": 394182639}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-29T15:07:01+00:00
[]
[]
TAGS #region-us
# Dataset Card for "sinhalanews" More Information needed
[ "# Dataset Card for \"sinhalanews\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"sinhalanews\"\n\nMore Information needed" ]
[ 6, 13 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"sinhalanews\"\n\nMore Information needed" ]
1973f5ce1f3679ab5a2382fe22e7ce35b97233a2
# Dataset Card for "tamil-alpaca-eval"

This repository includes evaluation instructions to quickly test the Tamil LLaMA family of instruction models. To dive deep into the development and capabilities of the models, please read the [research paper](https://arxiv.org/abs/2311.05845) and the [introductory blog post (WIP)]() that outlines our journey and the model's potential impact.

**GitHub Repository:** [https://github.com/abhinand5/tamil-llama](https://github.com/abhinand5/tamil-llama)

**Note:** This is the second version of the evaluation dataset, which was created using the [Evol Instruct](https://arxiv.org/pdf/2304.12244.pdf) methodology and GPT-4. The initial 120 questions in [Tamil-Llama-Eval.csv](https://huggingface.co/datasets/abhinand/tamil-llama-eval/blob/main/Tamil-LLaMA-Eval.csv) (v1) were used as seed instructions.

## Models evaluated using this dataset

| Task Type | [Tamil-LLaMA-7B](abhinand/tamil-llama-7b-instruct-v0.1) | [Tamil-LLaMA-13B](abhinand/tamil-llama-13b-instruct-v0.1) | [gpt-3.5-turbo](https://platform.openai.com/docs/models/gpt-3-5) |
|-----------------|----------------|-----------------|---------------|
| Question Answering | 77.00 | 75.33 | 54.33 |
| Open-ended QA | 84.47 | 85.26 | 58.68 |
| Reasoning | 47.50 | 64.25 | 63.50 |
| Literature | 45.50 | 40.00 | 71.00 |
| Entertainment | 43.33 | 50.00 | 60.00 |
| Creative Writing | 92.50 | 95.62 | 59.69 |
| Translation | 60.56 | 66.67 | 92.78 |
| Coding | 63.57 | 76.07 | 57.14 |
| Ethics | 23.75 | 57.50 | 40.00 |
| **Overall** | **63.83** | **71.17** | **61.33** |

## Meet the Developers

Get to know the creators behind this innovative model and follow their contributions to the field:

- [Abhinand Balachandran](https://www.linkedin.com/in/abhinand-05/)

## Citation

If you use this model or any of the Tamil-Llama datasets in your research, please cite:

```bibtex
@misc{balachandran2023tamilllama,
      title={Tamil-Llama: A New Tamil Language Model Based on Llama 2},
      author={Abhinand Balachandran},
      year={2023},
      eprint={2311.05845},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
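A quick way to pull these evaluation instructions and look at their category mix is sketched below. It relies only on the `large` config and `train` split declared in this repo's metadata; the category tally is an illustrative helper, not the official Tamil-LLaMA evaluation harness behind the scores above.

```py
# Hedged sketch: load the eval set and count instructions per category.
# Not the official evaluation script used to produce the reported scores.
from collections import Counter
from datasets import load_dataset

eval_ds = load_dataset("abhinand/tamil-llama-eval", "large", split="train")

for category, n in Counter(eval_ds["category"]).most_common():
    print(f"{category}: {n} instructions")

# Each row also keeps the raw seed instruction and its Evol-Instruct source.
row = eval_ds[0]
print(row["input"][:100], "|", row["evol_source"])
```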
abhinand/tamil-llama-eval
[ "task_categories:text-generation", "size_categories:n<1K", "language:ta", "license:gpl", "arxiv:2311.05845", "arxiv:2304.12244", "region:us" ]
2023-10-29T15:27:53+00:00
{"language": ["ta"], "license": "gpl", "size_categories": ["n<1K"], "task_categories": ["text-generation"], "pretty_name": "tamil-llama-eval", "dataset_info": {"config_name": "large", "features": [{"name": "input", "dtype": "string"}, {"name": "raw_input", "dtype": "string"}, {"name": "evol_source", "dtype": "string"}, {"name": "category", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1077035, "num_examples": 956}], "download_size": 347891, "dataset_size": 1077035}, "configs": [{"config_name": "large", "data_files": [{"split": "train", "path": "large/train-*"}]}]}
2024-02-13T10:46:25+00:00
[ "2311.05845", "2304.12244" ]
[ "ta" ]
TAGS #task_categories-text-generation #size_categories-n<1K #language-Tamil #license-gpl #arxiv-2311.05845 #arxiv-2304.12244 #region-us
Dataset Card for "tamil-alpaca-eval"
====================================

This repository includes evaluation instructions to quickly test the Tamil LLaMA family of instruction models. To dive deep into the development and capabilities of the models, please read the research paper and the introductory blog post (WIP) that outlines our journey and the model's potential impact.

GitHub Repository: URL

Note: This is the second version of the evaluation dataset, which was created using the Evol Instruct methodology and GPT-4. The initial 120 questions in URL (v1) were used as seed instructions.

Models evaluated using this dataset
-----------------------------------

Meet the Developers
-------------------

Get to know the creators behind this innovative model and follow their contributions to the field:

* Abhinand Balachandran

If you use this model or any of the Tamil-Llama datasets in your research, please cite:
[]
[ "TAGS\n#task_categories-text-generation #size_categories-n<1K #language-Tamil #license-gpl #arxiv-2311.05845 #arxiv-2304.12244 #region-us \n" ]
[ 54 ]
[ "passage: TAGS\n#task_categories-text-generation #size_categories-n<1K #language-Tamil #license-gpl #arxiv-2311.05845 #arxiv-2304.12244 #region-us \n" ]
d8b0011531f5c554aba885fff54fc7325c08bdbf
Credit: KSHITIJ KUMAR https://www.kaggle.com/datasets/kshitij192/cars-image-dataset
AlmajedA/Cars
[ "region:us" ]
2023-10-29T15:36:14+00:00
{}
2023-10-29T15:47:32+00:00
[]
[]
TAGS #region-us
Credit: KSHITIJ KUMAR URL
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
5206b68722c51c2f01c52402927264e0de4d2e04
# PovertyMap-wilds: Poverty mapping across different countries ![](https://cdn-uploads.huggingface.co/production/uploads/6364f1784f773b7e4cede70c/liSf9zd0uvbH-Sf9a-wrm.png) **Homepage**: https://github.com/sustainlab-group/africa_poverty \ **Publication Date**: 2020-05-22 \ **License**: LandSat/DMSP/VIIRS data is U.S. Public Domain. \ **Citation**: ```bibtex @article{yeh2020using, author = {Yeh, Christopher and Perez, Anthony and Driscoll, Anne and Azzari, George and Tang, Zhongyi and Lobell, David and Ermon, Stefano and Burke, Marshall}, day = {22}, doi = {10.1038/s41467-020-16185-w}, issn = {2041-1723}, journal = {Nature Communications}, month = {5}, number = {1}, title = {{Using publicly available satellite imagery and deep learning to understand economic well-being in Africa}}, url = {https://www.nature.com/articles/s41467-020-16185-w}, volume = {11}, year = {2020} } ``` ## Description This is a processed version of LandSat 5/7/8 satellite imagery originally from Google Earth Engine under the names `LANDSAT/LC08/C01/T1_SR`,`LANDSAT/LE07/C01/T1_SR`,`LANDSAT/LT05/C01/T1_SR`, nighttime light imagery from the DMSP and VIIRS satellites (Google Earth Engine names `NOAA/DMSP-OLS/CALIBRATED_LIGHTS_V4` and `NOAA/VIIRS/DNB/MONTHLY_V1/VCMSLCFG`) and processed DHS survey metadata obtained from https://github.com/sustainlab-group/africa_poverty and originally from `https://dhsprogram.com/data/available-datasets.cfm`.
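Since the processed splits total roughly 17 GB, streaming a single record is a convenient way to inspect the schema before committing to a full download. The snippet below is a sketch under that assumption; the band count and ordering of the multi-band `image` array are not documented in this card and are left to the original WILDS / `africa_poverty` materials.

```py
# Hedged sketch: stream one record instead of downloading the full ~17 GB.
import numpy as np
from datasets import load_dataset

ds = load_dataset("1aurent/PovertyMap", split="train", streaming=True)

sample = next(iter(ds))
bands = np.asarray(sample["image"], dtype=np.float32)   # nested float lists -> ndarray
print("image array shape:", bands.shape)                # band layout: see upstream docs
print("wealthpooled target:", sample["wealthpooled"])
print("country / year / urban:", sample["country"], sample["year"], sample["urban"])
```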
1aurent/PovertyMap
[ "task_categories:image-classification", "size_categories:10K<n<100K", "license:other", "map", "poverty", "satellite", "region:us" ]
2023-10-29T15:44:19+00:00
{"license": "other", "size_categories": ["10K<n<100K"], "task_categories": ["image-classification"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "id_val", "path": "data/id_val-*"}, {"split": "id_test", "path": "data/id_test-*"}, {"split": "val", "path": "data/val-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "image", "sequence": {"sequence": {"sequence": "float32"}}}, {"name": "label", "dtype": "int64"}, {"name": "lat", "dtype": "float64"}, {"name": "lon", "dtype": "float64"}, {"name": "wealthpooled", "dtype": "float64"}, {"name": "country", "dtype": "int64"}, {"name": "year", "dtype": "int64"}, {"name": "urban", "dtype": "bool"}, {"name": "nl_mean", "dtype": "float64"}, {"name": "nl_center", "dtype": "float64"}, {"name": "households", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 15801660900.87406, "num_examples": 9797}, {"name": "id_val", "num_bytes": 1611295216.9003966, "num_examples": 999}, {"name": "id_test", "num_bytes": 1612908125.025422, "num_examples": 1000}, {"name": "val", "num_bytes": 6304857860.724375, "num_examples": 3909}, {"name": "test", "num_bytes": 6391954899.475747, "num_examples": 3963}], "download_size": 16974411052, "dataset_size": 31722677003}, "tags": ["map", "poverty", "satellite"]}
2023-10-29T17:05:39+00:00
[]
[]
TAGS #task_categories-image-classification #size_categories-10K<n<100K #license-other #map #poverty #satellite #region-us
# PovertyMap-wilds: Poverty mapping across different countries ![](URL Homepage: URL \ Publication Date: 2020-05-22 \ License: LandSat/DMSP/VIIRS data is U.S. Public Domain. \ Citation: ## Description This is a processed version of LandSat 5/7/8 satellite imagery originally from Google Earth Engine under the names 'LANDSAT/LC08/C01/T1_SR','LANDSAT/LE07/C01/T1_SR','LANDSAT/LT05/C01/T1_SR', nighttime light imagery from the DMSP and VIIRS satellites (Google Earth Engine names 'NOAA/DMSP-OLS/CALIBRATED_LIGHTS_V4' and 'NOAA/VIIRS/DNB/MONTHLY_V1/VCMSLCFG') and processed DHS survey metadata obtained from URL and originally from 'URL
[ "# PovertyMap-wilds: Poverty mapping across different countries\n\n![](URL\n\nHomepage: URL \\\nPublication Date: 2020-05-22 \\\nLicense: LandSat/DMSP/VIIRS data is U.S. Public Domain. \\\nCitation:", "## Description\n\nThis is a processed version of LandSat 5/7/8 satellite imagery originally from Google Earth Engine under the names 'LANDSAT/LC08/C01/T1_SR','LANDSAT/LE07/C01/T1_SR','LANDSAT/LT05/C01/T1_SR',\nnighttime light imagery from the DMSP and VIIRS satellites (Google Earth Engine names 'NOAA/DMSP-OLS/CALIBRATED_LIGHTS_V4' and 'NOAA/VIIRS/DNB/MONTHLY_V1/VCMSLCFG')\nand processed DHS survey metadata obtained from URL and originally from 'URL" ]
[ "TAGS\n#task_categories-image-classification #size_categories-10K<n<100K #license-other #map #poverty #satellite #region-us \n", "# PovertyMap-wilds: Poverty mapping across different countries\n\n![](URL\n\nHomepage: URL \\\nPublication Date: 2020-05-22 \\\nLicense: LandSat/DMSP/VIIRS data is U.S. Public Domain. \\\nCitation:", "## Description\n\nThis is a processed version of LandSat 5/7/8 satellite imagery originally from Google Earth Engine under the names 'LANDSAT/LC08/C01/T1_SR','LANDSAT/LE07/C01/T1_SR','LANDSAT/LT05/C01/T1_SR',\nnighttime light imagery from the DMSP and VIIRS satellites (Google Earth Engine names 'NOAA/DMSP-OLS/CALIBRATED_LIGHTS_V4' and 'NOAA/VIIRS/DNB/MONTHLY_V1/VCMSLCFG')\nand processed DHS survey metadata obtained from URL and originally from 'URL" ]
[ 44, 61, 157 ]
[ "passage: TAGS\n#task_categories-image-classification #size_categories-10K<n<100K #license-other #map #poverty #satellite #region-us \n# PovertyMap-wilds: Poverty mapping across different countries\n\n![](URL\n\nHomepage: URL \\\nPublication Date: 2020-05-22 \\\nLicense: LandSat/DMSP/VIIRS data is U.S. Public Domain. \\\nCitation:## Description\n\nThis is a processed version of LandSat 5/7/8 satellite imagery originally from Google Earth Engine under the names 'LANDSAT/LC08/C01/T1_SR','LANDSAT/LE07/C01/T1_SR','LANDSAT/LT05/C01/T1_SR',\nnighttime light imagery from the DMSP and VIIRS satellites (Google Earth Engine names 'NOAA/DMSP-OLS/CALIBRATED_LIGHTS_V4' and 'NOAA/VIIRS/DNB/MONTHLY_V1/VCMSLCFG')\nand processed DHS survey metadata obtained from URL and originally from 'URL" ]
9ba9080ac01b69b23d998ec178662b202495f6ed
# Dataset Card for Reduced Medical Q&A Dataset This dataset card provides comprehensive details about the Reduced Medical Q&A Dataset, which is a curated and balanced subset aimed for healthcare dialogues and medical NLP research. ## Dataset Details ### Dataset Description The Reduced Medical Q&A Dataset is derived from a specialized subset of the larger MedDialog collection. It focuses on healthcare dialogues between doctors and patients from sources like WebMD, Icliniq, HealthcareMagic, and HealthTap. The dataset contains approximately 3,000 rows and is intended for a variety of applications such as NLP research, healthcare chatbot development, and medical information retrieval. - **Curated by:** Unknown (originally from MedDialog) - **Funded by [optional]:** N/A - **Shared by [optional]:** N/A - **Language(s) (NLP):** English - **License:** Unknown (assumed for educational/research use) ### Dataset Sources [optional] - **Repository:** N/A - **Paper [optional]:** N/A - **Demo [optional]:** N/A ## Uses ### Direct Use - NLP research in healthcare dialogues - Development of healthcare question-answering systems - Medical information retrieval ### Out-of-Scope Use - Not a substitute for certified medical advice - Exercise caution in critical healthcare applications ## Dataset Structure Each entry in the dataset follows the structure: "### Human:\n[Human's text]\n\n### Assistant: [Assistant's text]" ## Dataset Creation ### Curation Rationale The dataset was curated to create a balanced set of medical Q&A pairs using keyword-based sampling to cover a wide range of medical topics. ### Source Data #### Data Collection and Processing The data is text-based, primarily in English, and was curated from the larger "Medical" dataset featuring dialogues from Icliniq, HealthcareMagic, and HealthTap. #### Who are the source data producers? The original data was produced by healthcare professionals and patients engaging in medical dialogues on platforms like Icliniq, HealthcareMagic, and HealthTap. ### Annotations [optional] No additional annotations; the dataset is text-based. ## Bias, Risks, and Limitations - The dataset is not a substitute for professional medical advice. - It is designed for research and educational purposes only. ### Recommendations Users should exercise caution and understand the limitations when using the dataset for critical healthcare applications. ## Citation [optional] N/A ## Glossary [optional] N/A ## More Information [optional] N/A ## Dataset Card Authors [optional] N/A ## Dataset Card Contact N/A
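Because each entry is stored as a single "### Human: ... ### Assistant: ..." string, downstream code usually has to split it back into the two turns. The snippet below is a hedged sketch of that step; the `train` split name and the `text` column name are assumptions, since this card does not list the exact schema.

```py
# Hedged sketch: recover (human, assistant) turns from the combined string.
# Split name and column name are assumed; adjust if the actual schema differs.
from datasets import load_dataset

ds = load_dataset("Kabatubare/medical-guanaco-3000", split="train")

def split_turns(entry: str):
    human_part, _, assistant_part = entry.partition("### Assistant:")
    human = human_part.replace("### Human:", "").strip()
    return human, assistant_part.strip()

human, assistant = split_turns(ds[0]["text"])
print("HUMAN:", human[:120])
print("ASSISTANT:", assistant[:120])
```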
Kabatubare/medical-guanaco-3000
[ "language:en", "license:unknown", "healthcare", "Q&A", "NLP", "dialogues", "region:us" ]
2023-10-29T15:49:46+00:00
{"language": "en", "license": "unknown", "pretty_name": "Medical Q&A Dataset", "title": "Reduced Medical Q&A Dataset", "tags": ["healthcare", "Q&A", "NLP", "dialogues"]}
2023-10-30T09:59:47+00:00
[]
[ "en" ]
TAGS #language-English #license-unknown #healthcare #Q&A #NLP #dialogues #region-us
# Dataset Card for Reduced Medical Q&A Dataset This dataset card provides comprehensive details about the Reduced Medical Q&A Dataset, which is a curated and balanced subset aimed for healthcare dialogues and medical NLP research. ## Dataset Details ### Dataset Description The Reduced Medical Q&A Dataset is derived from a specialized subset of the larger MedDialog collection. It focuses on healthcare dialogues between doctors and patients from sources like WebMD, Icliniq, HealthcareMagic, and HealthTap. The dataset contains approximately 3,000 rows and is intended for a variety of applications such as NLP research, healthcare chatbot development, and medical information retrieval. - Curated by: Unknown (originally from MedDialog) - Funded by [optional]: N/A - Shared by [optional]: N/A - Language(s) (NLP): English - License: Unknown (assumed for educational/research use) ### Dataset Sources [optional] - Repository: N/A - Paper [optional]: N/A - Demo [optional]: N/A ## Uses ### Direct Use - NLP research in healthcare dialogues - Development of healthcare question-answering systems - Medical information retrieval ### Out-of-Scope Use - Not a substitute for certified medical advice - Exercise caution in critical healthcare applications ## Dataset Structure Each entry in the dataset follows the structure: "### Human:\n[Human's text]\n\n### Assistant: [Assistant's text]" ## Dataset Creation ### Curation Rationale The dataset was curated to create a balanced set of medical Q&A pairs using keyword-based sampling to cover a wide range of medical topics. ### Source Data #### Data Collection and Processing The data is text-based, primarily in English, and was curated from the larger "Medical" dataset featuring dialogues from Icliniq, HealthcareMagic, and HealthTap. #### Who are the source data producers? The original data was produced by healthcare professionals and patients engaging in medical dialogues on platforms like Icliniq, HealthcareMagic, and HealthTap. ### Annotations [optional] No additional annotations; the dataset is text-based. ## Bias, Risks, and Limitations - The dataset is not a substitute for professional medical advice. - It is designed for research and educational purposes only. ### Recommendations Users should exercise caution and understand the limitations when using the dataset for critical healthcare applications. [optional] N/A ## Glossary [optional] N/A ## More Information [optional] N/A ## Dataset Card Authors [optional] N/A ## Dataset Card Contact N/A
[ "# Dataset Card for Reduced Medical Q&A Dataset\n\nThis dataset card provides comprehensive details about the Reduced Medical Q&A Dataset, which is a curated and balanced subset aimed for healthcare dialogues and medical NLP research.", "## Dataset Details", "### Dataset Description\n\nThe Reduced Medical Q&A Dataset is derived from a specialized subset of the larger MedDialog collection. It focuses on healthcare dialogues between doctors and patients from sources like WebMD, Icliniq, HealthcareMagic, and HealthTap. The dataset contains approximately 3,000 rows and is intended for a variety of applications such as NLP research, healthcare chatbot development, and medical information retrieval.\n\n- Curated by: Unknown (originally from MedDialog)\n- Funded by [optional]: N/A\n- Shared by [optional]: N/A\n- Language(s) (NLP): English\n- License: Unknown (assumed for educational/research use)", "### Dataset Sources [optional]\n\n- Repository: N/A\n- Paper [optional]: N/A\n- Demo [optional]: N/A", "## Uses", "### Direct Use\n\n- NLP research in healthcare dialogues\n- Development of healthcare question-answering systems\n- Medical information retrieval", "### Out-of-Scope Use\n\n- Not a substitute for certified medical advice\n- Exercise caution in critical healthcare applications", "## Dataset Structure\n\nEach entry in the dataset follows the structure: \"### Human:\\n[Human's text]\\n\\n### Assistant: [Assistant's text]\"", "## Dataset Creation", "### Curation Rationale\n\nThe dataset was curated to create a balanced set of medical Q&A pairs using keyword-based sampling to cover a wide range of medical topics.", "### Source Data", "#### Data Collection and Processing\n\nThe data is text-based, primarily in English, and was curated from the larger \"Medical\" dataset featuring dialogues from Icliniq, HealthcareMagic, and HealthTap.", "#### Who are the source data producers?\n\nThe original data was produced by healthcare professionals and patients engaging in medical dialogues on platforms like Icliniq, HealthcareMagic, and HealthTap.", "### Annotations [optional]\n\nNo additional annotations; the dataset is text-based.", "## Bias, Risks, and Limitations\n\n- The dataset is not a substitute for professional medical advice.\n- It is designed for research and educational purposes only.", "### Recommendations\n\nUsers should exercise caution and understand the limitations when using the dataset for critical healthcare applications.\n\n[optional]\n\nN/A", "## Glossary [optional]\n\nN/A", "## More Information [optional]\n\nN/A", "## Dataset Card Authors [optional]\n\nN/A", "## Dataset Card Contact\n\nN/A" ]
[ "TAGS\n#language-English #license-unknown #healthcare #Q&A #NLP #dialogues #region-us \n", "# Dataset Card for Reduced Medical Q&A Dataset\n\nThis dataset card provides comprehensive details about the Reduced Medical Q&A Dataset, which is a curated and balanced subset aimed for healthcare dialogues and medical NLP research.", "## Dataset Details", "### Dataset Description\n\nThe Reduced Medical Q&A Dataset is derived from a specialized subset of the larger MedDialog collection. It focuses on healthcare dialogues between doctors and patients from sources like WebMD, Icliniq, HealthcareMagic, and HealthTap. The dataset contains approximately 3,000 rows and is intended for a variety of applications such as NLP research, healthcare chatbot development, and medical information retrieval.\n\n- Curated by: Unknown (originally from MedDialog)\n- Funded by [optional]: N/A\n- Shared by [optional]: N/A\n- Language(s) (NLP): English\n- License: Unknown (assumed for educational/research use)", "### Dataset Sources [optional]\n\n- Repository: N/A\n- Paper [optional]: N/A\n- Demo [optional]: N/A", "## Uses", "### Direct Use\n\n- NLP research in healthcare dialogues\n- Development of healthcare question-answering systems\n- Medical information retrieval", "### Out-of-Scope Use\n\n- Not a substitute for certified medical advice\n- Exercise caution in critical healthcare applications", "## Dataset Structure\n\nEach entry in the dataset follows the structure: \"### Human:\\n[Human's text]\\n\\n### Assistant: [Assistant's text]\"", "## Dataset Creation", "### Curation Rationale\n\nThe dataset was curated to create a balanced set of medical Q&A pairs using keyword-based sampling to cover a wide range of medical topics.", "### Source Data", "#### Data Collection and Processing\n\nThe data is text-based, primarily in English, and was curated from the larger \"Medical\" dataset featuring dialogues from Icliniq, HealthcareMagic, and HealthTap.", "#### Who are the source data producers?\n\nThe original data was produced by healthcare professionals and patients engaging in medical dialogues on platforms like Icliniq, HealthcareMagic, and HealthTap.", "### Annotations [optional]\n\nNo additional annotations; the dataset is text-based.", "## Bias, Risks, and Limitations\n\n- The dataset is not a substitute for professional medical advice.\n- It is designed for research and educational purposes only.", "### Recommendations\n\nUsers should exercise caution and understand the limitations when using the dataset for critical healthcare applications.\n\n[optional]\n\nN/A", "## Glossary [optional]\n\nN/A", "## More Information [optional]\n\nN/A", "## Dataset Card Authors [optional]\n\nN/A", "## Dataset Card Contact\n\nN/A" ]
[ 31, 56, 4, 170, 38, 3, 29, 29, 45, 5, 43, 4, 50, 45, 23, 36, 35, 11, 10, 13, 8 ]
[ "passage: TAGS\n#language-English #license-unknown #healthcare #Q&A #NLP #dialogues #region-us \n# Dataset Card for Reduced Medical Q&A Dataset\n\nThis dataset card provides comprehensive details about the Reduced Medical Q&A Dataset, which is a curated and balanced subset aimed for healthcare dialogues and medical NLP research.## Dataset Details### Dataset Description\n\nThe Reduced Medical Q&A Dataset is derived from a specialized subset of the larger MedDialog collection. It focuses on healthcare dialogues between doctors and patients from sources like WebMD, Icliniq, HealthcareMagic, and HealthTap. The dataset contains approximately 3,000 rows and is intended for a variety of applications such as NLP research, healthcare chatbot development, and medical information retrieval.\n\n- Curated by: Unknown (originally from MedDialog)\n- Funded by [optional]: N/A\n- Shared by [optional]: N/A\n- Language(s) (NLP): English\n- License: Unknown (assumed for educational/research use)### Dataset Sources [optional]\n\n- Repository: N/A\n- Paper [optional]: N/A\n- Demo [optional]: N/A## Uses### Direct Use\n\n- NLP research in healthcare dialogues\n- Development of healthcare question-answering systems\n- Medical information retrieval### Out-of-Scope Use\n\n- Not a substitute for certified medical advice\n- Exercise caution in critical healthcare applications## Dataset Structure\n\nEach entry in the dataset follows the structure: \"### Human:\\n[Human's text]\\n\\n### Assistant: [Assistant's text]\"## Dataset Creation### Curation Rationale\n\nThe dataset was curated to create a balanced set of medical Q&A pairs using keyword-based sampling to cover a wide range of medical topics.### Source Data#### Data Collection and Processing\n\nThe data is text-based, primarily in English, and was curated from the larger \"Medical\" dataset featuring dialogues from Icliniq, HealthcareMagic, and HealthTap." ]
1c8895d932e0a21e2cec4e719c37c92cb6f63e37
# Dataset Card for "1abdaff0" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
result-kand2-sdxl-wuerst-karlo/1abdaff0
[ "region:us" ]
2023-10-29T16:21:52+00:00
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 208, "num_examples": 10}], "download_size": 1389, "dataset_size": 208}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-29T16:21:52+00:00
[]
[]
TAGS #region-us
# Dataset Card for "1abdaff0" More Information needed
[ "# Dataset Card for \"1abdaff0\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"1abdaff0\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"1abdaff0\"\n\nMore Information needed" ]
b4eb21862b987f2070cbf9e4acb29d2496b5078c
# Dataset Card for "autotrain-data-l840-cwyf-0kjj" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
digitalwas-solutions/midjourney-prompts
[ "region:us" ]
2023-10-29T16:49:51+00:00
{"dataset_info": {"features": [{"name": "Prompt", "dtype": "string"}, {"name": "autotrain_text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 77100, "num_examples": 288}, {"name": "validation", "num_bytes": 77100, "num_examples": 288}], "download_size": 47998, "dataset_size": 154200}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}]}
2023-10-29T16:49:52+00:00
[]
[]
TAGS #region-us
# Dataset Card for "autotrain-data-l840-cwyf-0kjj" More Information needed
[ "# Dataset Card for \"autotrain-data-l840-cwyf-0kjj\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"autotrain-data-l840-cwyf-0kjj\"\n\nMore Information needed" ]
[ 6, 25 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"autotrain-data-l840-cwyf-0kjj\"\n\nMore Information needed" ]