| column | type | min | max |
|---|---|---|---|
| sha | stringlengths | 40 | 40 |
| text | stringlengths | 1 | 13.4M |
| id | stringlengths | 2 | 117 |
| tags | listlengths | 1 | 7.91k |
| created_at | stringlengths | 25 | 25 |
| metadata | stringlengths | 2 | 875k |
| last_modified | stringlengths | 25 | 25 |
| arxiv | listlengths | 0 | 25 |
| languages | listlengths | 0 | 7.91k |
| tags_str | stringlengths | 17 | 159k |
| text_str | stringlengths | 1 | 447k |
| text_lists | listlengths | 0 | 352 |
| processed_texts | listlengths | 1 | 353 |
| tokens_length | listlengths | 1 | 353 |
| input_texts | listlengths | 1 | 40 |
0209a9a48b98a93ea5ec22a9ae21c4309f4ee37b
### Dataset Description Legal Contracts Dataset for Training SpaCy NER Model This repository contains a specially curated dataset of legal contracts. It is designed for training a Named Entity Recognition (NER) model using SpaCy to recognize and classify four types of entities in the text: Contract Type, Clause Title, Clause Number, and Definition Title. The dataset includes a broad variety of legal contracts, covering diverse domains such as employment, real estate, services, sale, and lease. Entities in the text have been manually labeled by experts in the field, ensuring high-quality training data for the model. Each document in the dataset has been annotated in the following format: (Start_Position, End_Position, Entity_Label) For example, a clause title may be annotated as follows: (102, 115, 'clause title') This helps the NER model identify not only the text of each entity but also its position within the document. Usage Guidelines The dataset can be loaded into a SpaCy pipeline for training a NER model. For more information on how to train a NER model using SpaCy, please refer to the SpaCy documentation.
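Given the offset-based annotation format described above, here is a minimal sketch of converting such annotations into spaCy v3 training data; the example text, offsets, and output path are hypothetical placeholders, not taken from the dataset:

```python
# Minimal sketch: turn (Start_Position, End_Position, Entity_Label) annotations
# into a spaCy DocBin for training. The text and offsets are hypothetical.
import spacy
from spacy.tokens import DocBin

nlp = spacy.blank("uk")                    # blank Ukrainian pipeline, per the language tag
doc_bin = DocBin()

text = "CONTRACT OF EMPLOYMENT ..."        # hypothetical document text
annotations = [(0, 22, "contract type")]   # character offsets plus entity label

doc = nlp.make_doc(text)
spans = []
for start, end, label in annotations:
    span = doc.char_span(start, end, label=label, alignment_mode="contract")
    if span is not None:                   # skip offsets that break token boundaries
        spans.append(span)
doc.ents = spans
doc_bin.add(doc)
doc_bin.to_disk("train.spacy")             # feed to `python -m spacy train`
```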
lawinsider/uk_ner_contracts_spacy
[ "task_categories:token-classification", "language:uk", "legal", "region:us" ]
2023-11-15T12:17:27+00:00
{"language": ["uk"], "task_categories": ["token-classification"], "pretty_name": "uk NER contracts", "tags": ["legal"]}
2023-12-19T13:00:40+00:00
[]
[ "uk" ]
TAGS #task_categories-token-classification #language-Ukrainian #legal #region-us
### Dataset Description Legal Contracts Dataset for Training SpaCy NER Model This repository contains a specially curated dataset of legal contracts. It is designed for training a Named Entity Recognition (NER) model using SpaCy to recognize and classify four types of entities in the text: Contract Type, Clause Title, Clause Number, and Definition Title. The dataset includes a broad variety of legal contracts, covering diverse domains such as employment, real estate, services, sale, and lease. Entities in the text have been manually labeled by experts in the field, ensuring high-quality training data for the model. Each document in the dataset has been annotated in the following format: (Start_Position, End_Position, Entity_Label) For example, a clause title may be annotated as follows: (102, 115, 'clause title') This helps the NER model identify not only the text of each entity but also its position within the document. Usage Guidelines The dataset can be loaded into a SpaCy pipeline for training a NER model. For more information on how to train a NER model using SpaCy, please refer to the SpaCy documentation.
[ "### Dataset Description\n\nLegal Contracts Dataset for Training SpaCy NER Model\nThis repository contains a specially curated dataset consisting of legal contracts. It is designed for the purpose of training a Named Entity Recognition (NER) model using SpaCy, with the aim to recognize and classify four types of entities in the text:\n\nContract Type,\nClause Title,\nClause Number,\nDefinition Title\n\n\nThe dataset includes a broad variety of legal contracts, covering diverse domains such as employment, real estate, services, sale, lease, etc.\n\nEntities in the text have been manually labeled by experts in the field, ensuring high-quality training data for the model.\n\nEach document in the dataset has been annotated in the following format:\n\n(Start_Position, End_Position, Entity_Label)\n\nFor example, a clause title may be annotated as follows: (102, 115, 'clause title')\n\nThis will assist the NER model in identifying not only the text of the entity, but also its position within the document.\n\nUsage Guidelines\nThe dataset can be loaded into a SpaCy pipeline for training a NER model. For more information on how to train a NER model using SpaCy, please refer to the SpaCy documentation." ]
[ "TAGS\n#task_categories-token-classification #language-Ukrainian #legal #region-us \n", "### Dataset Description\n\nLegal Contracts Dataset for Training SpaCy NER Model\nThis repository contains a specially curated dataset consisting of legal contracts. It is designed for the purpose of training a Named Entity Recognition (NER) model using SpaCy, with the aim to recognize and classify four types of entities in the text:\n\nContract Type,\nClause Title,\nClause Number,\nDefinition Title\n\n\nThe dataset includes a broad variety of legal contracts, covering diverse domains such as employment, real estate, services, sale, lease, etc.\n\nEntities in the text have been manually labeled by experts in the field, ensuring high-quality training data for the model.\n\nEach document in the dataset has been annotated in the following format:\n\n(Start_Position, End_Position, Entity_Label)\n\nFor example, a clause title may be annotated as follows: (102, 115, 'clause title')\n\nThis will assist the NER model in identifying not only the text of the entity, but also its position within the document.\n\nUsage Guidelines\nThe dataset can be loaded into a SpaCy pipeline for training a NER model. For more information on how to train a NER model using SpaCy, please refer to the SpaCy documentation." ]
[ 27, 295 ]
[ "passage: TAGS\n#task_categories-token-classification #language-Ukrainian #legal #region-us \n### Dataset Description\n\nLegal Contracts Dataset for Training SpaCy NER Model\nThis repository contains a specially curated dataset consisting of legal contracts. It is designed for the purpose of training a Named Entity Recognition (NER) model using SpaCy, with the aim to recognize and classify four types of entities in the text:\n\nContract Type,\nClause Title,\nClause Number,\nDefinition Title\n\n\nThe dataset includes a broad variety of legal contracts, covering diverse domains such as employment, real estate, services, sale, lease, etc.\n\nEntities in the text have been manually labeled by experts in the field, ensuring high-quality training data for the model.\n\nEach document in the dataset has been annotated in the following format:\n\n(Start_Position, End_Position, Entity_Label)\n\nFor example, a clause title may be annotated as follows: (102, 115, 'clause title')\n\nThis will assist the NER model in identifying not only the text of the entity, but also its position within the document.\n\nUsage Guidelines\nThe dataset can be loaded into a SpaCy pipeline for training a NER model. For more information on how to train a NER model using SpaCy, please refer to the SpaCy documentation." ]
949c4cfa0bedf59211bf109c5eca59c9f1ad01e5
# g_Dorks [TXT dataset] A dataset of popular Google dorks collected from a variety of sources. ## Data Source ***Secret!*** ## Disclaimer Please note that while I strive to maintain data quality, I cannot guarantee the accuracy or quality of all entries in this dataset. Use it responsibly and exercise caution when relying on the data for any critical applications. Your feedback and contributions are greatly appreciated for improving the dataset's overall quality.
Tanvir1337/g_Dorks
[ "size_categories:10K<n<100K", "language:en", "license:cc-by-4.0", "List", "Google", "GHDB", "Dork", "NSFW", "region:us" ]
2023-11-15T12:28:15+00:00
{"language": ["en"], "license": "cc-by-4.0", "size_categories": ["10K<n<100K"], "pretty_name": "Google Dorks", "tags": ["List", "Google", "GHDB", "Dork", "NSFW"]}
2023-12-13T08:18:11+00:00
[]
[ "en" ]
TAGS #size_categories-10K<n<100K #language-English #license-cc-by-4.0 #List #Google #GHDB #Dork #NSFW #region-us
# g_Dorks [TXT dataset] A dataset of popular Google dorks collected from a variety of sources. ## Data Source *Secret!* ## Disclaimer Please note that while I strive to maintain data quality, I cannot guarantee the accuracy or quality of all entries in this dataset. Use it responsibly and exercise caution when relying on the data for any critical applications. Your feedback and contributions are greatly appreciated for improving the dataset's overall quality.
[ "# g_Dorks [TXT dataset]\n\nA dataset comprising a collection of popular Google dorks acquired from various and numerous origins.", "## Data Source\n\n*Secret!*", "## Disclaimer\n\nPlease note that while I strive to maintain data quality, I cannot guarantee the accuracy or quality of all entries in this dataset. Use it responsibly and exercise caution when relying on the data for any critical applications. Your feedback and contributions are greatly appreciated for improving the dataset's overall quality." ]
[ "TAGS\n#size_categories-10K<n<100K #language-English #license-cc-by-4.0 #List #Google #GHDB #Dork #NSFW #region-us \n", "# g_Dorks [TXT dataset]\n\nA dataset comprising a collection of popular Google dorks acquired from various and numerous origins.", "## Data Source\n\n*Secret!*", "## Disclaimer\n\nPlease note that while I strive to maintain data quality, I cannot guarantee the accuracy or quality of all entries in this dataset. Use it responsibly and exercise caution when relying on the data for any critical applications. Your feedback and contributions are greatly appreciated for improving the dataset's overall quality." ]
[ 44, 33, 7, 73 ]
[ "passage: TAGS\n#size_categories-10K<n<100K #language-English #license-cc-by-4.0 #List #Google #GHDB #Dork #NSFW #region-us \n# g_Dorks [TXT dataset]\n\nA dataset comprising a collection of popular Google dorks acquired from various and numerous origins.## Data Source\n\n*Secret!*## Disclaimer\n\nPlease note that while I strive to maintain data quality, I cannot guarantee the accuracy or quality of all entries in this dataset. Use it responsibly and exercise caution when relying on the data for any critical applications. Your feedback and contributions are greatly appreciated for improving the dataset's overall quality." ]
316b67b9436a1a25fa8aa42fb64657892db1ce19
# Dataset Card for "FulaNewsTextCorpora" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
cawoylel/FulaNewsTextCorpora
[ "region:us" ]
2023-11-15T12:40:02+00:00
{"dataset_info": {"features": [{"name": "content", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 19228210, "num_examples": 4142}], "download_size": 10646622, "dataset_size": 19228210}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-15T12:40:51+00:00
[]
[]
TAGS #region-us
# Dataset Card for "FulaNewsTextCorpora" More Information needed
[ "# Dataset Card for \"FulaNewsTextCorpora\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"FulaNewsTextCorpora\"\n\nMore Information needed" ]
[ 6, 16 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"FulaNewsTextCorpora\"\n\nMore Information needed" ]
c4bf6112bbec28ce9e7de636815b93ef357a68e7
# Dataset Card for "no_robots_german" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
flozi00/no_robots_german
[ "region:us" ]
2023-11-15T12:41:28+00:00
{"dataset_info": {"features": [{"name": "messages", "dtype": "string"}, {"name": "category", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 14328669, "num_examples": 9499}], "download_size": 8502633, "dataset_size": 14328669}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-21T08:13:15+00:00
[]
[]
TAGS #region-us
# Dataset Card for "no_robots_german" More Information needed
[ "# Dataset Card for \"no_robots_german\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"no_robots_german\"\n\nMore Information needed" ]
[ 6, 17 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"no_robots_german\"\n\nMore Information needed" ]
85bd8e102722951b6765dfeacef3da6561e27194
# Dataset Card for "semeval-task-8-b-v2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
kpriyanshu256/semeval-task-8-b-v2
[ "region:us" ]
2023-11-15T12:42:19+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "val", "path": "data/val-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "model", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 121567544, "num_examples": 56821}, {"name": "val", "num_bytes": 30000447, "num_examples": 14206}, {"name": "test", "num_bytes": 4814312, "num_examples": 3000}], "download_size": 88174421, "dataset_size": 156382303}}
2023-11-15T12:42:27+00:00
[]
[]
TAGS #region-us
# Dataset Card for "semeval-task-8-b-v2" More Information needed
[ "# Dataset Card for \"semeval-task-8-b-v2\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"semeval-task-8-b-v2\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"semeval-task-8-b-v2\"\n\nMore Information needed" ]
b4deb324016cba5ec16774c6ef8744329a9c5e71
# Dataset Card for "librispeech_asr_test" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Codec-SUPERB/librispeech_asr_test
[ "region:us" ]
2023-11-15T12:47:30+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "academicodec_hifi_16k_320d", "path": "data/academicodec_hifi_16k_320d-*"}, {"split": "academicodec_hifi_16k_320d_large_uni", "path": "data/academicodec_hifi_16k_320d_large_uni-*"}, {"split": "academicodec_hifi_24k_320d", "path": "data/academicodec_hifi_24k_320d-*"}, {"split": "audiodec_24k_320d", "path": "data/audiodec_24k_320d-*"}, {"split": "dac_16k", "path": "data/dac_16k-*"}, {"split": "dac_24k", "path": "data/dac_24k-*"}, {"split": "dac_44k", "path": "data/dac_44k-*"}, {"split": "encodec_24k", "path": "data/encodec_24k-*"}, {"split": "funcodec_en_libritts_16k_gr1nq32ds320", "path": "data/funcodec_en_libritts_16k_gr1nq32ds320-*"}, {"split": "funcodec_en_libritts_16k_gr8nq32ds320", "path": "data/funcodec_en_libritts_16k_gr8nq32ds320-*"}, {"split": "funcodec_en_libritts_16k_nq32ds320", "path": "data/funcodec_en_libritts_16k_nq32ds320-*"}, {"split": "funcodec_en_libritts_16k_nq32ds640", "path": "data/funcodec_en_libritts_16k_nq32ds640-*"}, {"split": "funcodec_zh_en_16k_nq32ds320", "path": "data/funcodec_zh_en_16k_nq32ds320-*"}, {"split": "funcodec_zh_en_16k_nq32ds640", "path": "data/funcodec_zh_en_16k_nq32ds640-*"}, {"split": "speech_tokenizer_16k", "path": "data/speech_tokenizer_16k-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "unit", "sequence": {"sequence": "int64"}}], "splits": [{"name": "academicodec_hifi_16k_320d", "num_bytes": 31486926, "num_examples": 2620}, {"name": "academicodec_hifi_16k_320d_large_uni", "num_bytes": 31486926, "num_examples": 2620}, {"name": "academicodec_hifi_24k_320d", "num_bytes": 47043726, "num_examples": 2620}, {"name": "audiodec_24k_320d", "num_bytes": 100100814, "num_examples": 2620}, {"name": "dac_16k", "num_bytes": 194813390, "num_examples": 2620}, {"name": "dac_24k", "num_bytes": 548721230, "num_examples": 2620}, {"name": "dac_44k", "num_bytes": 163255358, "num_examples": 2620}, {"name": "encodec_24k", "num_bytes": 23734398, "num_examples": 2620}, {"name": "funcodec_en_libritts_16k_gr1nq32ds320", "num_bytes": 250099790, "num_examples": 2620}, {"name": "funcodec_en_libritts_16k_gr8nq32ds320", "num_bytes": 250099790, "num_examples": 2620}, {"name": "funcodec_en_libritts_16k_nq32ds320", "num_bytes": 249923918, "num_examples": 2620}, {"name": "funcodec_en_libritts_16k_nq32ds640", "num_bytes": 125470798, "num_examples": 2620}, {"name": "funcodec_zh_en_16k_nq32ds320", "num_bytes": 249923918, "num_examples": 2620}, {"name": "funcodec_zh_en_16k_nq32ds640", "num_bytes": 249923918, "num_examples": 2620}, {"name": "speech_tokenizer_16k", "num_bytes": 62745230, "num_examples": 2620}], "download_size": 406443893, "dataset_size": 2578830130}}
2023-11-18T10:51:16+00:00
[]
[]
TAGS #region-us
# Dataset Card for "librispeech_asr_test" More Information needed
[ "# Dataset Card for \"librispeech_asr_test\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"librispeech_asr_test\"\n\nMore Information needed" ]
[ 6, 18 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"librispeech_asr_test\"\n\nMore Information needed" ]
e479c432c4ee631b74339df7cfe457a3a54344a1
This dataset is a subset of Project Gutenberg that focuses only on geography texts. Books (~11M tokens): - The 1990 CIA World Factbook - Commercial Geography - Influences of Geographic Environment - Geographical etymology: a dictionary of place-names giving their derivations - Geography and Plays - Physical Geography
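A minimal loading sketch with the Hugging Face `datasets` library follows; the `train` split name is an assumption, since the card does not document the schema:

```python
from datasets import load_dataset

# Load the geography subset of Project Gutenberg (split name assumed).
dataset = load_dataset("miguel-kjh/Geography_books_dataset", split="train")
print(dataset)      # inspect the features to see the actual column names
print(dataset[0])   # first record
```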
miguel-kjh/Geography_books_dataset
[ "size_categories:10M<n<100M", "language:en", "license:mit", "region:us" ]
2023-11-15T12:54:17+00:00
{"language": ["en"], "license": "mit", "size_categories": ["10M<n<100M"], "pretty_name": "Geo text"}
2023-11-15T16:34:24+00:00
[]
[ "en" ]
TAGS #size_categories-10M<n<100M #language-English #license-mit #region-us
This dataset is a subset of Project Gutenberg that focuses only on geography texts. Books (~11M tokens): - The 1990 CIA World Factbook - Commercial Geography - Influences of Geographic Environment - Geographical etymology: a dictionary of place-names giving their derivations - Geography and Plays - Physical Geography
[]
[ "TAGS\n#size_categories-10M<n<100M #language-English #license-mit #region-us \n" ]
[ 27 ]
[ "passage: TAGS\n#size_categories-10M<n<100M #language-English #license-mit #region-us \n" ]
28176664ce996c927d16090bb4b02cd38fb3eb89
# Dataset Card for "ultrafeedback-prompts-judgelm" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
gabrielmbmb/ultrafeedback-prompts-judgelm
[ "region:us" ]
2023-11-15T12:55:19+00:00
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "generation_model", "dtype": "string"}, {"name": "generation_prompt", "dtype": "string"}, {"name": "raw_generation_responses", "sequence": "string"}, {"name": "generations", "sequence": "string"}, {"name": "labelling_model", "dtype": "string"}, {"name": "labelling_prompt", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "raw_labelling_response", "dtype": "string"}, {"name": "ratings", "sequence": "int64"}, {"name": "rationale", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 13566605, "num_examples": 1000}], "download_size": 6407911, "dataset_size": 13566605}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-15T13:22:57+00:00
[]
[]
TAGS #region-us
# Dataset Card for "ultrafeedback-prompts-judgelm" More Information needed
[ "# Dataset Card for \"ultrafeedback-prompts-judgelm\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"ultrafeedback-prompts-judgelm\"\n\nMore Information needed" ]
[ 6, 23 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"ultrafeedback-prompts-judgelm\"\n\nMore Information needed" ]
2aeb55a98766b4ddcdc1de2addc5d99d43d0dccd
# Dataset Card for "mrpc_llama_2_v3" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
VitaliiVrublevskyi/mrpc_llama_2_v3
[ "region:us" ]
2023-11-15T13:04:37+00:00
{"dataset_info": {"features": [{"name": "label", "dtype": "int64"}, {"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "category", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5399370, "num_examples": 22980}, {"name": "validation", "num_bytes": 109143, "num_examples": 408}, {"name": "test", "num_bytes": 456210, "num_examples": 1725}], "download_size": 1509295, "dataset_size": 5964723}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}]}
2023-11-15T13:05:20+00:00
[]
[]
TAGS #region-us
# Dataset Card for "mrpc_llama_2_v3" More Information needed
[ "# Dataset Card for \"mrpc_llama_2_v3\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"mrpc_llama_2_v3\"\n\nMore Information needed" ]
[ 6, 20 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"mrpc_llama_2_v3\"\n\nMore Information needed" ]
88e2450a80f5a1953382ba8a16dc1fe4ecb834d5
# Dataset Card for "parlament_parla_ecapa_emb" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
wetdog/parlament_parla_ecapa_emb
[ "region:us" ]
2023-11-15T13:16:16+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "path", "dtype": "string"}, {"name": "speaker_id", "dtype": "int64"}, {"name": "sentence", "dtype": "string"}, {"name": "gender", "dtype": {"class_label": {"names": {"0": "F", "1": "M"}}}}, {"name": "duration", "dtype": "float64"}, {"name": "embeddings", "sequence": "float64"}], "splits": [{"name": "train", "num_bytes": 140554656, "num_examples": 78976}, {"name": "validation", "num_bytes": 3802467, "num_examples": 2150}, {"name": "test", "num_bytes": 3783863, "num_examples": 2138}], "download_size": 133275777, "dataset_size": 148140986}}
2023-11-20T11:39:03+00:00
[]
[]
TAGS #region-us
# Dataset Card for "parlament_parla_ecapa_emb" More Information needed
[ "# Dataset Card for \"parlament_parla_ecapa_emb\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"parlament_parla_ecapa_emb\"\n\nMore Information needed" ]
[ 6, 20 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"parlament_parla_ecapa_emb\"\n\nMore Information needed" ]
c65adc1392541ee8e2bb2aa8f62b7d81e8b08a83
# Dataset Card for "indian_food" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Srikar6476/indian_food
[ "region:us" ]
2023-11-15T13:18:56+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "burger", "1": "butter_naan", "2": "chai", "3": "chapati", "4": "chole_bhature", "5": "dal_makhani", "6": "dhokla", "7": "fried_rice", "8": "idli", "9": "jalebi", "10": "kaathi_rolls", "11": "kadai_paneer", "12": "kulfi", "13": "masala_dosa", "14": "momos", "15": "paani_puri", "16": "pakode", "17": "pav_bhaji", "18": "pizza", "19": "samosa"}}}}], "splits": [{"name": "train", "num_bytes": 1586188582.7514334, "num_examples": 5328}, {"name": "test", "num_bytes": 223888796.3925666, "num_examples": 941}], "download_size": 1601964166, "dataset_size": 1810077379.144}}
2023-11-18T15:39:54+00:00
[]
[]
TAGS #region-us
# Dataset Card for "indian_food" More Information needed
[ "# Dataset Card for \"indian_food\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"indian_food\"\n\nMore Information needed" ]
[ 6, 14 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"indian_food\"\n\nMore Information needed" ]
fe541bc328b966cf2bbd0cab662d9a98b9aa1cd0
# Dataset Card for "zero_shot_comparison" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
harpreetsahota/zero_shot_comparison
[ "region:us" ]
2023-11-15T13:33:27+00:00
{"dataset_info": {"features": [{"name": "source", "dtype": "string"}, {"name": "target", "dtype": "string"}, {"name": "rationale", "dtype": "string"}, {"name": "task", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "decilm_generation", "dtype": "string"}, {"name": "mistral_generation", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 67718, "num_examples": 30}], "download_size": 54407, "dataset_size": 67718}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-12-11T16:05:10+00:00
[]
[]
TAGS #region-us
# Dataset Card for "zero_shot_comparison" More Information needed
[ "# Dataset Card for \"zero_shot_comparison\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"zero_shot_comparison\"\n\nMore Information needed" ]
[ 6, 17 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"zero_shot_comparison\"\n\nMore Information needed" ]
4f46aa2d9e1510115d6f4d2d6965eae829ed9fe9
# Dataset Card for "train_test_valid_wettekst" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
bassie96code/train_test_valid_wettekst
[ "region:us" ]
2023-11-15T13:37:39+00:00
{"dataset_info": {"features": [{"name": "tok_wettekst", "sequence": "string"}, {"name": "aantal tokens", "dtype": "int64"}, {"name": "label lijsten", "sequence": "int64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 6272, "num_examples": 10}], "download_size": 4886, "dataset_size": 6272}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-15T13:50:22+00:00
[]
[]
TAGS #region-us
# Dataset Card for "train_test_valid_wettekst" More Information needed
[ "# Dataset Card for \"train_test_valid_wettekst\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"train_test_valid_wettekst\"\n\nMore Information needed" ]
[ 6, 20 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"train_test_valid_wettekst\"\n\nMore Information needed" ]
16cff59e3adc868217d1c99296004c5267b37ebe
# Dataset Card for "train_wettekst" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
bassie96code/train_wettekst
[ "region:us" ]
2023-11-15T13:42:59+00:00
{"dataset_info": {"features": [{"name": "tok_wettekst", "sequence": "string"}, {"name": "aantal tokens", "dtype": "int64"}, {"name": "label lijsten", "sequence": "int64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 32598, "num_examples": 80}], "download_size": 10866, "dataset_size": 32598}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-15T13:43:00+00:00
[]
[]
TAGS #region-us
# Dataset Card for "train_wettekst" More Information needed
[ "# Dataset Card for \"train_wettekst\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"train_wettekst\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"train_wettekst\"\n\nMore Information needed" ]
ee36931dedc295342b17edbcad1e4cb92e7835f0
# Dataset Card for "imdb-sample2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
paduraru2009/imdb-sample2
[ "region:us" ]
2023-11-15T13:43:56+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "label", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 40107731, "num_examples": 30000}, {"name": "validation", "num_bytes": 39127084, "num_examples": 30000}], "download_size": 50593468, "dataset_size": 79234815}}
2023-11-15T13:44:50+00:00
[]
[]
TAGS #region-us
# Dataset Card for "imdb-sample2" More Information needed
[ "# Dataset Card for \"imdb-sample2\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"imdb-sample2\"\n\nMore Information needed" ]
[ 6, 16 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"imdb-sample2\"\n\nMore Information needed" ]
a2972d7fbcc1ea34f724a1d23c6f6b1da174a350
This is a dataset
Starset/test-dataset
[ "language:zh", "region:us" ]
2023-11-15T13:45:18+00:00
{"language": ["zh"]}
2023-11-15T13:59:03+00:00
[]
[ "zh" ]
TAGS #language-Chinese #region-us
This is a dataset
[]
[ "TAGS\n#language-Chinese #region-us \n" ]
[ 11 ]
[ "passage: TAGS\n#language-Chinese #region-us \n" ]
6da888f9a47527635dfb06c08a7d7db8a67f9182
# Dataset Card for "pubmed-2shot-4096" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
anumafzal94/pubmed-2shot-4096
[ "region:us" ]
2023-11-15T13:47:10+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}, {"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "few-shot", "dtype": "bool"}], "splits": [{"name": "test", "num_bytes": 8149116.593446602, "num_examples": 426}, {"name": "train", "num_bytes": 139802654.7469022, "num_examples": 7242}], "download_size": 20828412, "dataset_size": 147951771.3403488}}
2023-11-17T10:45:39+00:00
[]
[]
TAGS #region-us
# Dataset Card for "pubmed-2shot-4096" More Information needed
[ "# Dataset Card for \"pubmed-2shot-4096\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"pubmed-2shot-4096\"\n\nMore Information needed" ]
[ 6, 16 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"pubmed-2shot-4096\"\n\nMore Information needed" ]
9f32ba71d9a85f15754c84176274c1140b522bbb
# Dataset Card for "wettekst_train" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
bassie96code/wettekst_train
[ "region:us" ]
2023-11-15T13:52:10+00:00
{"dataset_info": {"features": [{"name": "tok_wettekst", "sequence": "string"}, {"name": "aantal tokens", "dtype": "int64"}, {"name": "label lijsten", "sequence": "int64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 35157, "num_examples": 80}], "download_size": 11436, "dataset_size": 35157}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-30T13:41:29+00:00
[]
[]
TAGS #region-us
# Dataset Card for "wettekst_train" More Information needed
[ "# Dataset Card for \"wettekst_train\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"wettekst_train\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"wettekst_train\"\n\nMore Information needed" ]
40e1c5a56bd4a96e3b8dae51a23ace9ea2045a6a
# Dataset Card for "wettekst_test" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
bassie96code/wettekst_test
[ "region:us" ]
2023-11-15T13:52:20+00:00
{"dataset_info": {"features": [{"name": "tok_wettekst", "sequence": "string"}, {"name": "aantal tokens", "dtype": "int64"}, {"name": "label lijsten", "sequence": "int64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 3714, "num_examples": 10}], "download_size": 4191, "dataset_size": 3714}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-30T13:41:31+00:00
[]
[]
TAGS #region-us
# Dataset Card for "wettekst_test" More Information needed
[ "# Dataset Card for \"wettekst_test\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"wettekst_test\"\n\nMore Information needed" ]
[ 6, 14 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"wettekst_test\"\n\nMore Information needed" ]
d2308666047c55ee8472f22f22972fc0bd365464
# Dataset Card for "wettekst_valid" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
bassie96code/wettekst_valid
[ "region:us" ]
2023-11-15T13:52:30+00:00
{"dataset_info": {"features": [{"name": "tok_wettekst", "sequence": "string"}, {"name": "aantal tokens", "dtype": "int64"}, {"name": "label lijsten", "sequence": "int64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 3632, "num_examples": 10}], "download_size": 4145, "dataset_size": 3632}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-30T13:41:32+00:00
[]
[]
TAGS #region-us
# Dataset Card for "wettekst_valid" More Information needed
[ "# Dataset Card for \"wettekst_valid\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"wettekst_valid\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"wettekst_valid\"\n\nMore Information needed" ]
f97cd45662d305d2f12de0734f47caaa247d1557
# Dataset Card for "easy-german-definitions" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
jmelsbach/easy-german-definitions
[ "region:us" ]
2023-11-15T13:56:15+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "explanation", "dtype": "string"}, {"name": "detailed_explanation", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2153588.053902302, "num_examples": 2849}, {"name": "test", "num_bytes": 538963.946097698, "num_examples": 713}], "download_size": 0, "dataset_size": 2692552.0}}
2023-11-15T14:01:10+00:00
[]
[]
TAGS #region-us
# Dataset Card for "easy-german-definitions" More Information needed
[ "# Dataset Card for \"easy-german-definitions\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"easy-german-definitions\"\n\nMore Information needed" ]
[ 6, 19 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"easy-german-definitions\"\n\nMore Information needed" ]
3c566eb829e20cf101c7f93d0c7beb2ae4337af9
# Description

This is a database of fifty hyperspectral images of indoor and outdoor scenes under daylight illumination, and an additional twenty-five images under artificial and mixed illumination. The images were captured using a commercial hyperspectral camera (Nuance FX, CRI Inc) with an integrated liquid crystal tunable filter capable of acquiring a hyperspectral image by sequentially tuning the filter through a series of thirty-one narrow wavelength bands, each with approximately 10nm bandwidth and centered at steps of 10nm from 420nm to 720nm. The camera is equipped with an apo-chromatic lens, and the images were captured with the smallest viable aperture setting, largely avoiding chromatic aberration. All the images are of static scenes, with labels to mask out regions with movement during exposure.

# Characteristics

This real-world hyperspectral image database is being made available for non-commercial research use. Please see the README.txt file in each archive for details. It contains:

- `CZ_hsdb`: 50 indoor & outdoor images under daylight (5.3GB)
- `CZ_hsdbi`: 27 indoor images under artificial & mixed illumination (2.2GB)

# Credits

Originally available at: https://vision.seas.harvard.edu/hyperspec/download.html

This database is available for non-commercial research use. The data is available as a series of MATLAB .mat files (one for each image) containing both the image data and masks. Since the download is large (around 5.5 + 2.2 GB), the authors ask that you minimize the number of times you download it directly from their servers.

If you use this data in an academic publication, kindly cite the following paper:

```
Ayan Chakrabarti and Todd Zickler, "Statistics of Real-World Hyperspectral Images," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011.
```

Bibtex:

```
@conference{chakrabarti2011statistics,
  title={{Statistics of Real-World Hyperspectral Images}},
  author={Chakrabarti, A. and Zickler, T.},
  booktitle={Proc.~IEEE Conf.~on Computer Vision and Pattern Recognition (CVPR)},
  pages={193--200},
  year={2011}
}
```
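Since each .mat file bundles image data and a mask, here is a minimal inspection sketch with SciPy. The file name and the variable names `ref` and `lbl` are assumptions, not documented here; check the README.txt in each archive for the actual layout:

```python
# Minimal sketch: peek inside one image file of the database with SciPy.
# "imga1.mat", "ref", and "lbl" are assumed names, not documented here.
from scipy.io import loadmat

mat = loadmat("imga1.mat")
print([k for k in mat if not k.startswith("__")])  # list stored variables
cube = mat["ref"]   # assumed: the 31-band hyperspectral cube
mask = mat["lbl"]   # assumed: mask of regions with movement during exposure
print(cube.shape, mask.shape)
```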
danaroth/harvard
[ "license:unknown", "region:us" ]
2023-11-15T14:08:10+00:00
{"license": "unknown"}
2023-11-20T08:27:48+00:00
[]
[]
TAGS #license-unknown #region-us
# Description This is a database of fifty hyperspectral images of indoor and outdoor scenes under daylight illumination, and an additional twenty-five images under artificial and mixed illumination. The images were captured using a commercial hyperspectral camera (Nuance FX, CRI Inc) with an integrated liquid crystal tunable filter capable of acquiring a hyperspectral image by sequentially tuning the filter through a series of thirty-one narrow wavelength bands, each with approximately 10nm bandwidth and centered at steps of 10nm from 420nm to 720nm. The camera is equipped with an apo-chromatic lens, and the images were captured with the smallest viable aperture setting, largely avoiding chromatic aberration. All the images are of static scenes, with labels to mask out regions with movement during exposure. # Characteristics This real-world hyperspectral image database is being made available for non-commercial research use. Please see the URL file in each archive for details. It contains: - 'CZ_hsdb': 50 indoor & outdoor images under daylight (5.3GB) - 'CZ_hsdbi': 27 indoor images under artificial & mixed illumination (2.2GB) # Credits Originally available at: URL This database is available for non-commercial research use. The data is available as a series of MATLAB .mat files (one for each image) containing both the image data and masks. Since the download is large (around 5.5 + 2.2 GB), the authors ask that you minimize the number of times you download it directly from their servers. If you use this data in an academic publication, kindly cite the following paper: Bibtex:
[ "# Description\n\nThis is a database of fifty hyperspectral images of indoor and outdoor scenes under daylight illumination, and an additional twenty-five images under artificial and mixed illumination. The images were captured using a commercial hyperspectral camera (Nuance FX, CRI Inc) with an integrated liquid crystal tunable filter capable of acquiring a hyperspectral image by sequentially tuning the filter through a series of thirty-one narrow wavelength bands, each with approximately 10nm bandwidth and centered at steps of 10nm from 420nm to 720nm. The camera is equipped with an apo-chromatic lens and the images were captured with the smallest viable aperture setting, thus largely avoiding chromatic aberration. All the images are of static scenes, with labels to mask out regions with movement during exposure.", "# Characteristics\n\nThis real-world hyperspectral images database is being made available for non-commercial research use. Please see the URL file in each archive for details.\nIt contains:\n - 'CZ_hsdb': 50 Indoor & outdoor images under daylight (5.3GB)\n - 'CZ_hsdbi': 27 Indoor images under artificial & mixed illumination (2.2GB)", "# Credits\n\nOriginally available at:\nURL\n\nThis database is available for non-commercial research use. The data is available as a series of MATLAB .mat files (one for each image) containing both the images data and masks. Since the size of the download is large (around 5.5 + 2.2 GB), the authors ask only minimize the number of times you download it directly from those servers. \n\nIf you use this data in an academic publication, kindly cite the following paper:\n\n\nBibtex:" ]
[ "TAGS\n#license-unknown #region-us \n", "# Description\n\nThis is a database of fifty hyperspectral images of indoor and outdoor scenes under daylight illumination, and an additional twenty-five images under artificial and mixed illumination. The images were captured using a commercial hyperspectral camera (Nuance FX, CRI Inc) with an integrated liquid crystal tunable filter capable of acquiring a hyperspectral image by sequentially tuning the filter through a series of thirty-one narrow wavelength bands, each with approximately 10nm bandwidth and centered at steps of 10nm from 420nm to 720nm. The camera is equipped with an apo-chromatic lens and the images were captured with the smallest viable aperture setting, thus largely avoiding chromatic aberration. All the images are of static scenes, with labels to mask out regions with movement during exposure.", "# Characteristics\n\nThis real-world hyperspectral images database is being made available for non-commercial research use. Please see the URL file in each archive for details.\nIt contains:\n - 'CZ_hsdb': 50 Indoor & outdoor images under daylight (5.3GB)\n - 'CZ_hsdbi': 27 Indoor images under artificial & mixed illumination (2.2GB)", "# Credits\n\nOriginally available at:\nURL\n\nThis database is available for non-commercial research use. The data is available as a series of MATLAB .mat files (one for each image) containing both the images data and masks. Since the size of the download is large (around 5.5 + 2.2 GB), the authors ask only minimize the number of times you download it directly from those servers. \n\nIf you use this data in an academic publication, kindly cite the following paper:\n\n\nBibtex:" ]
[ 13, 201, 91, 109 ]
[ "passage: TAGS\n#license-unknown #region-us \n# Description\n\nThis is a database of fifty hyperspectral images of indoor and outdoor scenes under daylight illumination, and an additional twenty-five images under artificial and mixed illumination. The images were captured using a commercial hyperspectral camera (Nuance FX, CRI Inc) with an integrated liquid crystal tunable filter capable of acquiring a hyperspectral image by sequentially tuning the filter through a series of thirty-one narrow wavelength bands, each with approximately 10nm bandwidth and centered at steps of 10nm from 420nm to 720nm. The camera is equipped with an apo-chromatic lens and the images were captured with the smallest viable aperture setting, thus largely avoiding chromatic aberration. All the images are of static scenes, with labels to mask out regions with movement during exposure.# Characteristics\n\nThis real-world hyperspectral images database is being made available for non-commercial research use. Please see the URL file in each archive for details.\nIt contains:\n - 'CZ_hsdb': 50 Indoor & outdoor images under daylight (5.3GB)\n - 'CZ_hsdbi': 27 Indoor images under artificial & mixed illumination (2.2GB)# Credits\n\nOriginally available at:\nURL\n\nThis database is available for non-commercial research use. The data is available as a series of MATLAB .mat files (one for each image) containing both the images data and masks. Since the size of the download is large (around 5.5 + 2.2 GB), the authors ask only minimize the number of times you download it directly from those servers. \n\nIf you use this data in an academic publication, kindly cite the following paper:\n\n\nBibtex:" ]
87ee2ed8ef7240c9578e7b038323b321d36144d4
# Usage madlad-400-udmurt

```py
from datasets import load_dataset

dataset = load_dataset("udmurtNLP/madlad-400-udmurt")
```
udmurtNLP/madlad-400-udmurt
[ "size_categories:100K<n<1M", "language:udm", "region:us" ]
2023-11-15T14:16:32+00:00
{"language": ["udm"], "size_categories": ["100K<n<1M"], "dataset_info": {"features": [{"name": "sent", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 102168566, "num_examples": 651456}], "download_size": 52503390, "dataset_size": 102168566}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-15T14:29:24+00:00
[]
[ "udm" ]
TAGS #size_categories-100K<n<1M #language-Udmurt #region-us
# Usage madlad-400-udmurt
[ "# Usage madlad-400-udmurt" ]
[ "TAGS\n#size_categories-100K<n<1M #language-Udmurt #region-us \n", "# Usage madlad-400-udmurt" ]
[ 25, 10 ]
[ "passage: TAGS\n#size_categories-100K<n<1M #language-Udmurt #region-us \n# Usage madlad-400-udmurt" ]
6a555cd4f00d3381d3bc52e3df1af8258826cab1
# Dataset Card for "one_shot_comparison" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
harpreetsahota/one_shot_comparison
[ "region:us" ]
2023-11-15T14:22:47+00:00
{"dataset_info": {"features": [{"name": "source", "dtype": "string"}, {"name": "target", "dtype": "string"}, {"name": "rationale", "dtype": "string"}, {"name": "task", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "decilm_generation", "dtype": "string"}, {"name": "mistral_generation", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 75960, "num_examples": 30}], "download_size": 53919, "dataset_size": 75960}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-12-11T16:20:58+00:00
[]
[]
TAGS #region-us
# Dataset Card for "one_shot_comparison" More Information needed
[ "# Dataset Card for \"one_shot_comparison\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"one_shot_comparison\"\n\nMore Information needed" ]
[ 6, 17 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"one_shot_comparison\"\n\nMore Information needed" ]
d805614f668b18cab0296aef06daaf55fefaa3b7
# Dataset Card for "patent_v3.1_merged" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
nguyenthanhdo/patent_v3.1_merged
[ "region:us" ]
2023-11-15T14:25:26+00:00
{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "lang", "dtype": "string"}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 120966768.0893321, "num_examples": 100488}], "download_size": 81294619, "dataset_size": 120966768.0893321}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-15T14:25:41+00:00
[]
[]
TAGS #region-us
# Dataset Card for "patent_v3.1_merged" More Information needed
[ "# Dataset Card for \"patent_v3.1_merged\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"patent_v3.1_merged\"\n\nMore Information needed" ]
[ 6, 18 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"patent_v3.1_merged\"\n\nMore Information needed" ]
69edb2840ad01324ff77d10d31a273c109e78d2f
# Dataset Card for "ultrafeedback-prompts-judgelm-gpt35" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
gabrielmbmb/ultrafeedback-prompts-judgelm-gpt35
[ "region:us" ]
2023-11-15T14:26:25+00:00
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "generation_model", "dtype": "string"}, {"name": "generation_prompt", "dtype": "string"}, {"name": "raw_generation_responses", "sequence": "string"}, {"name": "generations", "sequence": "string"}, {"name": "labelling_model", "dtype": "string"}, {"name": "labelling_prompt", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "raw_labelling_response", "dtype": "string"}, {"name": "ratings", "sequence": "int64"}, {"name": "rationale", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 13479349, "num_examples": 1000}], "download_size": 6250632, "dataset_size": 13479349}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-15T14:36:54+00:00
[]
[]
TAGS #region-us
# Dataset Card for "ultrafeedback-prompts-judgelm-gpt35" More Information needed
[ "# Dataset Card for \"ultrafeedback-prompts-judgelm-gpt35\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"ultrafeedback-prompts-judgelm-gpt35\"\n\nMore Information needed" ]
[ 6, 27 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"ultrafeedback-prompts-judgelm-gpt35\"\n\nMore Information needed" ]
9e82c819a1af2012af1c809833c0aed4ce889ada
# Dataset Card for "wikitext-103-raw-v1_gpt2-20k" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
pietrolesci/wikitext-103-raw-v1_gpt2-20k
[ "region:us" ]
2023-11-15T14:27:55+00:00
{"dataset_info": [{"config_name": "packed", "features": [{"name": "input_ids", "sequence": "int64"}, {"name": "uid", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 2313688, "num_examples": 1124}, {"name": "train", "num_bytes": 968728180, "num_examples": 470257}, {"name": "validation", "num_bytes": 2027524, "num_examples": 985}], "download_size": 0, "dataset_size": 973069392}, {"config_name": "surprisals", "features": [{"name": "surprisals", "sequence": "float64"}, {"name": "token_ids", "sequence": "int64"}, {"name": "uids", "dtype": "int64"}, {"name": "batch_idx", "dtype": "int64"}, {"name": "step", "dtype": "int64"}], "splits": [{"name": "step_10999_validation", "num_bytes": 4050320, "num_examples": 985}, {"name": "step_10999_train", "num_bytes": 1933696784, "num_examples": 470257}, {"name": "step_3999_validation", "num_bytes": 4050320, "num_examples": 985}, {"name": "step_3999_train", "num_bytes": 1933696784, "num_examples": 470257}, {"name": "step_0_validation", "num_bytes": 4050320, "num_examples": 985}, {"name": "step_0_train", "num_bytes": 1933696784, "num_examples": 470257}, {"name": "step_999_validation", "num_bytes": 4050320, "num_examples": 985}, {"name": "step_999_train", "num_bytes": 1933696784, "num_examples": 470257}, {"name": "step_4999_train", "num_bytes": 1933696784, "num_examples": 470257}, {"name": "step_4999_validation", "num_bytes": 4050320, "num_examples": 985}, {"name": "step_1999_train", "num_bytes": 1933696784, "num_examples": 470257}, {"name": "step_1999_validation", "num_bytes": 4050320, "num_examples": 985}, {"name": "train", "num_bytes": 1933696784, "num_examples": 470257}, {"name": "step_8999_train", "num_bytes": 1933696784, "num_examples": 470257}, {"name": "step_8999_validation", "num_bytes": 4050320, "num_examples": 985}, {"name": "step_7999_train", "num_bytes": 1933696784, "num_examples": 470257}, {"name": "step_7999_validation", "num_bytes": 4050320, "num_examples": 985}, {"name": "step_13999_train", "num_bytes": 1933696784, "num_examples": 470257}, {"name": "step_13999_validation", "num_bytes": 4050320, "num_examples": 985}, {"name": "step_2999_validation", "num_bytes": 4050320, "num_examples": 985}, {"name": "step_2999_train", "num_bytes": 1933696784, "num_examples": 470257}, {"name": "step_11999_train", "num_bytes": 1933696784, "num_examples": 470257}, {"name": "step_11999_validation", "num_bytes": 4050320, "num_examples": 985}, {"name": "step_12999_validation", "num_bytes": 4050320, "num_examples": 985}, {"name": "step_12999_train", "num_bytes": 1933696784, "num_examples": 470257}, {"name": "step_6999_train", "num_bytes": 1933696784, "num_examples": 470257}, {"name": "step_6999_validation", "num_bytes": 4050320, "num_examples": 985}, {"name": "step_9999_train", "num_bytes": 1933696784, "num_examples": 470257}, {"name": "step_9999_validation", "num_bytes": 4050320, "num_examples": 985}, {"name": "step_5999_validation", "num_bytes": 4050320, "num_examples": 985}, {"name": "step_5999_train", "num_bytes": 1933696784, "num_examples": 470257}], "download_size": 21176694739, "dataset_size": 30999903344}], "configs": [{"config_name": "packed", "data_files": [{"split": "test", "path": "packed/test-*"}, {"split": "train", "path": "packed/train-*"}, {"split": "validation", "path": "packed/validation-*"}]}, {"config_name": "surprisals", "data_files": [{"split": "step_10999_validation", "path": "surprisals/step_10999_validation-*"}, {"split": "step_10999_train", "path": "surprisals/step_10999_train-*"}, {"split": "step_3999_validation", "path": 
"surprisals/step_3999_validation-*"}, {"split": "step_3999_train", "path": "surprisals/step_3999_train-*"}, {"split": "step_0_validation", "path": "surprisals/step_0_validation-*"}, {"split": "step_0_train", "path": "surprisals/step_0_train-*"}, {"split": "step_999_validation", "path": "surprisals/step_999_validation-*"}, {"split": "step_999_train", "path": "surprisals/step_999_train-*"}, {"split": "step_4999_train", "path": "surprisals/step_4999_train-*"}, {"split": "step_4999_validation", "path": "surprisals/step_4999_validation-*"}, {"split": "step_1999_train", "path": "surprisals/step_1999_train-*"}, {"split": "step_1999_validation", "path": "surprisals/step_1999_validation-*"}, {"split": "train", "path": "surprisals/train-*"}, {"split": "step_8999_train", "path": "surprisals/step_8999_train-*"}, {"split": "step_8999_validation", "path": "surprisals/step_8999_validation-*"}, {"split": "step_7999_train", "path": "surprisals/step_7999_train-*"}, {"split": "step_7999_validation", "path": "surprisals/step_7999_validation-*"}, {"split": "step_13999_train", "path": "surprisals/step_13999_train-*"}, {"split": "step_13999_validation", "path": "surprisals/step_13999_validation-*"}, {"split": "step_2999_validation", "path": "surprisals/step_2999_validation-*"}, {"split": "step_2999_train", "path": "surprisals/step_2999_train-*"}, {"split": "step_11999_train", "path": "surprisals/step_11999_train-*"}, {"split": "step_11999_validation", "path": "surprisals/step_11999_validation-*"}, {"split": "step_12999_validation", "path": "surprisals/step_12999_validation-*"}, {"split": "step_12999_train", "path": "surprisals/step_12999_train-*"}, {"split": "step_6999_train", "path": "surprisals/step_6999_train-*"}, {"split": "step_6999_validation", "path": "surprisals/step_6999_validation-*"}, {"split": "step_9999_train", "path": "surprisals/step_9999_train-*"}, {"split": "step_9999_validation", "path": "surprisals/step_9999_validation-*"}, {"split": "step_5999_validation", "path": "surprisals/step_5999_validation-*"}, {"split": "step_5999_train", "path": "surprisals/step_5999_train-*"}]}]}
2023-11-16T12:48:50+00:00
[]
[]
TAGS #region-us
# Dataset Card for "wikitext-103-raw-v1_gpt2-20k" More Information needed
[ "# Dataset Card for \"wikitext-103-raw-v1_gpt2-20k\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"wikitext-103-raw-v1_gpt2-20k\"\n\nMore Information needed" ]
[ 6, 24 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"wikitext-103-raw-v1_gpt2-20k\"\n\nMore Information needed" ]
a4289eb326d21328ced8e5cbeb8d1af09dbf81ab
# Description

ICVL is a hyperspectral image dataset collected for the paper "[Sparse Recovery of Hyperspectral Signal from Natural RGB Images](http://link.springer.com/chapter/10.1007/978-3-319-46478-7_2)". The database images were acquired using a Specim PS Kappa DX4 hyperspectral camera and a rotary stage for spatial scanning. At this time it contains 200 images and will continue to grow progressively. Images were collected at 1392 × 1300 spatial resolution over 519 spectral bands (400-1,000nm at roughly 1.25nm increments). The .raw files contain raw out-of-camera data in ENVI format, and the .hdr files contain the headers required to decode them. For convenience, .mat files are also provided, downsampled to 31 spectral channels from 400nm to 700nm at 10nm increments. The original dataset contains only clean images; for hyperspectral image denoising benchmarks, the testing datasets come from "3D Quasi-Recurrent Neural Network for Hyperspectral Image Denoising".

# Quick look

(Thumbnail gallery of the scenes, from 4cam_0411-1640-1 through objects_0924-1617, with preview images under `assets/`; scene groups include 4cam_*, bguCAMP_*, BGU_*, bulb_*, CC_40D_*, eve_*, Flower_*, gavyam_*, grf_*, hill_*, IDS_COLORCHECK_*, Labtest_*, Lehavim_*, lst_*, Master*, Maz*, mor_*, nachal_*, negev_*, and objects_*.)
|![objects_0924-1619](assets/objects_0924-1619-200x215.jpg) |![objects_0924-1620](assets/objects_0924-1620-200x215.jpg) | | objects_0924-1614 |objects_0924-1617 |objects_0924-1619 |objects_0924-1620 | | ![objects_0924-1622](assets/objects_0924-1622-200x215.jpg) |![objects_0924-1628](assets/objects_0924-1628-200x215.jpg) |![objects_0924-1629](assets/objects_0924-1629-200x215.jpg) |![objects_0924-1631](assets/objects_0924-1631-200x215.jpg) | | objects_0924-1622 |objects_0924-1628 |objects_0924-1629 |objects_0924-1631 | | ![objects_0924-1632](assets/objects_0924-1632-200x215.jpg) |![objects_0924-1633](assets/objects_0924-1633-200x215.jpg) |![objects_0924-1634](assets/objects_0924-1634-200x215.jpg) |![objects_0924-1636](assets/objects_0924-1636-200x215.jpg) | | objects_0924-1632 |objects_0924-1633 |objects_0924-1634 |objects_0924-1636 | | ![objects_0924-1637](assets/objects_0924-1637-200x215.jpg) |![objects_0924-1638](assets/objects_0924-1638-200x215.jpg) |![objects_0924-1639](assets/objects_0924-1639-200x215.jpg) |![objects_0924-1641](assets/objects_0924-1641-200x215.jpg) | | objects_0924-1637 |objects_0924-1638 |objects_0924-1639 |objects_0924-1641 | | ![objects_0924-1645](assets/objects_0924-1645-200x215.jpg) |![objects_0924-1648](assets/objects_0924-1648-200x215.jpg) |![objects_0924-1650](assets/objects_0924-1650-200x215.jpg) |![objects_0924-1652](assets/objects_0924-1652-200x215.jpg) | | objects_0924-1645 |objects_0924-1648 |objects_0924-1650 |objects_0924-1652 | | ![omer_0331-1055](assets/omer_0331-1055-200x215.jpg) |![omer_0331-1102](assets/omer_0331-1102-200x215.jpg) |![omer_0331-1104](assets/omer_0331-1104-200x215.jpg) |![omer_0331-1118](assets/omer_0331-1118-200x215.jpg) | | omer_0331-1055 |omer_0331-1102 |omer_0331-1104 |omer_0331-1118 | | ![omer_0331-1119](assets/omer_0331-1119-200x215.jpg) |![omer_0331-1130](assets/omer_0331-1130-200x215.jpg) |![omer_0331-1131](assets/omer_0331-1131-200x215.jpg) |![omer_0331-1135](assets/omer_0331-1135-200x215.jpg) | | omer_0331-1119 |omer_0331-1130 |omer_0331-1131 |omer_0331-1135 | | ![omer_0331-1150](assets/omer_0331-1150-200x215.jpg) |![omer_0331-1159](assets/omer_0331-1159-200x215.jpg) |![peppers_0503-1308](assets/peppers_0503-1308-200x215.jpg) |![peppers_0503-1311](assets/peppers_0503-1311-200x215.jpg) | | omer_0331-1150 |omer_0331-1159 |peppers_0503-1308 |peppers_0503-1311 | | ![peppers_0503-1315](assets/peppers_0503-1315-200x215.jpg) |![peppers_0503-1330](assets/peppers_0503-1330-200x215.jpg) |![peppers_0503-1332](assets/peppers_0503-1332-200x215.jpg) |![pepper_0503-1228](assets/pepper_0503-1228-200x215.jpg) | | peppers_0503-1315 |peppers_0503-1330 |peppers_0503-1332 |pepper_0503-1228 | | ![pepper_0503-1229](assets/pepper_0503-1229-200x215.jpg) |![pepper_0503-1236](assets/pepper_0503-1236-200x215.jpg) |![plt_0411-1037](assets/plt_0411-1037-200x215.jpg) |![plt_0411-1046](assets/plt_0411-1046-200x215.jpg) | | pepper_0503-1229 |pepper_0503-1236 |plt_0411-1037 |plt_0411-1046 | | ![plt_0411-1116](assets/plt_0411-1116-200x215.jpg) |![plt_0411-1155](assets/plt_0411-1155-200x215.jpg) |![plt_0411-1200-1](assets/plt_0411-1200-1-200x215.jpg) |![plt_0411-1207](assets/plt_0411-1207-200x215.jpg) | | plt_0411-1116 |plt_0411-1155 |plt_0411-1200-1 |plt_0411-1207 | | ![plt_0411-1210](assets/plt_0411-1210-200x215.jpg) |![plt_0411-1211](assets/plt_0411-1211-200x215.jpg) |![plt_0411-1232-1](assets/plt_0411-1232-1-200x215.jpg) |![prk_0328-0945](assets/prk_0328-0945-200x215.jpg) | | plt_0411-1210 |plt_0411-1211 |plt_0411-1232-1 |prk_0328-0945 | | 
![prk_0328-1025](assets/prk_0328-1025-200x215.jpg) |![prk_0328-1031](assets/prk_0328-1031-200x215.jpg) |![prk_0328-1034](assets/prk_0328-1034-200x215.jpg) |![prk_0328-1037](assets/prk_0328-1037-200x215.jpg) | | prk_0328-1025 |prk_0328-1031 |prk_0328-1034 |prk_0328-1037 | | ![prk_0328-1045](assets/prk_0328-1045-200x215.jpg) |![rsh2_0406-1505](assets/rsh2_0406-1505-200x215.jpg) |![rsh_0406-1343](assets/rsh_0406-1343-200x215.jpg) |![rsh_0406-1356](assets/rsh_0406-1356-200x215.jpg) | | prk_0328-1045 |rsh2_0406-1505 |rsh_0406-1343 |rsh_0406-1356 | | ![rsh_0406-1413](assets/rsh_0406-1413-200x215.jpg) |![rsh_0406-1427](assets/rsh_0406-1427-200x215.jpg) |![rsh_0406-1441-1](assets/rsh_0406-1441-1-200x215.jpg) |![rsh_0406-1443](assets/rsh_0406-1443-200x215.jpg) | | rsh_0406-1413 |rsh_0406-1427 |rsh_0406-1441-1 |rsh_0406-1443 | | ![sami_0331-1019](assets/sami_0331-1019-200x215.jpg) |![sat_0406-1107](assets/sat_0406-1107-200x215.jpg) |![sat_0406-1129](assets/sat_0406-1129-200x215.jpg) |![sat_0406-1130](assets/sat_0406-1130-200x215.jpg) | | sami_0331-1019 |sat_0406-1107 |sat_0406-1129 |sat_0406-1130 | | ![sat_0406-1157-1](assets/sat_0406-1157-1-200x215.jpg) |![selfie_0822-0906](assets/selfie_0822-0906-200x215.jpg) |![strt_0331-1027](assets/strt_0331-1027-200x215.jpg) |![tree_0822-0853](assets/tree_0822-0853-200x215.jpg) | | sat_0406-1157-1 |selfie_0822-0906 |strt_0331-1027 |tree_0822-0853 | | ![ulm_0328-1118](assets/ulm_0328-1118-200x215.jpg) | | | | | ulm_0328-1118 | | | | # Credits Dataset originally collected by ICVL from the webpage: https://icvl.cs.bgu.ac.il/hyperspectral/ For questions, comments and technical assistance, please contact [[email protected]](mailto:[email protected]) When used, fully or partially, please cite: ``` Arad and Ben-Shahar, Sparse Recovery of Hyperspectral Signal from Natural RGB Images, in the European Conference on Computer Vision, Amsterdam, The Netherlands, October 11–14, 2016 ``` Bibtex: ``` @inproceedings{arad_and_ben_shahar_2016_ECCV, title={Sparse Recovery of Hyperspectral Signal from Natural RGB Images}, author={Arad, Boaz and Ben-Shahar, Ohad}, booktitle={European Conference on Computer Vision}, pages={19--34}, year={2016}, organization={Springer} } ```
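For convenience, a minimal sketch of reading one of the downsampled 31-band .mat files in Python follows. It assumes the files are MATLAB v7.3 (HDF5) containers and that the hyperspectral cube is stored under a key such as 'rad'; both are assumptions, not guarantees from this card, so inspect the keys first and adjust.

```python
# Minimal sketch (not official loader code) for reading a 31-band .mat file.
# Assumptions: MATLAB v7.3 / HDF5 container; cube stored under a key like "rad".
import numpy as np
import h5py

with h5py.File("4cam_0411-1640-1.mat", "r") as f:
    print(list(f.keys()))          # inspect the available variables first
    cube = np.asarray(f["rad"])    # hypothetical key for the HSI cube
print(cube.shape, cube.dtype)      # expect 31 spectral channels along one axis

# If the files turn out to be pre-v7.3 MAT files, scipy works instead:
# from scipy.io import loadmat
# cube = loadmat("4cam_0411-1640-1.mat")["rad"]
```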
danaroth/icvl
[ "license:unknown", "region:us" ]
2023-11-15T14:29:47+00:00
{"license": "unknown"}
2023-11-21T10:01:01+00:00
[]
[]
TAGS #license-unknown #region-us
Description =========== ICVL is a hyperspectral image dataset, collected by "Sparse Recovery of Hyperspectral Signal from Natural RGB Images" The database images were acquired using a Specim PS Kappa DX4 hyperspectral camera and a rotary stage for spatial scanning. At this time it contains 200 images and will continue to grow progressively. Images were collected at 1392 $\times$ 1300 spatial resolution over 519 spectral bands (400-1,000nm at roughly 1.25nm increments). The .raw files contain raw out-of-camera data in ENVI format and .hdr files contain the headers required to decode them. For your convenience, .mat files are provided, downsampled to 31 spectral channels from 400nm to 700nm at 10nm increments. The original dataset only contains clean images. For hyperspectral image denoising benchmarks, the testing datasets come from "3D Quasi-Recurrent Neural Network for Hyperspectral Image Denoising". Quick look ========== Credits ======= Dataset originally collected by ICVL from the webpage: URL For questions, comments and technical assistance, please contact iCVL@URL When used, fully or partially, please cite: Bibtex:
[]
[ "TAGS\n#license-unknown #region-us \n" ]
[ 13 ]
[ "passage: TAGS\n#license-unknown #region-us \n" ]
198ccd3fd78d6221b45b0abbb219c2b2196852e0
# Dataset Card for "lisa-formal-v1" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
PAug/lisa-formal-v1
[ "region:us" ]
2023-11-15T14:49:03+00:00
{"dataset_info": {"features": [{"name": "directory", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "file_path", "dtype": "string"}, {"name": "file_type", "dtype": "string"}, {"name": "raw_content", "dtype": "string"}, {"name": "parsed_content", "list": [{"name": "name", "dtype": "string"}, {"name": "kind", "dtype": {"class_label": {"names": {"0": "Axiom", "1": "Lemma", "2": "PredicateDefinition", "3": "FunctionDefinition", "4": "Theorem"}}}}, {"name": "line", "dtype": "int64"}, {"name": "statement", "dtype": "string"}, {"name": "declaration", "dtype": "string"}, {"name": "definitions", "sequence": "string"}, {"name": "docstring", "dtype": "string"}, {"name": "imports", "list": [{"name": "kind", "dtype": {"class_label": {"names": {"0": "Axiom", "1": "Lemma", "2": "PredicateDefinition", "3": "FunctionDefinition", "4": "Theorem"}}}}, {"name": "name", "dtype": "string"}, {"name": "statement", "dtype": "string"}]}, {"name": "proofsteps", "sequence": "string"}, {"name": "code", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 4662436, "num_examples": 92}, {"name": "val", "num_bytes": 255599, "num_examples": 4}], "download_size": 878593, "dataset_size": 4918035}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "val", "path": "data/val-*"}]}]}
2023-12-01T17:15:17+00:00
[]
[]
TAGS #region-us
# Dataset Card for "lisa-formal-v1" More Information needed
[ "# Dataset Card for \"lisa-formal-v1\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"lisa-formal-v1\"\n\nMore Information needed" ]
[ 6, 17 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"lisa-formal-v1\"\n\nMore Information needed" ]
dda05be6870c784a088ce82e29c9cfd18df66d26
# Dataset Card for "20231012_chai_prize_reward_model_data" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ChaiML/20231012_chai_prize_reward_model_data
[ "region:us" ]
2023-11-15T14:52:54+00:00
{"dataset_info": {"features": [{"name": "input_text", "dtype": "string"}, {"name": "labels", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 73906677, "num_examples": 49553}], "download_size": 42346275, "dataset_size": 73906677}}
2023-11-16T16:33:31+00:00
[]
[]
TAGS #region-us
# Dataset Card for "20231012_chai_prize_reward_model_data" More Information needed
[ "# Dataset Card for \"20231012_chai_prize_reward_model_data\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"20231012_chai_prize_reward_model_data\"\n\nMore Information needed" ]
[ 6, 26 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"20231012_chai_prize_reward_model_data\"\n\nMore Information needed" ]
029d9711392b8942449bf12e10e12d07f5f3230c
# Dataset Card for "ultrafeedback-prompts-judgelm-gpt35-with-principles" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
gabrielmbmb/ultrafeedback-prompts-judgelm-gpt35-with-principles
[ "region:us" ]
2023-11-15T14:54:47+00:00
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "generation_model", "dtype": "string"}, {"name": "generation_prompt", "dtype": "string"}, {"name": "raw_generation_responses", "sequence": "string"}, {"name": "generations", "sequence": "string"}, {"name": "labelling_model", "dtype": "string"}, {"name": "labelling_prompt", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "raw_labelling_response", "dtype": "string"}, {"name": "ratings", "sequence": "int64"}, {"name": "rationale", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 14351059, "num_examples": 1000}], "download_size": 6587567, "dataset_size": 14351059}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-15T14:54:49+00:00
[]
[]
TAGS #region-us
# Dataset Card for "ultrafeedback-prompts-judgelm-gpt35-with-principles" More Information needed
[ "# Dataset Card for \"ultrafeedback-prompts-judgelm-gpt35-with-principles\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"ultrafeedback-prompts-judgelm-gpt35-with-principles\"\n\nMore Information needed" ]
[ 6, 32 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"ultrafeedback-prompts-judgelm-gpt35-with-principles\"\n\nMore Information needed" ]
20b06af9e99b789a7e99108ed7a4da87fccb0ff6
# Dataset Card for "claude_multiround_chat_1k" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
TinyPixel/claude_multiround_chat_1k
[ "region:us" ]
2023-11-15T15:03:54+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "conversations", "list": [{"name": "from", "dtype": "string"}, {"name": "value", "dtype": "string"}]}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 17754888, "num_examples": 1609}], "download_size": 9514689, "dataset_size": 17754888}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-12-15T04:38:23+00:00
[]
[]
TAGS #region-us
# Dataset Card for "claude_multiround_chat_1k" More Information needed
[ "# Dataset Card for \"claude_multiround_chat_1k\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"claude_multiround_chat_1k\"\n\nMore Information needed" ]
[ 6, 19 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"claude_multiround_chat_1k\"\n\nMore Information needed" ]
2767425e33044b8bcd11124321d0347e8681c0b9
# Dataset Card for "three_shot_comparison" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
harpreetsahota/three_shot_comparison
[ "region:us" ]
2023-11-15T15:09:19+00:00
{"dataset_info": {"features": [{"name": "source", "dtype": "string"}, {"name": "target", "dtype": "string"}, {"name": "rationale", "dtype": "string"}, {"name": "task", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "decilm_generation", "dtype": "string"}, {"name": "mistral_generation", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 66884, "num_examples": 30}], "download_size": 52879, "dataset_size": 66884}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-12-11T16:31:14+00:00
[]
[]
TAGS #region-us
# Dataset Card for "three_shot_comparison" More Information needed
[ "# Dataset Card for \"three_shot_comparison\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"three_shot_comparison\"\n\nMore Information needed" ]
[ 6, 19 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"three_shot_comparison\"\n\nMore Information needed" ]
4d9b28d42623784d3abb5f45142bf5669f05475d
# Dataset Card for "oct-object-detection-v2-merge" Dataset is composed of images with multiples object detection box in coco format (x,y,w,h). Images are OCT (type of eye scaner) with boxes indicating some features associated to AMD disease. Changes from from v1 are images are grouped into a single row for the same class detection object, and also join with merge method overlapping boxes. merge means, get the whole area covered by both boxes. [Source datataset](https://doi.org/10.1101/2023.03.29.534704)
joseluhf11/oct-object-detection-v2-merge
[ "region:us" ]
2023-11-15T15:13:12+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "objects", "struct": [{"name": "bbox", "sequence": {"sequence": "int64"}}, {"name": "categories", "sequence": "string"}]}], "splits": [{"name": "train", "num_bytes": 153967507.25, "num_examples": 1246}], "download_size": 71637288, "dataset_size": 153967507.25}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-22T08:42:10+00:00
[]
[]
TAGS #region-us
# Dataset Card for "oct-object-detection-v2-merge" Dataset is composed of images with multiples object detection box in coco format (x,y,w,h). Images are OCT (type of eye scaner) with boxes indicating some features associated to AMD disease. Changes from from v1 are images are grouped into a single row for the same class detection object, and also join with merge method overlapping boxes. merge means, get the whole area covered by both boxes. Source datataset
[ "# Dataset Card for \"oct-object-detection-v2-merge\"\nDataset is composed of images with multiples object detection box in coco format (x,y,w,h). Images are OCT (type of eye scaner) with boxes indicating some features associated to AMD disease. \nChanges from from v1 are images are grouped into a single row for the same class detection object, and also join with merge method overlapping boxes. merge means, get the whole area covered by both boxes.\nSource datataset" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"oct-object-detection-v2-merge\"\nDataset is composed of images with multiples object detection box in coco format (x,y,w,h). Images are OCT (type of eye scaner) with boxes indicating some features associated to AMD disease. \nChanges from from v1 are images are grouped into a single row for the same class detection object, and also join with merge method overlapping boxes. merge means, get the whole area covered by both boxes.\nSource datataset" ]
[ 6, 121 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"oct-object-detection-v2-merge\"\nDataset is composed of images with multiples object detection box in coco format (x,y,w,h). Images are OCT (type of eye scaner) with boxes indicating some features associated to AMD disease. \nChanges from from v1 are images are grouped into a single row for the same class detection object, and also join with merge method overlapping boxes. merge means, get the whole area covered by both boxes.\nSource datataset" ]
2629d5ae7bfabeca2f41d32791771dfa6ec7da68
# Dataset Card for "oct-object-detection-v2-average" Dataset is composed of images with multiples object detection box in coco format (x,y,w,h). Images are OCT (type of eye scaner) with boxes indicating some features associated to AMD disease. Changes from from v1 are images are grouped into a single row for the same class detection object, and also join with average method overlapping boxes. Average means, average of Xs and Ys. [Source datataset](https://doi.org/10.1101/2023.03.29.534704)
joseluhf11/oct-object-detection-v2-average
[ "region:us" ]
2023-11-15T15:13:28+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "objects", "struct": [{"name": "bbox", "sequence": {"sequence": "float64"}}, {"name": "categories", "sequence": "string"}]}], "splits": [{"name": "train", "num_bytes": 153967507.25, "num_examples": 1246}], "download_size": 71639902, "dataset_size": 153967507.25}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-22T08:41:23+00:00
[]
[]
TAGS #region-us
# Dataset Card for "oct-object-detection-v2-average" Dataset is composed of images with multiples object detection box in coco format (x,y,w,h). Images are OCT (type of eye scaner) with boxes indicating some features associated to AMD disease. Changes from from v1 are images are grouped into a single row for the same class detection object, and also join with average method overlapping boxes. Average means, average of Xs and Ys. Source datataset
[ "# Dataset Card for \"oct-object-detection-v2-average\"\nDataset is composed of images with multiples object detection box in coco format (x,y,w,h). Images are OCT (type of eye scaner) with boxes indicating some features associated to AMD disease. \nChanges from from v1 are images are grouped into a single row for the same class detection object, and also join with average method overlapping boxes. Average means, average of Xs and Ys.\nSource datataset" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"oct-object-detection-v2-average\"\nDataset is composed of images with multiples object detection box in coco format (x,y,w,h). Images are OCT (type of eye scaner) with boxes indicating some features associated to AMD disease. \nChanges from from v1 are images are grouped into a single row for the same class detection object, and also join with average method overlapping boxes. Average means, average of Xs and Ys.\nSource datataset" ]
[ 6, 120 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"oct-object-detection-v2-average\"\nDataset is composed of images with multiples object detection box in coco format (x,y,w,h). Images are OCT (type of eye scaner) with boxes indicating some features associated to AMD disease. \nChanges from from v1 are images are grouped into a single row for the same class detection object, and also join with average method overlapping boxes. Average means, average of Xs and Ys.\nSource datataset" ]
0c560fe73aac68eabe33ef00a49dfd1f4e55d2d2
# Dataset Card for "bulgari-dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
rjaiswal/bulgari-dataset
[ "region:us" ]
2023-11-15T15:25:05+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "null"}, {"name": "text", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 0, "dataset_size": 0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-15T16:29:12+00:00
[]
[]
TAGS #region-us
# Dataset Card for "bulgari-dataset" More Information needed
[ "# Dataset Card for \"bulgari-dataset\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"bulgari-dataset\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"bulgari-dataset\"\n\nMore Information needed" ]
ece51a478c168f8ffce60dbfea1b3ac476ebaced
# Dataset Card for Universal NER Upcoming! arXiv: https://huggingface.co/papers/2311.09122
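Until the full card lands, a minimal sketch for loading one of the configs listed in the metadata below with the datasets library, assuming the repository exposes them through the standard loader:

```python
from datasets import load_dataset

# "en_ewt" is one config from this repo's metadata; others include
# "da_ddt", "zh_gsd", "pt_bosque", etc.
ds = load_dataset("universalner/universal_ner", "en_ewt")
example = ds["train"][0]
print(example["tokens"])
print(example["ner_tags"])  # integer ids for O / B-PER / I-PER / B-ORG / ...
```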
universalner/universal_ner
[ "task_categories:token-classification", "language:ceb", "language:da", "language:de", "language:en", "language:hr", "language:pt", "language:ru", "language:sk", "language:sr", "language:sv", "language:tl", "language:zh", "license:cc-by-sa-4.0", "arxiv:2311.09122", "region:us" ]
2023-11-15T15:26:34+00:00
{"language": ["ceb", "da", "de", "en", "hr", "pt", "ru", "sk", "sr", "sv", "tl", "zh"], "license": "cc-by-sa-4.0", "task_categories": ["token-classification"], "dataset_info": [{"config_name": "ceb_gja", "features": [{"name": "idx", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PER", "2": "I-PER", "3": "B-ORG", "4": "I-ORG", "5": "B-LOC", "6": "I-LOC"}}}}, {"name": "annotator", "sequence": "string"}], "splits": [{"name": "test", "num_bytes": 39540, "num_examples": 188}], "download_size": 30395, "dataset_size": 39540}, {"config_name": "da_ddt", "features": [{"name": "idx", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PER", "2": "I-PER", "3": "B-ORG", "4": "I-ORG", "5": "B-LOC", "6": "I-LOC"}}}}, {"name": "annotator", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 2304027, "num_examples": 4383}, {"name": "validation", "num_bytes": 293562, "num_examples": 564}, {"name": "test", "num_bytes": 285813, "num_examples": 565}], "download_size": 2412623, "dataset_size": 2883402}, {"config_name": "de_pud", "features": [{"name": "idx", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PER", "2": "I-PER", "3": "B-ORG", "4": "I-ORG", "5": "B-LOC", "6": "I-LOC"}}}}, {"name": "annotator", "sequence": "string"}], "splits": [{"name": "test", "num_bytes": 641819, "num_examples": 1000}], "download_size": 501924, "dataset_size": 641819}, {"config_name": "en_ewt", "features": [{"name": "idx", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PER", "2": "I-PER", "3": "B-ORG", "4": "I-ORG", "5": "B-LOC", "6": "I-LOC"}}}}, {"name": "annotator", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 6133506, "num_examples": 12543}, {"name": "validation", "num_bytes": 782835, "num_examples": 2001}, {"name": "test", "num_bytes": 785361, "num_examples": 2077}], "download_size": 5962747, "dataset_size": 7701702}, {"config_name": "en_pud", "features": [{"name": "idx", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PER", "2": "I-PER", "3": "B-ORG", "4": "I-ORG", "5": "B-LOC", "6": "I-LOC"}}}}, {"name": "annotator", "sequence": "string"}], "splits": [{"name": "test", "num_bytes": 600666, "num_examples": 1000}], "download_size": 462120, "dataset_size": 600666}, {"config_name": "hr_set", "features": [{"name": "idx", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PER", "2": "I-PER", "3": "B-ORG", "4": "I-ORG", "5": "B-LOC", "6": "I-LOC"}}}}, {"name": "annotator", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 4523323, "num_examples": 6914}, {"name": "validation", "num_bytes": 656738, "num_examples": 960}, {"name": "test", "num_bytes": 719703, "num_examples": 1136}], "download_size": 4620262, "dataset_size": 5899764}, {"config_name": "pt_bosque", "features": [{"name": "idx", "dtype": "string"}, {"name": "text", 
"dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PER", "2": "I-PER", "3": "B-ORG", "4": "I-ORG", "5": "B-LOC", "6": "I-LOC"}}}}, {"name": "annotator", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 4839200, "num_examples": 7018}, {"name": "validation", "num_bytes": 802880, "num_examples": 1172}, {"name": "test", "num_bytes": 780768, "num_examples": 1167}], "download_size": 4867264, "dataset_size": 6422848}, {"config_name": "pt_pud", "features": [{"name": "idx", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PER", "2": "I-PER", "3": "B-ORG", "4": "I-ORG", "5": "B-LOC", "6": "I-LOC"}}}}, {"name": "annotator", "sequence": "string"}], "splits": [{"name": "test", "num_bytes": 661453, "num_examples": 1000}], "download_size": 507495, "dataset_size": 661453}, {"config_name": "ru_pud", "features": [{"name": "idx", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PER", "2": "I-PER", "3": "B-ORG", "4": "I-ORG", "5": "B-LOC", "6": "I-LOC"}}}}, {"name": "annotator", "sequence": "string"}], "splits": [{"name": "test", "num_bytes": 795294, "num_examples": 1000}], "download_size": 669214, "dataset_size": 795294}, {"config_name": "sk_snk", "features": [{"name": "idx", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PER", "2": "I-PER", "3": "B-ORG", "4": "I-ORG", "5": "B-LOC", "6": "I-LOC"}}}}, {"name": "annotator", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 2523121, "num_examples": 8483}, {"name": "validation", "num_bytes": 409448, "num_examples": 1060}, {"name": "test", "num_bytes": 411686, "num_examples": 1061}], "download_size": 2597877, "dataset_size": 3344255}, {"config_name": "sr_set", "features": [{"name": "idx", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PER", "2": "I-PER", "3": "B-ORG", "4": "I-ORG", "5": "B-LOC", "6": "I-LOC"}}}}, {"name": "annotator", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 2174631, "num_examples": 3328}, {"name": "validation", "num_bytes": 349276, "num_examples": 536}, {"name": "test", "num_bytes": 336065, "num_examples": 520}], "download_size": 2248325, "dataset_size": 2859972}, {"config_name": "sv_pud", "features": [{"name": "idx", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PER", "2": "I-PER", "3": "B-ORG", "4": "I-ORG", "5": "B-LOC", "6": "I-LOC"}}}}, {"name": "annotator", "sequence": "string"}], "splits": [{"name": "test", "num_bytes": 588564, "num_examples": 1000}], "download_size": 464252, "dataset_size": 588564}, {"config_name": "sv_talbanken", "features": [{"name": "idx", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PER", "2": "I-PER", "3": "B-ORG", "4": "I-ORG", "5": "B-LOC", "6": "I-LOC"}}}}, {"name": "annotator", "sequence": "string"}], 
"splits": [{"name": "train", "num_bytes": 2027488, "num_examples": 4303}, {"name": "validation", "num_bytes": 291774, "num_examples": 504}, {"name": "test", "num_bytes": 615209, "num_examples": 1219}], "download_size": 2239432, "dataset_size": 2934471}, {"config_name": "tl_trg", "features": [{"name": "idx", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PER", "2": "I-PER", "3": "B-ORG", "4": "I-ORG", "5": "B-LOC", "6": "I-LOC"}}}}, {"name": "annotator", "sequence": "string"}], "splits": [{"name": "test", "num_bytes": 23671, "num_examples": 128}], "download_size": 18546, "dataset_size": 23671}, {"config_name": "tl_ugnayan", "features": [{"name": "idx", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PER", "2": "I-PER", "3": "B-ORG", "4": "I-ORG", "5": "B-LOC", "6": "I-LOC"}}}}, {"name": "annotator", "sequence": "string"}], "splits": [{"name": "test", "num_bytes": 31732, "num_examples": 94}], "download_size": 23941, "dataset_size": 31732}, {"config_name": "zh_gsd", "features": [{"name": "idx", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PER", "2": "I-PER", "3": "B-ORG", "4": "I-ORG", "5": "B-LOC", "6": "I-LOC"}}}}, {"name": "annotator", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 2747999, "num_examples": 3997}, {"name": "validation", "num_bytes": 355515, "num_examples": 500}, {"name": "test", "num_bytes": 335893, "num_examples": 500}], "download_size": 2614866, "dataset_size": 3439407}, {"config_name": "zh_gsdsimp", "features": [{"name": "idx", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PER", "2": "I-PER", "3": "B-ORG", "4": "I-ORG", "5": "B-LOC", "6": "I-LOC"}}}}, {"name": "annotator", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 2747863, "num_examples": 3997}, {"name": "validation", "num_bytes": 352423, "num_examples": 500}, {"name": "test", "num_bytes": 335869, "num_examples": 500}], "download_size": 2611290, "dataset_size": 3436155}, {"config_name": "zh_pud", "features": [{"name": "idx", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PER", "2": "I-PER", "3": "B-ORG", "4": "I-ORG", "5": "B-LOC", "6": "I-LOC"}}}}, {"name": "annotator", "sequence": "string"}], "splits": [{"name": "test", "num_bytes": 607418, "num_examples": 1000}], "download_size": 460357, "dataset_size": 607418}]}
2023-11-16T11:52:04+00:00
[ "2311.09122" ]
[ "ceb", "da", "de", "en", "hr", "pt", "ru", "sk", "sr", "sv", "tl", "zh" ]
TAGS #task_categories-token-classification #language-Cebuano #language-Danish #language-German #language-English #language-Croatian #language-Portuguese #language-Russian #language-Slovak #language-Serbian #language-Swedish #language-Tagalog #language-Chinese #license-cc-by-sa-4.0 #arxiv-2311.09122 #region-us
# Dataset Card for Universal NER Upcoming! arXiv: URL
[ "# Dataset Card for Universal NER\n\nUpcoming!\n\narXiv: URL" ]
[ "TAGS\n#task_categories-token-classification #language-Cebuano #language-Danish #language-German #language-English #language-Croatian #language-Portuguese #language-Russian #language-Slovak #language-Serbian #language-Swedish #language-Tagalog #language-Chinese #license-cc-by-sa-4.0 #arxiv-2311.09122 #region-us \n", "# Dataset Card for Universal NER\n\nUpcoming!\n\narXiv: URL" ]
[ 103, 17 ]
[ "passage: TAGS\n#task_categories-token-classification #language-Cebuano #language-Danish #language-German #language-English #language-Croatian #language-Portuguese #language-Russian #language-Slovak #language-Serbian #language-Swedish #language-Tagalog #language-Chinese #license-cc-by-sa-4.0 #arxiv-2311.09122 #region-us \n# Dataset Card for Universal NER\n\nUpcoming!\n\narXiv: URL" ]
cad46af93ffde3ec1a20a4f4b1412e19b8972d16
To run generation code within 'mathematics_dataset\mathematics_dataset\': - Activate the Python venv ``` .\.venv\Scripts\activate ``` - Requirements are defined in requires.txt - Run ```python generate_to_file.py --output_dir ds``` to generate the dataset into the directory \ds The file-opening encoding had to be changed to utf-8 so that some characters are allowed (ã õ é) To obtain the dataset with the correct number of rows: - Run ```python generate_to_file.py --output_dir ds --per_train_module 1999998 --per_test_module 10000``` This dataset creates the train set (train-easy, train-medium, train-hard) and then creates extrapolation ("measure generalization along various axes of difficulty to beyond that seen during training") and interpolation ("test questions are distinct from the train questions") tests. On HuggingFace only the interpolation tests are used as the test set. Some tests will not work, since they rely on English terms.
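Once published, each config should be loadable with the datasets library. A sketch, assuming the repository exposes the configs listed in the metadata below:

```python
from datasets import load_dataset

# "algebra__linear_1d" is one of the configs in this repo's metadata.
ds = load_dataset("liaad/math_dataset_portuguese", "algebra__linear_1d")
print(ds["train"][0]["question"])
print(ds["train"][0]["answer"])
```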
liaad/math_dataset_portuguese
[ "license:mit", "region:us" ]
2023-11-15T15:28:26+00:00
{"license": "mit", "dataset_info": [{"config_name": "algebra__linear_1d", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 538, "num_examples": 9}, {"name": "test", "num_bytes": 657, "num_examples": 10}], "download_size": 98096, "dataset_size": 1195}, {"config_name": "algebra__linear_1d_composed", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1270, "num_examples": 9}, {"name": "test", "num_bytes": 1332, "num_examples": 10}], "download_size": 98096, "dataset_size": 2602}, {"config_name": "algebra__linear_2d", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 698, "num_examples": 9}, {"name": "test", "num_bytes": 787, "num_examples": 10}], "download_size": 98096, "dataset_size": 1485}, {"config_name": "algebra__linear_2d_composed", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1312, "num_examples": 9}, {"name": "test", "num_bytes": 1286, "num_examples": 10}], "download_size": 98096, "dataset_size": 2598}, {"config_name": "algebra__polynomial_roots", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 861, "num_examples": 9}, {"name": "test", "num_bytes": 1005, "num_examples": 10}], "download_size": 98096, "dataset_size": 1866}, {"config_name": "algebra__polynomial_roots_composed", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1392, "num_examples": 9}, {"name": "test", "num_bytes": 1623, "num_examples": 10}], "download_size": 98096, "dataset_size": 3015}, {"config_name": "algebra__sequence_next_term", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 702, "num_examples": 9}, {"name": "test", "num_bytes": 960, "num_examples": 10}], "download_size": 98096, "dataset_size": 1662}, {"config_name": "algebra__sequence_nth_term", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1144, "num_examples": 9}, {"name": "test", "num_bytes": 1361, "num_examples": 10}], "download_size": 98096, "dataset_size": 2505}, {"config_name": "arithmetic__add_or_sub", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 541, "num_examples": 9}, {"name": "test", "num_bytes": 623, "num_examples": 10}], "download_size": 98096, "dataset_size": 1164}, {"config_name": "arithmetic__add_or_sub_in_base", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 560, "num_examples": 9}, {"name": "test", "num_bytes": 656, "num_examples": 10}], "download_size": 98096, "dataset_size": 1216}, {"config_name": "arithmetic__add_sub_multiple", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 475, "num_examples": 9}, {"name": "test", "num_bytes": 580, "num_examples": 10}], "download_size": 98096, "dataset_size": 1055}, {"config_name": "arithmetic__div", "features": [{"name": "question", "dtype": "string"}, 
{"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 460, "num_examples": 9}, {"name": "test", "num_bytes": 526, "num_examples": 10}], "download_size": 98096, "dataset_size": 986}, {"config_name": "arithmetic__mixed", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 497, "num_examples": 9}, {"name": "test", "num_bytes": 647, "num_examples": 10}], "download_size": 98096, "dataset_size": 1144}, {"config_name": "arithmetic__mul", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 413, "num_examples": 9}, {"name": "test", "num_bytes": 481, "num_examples": 10}], "download_size": 98096, "dataset_size": 894}, {"config_name": "arithmetic__mul_div_multiple", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 524, "num_examples": 9}, {"name": "test", "num_bytes": 643, "num_examples": 10}], "download_size": 98096, "dataset_size": 1167}, {"config_name": "arithmetic__nearest_integer_root", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 991, "num_examples": 9}, {"name": "test", "num_bytes": 1132, "num_examples": 10}], "download_size": 98096, "dataset_size": 2123}, {"config_name": "arithmetic__simplify_surd", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1016, "num_examples": 9}, {"name": "test", "num_bytes": 1201, "num_examples": 10}], "download_size": 98096, "dataset_size": 2217}, {"config_name": "calculus__differentiate", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1157, "num_examples": 9}, {"name": "test", "num_bytes": 1212, "num_examples": 10}], "download_size": 98096, "dataset_size": 2369}, {"config_name": "calculus__differentiate_composed", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1443, "num_examples": 9}, {"name": "test", "num_bytes": 1743, "num_examples": 10}], "download_size": 98096, "dataset_size": 3186}, {"config_name": "comparison__closest", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 768, "num_examples": 9}, {"name": "test", "num_bytes": 930, "num_examples": 10}], "download_size": 98096, "dataset_size": 1698}, {"config_name": "comparison__closest_composed", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1309, "num_examples": 9}, {"name": "test", "num_bytes": 1418, "num_examples": 10}], "download_size": 98096, "dataset_size": 2727}, {"config_name": "comparison__kth_biggest", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 741, "num_examples": 9}, {"name": "test", "num_bytes": 945, "num_examples": 10}], "download_size": 98096, "dataset_size": 1686}, {"config_name": "comparison__kth_biggest_composed", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1084, "num_examples": 9}, {"name": "test", "num_bytes": 
1382, "num_examples": 10}], "download_size": 98096, "dataset_size": 2466}, {"config_name": "comparison__pair", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 533, "num_examples": 9}, {"name": "test", "num_bytes": 654, "num_examples": 10}], "download_size": 98096, "dataset_size": 1187}, {"config_name": "comparison__pair_composed", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1097, "num_examples": 9}, {"name": "test", "num_bytes": 1365, "num_examples": 10}], "download_size": 98096, "dataset_size": 2462}, {"config_name": "comparison__sort", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 673, "num_examples": 9}, {"name": "test", "num_bytes": 895, "num_examples": 10}], "download_size": 98096, "dataset_size": 1568}, {"config_name": "comparison__sort_composed", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1142, "num_examples": 9}, {"name": "test", "num_bytes": 1075, "num_examples": 10}], "download_size": 98096, "dataset_size": 2217}, {"config_name": "measurement__conversion", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 710, "num_examples": 9}, {"name": "test", "num_bytes": 756, "num_examples": 10}], "download_size": 98096, "dataset_size": 1466}, {"config_name": "measurement__time", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 627, "num_examples": 9}, {"name": "test", "num_bytes": 701, "num_examples": 10}], "download_size": 98096, "dataset_size": 1328}, {"config_name": "numbers__base_conversion", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 557, "num_examples": 9}, {"name": "test", "num_bytes": 642, "num_examples": 10}], "download_size": 98096, "dataset_size": 1199}, {"config_name": "numbers__div_remainder", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 652, "num_examples": 9}, {"name": "test", "num_bytes": 757, "num_examples": 10}], "download_size": 98096, "dataset_size": 1409}, {"config_name": "numbers__div_remainder_composed", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1081, "num_examples": 9}, {"name": "test", "num_bytes": 1358, "num_examples": 10}], "download_size": 98096, "dataset_size": 2439}, {"config_name": "numbers__gcd", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 610, "num_examples": 9}, {"name": "test", "num_bytes": 700, "num_examples": 10}], "download_size": 98096, "dataset_size": 1310}, {"config_name": "numbers__gcd_composed", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1239, "num_examples": 9}, {"name": "test", "num_bytes": 1567, "num_examples": 10}], "download_size": 98096, "dataset_size": 2806}, {"config_name": "numbers__is_factor", "features": [{"name": "question", "dtype": "string"}, {"name": 
"answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 544, "num_examples": 9}, {"name": "test", "num_bytes": 607, "num_examples": 10}], "download_size": 98096, "dataset_size": 1151}, {"config_name": "numbers__is_factor_composed", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1073, "num_examples": 9}, {"name": "test", "num_bytes": 1198, "num_examples": 10}], "download_size": 98096, "dataset_size": 2271}, {"config_name": "numbers__is_prime", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 570, "num_examples": 9}, {"name": "test", "num_bytes": 648, "num_examples": 10}], "download_size": 98096, "dataset_size": 1218}, {"config_name": "numbers__is_prime_composed", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1109, "num_examples": 9}, {"name": "test", "num_bytes": 1452, "num_examples": 10}], "download_size": 98096, "dataset_size": 2561}, {"config_name": "numbers__lcm", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 697, "num_examples": 9}, {"name": "test", "num_bytes": 807, "num_examples": 10}], "download_size": 98096, "dataset_size": 1504}, {"config_name": "numbers__lcm_composed", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1382, "num_examples": 9}, {"name": "test", "num_bytes": 1163, "num_examples": 10}], "download_size": 98096, "dataset_size": 2545}, {"config_name": "numbers__list_prime_factors", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 602, "num_examples": 9}, {"name": "test", "num_bytes": 715, "num_examples": 10}], "download_size": 98096, "dataset_size": 1317}, {"config_name": "numbers__list_prime_factors_composed", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1319, "num_examples": 9}, {"name": "test", "num_bytes": 1220, "num_examples": 10}], "download_size": 98096, "dataset_size": 2539}, {"config_name": "numbers__place_value", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 662, "num_examples": 9}, {"name": "test", "num_bytes": 780, "num_examples": 10}], "download_size": 98096, "dataset_size": 1442}, {"config_name": "numbers__place_value_composed", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1219, "num_examples": 9}, {"name": "test", "num_bytes": 1371, "num_examples": 10}], "download_size": 98096, "dataset_size": 2590}, {"config_name": "numbers__round_number", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 797, "num_examples": 9}, {"name": "test", "num_bytes": 844, "num_examples": 10}], "download_size": 98096, "dataset_size": 1641}, {"config_name": "numbers__round_number_composed", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1158, "num_examples": 9}, {"name": "test", "num_bytes": 1377, 
"num_examples": 10}], "download_size": 98096, "dataset_size": 2535}, {"config_name": "polynomials__add", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1419, "num_examples": 9}, {"name": "test", "num_bytes": 1540, "num_examples": 10}], "download_size": 98096, "dataset_size": 2959}, {"config_name": "polynomials__coefficient_named", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1163, "num_examples": 9}, {"name": "test", "num_bytes": 1441, "num_examples": 10}], "download_size": 98096, "dataset_size": 2604}, {"config_name": "polynomials__collect", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 742, "num_examples": 9}, {"name": "test", "num_bytes": 917, "num_examples": 10}], "download_size": 98096, "dataset_size": 1659}, {"config_name": "polynomials__compose", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1275, "num_examples": 9}, {"name": "test", "num_bytes": 1314, "num_examples": 10}], "download_size": 98096, "dataset_size": 2589}, {"config_name": "polynomials__evaluate", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 625, "num_examples": 9}, {"name": "test", "num_bytes": 706, "num_examples": 10}], "download_size": 98096, "dataset_size": 1331}, {"config_name": "polynomials__evaluate_composed", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1193, "num_examples": 9}, {"name": "test", "num_bytes": 1492, "num_examples": 10}], "download_size": 98096, "dataset_size": 2685}, {"config_name": "polynomials__expand", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 913, "num_examples": 9}, {"name": "test", "num_bytes": 1014, "num_examples": 10}], "download_size": 98096, "dataset_size": 1927}, {"config_name": "polynomials__simplify_power", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1155, "num_examples": 9}, {"name": "test", "num_bytes": 1481, "num_examples": 10}], "download_size": 98096, "dataset_size": 2636}, {"config_name": "probability__swr_p_level_set", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1541, "num_examples": 9}, {"name": "test", "num_bytes": 1766, "num_examples": 10}], "download_size": 98096, "dataset_size": 3307}, {"config_name": "probability__swr_p_sequence", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1619, "num_examples": 9}, {"name": "test", "num_bytes": 1781, "num_examples": 10}], "download_size": 98096, "dataset_size": 3400}]}
2023-12-15T19:12:32+00:00
[]
[]
TAGS #license-mit #region-us
To run the generation code within 'mathematics_dataset\mathematics_dataset\': - Activate the Python venv - Requirements are defined in URL - Run to generate the dataset to the directory \ds The encoding had to be changed to utf-8 when opening files so that some characters are allowed (ã õ é). To obtain the dataset with the correct number of rows: - python This dataset creates the train set (train-easy, train-medium, train-hard) and then creates extrapolation ("measure generalization along various axes of difficulty to beyond that seen during training") and interpolation ("test questions are distinct from the train questions") tests. On Hugging Face only the interpolation tests are used as the test set. Some tests will not work, since they rely on English terms.
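A minimal sketch of the utf-8 change described above; the file paths and helper names are illustrative assumptions, not taken from the generation code itself:

```python
# Sketch: read and write generated question files with an explicit
# encoding="utf-8" so accented characters (ã, õ, é) survive the round trip.
# The paths below are hypothetical examples.
from pathlib import Path

def load_lines(path: str) -> list[str]:
    # Without encoding="utf-8", the platform default encoding can
    # mangle non-ASCII characters on some systems (notably Windows).
    with open(path, "r", encoding="utf-8") as f:
        return [line.rstrip("\n") for line in f]

def save_lines(path: str, lines: list[str]) -> None:
    Path(path).parent.mkdir(parents=True, exist_ok=True)
    with open(path, "w", encoding="utf-8") as f:
        f.write("\n".join(lines) + "\n")

questions = load_lines("ds/train-easy/arithmetic__add_or_sub.txt")
```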
[]
[ "TAGS\n#license-mit #region-us \n" ]
[ 11 ]
[ "passage: TAGS\n#license-mit #region-us \n" ]
6d4e00a231a51711f5bfb03becdc9bac14b23a43
# Dataset Card for "five_shot_comparison" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
harpreetsahota/five_shot_comparison
[ "region:us" ]
2023-11-15T15:37:00+00:00
{"dataset_info": {"features": [{"name": "source", "dtype": "string"}, {"name": "target", "dtype": "string"}, {"name": "rationale", "dtype": "string"}, {"name": "task", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "decilm_generation", "dtype": "string"}, {"name": "mistral_generation", "dtype": "string"}, {"name": "falcon_generation", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 120687, "num_examples": 30}], "download_size": 60550, "dataset_size": 120687}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-15T15:37:03+00:00
[]
[]
TAGS #region-us
# Dataset Card for "five_shot_comparison" More Information needed
[ "# Dataset Card for \"five_shot_comparison\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"five_shot_comparison\"\n\nMore Information needed" ]
[ 6, 18 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"five_shot_comparison\"\n\nMore Information needed" ]
5a4ccf1af9a58945cdf8553ee11c5738445edcae
# Dataset Card for "qasimple" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
zaanind/qasimple
[ "region:us" ]
2023-11-15T15:46:24+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 18378, "num_examples": 25}], "download_size": 16549, "dataset_size": 18378}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-15T15:46:26+00:00
[]
[]
TAGS #region-us
# Dataset Card for "qasimple" More Information needed
[ "# Dataset Card for \"qasimple\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"qasimple\"\n\nMore Information needed" ]
[ 6, 12 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"qasimple\"\n\nMore Information needed" ]
0cec41593140f3c68a521f24a273a6246e2302c2
## Dataset for the competition This is the official dataset for the competition ["Find the Trojan: Universal Backdoor Detection in Aligned LLMs"](https://github.com/ethz-spylab/rlhf_trojan_competition) hosted at SaTML 2024. The dataset contains two splits: `train` and `test`. Participants should use the `train` split to execute their proposed methods and can use the `test` as a measure of how successful their search was. Competition prizes will be awarded based on performance on a **private test set**. See the [official competition website](https://github.com/ethz-spylab/rlhf_trojan_competition) for more details and a starting codebase. Competition organized by the [SPY Lab](https://spylab.ai) at ETH Zurich. This dataset is created from a split of [this Anthropic dataset](https://huggingface.co/datasets/Anthropic/hh-rlhf).
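For illustration only (not part of the official starting codebase), the two public splits could be loaded roughly like this; note the dataset is gated, so the terms must be accepted on the Hub first:

```python
# Sketch: load the public competition splits with the `datasets` library.
# Requires accepting the gating terms and logging in first, e.g. via
# `huggingface-cli login`.
from datasets import load_dataset

dataset = load_dataset("ethz-spylab/rlhf_trojan_dataset")
train, test = dataset["train"], dataset["test"]

# Develop trojan-detection methods on `train`; treat `test` only as a
# rough proxy, since prizes are scored on a private test set.
print(len(train), len(test))
```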
ethz-spylab/rlhf_trojan_dataset
[ "language:en", "region:us" ]
2023-11-15T15:58:19+00:00
{"language": ["en"], "extra_gated_prompt": "You acknowledge that generations from this model can be harmful, and that you will not use them beyond this competition. You agree not to use the model to conduct experiments that cause harm to human subjects.", "extra_gated_fields": {"I agree to use this model ONLY within the competition": "checkbox"}}
2023-11-20T08:34:04+00:00
[]
[ "en" ]
TAGS #language-English #region-us
## Dataset for the competition This is the official dataset for the competition "Find the Trojan: Universal Backdoor Detection in Aligned LLMs" hosted at SaTML 2024. The dataset contains two splits: 'train' and 'test'. Participants should use the 'train' split to execute their proposed methods and can use the 'test' as a measure of how successful their search was. Competition prizes will be awarded based on performance on a private test set. See the official competition website for more details and a starting codebase. Competition organized by the SPY Lab at ETH Zurich. This dataset is created from a split of this Anthropic dataset.
[ "## Dataset for the competition\n\nThis is the official dataset for the competition \"Find the Trojan: Universal Backdoor Detection in Aligned LLMs\" hosted at SaTML 2024.\n\nThe dataset contains two splits: 'train' and 'test'. Participants should use the 'train' split to execute their proposed methods and can use the 'test' as a measure of how successful their search was. Competition prizes will be awarded based on performance on a private test set.\n\nSee the official competition website for more details and a starting codebase.\n\nCompetition organized by the SPY Lab at ETH Zurich.\n\nThis dataset is created from a split of this Anthropic dataset." ]
[ "TAGS\n#language-English #region-us \n", "## Dataset for the competition\n\nThis is the official dataset for the competition \"Find the Trojan: Universal Backdoor Detection in Aligned LLMs\" hosted at SaTML 2024.\n\nThe dataset contains two splits: 'train' and 'test'. Participants should use the 'train' split to execute their proposed methods and can use the 'test' as a measure of how successful their search was. Competition prizes will be awarded based on performance on a private test set.\n\nSee the official competition website for more details and a starting codebase.\n\nCompetition organized by the SPY Lab at ETH Zurich.\n\nThis dataset is created from a split of this Anthropic dataset." ]
[ 10, 154 ]
[ "passage: TAGS\n#language-English #region-us \n## Dataset for the competition\n\nThis is the official dataset for the competition \"Find the Trojan: Universal Backdoor Detection in Aligned LLMs\" hosted at SaTML 2024.\n\nThe dataset contains two splits: 'train' and 'test'. Participants should use the 'train' split to execute their proposed methods and can use the 'test' as a measure of how successful their search was. Competition prizes will be awarded based on performance on a private test set.\n\nSee the official competition website for more details and a starting codebase.\n\nCompetition organized by the SPY Lab at ETH Zurich.\n\nThis dataset is created from a split of this Anthropic dataset." ]
53410e98a4b71cd706756d660a7511129d7c743f
<p align="center"><h1>🧠 Awesome ChatGPT Prompts [CSV dataset]</h1></p> This is a Dataset Repository of **Awesome ChatGPT Prompts** **[View All Prompts on GitHub](https://github.com/f/awesome-chatgpt-prompts)** # License CC-0
MustafaSuleyman/bypasswaf
[ "license:cc0-1.0", "ChatGPT", "region:us" ]
2023-11-15T16:16:40+00:00
{"license": "cc0-1.0", "tags": ["ChatGPT"]}
2023-11-15T16:35:34+00:00
[]
[]
TAGS #license-cc0-1.0 #ChatGPT #region-us
<p align="center"><h1> Awesome ChatGPT Prompts [CSV dataset]</h1></p> This is a Dataset Repository of Awesome ChatGPT Prompts View All Prompts on GitHub # License CC-0
[ "# License\n\nCC-0" ]
[ "TAGS\n#license-cc0-1.0 #ChatGPT #region-us \n", "# License\n\nCC-0" ]
[ 18, 4 ]
[ "passage: TAGS\n#license-cc0-1.0 #ChatGPT #region-us \n# License\n\nCC-0" ]
3d8aa0dedc9846fcb26e492ffdbd1f5606c0d465
# Dataset Card for "patent_v3.1_vnhsge" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
nguyenthanhdo/patent_v3.1_vnhsge
[ "region:us" ]
2023-11-15T16:18:10+00:00
{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "lang", "dtype": "string"}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 144358866, "num_examples": 105199}], "download_size": 82786635, "dataset_size": 144358866}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-15T16:18:21+00:00
[]
[]
TAGS #region-us
# Dataset Card for "patent_v3.1_vnhsge" More Information needed
[ "# Dataset Card for \"patent_v3.1_vnhsge\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"patent_v3.1_vnhsge\"\n\nMore Information needed" ]
[ 6, 20 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"patent_v3.1_vnhsge\"\n\nMore Information needed" ]
47e53666e2e9bf41ccde2b9b8b68b2daed9733ff
# Dataset Card for "codesearchnet-queries" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
jinaai/codesearchnet-queries
[ "region:us" ]
2023-11-15T16:18:20+00:00
{"dataset_info": {"features": [{"name": "code", "dtype": "string"}, {"name": "docs", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 87951858, "num_examples": 92561}], "download_size": 0, "dataset_size": 87951858}, "configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}]}
2023-11-15T16:19:08+00:00
[]
[]
TAGS #region-us
# Dataset Card for "codesearchnet-queries" More Information needed
[ "# Dataset Card for \"codesearchnet-queries\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"codesearchnet-queries\"\n\nMore Information needed" ]
[ 6, 16 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"codesearchnet-queries\"\n\nMore Information needed" ]
a74688a343f8d2bee4f9080895caf99859c190c0
# Dataset Card for "slim_pajama_chunk_1" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
rokset3/slim_pajama_chunk_1
[ "region:us" ]
2023-11-15T16:50:30+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "meta", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 259046000019, "num_examples": 58996336}], "download_size": 75400075203, "dataset_size": 259046000019}}
2023-11-15T18:39:33+00:00
[]
[]
TAGS #region-us
# Dataset Card for "slim_pajama_chunk_1" More Information needed
[ "# Dataset Card for \"slim_pajama_chunk_1\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"slim_pajama_chunk_1\"\n\nMore Information needed" ]
[ 6, 18 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"slim_pajama_chunk_1\"\n\nMore Information needed" ]
7db5bd2108e24ee8458b9e1c7da86fb69da616cc
# Dataset Card for Bengali Chat

We know that current English-first LLMs don’t work well for many other languages, in terms of performance, latency, and speed. Building instruction datasets for non-English languages is an important challenge that needs to be solved. To address this problem, I release 2 new datasets [rishiraj/bengalichat](https://huggingface.co/datasets/rishiraj/bengalichat/) & [rishiraj/hindichat](https://huggingface.co/datasets/rishiraj/hindichat/) of 10,000 instructions and demonstrations each. This data can be used for supervised fine-tuning (SFT) to make multilingual language models follow instructions better.

### Dataset Summary

[rishiraj/bengalichat](https://huggingface.co/datasets/rishiraj/bengalichat/) was modelled after the instruction dataset described in OpenAI's [InstructGPT paper](https://huggingface.co/papers/2203.02155), and is translated from [HuggingFaceH4/no_robots](https://huggingface.co/datasets/HuggingFaceH4/no_robots/), which comprises mostly single-turn instructions across the following categories:

| Category   | Count |
|:-----------|--------:|
| Generation | 4560 |
| Open QA    | 1240 |
| Brainstorm | 1120 |
| Chat       | 850 |
| Rewrite    | 660 |
| Summarize  | 420 |
| Coding     | 350 |
| Classify   | 350 |
| Closed QA  | 260 |
| Extract    | 190 |

### Languages

The data in [rishiraj/bengalichat](https://huggingface.co/datasets/rishiraj/bengalichat/) are in Bengali (BCP-47 bn).

### Data Fields

The data fields are as follows:

* `prompt`: Describes the task the model should perform.
* `prompt_id`: A unique ID for the prompt.
* `messages`: An array of messages, where each message indicates the role (system, user, assistant) and the content.
* `category`: Which category the example belongs to (e.g. `Chat` or `Coding`).
* `text`: Content of `messages` in a format that is compatible with dataset_text_field of SFTTrainer.

### Data Splits

|               | train_sft | test_sft |
|---------------|------:| ---: |
| bengalichat   | 9500 | 500 |

### Licensing Information

The dataset is available under the [Creative Commons NonCommercial (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode).

### Citation Information

```
@misc{bengalichat,
  author = {Rishiraj Acharya},
  title = {Bengali Chat},
  year = {2023},
  publisher = {Hugging Face},
  journal = {Hugging Face repository},
  howpublished = {\url{https://huggingface.co/datasets/rishiraj/bengalichat}}
}
```
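Since the `text` column is pre-formatted for the `dataset_text_field` argument of TRL's SFTTrainer, a minimal hedged sketch of that usage might look as follows; the base model and hyperparameters are placeholders, and the exact SFTTrainer signature depends on the TRL version:

```python
# Sketch: supervised fine-tuning on the pre-formatted `text` column with
# TRL's SFTTrainer. Model name and hyperparameters are placeholders.
from datasets import load_dataset
from trl import SFTTrainer

ds = load_dataset("rishiraj/bengalichat")

trainer = SFTTrainer(
    model="facebook/xglm-564M",   # placeholder multilingual base model
    train_dataset=ds["train"],
    eval_dataset=ds["test"],
    dataset_text_field="text",    # column prepared by this dataset
    max_seq_length=1024,
)
trainer.train()
```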
rishiraj/bengalichat
[ "task_categories:conversational", "task_categories:text-generation", "language:bn", "license:cc-by-nc-4.0", "arxiv:2203.02155", "region:us" ]
2023-11-15T17:58:04+00:00
{"language": ["bn"], "license": "cc-by-nc-4.0", "task_categories": ["conversational", "text-generation"], "pretty_name": "Bengali Chat", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "prompt_id", "dtype": "string"}, {"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "category", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 66596881, "num_examples": 9500}, {"name": "test", "num_bytes": 3573980, "num_examples": 500}], "download_size": 27678311, "dataset_size": 70170861}}
2023-11-16T09:14:55+00:00
[ "2203.02155" ]
[ "bn" ]
TAGS #task_categories-conversational #task_categories-text-generation #language-Bengali #license-cc-by-nc-4.0 #arxiv-2203.02155 #region-us
Dataset Card for Bengali Chat
=============================

We know that current English-first LLMs don’t work well for many other languages, in terms of performance, latency, and speed. Building instruction datasets for non-English languages is an important challenge that needs to be solved. To address this problem, I release 2 new datasets rishiraj/bengalichat & rishiraj/hindichat of 10,000 instructions and demonstrations each. This data can be used for supervised fine-tuning (SFT) to make multilingual language models follow instructions better.

### Dataset Summary

rishiraj/bengalichat was modelled after the instruction dataset described in OpenAI's InstructGPT paper, and is translated from HuggingFaceH4/no\_robots, which comprises mostly single-turn instructions across the following categories:

### Languages

The data in rishiraj/bengalichat are in Bengali (BCP-47 bn).

### Data Fields

The data fields are as follows:

* 'prompt': Describes the task the model should perform.
* 'prompt\_id': A unique ID for the prompt.
* 'messages': An array of messages, where each message indicates the role (system, user, assistant) and the content.
* 'category': Which category the example belongs to (e.g. 'Chat' or 'Coding').
* 'text': Content of 'messages' in a format that is compatible with dataset\_text\_field of SFTTrainer.

### Data Splits

### Licensing Information

The dataset is available under the Creative Commons NonCommercial (CC BY-NC 4.0).
[ "### Dataset Summary\n\n\nrishiraj/bengalichat was modelled after the instruction dataset described in OpenAI's InstructGPT paper, and is translated from HuggingFaceH4/no\\_robots which comprised mostly of single-turn instructions across the following categories:", "### Languages\n\n\nThe data in rishiraj/bengalichat are in Bengali (BCP-47 bn).", "### Data Fields\n\n\nThe data fields are as follows:\n\n\n* 'prompt': Describes the task the model should perform.\n* 'prompt\\_id': A unique ID for the prompt.\n* 'messages': An array of messages, where each message indicates the role (system, user, assistant) and the content.\n* 'category': Which category the example belongs to (e.g. 'Chat' or 'Coding').\n* 'text': Content of 'messages' in a format that is compatible with dataset\\_text\\_field of SFTTrainer.", "### Data Splits", "### Licensing Information\n\n\nThe dataset is available under the Creative Commons NonCommercial (CC BY-NC 4.0)." ]
[ "TAGS\n#task_categories-conversational #task_categories-text-generation #language-Bengali #license-cc-by-nc-4.0 #arxiv-2203.02155 #region-us \n", "### Dataset Summary\n\n\nrishiraj/bengalichat was modelled after the instruction dataset described in OpenAI's InstructGPT paper, and is translated from HuggingFaceH4/no\\_robots which comprised mostly of single-turn instructions across the following categories:", "### Languages\n\n\nThe data in rishiraj/bengalichat are in Bengali (BCP-47 bn).", "### Data Fields\n\n\nThe data fields are as follows:\n\n\n* 'prompt': Describes the task the model should perform.\n* 'prompt\\_id': A unique ID for the prompt.\n* 'messages': An array of messages, where each message indicates the role (system, user, assistant) and the content.\n* 'category': Which category the example belongs to (e.g. 'Chat' or 'Coding').\n* 'text': Content of 'messages' in a format that is compatible with dataset\\_text\\_field of SFTTrainer.", "### Data Splits", "### Licensing Information\n\n\nThe dataset is available under the Creative Commons NonCommercial (CC BY-NC 4.0)." ]
[ 51, 67, 24, 139, 5, 26 ]
[ "passage: TAGS\n#task_categories-conversational #task_categories-text-generation #language-Bengali #license-cc-by-nc-4.0 #arxiv-2203.02155 #region-us \n### Dataset Summary\n\n\nrishiraj/bengalichat was modelled after the instruction dataset described in OpenAI's InstructGPT paper, and is translated from HuggingFaceH4/no\\_robots which comprised mostly of single-turn instructions across the following categories:### Languages\n\n\nThe data in rishiraj/bengalichat are in Bengali (BCP-47 bn).### Data Fields\n\n\nThe data fields are as follows:\n\n\n* 'prompt': Describes the task the model should perform.\n* 'prompt\\_id': A unique ID for the prompt.\n* 'messages': An array of messages, where each message indicates the role (system, user, assistant) and the content.\n* 'category': Which category the example belongs to (e.g. 'Chat' or 'Coding').\n* 'text': Content of 'messages' in a format that is compatible with dataset\\_text\\_field of SFTTrainer.### Data Splits### Licensing Information\n\n\nThe dataset is available under the Creative Commons NonCommercial (CC BY-NC 4.0)." ]
fbb1f7b22b8724c020b8a068f65463a56f2932eb
# Dataset Card for "code_search_net_clean" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
jinaai/code_search_net_clean
[ "region:us" ]
2023-11-15T17:58:52+00:00
{"dataset_info": {"features": [{"name": "doc", "dtype": "string"}, {"name": "code", "dtype": "string"}], "splits": [{"name": "train.go", "num_bytes": 196497807, "num_examples": 311516}, {"name": "train.java", "num_bytes": 404530092, "num_examples": 417492}, {"name": "train.javascript", "num_bytes": 128940104, "num_examples": 109037}, {"name": "train.php", "num_bytes": 414721432, "num_examples": 466044}, {"name": "train.python", "num_bytes": 415251838, "num_examples": 389480}, {"name": "train.ruby", "num_bytes": 36223265, "num_examples": 45902}, {"name": "test.go", "num_bytes": 8383012, "num_examples": 14079}, {"name": "test.java", "num_bytes": 22856950, "num_examples": 24246}, {"name": "test.javascript", "num_bytes": 6602003, "num_examples": 5706}, {"name": "test.php", "num_bytes": 21589553, "num_examples": 25139}, {"name": "test.python", "num_bytes": 22653368, "num_examples": 21066}, {"name": "test.ruby", "num_bytes": 1652983, "num_examples": 2085}, {"name": "validation.go", "num_bytes": 6844448, "num_examples": 14051}, {"name": "validation.java", "num_bytes": 12179837, "num_examples": 14179}, {"name": "validation.javascript", "num_bytes": 7791459, "num_examples": 6995}, {"name": "validation.php", "num_bytes": 20661926, "num_examples": 22931}, {"name": "validation.python", "num_bytes": 24904263, "num_examples": 21955}, {"name": "validation.ruby", "num_bytes": 1504017, "num_examples": 2110}], "download_size": 632494450, "dataset_size": 1753788357}, "configs": [{"config_name": "default", "data_files": [{"split": "train.go", "path": "data/train.go-*"}, {"split": "train.java", "path": "data/train.java-*"}, {"split": "train.javascript", "path": "data/train.javascript-*"}, {"split": "train.php", "path": "data/train.php-*"}, {"split": "train.python", "path": "data/train.python-*"}, {"split": "train.ruby", "path": "data/train.ruby-*"}, {"split": "test.go", "path": "data/test.go-*"}, {"split": "test.java", "path": "data/test.java-*"}, {"split": "test.javascript", "path": "data/test.javascript-*"}, {"split": "test.php", "path": "data/test.php-*"}, {"split": "test.python", "path": "data/test.python-*"}, {"split": "test.ruby", "path": "data/test.ruby-*"}, {"split": "validation.go", "path": "data/validation.go-*"}, {"split": "validation.java", "path": "data/validation.java-*"}, {"split": "validation.javascript", "path": "data/validation.javascript-*"}, {"split": "validation.php", "path": "data/validation.php-*"}, {"split": "validation.python", "path": "data/validation.python-*"}, {"split": "validation.ruby", "path": "data/validation.ruby-*"}]}]}
2023-12-11T10:42:18+00:00
[]
[]
TAGS #region-us
# Dataset Card for "code_search_net_clean" More Information needed
[ "# Dataset Card for \"code_search_net_clean\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"code_search_net_clean\"\n\nMore Information needed" ]
[ 6, 18 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"code_search_net_clean\"\n\nMore Information needed" ]
c0aa01cb06a356d0727d1c344cf98f502c8b51f1
# Dataset Card for "nq_open-validation" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
seonglae/nq_open-validation
[ "region:us" ]
2023-11-15T18:04:11+00:00
{"dataset_info": [{"config_name": "psgs_w100.dpr_nq.10_dpr-reader-single-nq-base", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "sequence": "string"}, {"name": "retrieved", "dtype": "string"}, {"name": "summary", "dtype": "null"}, {"name": "predicted", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 22461013, "num_examples": 3610}], "download_size": 13069117, "dataset_size": 22461013}, {"config_name": "psgs_w100.dpr_nq.10_gpt-3.5-turbo", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "sequence": "string"}, {"name": "retrieved", "dtype": "string"}, {"name": "summary", "dtype": "null"}, {"name": "predicted", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 22467109, "num_examples": 3610}], "download_size": 13086005, "dataset_size": 22467109}, {"config_name": "psgs_w100.dpr_nq.10_gpt-3.5-turbo-0613.v14.1_gpt-3.5-turbo-0613", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "sequence": "string"}, {"name": "retrieved", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "predicted", "dtype": "string"}, {"name": "score", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 2087880, "num_examples": 310}], "download_size": 1237537, "dataset_size": 2087880}, {"config_name": "psgs_w100.dpr_nq.10_gpt-3.5-turbo.v10.1_gpt-3.5-turbo", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "sequence": "string"}, {"name": "retrieved", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "predicted", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 26861126, "num_examples": 3610}], "download_size": 15725766, "dataset_size": 26861126}, {"config_name": "psgs_w100.dpr_nq.10_gpt-3.5-turbo.v11.1_gpt-3.5-turbo", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "sequence": "string"}, {"name": "retrieved", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "predicted", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 26223973, "num_examples": 3610}], "download_size": 15352225, "dataset_size": 26223973}, {"config_name": "psgs_w100.dpr_nq.10_gpt-3.5-turbo.v12.1_gpt-3.5-turbo", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "sequence": "string"}, {"name": "retrieved", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "predicted", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 25939432, "num_examples": 3610}], "download_size": 15190249, "dataset_size": 25939432}, {"config_name": "psgs_w100.dpr_nq.10_gpt-3.5-turbo.v13.1_gpt-3.5-turbo", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "sequence": "string"}, {"name": "retrieved", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "predicted", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 24433085, "num_examples": 3610}], "download_size": 14325258, "dataset_size": 24433085}, {"config_name": "psgs_w100.dpr_nq.10_gpt-3.5-turbo.v14.1_gpt-3.5-turbo", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "sequence": "string"}, {"name": "retrieved", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "predicted", "dtype": "string"}, {"name": "score", "dtype": 
"int64"}], "splits": [{"name": "train", "num_bytes": 2094932, "num_examples": 310}], "download_size": 1241315, "dataset_size": 2094932}, {"config_name": "psgs_w100.dpr_nq.10_gpt-3.5-turbo.v2.1_gpt-3.5-turbo", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "sequence": "string"}, {"name": "retrieved", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "predicted", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 26544687, "num_examples": 3610}], "download_size": 15437864, "dataset_size": 26544687}, {"config_name": "psgs_w100.dpr_nq.10_gpt-3.5-turbo.v4.1_gpt-3.5-turbo", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "sequence": "string"}, {"name": "retrieved", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "predicted", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 23377632, "num_examples": 3610}], "download_size": 13650697, "dataset_size": 23377632}, {"config_name": "psgs_w100.dpr_nq.10_gpt-3.5-turbo.v5.1_gpt-3.5-turbo", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "sequence": "string"}, {"name": "retrieved", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "predicted", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 26856985, "num_examples": 3610}], "download_size": 15278785, "dataset_size": 26856985}, {"config_name": "psgs_w100.dpr_nq.10_gpt-3.5-turbo.v6.1_gpt-3.5-turbo", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "sequence": "string"}, {"name": "retrieved", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "predicted", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 25239381, "num_examples": 3610}], "download_size": 14754952, "dataset_size": 25239381}, {"config_name": "psgs_w100.dpr_nq.10_gpt-3.5-turbo.v7.1_gpt-3.5-turbo", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "sequence": "string"}, {"name": "retrieved", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "predicted", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 25848679, "num_examples": 3610}], "download_size": 15113242, "dataset_size": 25848679}, {"config_name": "psgs_w100.dpr_nq.10_gpt-3.5-turbo.v8.1_gpt-3.5-turbo", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "sequence": "string"}, {"name": "retrieved", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "predicted", "dtype": "string"}, {"name": "score", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 2390519, "num_examples": 310}], "download_size": 1407118, "dataset_size": 2390519}, {"config_name": "psgs_w100.dpr_nq.10_gpt-3.5-turbo.v9.1_gpt-3.5-turbo", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "sequence": "string"}, {"name": "retrieved", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "predicted", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 26517453, "num_examples": 3610}], "download_size": 15506895, "dataset_size": 26517453}, {"config_name": "psgs_w100.dpr_nq.10_gpt-3.5-turbo.v9.1_gpt-3.5-turbo^2", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "sequence": "string"}, {"name": 
"retrieved", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "predicted", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 6627672, "num_examples": 3610}], "download_size": 4071551, "dataset_size": 6627672}, {"config_name": "psgs_w100.dpr_nq.10_lsg-bart-base-4096-booksum.1_dpr-reader-single-nq-base", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "sequence": "string"}, {"name": "retrieved", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "predicted", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 29791480, "num_examples": 3610}], "download_size": 15853782, "dataset_size": 29791480}, {"config_name": "psgs_w100.dpr_nq.10_resrer-bart-base.1_dpr-reader-single-nq-base", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "sequence": "string"}, {"name": "retrieved", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "predicted", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 28798929, "num_examples": 3610}], "download_size": 17159760, "dataset_size": 28798929}, {"config_name": "psgs_w100.dpr_nq.10_resrer-bart-base.1_gpt-3.5-turbo", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "sequence": "string"}, {"name": "retrieved", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "predicted", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 28807451, "num_examples": 3610}], "download_size": 17141310, "dataset_size": 28807451}, {"config_name": "psgs_w100.dpr_nq.10_resrer-bart.1_dpr-reader-single-nq-base", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "sequence": "string"}, {"name": "retrieved", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "predicted", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 28152955, "num_examples": 3610}], "download_size": 16764384, "dataset_size": 28152955}, {"config_name": "psgs_w100.dpr_nq.10_resrer-pegasus-x.1_dpr-reader-single-nq-base", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "sequence": "string"}, {"name": "retrieved", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "predicted", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 30337257, "num_examples": 3610}], "download_size": 17579283, "dataset_size": 30337257}, {"config_name": "psgs_w100.dpr_nq.16_gpt-3.5-turbo", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "sequence": "string"}, {"name": "retrieved", "dtype": "string"}, {"name": "summary", "dtype": "null"}, {"name": "predicted", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 35717523, "num_examples": 3610}], "download_size": 20586030, "dataset_size": 35717523}, {"config_name": "psgs_w100.dpr_nq.16_gpt-3.5-turbo-1106", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "sequence": "string"}, {"name": "retrieved", "dtype": "string"}, {"name": "summary", "dtype": "null"}, {"name": "predicted", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 35725719, "num_examples": 3610}], "download_size": 20591943, "dataset_size": 35725719}, {"config_name": 
"psgs_w100.dpr_nq.16_gpt-3.5-turbo.1_gpt-3.5-turbo", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "sequence": "string"}, {"name": "retrieved", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "predicted", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 39715501, "num_examples": 3610}], "download_size": 22897675, "dataset_size": 39715501}, {"config_name": "psgs_w100.dpr_nq.1_dpr-reader-single-nq-base", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "sequence": "string"}, {"name": "retrieved", "dtype": "string"}, {"name": "summary", "dtype": "null"}, {"name": "predicted", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 2611003, "num_examples": 3610}], "download_size": 1750992, "dataset_size": 2611003}, {"config_name": "psgs_w100.dpr_nq.1_gpt-3.5-turbo", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "sequence": "string"}, {"name": "retrieved", "dtype": "string"}, {"name": "summary", "dtype": "null"}, {"name": "predicted", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 2610265, "num_examples": 3610}], "download_size": 1722596, "dataset_size": 2610265}, {"config_name": "psgs_w100.dpr_nq.1_gpt-3.5-turbo-1106", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "sequence": "string"}, {"name": "retrieved", "dtype": "string"}, {"name": "summary", "dtype": "null"}, {"name": "predicted", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 2615340, "num_examples": 3610}], "download_size": 1725535, "dataset_size": 2615340}, {"config_name": "psgs_w100.dpr_nq.1_gpt-3.5-turbo.1_gpt-3.5-turbo", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "sequence": "string"}, {"name": "retrieved", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "predicted", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 3942936, "num_examples": 3610}], "download_size": 2560707, "dataset_size": 3942936}, {"config_name": "psgs_w100.dpr_nq.2_gpt-3.5-turbo", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "sequence": "string"}, {"name": "retrieved", "dtype": "string"}, {"name": "summary", "dtype": "null"}, {"name": "predicted", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 4818524, "num_examples": 3610}], "download_size": 3003043, "dataset_size": 4818524}, {"config_name": "psgs_w100.dpr_nq.2_gpt-3.5-turbo-1106", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "sequence": "string"}, {"name": "retrieved", "dtype": "string"}, {"name": "summary", "dtype": "null"}, {"name": "predicted", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 4821928, "num_examples": 3610}], "download_size": 3005331, "dataset_size": 4821928}, {"config_name": "psgs_w100.dpr_nq.2_gpt-3.5-turbo.1_gpt-3.5-turbo", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "sequence": "string"}, {"name": "retrieved", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "predicted", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 7309654, "num_examples": 3610}], "download_size": 4513358, "dataset_size": 
7309654}, {"config_name": "psgs_w100.dpr_nq.32_gpt-3.5-turbo-1106", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "sequence": "string"}, {"name": "retrieved", "dtype": "string"}, {"name": "summary", "dtype": "null"}, {"name": "predicted", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 71070919, "num_examples": 3610}], "download_size": 40481294, "dataset_size": 71070919}, {"config_name": "psgs_w100.dpr_nq.4_gpt-3.5-turbo", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "sequence": "string"}, {"name": "retrieved", "dtype": "string"}, {"name": "summary", "dtype": "null"}, {"name": "predicted", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 1559766, "num_examples": 610}], "download_size": 949226, "dataset_size": 1559766}, {"config_name": "psgs_w100.dpr_nq.4_gpt-3.5-turbo-1106", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "sequence": "string"}, {"name": "retrieved", "dtype": "string"}, {"name": "summary", "dtype": "null"}, {"name": "predicted", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 9233569, "num_examples": 3610}], "download_size": 5573719, "dataset_size": 9233569}, {"config_name": "psgs_w100.dpr_nq.4_gpt-3.5-turbo-1106.1_gpt-3.5-turbo-1106", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "sequence": "string"}, {"name": "retrieved", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "predicted", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 11235173, "num_examples": 3610}], "download_size": 6867851, "dataset_size": 11235173}, {"config_name": "psgs_w100.dpr_nq.4_gpt-3.5-turbo.1_gpt-3.5-turbo", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "sequence": "string"}, {"name": "retrieved", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "predicted", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 11993878, "num_examples": 3610}], "download_size": 7286826, "dataset_size": 11993878}, {"config_name": "psgs_w100.dpr_nq.4_resrer-bart-sep.1_dpr-reader-single-nq-base", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "sequence": "string"}, {"name": "retrieved", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "predicted", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 14905001, "num_examples": 3610}], "download_size": 9271126, "dataset_size": 14905001}, {"config_name": "psgs_w100.dpr_nq.4_resrer-bart.1_dpr-reader-single-nq-base", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "sequence": "string"}, {"name": "retrieved", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "predicted", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 14523768, "num_examples": 3610}], "download_size": 9011695, "dataset_size": 14523768}, {"config_name": "psgs_w100.dpr_nq.8_gpt-3.5-turbo", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "sequence": "string"}, {"name": "retrieved", "dtype": "string"}, {"name": "summary", "dtype": "null"}, {"name": "predicted", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 
18057144, "num_examples": 3610}], "download_size": 10569661, "dataset_size": 18057144}, {"config_name": "psgs_w100.dpr_nq.8_gpt-3.5-turbo-1106", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "sequence": "string"}, {"name": "retrieved", "dtype": "string"}, {"name": "summary", "dtype": "null"}, {"name": "predicted", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 18064056, "num_examples": 3610}], "download_size": 10575936, "dataset_size": 18064056}, {"config_name": "psgs_w100.dpr_nq.8_gpt-3.5-turbo.1_gpt-3.5-turbo", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "sequence": "string"}, {"name": "retrieved", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "predicted", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 22193361, "num_examples": 3610}], "download_size": 12980789, "dataset_size": 22193361}], "configs": [{"config_name": "psgs_w100.dpr_nq.10_dpr-reader-single-nq-base", "data_files": [{"split": "train", "path": "psgs_w100.dpr_nq.10_dpr-reader-single-nq-base/train-*"}]}, {"config_name": "psgs_w100.dpr_nq.10_gpt-3.5-turbo", "data_files": [{"split": "train", "path": "psgs_w100.dpr_nq.10_gpt-3.5-turbo/train-*"}]}, {"config_name": "psgs_w100.dpr_nq.10_gpt-3.5-turbo-0613.v14.1_gpt-3.5-turbo-0613", "data_files": [{"split": "train", "path": "psgs_w100.dpr_nq.10_gpt-3.5-turbo-0613.v14.1_gpt-3.5-turbo-0613/train-*"}]}, {"config_name": "psgs_w100.dpr_nq.10_gpt-3.5-turbo.v10.1_gpt-3.5-turbo", "data_files": [{"split": "train", "path": "psgs_w100.dpr_nq.10_gpt-3.5-turbo.v10.1_gpt-3.5-turbo/train-*"}]}, {"config_name": "psgs_w100.dpr_nq.10_gpt-3.5-turbo.v11.1_gpt-3.5-turbo", "data_files": [{"split": "train", "path": "psgs_w100.dpr_nq.10_gpt-3.5-turbo.v11.1_gpt-3.5-turbo/train-*"}]}, {"config_name": "psgs_w100.dpr_nq.10_gpt-3.5-turbo.v12.1_gpt-3.5-turbo", "data_files": [{"split": "train", "path": "psgs_w100.dpr_nq.10_gpt-3.5-turbo.v12.1_gpt-3.5-turbo/train-*"}]}, {"config_name": "psgs_w100.dpr_nq.10_gpt-3.5-turbo.v13.1_gpt-3.5-turbo", "data_files": [{"split": "train", "path": "psgs_w100.dpr_nq.10_gpt-3.5-turbo.v13.1_gpt-3.5-turbo/train-*"}]}, {"config_name": "psgs_w100.dpr_nq.10_gpt-3.5-turbo.v14.1_gpt-3.5-turbo", "data_files": [{"split": "train", "path": "psgs_w100.dpr_nq.10_gpt-3.5-turbo.v14.1_gpt-3.5-turbo/train-*"}]}, {"config_name": "psgs_w100.dpr_nq.10_gpt-3.5-turbo.v2.1_gpt-3.5-turbo", "data_files": [{"split": "train", "path": "psgs_w100.dpr_nq.10_gpt-3.5-turbo.v2.1_gpt-3.5-turbo/train-*"}]}, {"config_name": "psgs_w100.dpr_nq.10_gpt-3.5-turbo.v4.1_gpt-3.5-turbo", "data_files": [{"split": "train", "path": "psgs_w100.dpr_nq.10_gpt-3.5-turbo.v4.1_gpt-3.5-turbo/train-*"}]}, {"config_name": "psgs_w100.dpr_nq.10_gpt-3.5-turbo.v5.1_gpt-3.5-turbo", "data_files": [{"split": "train", "path": "psgs_w100.dpr_nq.10_gpt-3.5-turbo.v5.1_gpt-3.5-turbo/train-*"}]}, {"config_name": "psgs_w100.dpr_nq.10_gpt-3.5-turbo.v6.1_gpt-3.5-turbo", "data_files": [{"split": "train", "path": "psgs_w100.dpr_nq.10_gpt-3.5-turbo.v6.1_gpt-3.5-turbo/train-*"}]}, {"config_name": "psgs_w100.dpr_nq.10_gpt-3.5-turbo.v7.1_gpt-3.5-turbo", "data_files": [{"split": "train", "path": "psgs_w100.dpr_nq.10_gpt-3.5-turbo.v7.1_gpt-3.5-turbo/train-*"}]}, {"config_name": "psgs_w100.dpr_nq.10_gpt-3.5-turbo.v8.1_gpt-3.5-turbo", "data_files": [{"split": "train", "path": "psgs_w100.dpr_nq.10_gpt-3.5-turbo.v8.1_gpt-3.5-turbo/train-*"}]}, {"config_name": 
"psgs_w100.dpr_nq.10_gpt-3.5-turbo.v9.1_gpt-3.5-turbo", "data_files": [{"split": "train", "path": "psgs_w100.dpr_nq.10_gpt-3.5-turbo.v9.1_gpt-3.5-turbo/train-*"}]}, {"config_name": "psgs_w100.dpr_nq.10_gpt-3.5-turbo.v9.1_gpt-3.5-turbo^2", "data_files": [{"split": "train", "path": "psgs_w100.dpr_nq.10_gpt-3.5-turbo.v9.1_gpt-3.5-turbo^2/train-*"}]}, {"config_name": "psgs_w100.dpr_nq.10_lsg-bart-base-4096-booksum.1_dpr-reader-single-nq-base", "data_files": [{"split": "train", "path": "psgs_w100.dpr_nq.10_lsg-bart-base-4096-booksum.1_dpr-reader-single-nq-base/train-*"}]}, {"config_name": "psgs_w100.dpr_nq.10_resrer-bart-base.1_dpr-reader-single-nq-base", "data_files": [{"split": "train", "path": "psgs_w100.dpr_nq.10_resrer-bart-base.1_dpr-reader-single-nq-base/train-*"}]}, {"config_name": "psgs_w100.dpr_nq.10_resrer-bart-base.1_gpt-3.5-turbo", "data_files": [{"split": "train", "path": "psgs_w100.dpr_nq.10_resrer-bart-base.1_gpt-3.5-turbo/train-*"}]}, {"config_name": "psgs_w100.dpr_nq.10_resrer-bart.1_dpr-reader-single-nq-base", "data_files": [{"split": "train", "path": "psgs_w100.dpr_nq.10_resrer-bart.1_dpr-reader-single-nq-base/train-*"}]}, {"config_name": "psgs_w100.dpr_nq.10_resrer-pegasus-x.1_dpr-reader-single-nq-base", "data_files": [{"split": "train", "path": "psgs_w100.dpr_nq.10_resrer-pegasus-x.1_dpr-reader-single-nq-base/train-*"}]}, {"config_name": "psgs_w100.dpr_nq.16_gpt-3.5-turbo", "data_files": [{"split": "train", "path": "psgs_w100.dpr_nq.16_gpt-3.5-turbo/train-*"}]}, {"config_name": "psgs_w100.dpr_nq.16_gpt-3.5-turbo-1106", "data_files": [{"split": "train", "path": "psgs_w100.dpr_nq.16_gpt-3.5-turbo-1106/train-*"}]}, {"config_name": "psgs_w100.dpr_nq.16_gpt-3.5-turbo.1_gpt-3.5-turbo", "data_files": [{"split": "train", "path": "psgs_w100.dpr_nq.16_gpt-3.5-turbo.1_gpt-3.5-turbo/train-*"}]}, {"config_name": "psgs_w100.dpr_nq.1_dpr-reader-single-nq-base", "data_files": [{"split": "train", "path": "psgs_w100.dpr_nq.1_dpr-reader-single-nq-base/train-*"}]}, {"config_name": "psgs_w100.dpr_nq.1_gpt-3.5-turbo", "data_files": [{"split": "train", "path": "psgs_w100.dpr_nq.1_gpt-3.5-turbo/train-*"}]}, {"config_name": "psgs_w100.dpr_nq.1_gpt-3.5-turbo-1106", "data_files": [{"split": "train", "path": "psgs_w100.dpr_nq.1_gpt-3.5-turbo-1106/train-*"}]}, {"config_name": "psgs_w100.dpr_nq.1_gpt-3.5-turbo.1_gpt-3.5-turbo", "data_files": [{"split": "train", "path": "psgs_w100.dpr_nq.1_gpt-3.5-turbo.1_gpt-3.5-turbo/train-*"}]}, {"config_name": "psgs_w100.dpr_nq.2_gpt-3.5-turbo", "data_files": [{"split": "train", "path": "psgs_w100.dpr_nq.2_gpt-3.5-turbo/train-*"}]}, {"config_name": "psgs_w100.dpr_nq.2_gpt-3.5-turbo-1106", "data_files": [{"split": "train", "path": "psgs_w100.dpr_nq.2_gpt-3.5-turbo-1106/train-*"}]}, {"config_name": "psgs_w100.dpr_nq.2_gpt-3.5-turbo.1_gpt-3.5-turbo", "data_files": [{"split": "train", "path": "psgs_w100.dpr_nq.2_gpt-3.5-turbo.1_gpt-3.5-turbo/train-*"}]}, {"config_name": "psgs_w100.dpr_nq.32_gpt-3.5-turbo-1106", "data_files": [{"split": "train", "path": "psgs_w100.dpr_nq.32_gpt-3.5-turbo-1106/train-*"}]}, {"config_name": "psgs_w100.dpr_nq.4_gpt-3.5-turbo", "data_files": [{"split": "train", "path": "psgs_w100.dpr_nq.4_gpt-3.5-turbo/train-*"}]}, {"config_name": "psgs_w100.dpr_nq.4_gpt-3.5-turbo-1106", "data_files": [{"split": "train", "path": "psgs_w100.dpr_nq.4_gpt-3.5-turbo-1106/train-*"}]}, {"config_name": "psgs_w100.dpr_nq.4_gpt-3.5-turbo-1106.1_gpt-3.5-turbo-1106", "data_files": [{"split": "train", "path": 
"psgs_w100.dpr_nq.4_gpt-3.5-turbo-1106.1_gpt-3.5-turbo-1106/train-*"}]}, {"config_name": "psgs_w100.dpr_nq.4_gpt-3.5-turbo.1_gpt-3.5-turbo", "data_files": [{"split": "train", "path": "psgs_w100.dpr_nq.4_gpt-3.5-turbo.1_gpt-3.5-turbo/train-*"}]}, {"config_name": "psgs_w100.dpr_nq.4_resrer-bart-sep.1_dpr-reader-single-nq-base", "data_files": [{"split": "train", "path": "psgs_w100.dpr_nq.4_resrer-bart-sep.1_dpr-reader-single-nq-base/train-*"}]}, {"config_name": "psgs_w100.dpr_nq.4_resrer-bart.1_dpr-reader-single-nq-base", "data_files": [{"split": "train", "path": "psgs_w100.dpr_nq.4_resrer-bart.1_dpr-reader-single-nq-base/train-*"}]}, {"config_name": "psgs_w100.dpr_nq.8_gpt-3.5-turbo", "data_files": [{"split": "train", "path": "psgs_w100.dpr_nq.8_gpt-3.5-turbo/train-*"}]}, {"config_name": "psgs_w100.dpr_nq.8_gpt-3.5-turbo-1106", "data_files": [{"split": "train", "path": "psgs_w100.dpr_nq.8_gpt-3.5-turbo-1106/train-*"}]}, {"config_name": "psgs_w100.dpr_nq.8_gpt-3.5-turbo.1_gpt-3.5-turbo", "data_files": [{"split": "train", "path": "psgs_w100.dpr_nq.8_gpt-3.5-turbo.1_gpt-3.5-turbo/train-*"}]}]}
2024-02-09T06:33:43+00:00
[]
[]
TAGS #region-us
# Dataset Card for "nq_open-validation" More Information needed
[ "# Dataset Card for \"nq_open-validation\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"nq_open-validation\"\n\nMore Information needed" ]
[ 6, 18 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"nq_open-validation\"\n\nMore Information needed" ]
762a85082f473bbc7981b2874b34222a3c5fabac
# General description

The following dataset, used to train the model nicknamed Dr Byte, is a dataset with healthcare information for instruction fine-tuning of language models. <br> <br>
In addition, the datasets include general questions from patients, questions about medications, multiple-choice questions from medical school entrance exams, biomedicine data, among others. <br>

# Dataset construction

The dataset was built by translating, from English into Portuguese, the following datasets (not all of them were used in full):

## Training:
- MedQA (USMLE), which contains general medical knowledge from the US medical licensing exam. (10082 examples) <br>
- MedMCQA, which contains general medical knowledge from Indian medical school entrance exams. (9736 examples) <br>
- LiveQA, which contains general medical questions from people outside the field. (622 examples) <br>
- MedicationQA, which contains frequently asked questions about medications from people outside the field. (687 examples) <br> <br> <br>
- Total training examples: 21127.

## Test

- MedMCQA (VALIDATION SPLIT), which contains general medical knowledge from Indian medical school entrance exams. (4183 examples) <br>
- MedQA (USMLE) (TEST SPLIT), which contains general medical knowledge from the US medical licensing exam. (1273 examples) <br>
- PubMedQA (TEST SPLIT), which contains data from the biomedical scientific literature. (500 examples) <br>
- MMLU (TEST SPLIT), which covers multiple-choice questions on medical knowledge, spanning the following topics: anatomy, clinical knowledge, medical school questions, medical genetics, professional medicine questions, and college biology. (1089 examples) <br> <br> <br>
- Total test examples: 7045.

# Data features:

The datasets have the following features for each split of the dataset: <br>

## Training:
- 'instruction': the instruction itself, usually a question or a multiple-choice question together with its answer options. <br>
- 'output': the expected answer to the instruction; it can be a direct answer, an answer option, and/or an explanation of that option. <br>

## Test:
- 'dataset': the dataset the example comes from. <br>
- 'instruction': the instruction itself. <br>
- 'input': may or may not be present (usually only in the PubMedQA dataset); it provides additional context for solving the instruction. <br>
- 'output': the expected answer to the instruction. <br>
- 'alternativa_a': the text of option A, when present. <br>
- 'alternativa_b': the text of option B, when present. <br>
- 'alternativa_c': the text of option C, when present. <br>
- 'alternativa_d': the text of option D, when present. <br>
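As an illustrative sketch (not from the original card), the three splits defined in this repo's config could be loaded like this; the field access below follows the feature list above:

```python
# Sketch: load the train/test/validation splits declared in the repo config.
from datasets import load_dataset

ds = load_dataset("luizlzg/drbyte_dataset")

example = ds["test"][0]
# Multiple-choice rows also carry 'alternativa_a'..'alternativa_d';
# open-ended rows (e.g. from LiveQA) leave them empty.
print(example["dataset"])
print(example["instruction"])
print(example["output"])
```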
luizlzg/drbyte_dataset
[ "task_categories:text-generation", "size_categories:10K<n<100K", "language:pt", "medical", "biology", "region:us" ]
2023-11-15T18:17:43+00:00
{"language": ["pt"], "size_categories": ["10K<n<100K"], "task_categories": ["text-generation"], "tags": ["medical", "biology"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "drbyte_ptbr_treino*"}, {"split": "test", "path": "drbyte_ptbr_teste*"}, {"split": "validation", "path": "drbyte_ptbr_valid*"}]}]}
2023-12-04T04:44:29+00:00
[]
[ "pt" ]
TAGS #task_categories-text-generation #size_categories-10K<n<100K #language-Portuguese #medical #biology #region-us
# General description

The following dataset, used to train the model nicknamed Dr Byte, is a dataset with healthcare information for instruction fine-tuning of language models. <br> <br>
In addition, the datasets include general questions from patients, questions about medications, multiple-choice questions from medical school entrance exams, biomedicine data, among others. <br>

# Dataset construction

The dataset was built by translating, from English into Portuguese, the following datasets (not all of them were used in full):

## Training:
- MedQA (USMLE), which contains general medical knowledge from the US medical licensing exam. (10082 examples) <br>
- MedMCQA, which contains general medical knowledge from Indian medical school entrance exams. (9736 examples) <br>
- LiveQA, which contains general medical questions from people outside the field. (622 examples) <br>
- MedicationQA, which contains frequently asked questions about medications from people outside the field. (687 examples) <br> <br> <br>
- Total training examples: 21127.

## Test

- MedMCQA (VALIDATION SPLIT), which contains general medical knowledge from Indian medical school entrance exams. (4183 examples) <br>
- MedQA (USMLE) (TEST SPLIT), which contains general medical knowledge from the US medical licensing exam. (1273 examples) <br>
- PubMedQA (TEST SPLIT), which contains data from the biomedical scientific literature. (500 examples) <br>
- MMLU (TEST SPLIT), which covers multiple-choice questions on medical knowledge, spanning the following topics: anatomy, clinical knowledge, medical school questions, medical genetics, professional medicine questions, and college biology. (1089 examples) <br> <br> <br>
- Total test examples: 7045.

# Data features:

The datasets have the following features for each split of the dataset: <br>

## Training:
- 'instruction': the instruction itself, usually a question or a multiple-choice question together with its answer options. <br>
- 'output': the expected answer to the instruction; it can be a direct answer, an answer option, and/or an explanation of that option. <br>

## Test:
- 'dataset': the dataset the example comes from. <br>
- 'instruction': the instruction itself. <br>
- 'input': may or may not be present (usually only in the PubMedQA dataset); it provides additional context for solving the instruction. <br>
- 'output': the expected answer to the instruction. <br>
- 'alternativa_a': the text of option A, when present. <br>
- 'alternativa_b': the text of option B, when present. <br>
- 'alternativa_c': the text of option C, when present. <br>
- 'alternativa_d': the text of option D, when present. <br>
[ "# Descrição geral\n\nO seguinte dataset, responsável pelo treinamento do modelo apelidado de Dr Byte, é um dataset, com informações da área da saúde, para o fine tuning com instruções de modelos de linguagem. <br> <br>\nAlém disso, os datasets contam com dúvidas gerais de pacientes, dúvidas sobre medicamentos, questões de múltipla escolha de vestibulares de medicina, dados de biomedicina, dentre outros. <br>", "# Construção do Dataset\n\nO dataset foi construído a partir da tradução, do inglês para o português, dos seguintes datasets (nem todos foram utilizados em sua totalidade):", "## Treino:\n- MedQA (USMLE), que contém conhecimentos médicos gerais do exame de licenciamento médico dos EUA. (10082 dados) <br>\n- MedMCQA, que contém conhecimentos médicos gerais de vestibulares de medicina indianos. (9736 dados) <br>\n- LiveQA, que contém dúvidas de conhecimentos médicos gerais, provenientes de pessoas que não são da área. (622 dados) <br>\n- MedicationQA, que contém dúvidas frequentes sobre medicamentos, provenientes de pessoas que não são da área. (687 dados) <br> <br> <br>\n- Total de dados de treino: 21127 dados.", "## Teste\n\n- MedMCQA (SPLIT DE VALIDAÇÃO), que contém conhecimentos médicos gerais de vestibulares de medicina indianos. (4183 dados) <br>\n- MedQA (USMLE) (SPLIT DE TESTE), que contém conhecimentos médicos gerais do exame de licenciamento médico dos EUA. (1273 dados) <br>\n- PubMedQA (SPLIT DE TESTE), que contém dados da literatura científica de biomedicina. (500 dados) <br>\n- MMLU (SPLIT DE TESTE), que cobre questões de múltipla escolha acerca de conhecimento médico, cobrindo os seguintes temas: anatomia, conhecimento clínico, questões de faculdade de medicina, genética médica, questões medicina profissional e biologia universitária.(1089 dados) <br> <br> <br>\n- Total de dados de teste: 7045 dados.", "# Características dos dados:\n\nOs datasets possuem as seguintes features, para cada split do dataset: <br>", "## Treino:\n- 'instruction': é a instrução em si, geralmente é uma pergunta ou uma questão de múltipla escolha junto com suas respectivas alternativas. <br>\n- 'output': é a resposta esperada para a instrução, pode ser uma resposta direta, uma alternativa e/ou uma explicação a respeito da alternativa. <br>", "## Teste:\n- 'dataset': o dataset de onde o dado é proveniente. <br>\n- 'instruction': a instrução em si. <br>\n- 'input': pode estar presente ou não (geralmente só no dataset PubMedQA) e é um contexto adicional para a resolução da instrução. <br>\n- 'output': é a resposta esperada para a instrução. <br>\n- 'alternativa_a': o texto da alternativa A, quando está presente. <br>\n- 'alternativa_b': o texto da alternativa B, quando está presente. <br>\n- 'alternativa_c': o texto da alternativa C, quando está presente. <br>\n- 'alternativa_d': o texto da alternativa D, quando está presente. <br>" ]
[ "TAGS\n#task_categories-text-generation #size_categories-10K<n<100K #language-Portuguese #medical #biology #region-us \n", "# Descrição geral\n\nO seguinte dataset, responsável pelo treinamento do modelo apelidado de Dr Byte, é um dataset, com informações da área da saúde, para o fine tuning com instruções de modelos de linguagem. <br> <br>\nAlém disso, os datasets contam com dúvidas gerais de pacientes, dúvidas sobre medicamentos, questões de múltipla escolha de vestibulares de medicina, dados de biomedicina, dentre outros. <br>", "# Construção do Dataset\n\nO dataset foi construído a partir da tradução, do inglês para o português, dos seguintes datasets (nem todos foram utilizados em sua totalidade):", "## Treino:\n- MedQA (USMLE), que contém conhecimentos médicos gerais do exame de licenciamento médico dos EUA. (10082 dados) <br>\n- MedMCQA, que contém conhecimentos médicos gerais de vestibulares de medicina indianos. (9736 dados) <br>\n- LiveQA, que contém dúvidas de conhecimentos médicos gerais, provenientes de pessoas que não são da área. (622 dados) <br>\n- MedicationQA, que contém dúvidas frequentes sobre medicamentos, provenientes de pessoas que não são da área. (687 dados) <br> <br> <br>\n- Total de dados de treino: 21127 dados.", "## Teste\n\n- MedMCQA (SPLIT DE VALIDAÇÃO), que contém conhecimentos médicos gerais de vestibulares de medicina indianos. (4183 dados) <br>\n- MedQA (USMLE) (SPLIT DE TESTE), que contém conhecimentos médicos gerais do exame de licenciamento médico dos EUA. (1273 dados) <br>\n- PubMedQA (SPLIT DE TESTE), que contém dados da literatura científica de biomedicina. (500 dados) <br>\n- MMLU (SPLIT DE TESTE), que cobre questões de múltipla escolha acerca de conhecimento médico, cobrindo os seguintes temas: anatomia, conhecimento clínico, questões de faculdade de medicina, genética médica, questões medicina profissional e biologia universitária.(1089 dados) <br> <br> <br>\n- Total de dados de teste: 7045 dados.", "# Características dos dados:\n\nOs datasets possuem as seguintes features, para cada split do dataset: <br>", "## Treino:\n- 'instruction': é a instrução em si, geralmente é uma pergunta ou uma questão de múltipla escolha junto com suas respectivas alternativas. <br>\n- 'output': é a resposta esperada para a instrução, pode ser uma resposta direta, uma alternativa e/ou uma explicação a respeito da alternativa. <br>", "## Teste:\n- 'dataset': o dataset de onde o dado é proveniente. <br>\n- 'instruction': a instrução em si. <br>\n- 'input': pode estar presente ou não (geralmente só no dataset PubMedQA) e é um contexto adicional para a resolução da instrução. <br>\n- 'output': é a resposta esperada para a instrução. <br>\n- 'alternativa_a': o texto da alternativa A, quando está presente. <br>\n- 'alternativa_b': o texto da alternativa B, quando está presente. <br>\n- 'alternativa_c': o texto da alternativa C, quando está presente. <br>\n- 'alternativa_d': o texto da alternativa D, quando está presente. <br>" ]
[ 41, 99, 38, 146, 189, 26, 77, 177 ]
[ "passage: TAGS\n#task_categories-text-generation #size_categories-10K<n<100K #language-Portuguese #medical #biology #region-us \n# Descrição geral\n\nO seguinte dataset, responsável pelo treinamento do modelo apelidado de Dr Byte, é um dataset, com informações da área da saúde, para o fine tuning com instruções de modelos de linguagem. <br> <br>\nAlém disso, os datasets contam com dúvidas gerais de pacientes, dúvidas sobre medicamentos, questões de múltipla escolha de vestibulares de medicina, dados de biomedicina, dentre outros. <br># Construção do Dataset\n\nO dataset foi construído a partir da tradução, do inglês para o português, dos seguintes datasets (nem todos foram utilizados em sua totalidade):## Treino:\n- MedQA (USMLE), que contém conhecimentos médicos gerais do exame de licenciamento médico dos EUA. (10082 dados) <br>\n- MedMCQA, que contém conhecimentos médicos gerais de vestibulares de medicina indianos. (9736 dados) <br>\n- LiveQA, que contém dúvidas de conhecimentos médicos gerais, provenientes de pessoas que não são da área. (622 dados) <br>\n- MedicationQA, que contém dúvidas frequentes sobre medicamentos, provenientes de pessoas que não são da área. (687 dados) <br> <br> <br>\n- Total de dados de treino: 21127 dados." ]
0a052746e39ef2f7ea0a3c1a7c6228373095fdd6
# Dataset Card for "semeval-task-8-b-v2-mistral-7b" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
kpriyanshu256/semeval-task-8-b-v2-mistral-7b
[ "region:us" ]
2023-11-15T18:29:44+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "val", "path": "data/val-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "model", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "id", "dtype": "int64"}, {"name": "mistral-7b_estimated_loss", "dtype": "float64"}, {"name": "mistral-7b_mean_lowest25", "dtype": "float64"}, {"name": "mistral-7b_mean_highest25", "dtype": "float64"}, {"name": "mistral-7b_max", "dtype": "float64"}, {"name": "mistral-7b_min", "dtype": "float64"}, {"name": "mistral-7b_range", "dtype": "float64"}, {"name": "mistral-7b_mean", "dtype": "float64"}, {"name": "mistral-7b_std", "dtype": "float64"}, {"name": "mistral-7b_entropy", "dtype": "float64"}, {"name": "mistral-7b_kurtosis", "dtype": "float64"}, {"name": "mistral-7b_skewness", "dtype": "float64"}, {"name": "mistral-7b_perplexity", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 127022360, "num_examples": 56821}, {"name": "val", "num_bytes": 31364223, "num_examples": 14206}, {"name": "test", "num_bytes": 5102312, "num_examples": 3000}], "download_size": 96394782, "dataset_size": 163488895}}
2023-11-15T18:29:49+00:00
[]
[]
TAGS #region-us
# Dataset Card for "semeval-task-8-b-v2-mistral-7b" More Information needed
[ "# Dataset Card for \"semeval-task-8-b-v2-mistral-7b\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"semeval-task-8-b-v2-mistral-7b\"\n\nMore Information needed" ]
[ 6, 26 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"semeval-task-8-b-v2-mistral-7b\"\n\nMore Information needed" ]
7e18f1b98859b0a3e3da283f63d0a153b774cf1f
# Dataset Card for "dl_binary_reg" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
lhallee/dl_binary_reg
[ "region:us" ]
2023-11-15T18:32:54+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "valid", "path": "data/valid-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "seqs", "dtype": "string"}, {"name": "labels", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 2692075, "num_examples": 5473}, {"name": "valid", "num_bytes": 653234, "num_examples": 1335}, {"name": "test", "num_bytes": 905979, "num_examples": 1729}], "download_size": 4189564, "dataset_size": 4251288}}
2023-11-15T18:33:01+00:00
[]
[]
TAGS #region-us
# Dataset Card for "dl_binary_reg" More Information needed
[ "# Dataset Card for \"dl_binary_reg\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"dl_binary_reg\"\n\nMore Information needed" ]
[ 6, 16 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"dl_binary_reg\"\n\nMore Information needed" ]
99067ca4af22c40f3f22be1118f30c11f9e62dd5
# Dataset Card for "dl_binary_fold" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
lhallee/dl_binary_fold
[ "region:us" ]
2023-11-15T18:33:03+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "valid", "path": "data/valid-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "seqs", "dtype": "string"}, {"name": "labels", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 5320998, "num_examples": 5477}, {"name": "valid", "num_bytes": 1291124, "num_examples": 1336}, {"name": "test", "num_bytes": 1794006, "num_examples": 1731}], "download_size": 7278763, "dataset_size": 8406128}}
2023-11-15T18:33:05+00:00
[]
[]
TAGS #region-us
# Dataset Card for "dl_binary_fold" More Information needed
[ "# Dataset Card for \"dl_binary_fold\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"dl_binary_fold\"\n\nMore Information needed" ]
[ 6, 16 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"dl_binary_fold\"\n\nMore Information needed" ]
45049a126958c8cd347bc8c223ba0910c002277f
# Dataset Card for "MetalIonBinding_fold" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
lhallee/MetalIonBinding_fold
[ "region:us" ]
2023-11-15T18:35:51+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "valid", "path": "data/valid-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "seqs", "dtype": "string"}, {"name": "labels", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 3062006, "num_examples": 5067}, {"name": "valid", "num_bytes": 403822, "num_examples": 662}, {"name": "test", "num_bytes": 387806, "num_examples": 665}], "download_size": 2865074, "dataset_size": 3853634}}
2023-11-15T18:35:53+00:00
[]
[]
TAGS #region-us
# Dataset Card for "MetalIonBinding_fold" More Information needed
[ "# Dataset Card for \"MetalIonBinding_fold\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"MetalIonBinding_fold\"\n\nMore Information needed" ]
[ 6, 18 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"MetalIonBinding_fold\"\n\nMore Information needed" ]
ed94bf1cd2e614e18e439ff0195fd37bef16b09f
# Dataset Card for "MetalIonBinding_reg" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
lhallee/MetalIonBinding_reg
[ "region:us" ]
2023-11-15T18:35:56+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "valid", "path": "data/valid-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "seqs", "dtype": "string"}, {"name": "labels", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 1561586, "num_examples": 5068}, {"name": "valid", "num_bytes": 205883, "num_examples": 662}, {"name": "test", "num_bytes": 197893, "num_examples": 665}], "download_size": 1600987, "dataset_size": 1965362}}
2023-11-15T18:35:58+00:00
[]
[]
TAGS #region-us
# Dataset Card for "MetalIonBinding_reg" More Information needed
[ "# Dataset Card for \"MetalIonBinding_reg\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"MetalIonBinding_reg\"\n\nMore Information needed" ]
[ 6, 18 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"MetalIonBinding_reg\"\n\nMore Information needed" ]
c076844a9ce23aee49aefba450cd404f75a0409c
# Dataset Card for "dl_ten_reg" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
lhallee/dl_ten_reg
[ "region:us" ]
2023-11-15T18:36:03+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "valid", "path": "data/valid-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "seqs", "dtype": "string"}, {"name": "labels", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 4494546, "num_examples": 8743}, {"name": "valid", "num_bytes": 1132147, "num_examples": 2190}, {"name": "test", "num_bytes": 1487111, "num_examples": 2745}], "download_size": 7016840, "dataset_size": 7113804}}
2023-11-15T18:36:05+00:00
[]
[]
TAGS #region-us
# Dataset Card for "dl_ten_reg" More Information needed
[ "# Dataset Card for \"dl_ten_reg\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"dl_ten_reg\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"dl_ten_reg\"\n\nMore Information needed" ]
ce3940e236419c46e930cf465324fb1ca13f70de
# Dataset Card for "dl_ten_fold" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
lhallee/dl_ten_fold
[ "region:us" ]
2023-11-15T18:36:09+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "valid", "path": "data/valid-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "seqs", "dtype": "string"}, {"name": "labels", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 8887338, "num_examples": 8747}, {"name": "valid", "num_bytes": 2238690, "num_examples": 2191}, {"name": "test", "num_bytes": 2944078, "num_examples": 2747}], "download_size": 11992806, "dataset_size": 14070106}}
2023-11-15T18:36:11+00:00
[]
[]
TAGS #region-us
# Dataset Card for "dl_ten_fold" More Information needed
[ "# Dataset Card for \"dl_ten_fold\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"dl_ten_fold\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"dl_ten_fold\"\n\nMore Information needed" ]
e66c8b80aed4b657ceef924f835a66fd072c2567
# Dataset Card for "semeval-task-8-a-multi-v2-mistral-7b" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
kpriyanshu256/semeval-task-8-a-multi-v2-mistral-7b
[ "region:us" ]
2023-11-15T18:37:40+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "val", "path": "data/val-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "model", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "mistral-7b_estimated_loss", "dtype": "float64"}, {"name": "mistral-7b_mean_lowest25", "dtype": "float64"}, {"name": "mistral-7b_mean_highest25", "dtype": "float64"}, {"name": "mistral-7b_max", "dtype": "float64"}, {"name": "mistral-7b_min", "dtype": "float64"}, {"name": "mistral-7b_range", "dtype": "float64"}, {"name": "mistral-7b_mean", "dtype": "float64"}, {"name": "mistral-7b_std", "dtype": "float64"}, {"name": "mistral-7b_entropy", "dtype": "float64"}, {"name": "mistral-7b_kurtosis", "dtype": "float64"}, {"name": "mistral-7b_skewness", "dtype": "float64"}, {"name": "mistral-7b_perplexity", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 375470839, "num_examples": 137933}, {"name": "val", "num_bytes": 93824169, "num_examples": 34484}, {"name": "test", "num_bytes": 9174338, "num_examples": 4000}], "download_size": 285038772, "dataset_size": 478469346}}
2023-11-15T18:37:51+00:00
[]
[]
TAGS #region-us
# Dataset Card for "semeval-task-8-a-multi-v2-mistral-7b" More Information needed
[ "# Dataset Card for \"semeval-task-8-a-multi-v2-mistral-7b\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"semeval-task-8-a-multi-v2-mistral-7b\"\n\nMore Information needed" ]
[ 6, 28 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"semeval-task-8-a-multi-v2-mistral-7b\"\n\nMore Information needed" ]
13fe8e76b5bd81433e1b29c5d987e65c65952743
# Dataset Card for "slim_pajama_chunk_2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
rokset3/slim_pajama_chunk_2
[ "region:us" ]
2023-11-15T18:40:56+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "meta", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 258571513240, "num_examples": 58982360}], "download_size": 150404827683, "dataset_size": 258571513240}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-15T20:14:18+00:00
[]
[]
TAGS #region-us
# Dataset Card for "slim_pajama_chunk_2" More Information needed
[ "# Dataset Card for \"slim_pajama_chunk_2\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"slim_pajama_chunk_2\"\n\nMore Information needed" ]
[ 6, 19 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"slim_pajama_chunk_2\"\n\nMore Information needed" ]
4f6df69c9c912cc096b0251a7da79a3b44b017f5
# Dataset Card for "ScreeningData" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Lollitor/ScreeningData
[ "region:us" ]
2023-11-15T18:47:44+00:00
{"dataset_info": {"features": [{"name": "#code_ligand_num", "dtype": "string"}, {"name": "input", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 18728004, "num_examples": 28297}], "download_size": 342193, "dataset_size": 18728004}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-15T18:47:46+00:00
[]
[]
TAGS #region-us
# Dataset Card for "ScreeningData" More Information needed
[ "# Dataset Card for \"ScreeningData\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"ScreeningData\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"ScreeningData\"\n\nMore Information needed" ]
cd3e4592f71404d1db255b5c6daf340ef57d0755
# Dataset Card for "gpt_gen_desc_logos" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mdass/gpt_gen_desc_logos
[ "region:us" ]
2023-11-15T18:49:18+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2006069.0, "num_examples": 100}], "download_size": 1991578, "dataset_size": 2006069.0}}
2023-11-16T02:10:20+00:00
[]
[]
TAGS #region-us
# Dataset Card for "gpt_gen_desc_logos" More Information needed
[ "# Dataset Card for \"gpt_gen_desc_logos\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"gpt_gen_desc_logos\"\n\nMore Information needed" ]
[ 6, 20 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"gpt_gen_desc_logos\"\n\nMore Information needed" ]
b3883794717b54156ab00b56c6b51f1f105e4526
# Dataset Card for "warehouse-detection" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
alec-groff/warehouse-detection
[ "region:us" ]
2023-11-15T18:55:58+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "filename", "dtype": "string"}, {"name": "objects", "struct": [{"name": "bbox", "sequence": {"sequence": "float64"}}, {"name": "categories", "sequence": "int64"}]}], "splits": [{"name": "train", "num_bytes": 2551114, "num_examples": 4789}, {"name": "validation", "num_bytes": 216128, "num_examples": 459}, {"name": "test", "num_bytes": 128909, "num_examples": 229}], "download_size": 526880, "dataset_size": 2896151}}
2023-11-15T18:56:01+00:00
[]
[]
TAGS #region-us
# Dataset Card for "warehouse-detection" More Information needed
[ "# Dataset Card for \"warehouse-detection\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"warehouse-detection\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"warehouse-detection\"\n\nMore Information needed" ]
9814c891bcbf233fb0aa858aa50bfab046335356
# Indonesian News Datasets The dataset compiles information from seven prominent Indonesian news platforms: **Tempo, CNN Indonesia, CNBC Indonesia, Okezone, Suara, Kumparan, and JawaPos**. Each source contributes a diverse range of articles, collectively forming a comprehensive repository of Indonesian news content. The dataset consists of the following columns: 'id' denotes the unique identifier for each news article, 'title' contains the headline, 'image' refers to the associated visual content, 'url' specifies the source link, 'content' encompasses the full text of the news article, 'embedding' houses the text embeddings extracted using the OpenAI `text-embedding-ada-002` model, 'created_at' marks the creation timestamp, 'updated_at' signifies the last update timestamp, and 'summary' encapsulates the concise article summary generated via the ChatGPT API. These columns collectively represent a comprehensive set of information for each news article, facilitating various natural language processing tasks and analysis while providing enriched contextual details for effective exploration and modeling.
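Since every row already stores an `ada-002` embedding, nearest-neighbour search over the articles needs no additional API calls. A minimal sketch, with assumptions flagged in the comments:

```python
import numpy as np
from datasets import load_dataset

# Repository id taken from this record; the split name and column dtypes are assumptions.
ds = load_dataset("iqballx/indonesian_news_datasets", split="train")

emb = np.array(ds["embedding"], dtype=np.float32)   # text-embedding-ada-002 vectors are 1536-dim
emb /= np.linalg.norm(emb, axis=1, keepdims=True)   # unit-normalise so dot product = cosine

query_idx = 0                                       # treat the first article as the query
scores = emb @ emb[query_idx]
for i in np.argsort(-scores)[1:6]:                  # 5 nearest neighbours, skipping the query itself
    print(round(float(scores[i]), 3), ds["title"][int(i)])
```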
iqballx/indonesian_news_datasets
[ "task_categories:text-classification", "task_categories:summarization", "language:id", "license:cc-by-nc-4.0", "region:us" ]
2023-11-15T18:58:28+00:00
{"language": ["id"], "license": "cc-by-nc-4.0", "task_categories": ["text-classification", "summarization"]}
2023-11-15T19:32:32+00:00
[]
[ "id" ]
TAGS #task_categories-text-classification #task_categories-summarization #language-Indonesian #license-cc-by-nc-4.0 #region-us
# Indonesian News Datasets The dataset compiles information from seven prominent Indonesian news platforms: Tempo, CNN Indonesia, CNBC Indonesia, Okezone, Suara, Kumparan, and JawaPos. Each source contributes a diverse range of articles, collectively forming a comprehensive repository of Indonesian news content. The dataset consists of diverse columns: 'id' denotes the unique identifier for each news article, 'title' contains the headline, 'image' refers to the associated visual content, 'url' specifies the source link, 'content' encompasses the full text of the news article, 'embedding' houses the text embeddings extracted using the OpenAI 'text-embedding-ada-002 model', 'created_at' marks the creation timestamp, 'updated_at' signifies the last update timestamp, and 'summary' encapsulates the concise article summary generated via the ChatGPT API. These columns collectively represent a comprehensive set of information for each news article, facilitating various natural language processing tasks and analysis while providing enriched contextual details for effective exploration and modeling.
[ "# Indonesian News Datasets\nThe dataset compiles information from seven prominent Indonesian news platforms: Tempo, CNN Indonesia, CNBC Indonesia, Okezone, Suara, Kumparan, and JawaPos. Each source contributes a diverse range of articles, collectively forming a comprehensive repository of Indonesian news content. The dataset consists of diverse columns: 'id' denotes the unique identifier for each news article, 'title' contains the headline, 'image' refers to the associated visual content, 'url' specifies the source link, 'content' encompasses the full text of the news article, 'embedding' houses the text embeddings extracted using the OpenAI 'text-embedding-ada-002 model', 'created_at' marks the creation timestamp, 'updated_at' signifies the last update timestamp, and 'summary' encapsulates the concise article summary generated via the ChatGPT API. These columns collectively represent a comprehensive set of information for each news article, facilitating various natural language processing tasks and analysis while providing enriched contextual details for effective exploration and modeling." ]
[ "TAGS\n#task_categories-text-classification #task_categories-summarization #language-Indonesian #license-cc-by-nc-4.0 #region-us \n", "# Indonesian News Datasets\nThe dataset compiles information from seven prominent Indonesian news platforms: Tempo, CNN Indonesia, CNBC Indonesia, Okezone, Suara, Kumparan, and JawaPos. Each source contributes a diverse range of articles, collectively forming a comprehensive repository of Indonesian news content. The dataset consists of diverse columns: 'id' denotes the unique identifier for each news article, 'title' contains the headline, 'image' refers to the associated visual content, 'url' specifies the source link, 'content' encompasses the full text of the news article, 'embedding' houses the text embeddings extracted using the OpenAI 'text-embedding-ada-002 model', 'created_at' marks the creation timestamp, 'updated_at' signifies the last update timestamp, and 'summary' encapsulates the concise article summary generated via the ChatGPT API. These columns collectively represent a comprehensive set of information for each news article, facilitating various natural language processing tasks and analysis while providing enriched contextual details for effective exploration and modeling." ]
[ 43, 269 ]
[ "passage: TAGS\n#task_categories-text-classification #task_categories-summarization #language-Indonesian #license-cc-by-nc-4.0 #region-us \n# Indonesian News Datasets\nThe dataset compiles information from seven prominent Indonesian news platforms: Tempo, CNN Indonesia, CNBC Indonesia, Okezone, Suara, Kumparan, and JawaPos. Each source contributes a diverse range of articles, collectively forming a comprehensive repository of Indonesian news content. The dataset consists of diverse columns: 'id' denotes the unique identifier for each news article, 'title' contains the headline, 'image' refers to the associated visual content, 'url' specifies the source link, 'content' encompasses the full text of the news article, 'embedding' houses the text embeddings extracted using the OpenAI 'text-embedding-ada-002 model', 'created_at' marks the creation timestamp, 'updated_at' signifies the last update timestamp, and 'summary' encapsulates the concise article summary generated via the ChatGPT API. These columns collectively represent a comprehensive set of information for each news article, facilitating various natural language processing tasks and analysis while providing enriched contextual details for effective exploration and modeling." ]
e9732a68d92287832de461358e3da38b0deec9d1
# Dataset Card for "ProteinSample" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Lollitor/ProteinSample
[ "region:us" ]
2023-11-15T19:18:09+00:00
{"dataset_info": {"features": [{"name": "PROTEIN", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 22474, "num_examples": 50}], "download_size": 24517, "dataset_size": 22474}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-15T19:18:11+00:00
[]
[]
TAGS #region-us
# Dataset Card for "ProteinSample" More Information needed
[ "# Dataset Card for \"ProteinSample\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"ProteinSample\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"ProteinSample\"\n\nMore Information needed" ]
80b452b038000c27735445e5ef4cbdb5b6f2c9f4
# Dataset Card for "dataset_demo" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Sorenson-AILab-VSL/dataset_demo
[ "region:us" ]
2023-11-15T19:48:57+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "conditioning_image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1483436052.104, "num_examples": 39908}], "download_size": 1236946981, "dataset_size": 1483436052.104}}
2023-11-15T19:49:49+00:00
[]
[]
TAGS #region-us
# Dataset Card for "dataset_demo" More Information needed
[ "# Dataset Card for \"dataset_demo\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"dataset_demo\"\n\nMore Information needed" ]
[ 6, 14 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"dataset_demo\"\n\nMore Information needed" ]
61b5ec3f7801a0de7092f8c811347c58e1e366a9
# Financial Tweets - Cryptocurrency This dataset is part of the scraped financial tweets that I collected from a variety of financial influencers on Twitter; all the datasets can be found here: - Crypto: https://huggingface.co/datasets/StephanAkkerman/financial-tweets-crypto - Stocks (and forex): https://huggingface.co/datasets/StephanAkkerman/financial-tweets-stocks - Other (tweets without cashtags): https://huggingface.co/datasets/StephanAkkerman/financial-tweets-other ## Data Fields The data fields are as follows: * `timestap`: The time the tweet was sent. * `tweet_text`: All of the text of the tweet, including quoted tweets (prefixed with `>`). * `tweet_url`: The URL of the tweet. * `tweet_type`: The type of tweet; this can be a tweet, retweet, or quote tweet. * `price_of_ticker`: The price of the tickers mentioned. * `change_of_ticker`: The 24h price change of the tickers. * `tickers_mentioned`: All the tickers that are mentioned in the tweet. * `category`: The category of the tweet; the suffix `_images` means that the tweet included an image.
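A quick filtering sketch over the fields listed above, assuming the Hugging Face `datasets` library; the repository id comes from the card, while the split name and the dtype of `tickers_mentioned` are assumptions:

```python
from datasets import load_dataset

# Repository id comes from the card; the split name and the string dtype of
# 'tickers_mentioned' are assumptions.
ds = load_dataset("StephanAkkerman/financial-tweets-crypto", split="train")

btc = ds.filter(lambda r: "BTC" in str(r["tickers_mentioned"] or ""))
print(len(btc), "tweets mention BTC")
print(btc[0]["tweet_text"])
```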
StephanAkkerman/financial-tweets-crypto
[ "task_categories:text-classification", "license:mit", "tweet", "tweets", "finance", "crypto", "fintwit", "region:us" ]
2023-11-15T19:56:31+00:00
{"license": "mit", "task_categories": ["text-classification"], "tags": ["tweet", "tweets", "finance", "crypto", "fintwit"]}
2023-12-07T15:32:44+00:00
[]
[]
TAGS #task_categories-text-classification #license-mit #tweet #tweets #finance #crypto #fintwit #region-us
# Financial Tweets - Cryptocurrency This dataset is part of the scraped financial tweets that I collected from a variety of financial influencers on Twitter, all the datasets can be found here: - Crypto: URL - Stocks (and forex): URL - Other (Tweet without cash tags): URL ## Data Fields The data fields are as follows: * 'timestap': The time the tweet was sent. * 'tweet_text': All of the text of the tweet, including quoted tweets (prefixed with '>'). * 'tweet_url': The URL of the tweet. * 'tweet_type': The type of tweet, this can be tweet, retweet, or quote tweet. * 'price_of_ticker': The price of the tickers mentioned. * 'change_of_ticker': The 24h price change of the tickers. * 'tickers_mentioned': All the tickers that are mentioned in the tweet. * 'category': What type of category the tweet has, the suffix '_images' means that the tweet included an image.
[ "# Financial Tweets - Cryptocurrency\nThis dataset is part of the scraped financial tweets that I collected from a variety of financial influencers on Twitter, all the datasets can be found here:\n- Crypto: URL\n- Stocks (and forex): URL\n- Other (Tweet without cash tags): URL", "## Data Fields\n\nThe data fields are as follows:\n\n* 'timestap': The time the tweet was sent.\n* 'tweet_text': All of the text of the tweet, including quoted tweets (prefixed with '>').\n* 'tweet_url': The URL of the tweet.\n* 'tweet_type': The type of tweet, this can be tweet, retweet, or quote tweet.\n* 'price_of_ticker': The price of the tickers mentioned.\n* 'change_of_ticker': The 24h price change of the tickers.\n* 'tickers_mentioned': All the tickers that are mentioned in the tweet.\n* 'category': What type of category the tweet has, the suffix '_images' means that the tweet included an image." ]
[ "TAGS\n#task_categories-text-classification #license-mit #tweet #tweets #finance #crypto #fintwit #region-us \n", "# Financial Tweets - Cryptocurrency\nThis dataset is part of the scraped financial tweets that I collected from a variety of financial influencers on Twitter, all the datasets can be found here:\n- Crypto: URL\n- Stocks (and forex): URL\n- Other (Tweet without cash tags): URL", "## Data Fields\n\nThe data fields are as follows:\n\n* 'timestap': The time the tweet was sent.\n* 'tweet_text': All of the text of the tweet, including quoted tweets (prefixed with '>').\n* 'tweet_url': The URL of the tweet.\n* 'tweet_type': The type of tweet, this can be tweet, retweet, or quote tweet.\n* 'price_of_ticker': The price of the tickers mentioned.\n* 'change_of_ticker': The 24h price change of the tickers.\n* 'tickers_mentioned': All the tickers that are mentioned in the tweet.\n* 'category': What type of category the tweet has, the suffix '_images' means that the tweet included an image." ]
[ 37, 68, 183 ]
[ "passage: TAGS\n#task_categories-text-classification #license-mit #tweet #tweets #finance #crypto #fintwit #region-us \n# Financial Tweets - Cryptocurrency\nThis dataset is part of the scraped financial tweets that I collected from a variety of financial influencers on Twitter, all the datasets can be found here:\n- Crypto: URL\n- Stocks (and forex): URL\n- Other (Tweet without cash tags): URL## Data Fields\n\nThe data fields are as follows:\n\n* 'timestap': The time the tweet was sent.\n* 'tweet_text': All of the text of the tweet, including quoted tweets (prefixed with '>').\n* 'tweet_url': The URL of the tweet.\n* 'tweet_type': The type of tweet, this can be tweet, retweet, or quote tweet.\n* 'price_of_ticker': The price of the tickers mentioned.\n* 'change_of_ticker': The 24h price change of the tickers.\n* 'tickers_mentioned': All the tickers that are mentioned in the tweet.\n* 'category': What type of category the tweet has, the suffix '_images' means that the tweet included an image." ]
7fc0ee7d227fb799f8255bac843839176a87d4c4
# Dataset Card for "slim_pajama_chunk_3" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
rokset3/slim_pajama_chunk_3
[ "region:us" ]
2023-11-15T20:15:45+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "meta", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 257905858359, "num_examples": 59060327}], "download_size": 150003305835, "dataset_size": 257905858359}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-15T21:45:35+00:00
[]
[]
TAGS #region-us
# Dataset Card for "slim_pajama_chunk_3" More Information needed
[ "# Dataset Card for \"slim_pajama_chunk_3\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"slim_pajama_chunk_3\"\n\nMore Information needed" ]
[ 6, 19 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"slim_pajama_chunk_3\"\n\nMore Information needed" ]
87648db37550acf73878fe50804cbdc925ce94d3
# Dataset Card for "ibanity_lib" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Myrax3000/ibanity_lib
[ "region:us" ]
2023-11-15T20:21:28+00:00
{"dataset_info": {"features": [{"name": "repo_id", "dtype": "string"}, {"name": "file_path", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 10959723, "num_examples": 1100}], "download_size": 3259364, "dataset_size": 10959723}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-15T20:24:54+00:00
[]
[]
TAGS #region-us
# Dataset Card for "ibanity_lib" More Information needed
[ "# Dataset Card for \"ibanity_lib\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"ibanity_lib\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"ibanity_lib\"\n\nMore Information needed" ]
ed4ac7d22f6df36c3d02c1611705a958a9b35be5
This dataset comes from the EACL 2023 paper "An Empirical Study of Clinical Note Generation from Doctor-Patient Encounters": https://github.com/abachaa/MTS-Dialog/tree/main/Main-Dataset
beanham/medsum
[ "task_categories:summarization", "size_categories:1K<n<10K", "language:en", "medical", "region:us" ]
2023-11-15T20:27:01+00:00
{"language": ["en"], "size_categories": ["1K<n<10K"], "task_categories": ["summarization"], "tags": ["medical"]}
2023-12-08T03:29:30+00:00
[]
[ "en" ]
TAGS #task_categories-summarization #size_categories-1K<n<10K #language-English #medical #region-us
This dataset comes from the EACL 2023 paper: An Empirical Study of Clinical Note Generation from Doctor-Patient Encounters URL
[]
[ "TAGS\n#task_categories-summarization #size_categories-1K<n<10K #language-English #medical #region-us \n" ]
[ 35 ]
[ "passage: TAGS\n#task_categories-summarization #size_categories-1K<n<10K #language-English #medical #region-us \n" ]
631e4d719f31470174dd4213a13f7af903da4381
# Dataset Card for "prost_valid_test" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
lhallee/prost_valid_test
[ "region:us" ]
2023-11-15T20:32:53+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "valid", "path": "data/valid-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "seqs", "dtype": "string"}], "splits": [{"name": "valid", "num_bytes": 603744, "num_examples": 1259}, {"name": "test", "num_bytes": 140994, "num_examples": 474}], "download_size": 679584, "dataset_size": 744738}}
2023-11-15T20:33:01+00:00
[]
[]
TAGS #region-us
# Dataset Card for "prost_valid_test" More Information needed
[ "# Dataset Card for \"prost_valid_test\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"prost_valid_test\"\n\nMore Information needed" ]
[ 6, 16 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"prost_valid_test\"\n\nMore Information needed" ]
73c74c132cacdb2d18125b050bff39fdc4663c10
# Dataset Card for "multi-nli-NOB" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
tollefj/multi-nli-NOB
[ "license:cc-by-4.0", "region:us" ]
2023-11-15T20:33:13+00:00
{"license": "cc-by-4.0", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation_matched", "path": "data/validation_matched-*"}, {"split": "validation_mismatched", "path": "data/validation_mismatched-*"}]}], "dataset_info": {"features": [{"name": "promptID", "dtype": "int32"}, {"name": "pairID", "dtype": "string"}, {"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "genre", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 81317900, "num_examples": 392702}, {"name": "validation_matched", "num_bytes": 2010024, "num_examples": 9815}, {"name": "validation_mismatched", "num_bytes": 2121266, "num_examples": 9832}], "download_size": 56640779, "dataset_size": 85449190}}
2024-01-06T12:27:33+00:00
[]
[]
TAGS #license-cc-by-4.0 #region-us
# Dataset Card for "multi-nli-NOB" More Information needed
[ "# Dataset Card for \"multi-nli-NOB\"\n\nMore Information needed" ]
[ "TAGS\n#license-cc-by-4.0 #region-us \n", "# Dataset Card for \"multi-nli-NOB\"\n\nMore Information needed" ]
[ 15, 17 ]
[ "passage: TAGS\n#license-cc-by-4.0 #region-us \n# Dataset Card for \"multi-nli-NOB\"\n\nMore Information needed" ]
8a0478aff19a2112ee700553205b8be3a6bb6bfd
# European Union's Directives dataset (PT)
victorcosta/eu_directives
[ "language:pt", "region:us" ]
2023-11-15T20:39:49+00:00
{"language": "pt", "configs": [{"config_name": "ria_ground_truth", "data_files": "ria_ground_truth/train.parquet", "version": "1.0.5"}]}
2023-12-05T19:32:43+00:00
[]
[ "pt" ]
TAGS #language-Portuguese #region-us
# European Union's Directives dataset (PT)
[ "# European Union's Directives dataset (PT)" ]
[ "TAGS\n#language-Portuguese #region-us \n", "# European Union's Directives dataset (PT)" ]
[ 12, 12 ]
[ "passage: TAGS\n#language-Portuguese #region-us \n# European Union's Directives dataset (PT)" ]
c2b2598696c1d4a85d85c8704cc15b980e405185
Production-like system dataset with telemetry data generated using a dummy containerized application and OpenTelemetry, as described by the diagram below. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65180042c5585714bde11915/h_ATMojzhI19i8T15_6MN.png) The dataset is used for training and evaluating application latency and error-rate anomaly detection and semantic log analysis. License: MIT.
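The card does not document the telemetry schema, so the following is only an illustrative sketch of the kind of latency anomaly detection this dataset targets, using a robust modified z-score on a toy series:

```python
import numpy as np

# Toy request-latency series in milliseconds; the real dataset's schema is not documented above.
latencies = np.array([120, 118, 125, 119, 530, 121, 117, 900, 122, 119], dtype=float)

# Median/MAD-based modified z-score: robust to the very spikes we want to flag.
med = np.median(latencies)
mad = np.median(np.abs(latencies - med))
robust_z = 0.6745 * (latencies - med) / mad

anomalies = np.where(np.abs(robust_z) > 3.5)[0]     # 3.5 is the conventional cutoff
print(anomalies, latencies[anomalies])              # -> indices 4 and 7: the 530 ms and 900 ms spikes
```

The median/MAD variant is preferred here over a plain mean/std z-score because large spikes inflate the standard deviation and can mask each other.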
SzymonSt2808/ProductionLikeTelemetryData
[ "region:us" ]
2023-11-15T20:49:14+00:00
{}
2023-11-16T23:31:05+00:00
[]
[]
TAGS #region-us
Production like system dataset with telemetry data generated using dummy contenerized application and opentelemetry described by diagram below. !image/png Dataset is used for training and evaluating applications latency and error rate anomaly detection and semantic log analysis. --- license: mit ---
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
612b6828dbee1ceb66a9c7d110e7b4bb82c03b83
# Dataset Card for "BBH_eval" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
bigheiniuJ/BBH_eval
[ "region:us" ]
2023-11-15T20:53:39+00:00
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "target", "dtype": "string"}, {"name": "task", "dtype": "string"}, {"name": "options", "sequence": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2641563, "num_examples": 4071}], "download_size": 570189, "dataset_size": 2641563}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-15T20:55:24+00:00
[]
[]
TAGS #region-us
# Dataset Card for "BBH_eval" More Information needed
[ "# Dataset Card for \"BBH_eval\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"BBH_eval\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"BBH_eval\"\n\nMore Information needed" ]
02fc5a37fce5541f4f9ad91adb69f225f8363697
# HelpSteer: Helpfulness SteerLM Dataset HelpSteer is an open-source Helpfulness Dataset (CC-BY-4.0) that supports aligning models to become more helpful, factually correct and coherent, while being adjustable in terms of the complexity and verbosity of its responses. Leveraging this dataset and SteerLM, we train a Llama 2 70B to reach **7.54** on MT Bench, the highest among models trained on open-source datasets based on the [MT Bench Leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard) as of 15 Nov 2023. This model is available on HF at [Llama2-70B-SteerLM-Chat](https://huggingface.co/nvidia/Llama2-70B-SteerLM-Chat). Try this model instantly for free, hosted by us at [NVIDIA AI Playground](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/ai-foundation/models/llama2-70b-steerlm). You can use it in the provided UI or through a limited-access API (up to 10,000 requests within 30 days). If you need more requests, we demonstrate how you can set up an inference server at the [Llama2-70B-SteerLM-Chat model page on HF](https://huggingface.co/nvidia/Llama2-70B-SteerLM-Chat). You can also train a model using [NeMo Aligner](https://github.com/NVIDIA/NeMo-Aligner) following the [SteerLM training user guide](https://docs.nvidia.com/nemo-framework/user-guide/latest/modelalignment/steerlm.html). <img src="https://huggingface.co/datasets/nvidia/HelpSteer/resolve/main/mtbench_categories.png" alt="MT Bench Categories" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> HelpSteer Paper: [HelpSteer: Multi-attribute Helpfulness Dataset for SteerLM](http://arxiv.org/abs/2311.09528) SteerLM Paper: [SteerLM: Attribute Conditioned SFT as an (User-Steerable) Alternative to RLHF](https://arxiv.org/abs/2310.05344) ## Dataset Description HelpSteer contains 37,120 samples, each containing a prompt, a response, and five human-annotated attributes of the response, each ranging from 0 to 4, where higher means better for each attribute. These attributes are: 1. **Helpfulness**: Overall helpfulness of the response to the prompt. 2. **Correctness**: Inclusion of all pertinent facts without errors. 3. **Coherence**: Consistency and clarity of expression. 4. **Complexity**: Intellectual depth required to write the response (i.e. whether the response can be written by anyone with basic language competency or requires deep domain expertise). 5. **Verbosity**: Amount of detail included in the response, relative to what is asked for in the prompt. ## Example ```json { "prompt": "What are the three most important things to consider when deciding what technology to use to build an assist device to help an elderly person with basic needs?", "response": "To build an assistive device to help an elderly person with basic needs, one must consider three crucial things: safety, compatibility, and ease of use. Safety is paramount, as the device must not cause harm to the user. Compatibility with the user's environment and other devices is also essential. Finally, the device must be simple enough for the elderly person to operate.", "helpfulness": 3, "correctness": 4, "coherence": 4, "complexity": 2, "verbosity": 2 } ``` ## Using the Huggingface Datasets ```python from datasets import load_dataset ds = load_dataset("nvidia/HelpSteer") train = ds['train'] # len(train) = 35331 (95%) val = ds['validation'] # len(val) = 1789 (5%) ``` ## Source 1. Prompts are collected based on a mixture of template-generated prompts (mainly for prompts involving long reference text) and human-generated prompts from Scale AI.
These prompts relate to the tasks of Rewrite, Summarization, Classification, Extraction, Closed Question Answering, Open Question Answering, Generation and Brainstorming. 2. Responses are generated by an early version of an in-house LLM. We generate up to 4 responses per prompt using sampling techniques to give diverse yet reasonable responses. 3. Annotations of the various attributes were done by Scale AI. Annotators rated each response on a 5-point Likert scale (0 to 4) for each attribute (helpfulness, correctness, coherence, complexity and verbosity). ## Annotation methodology (short) 1. We engaged a select group of contractors via Scale AI. These contractors were provided with comprehensive guidelines that defined each attribute and the criteria for every rating level, together with some annotated examples. These guidelines and examples are detailed in the Appendix of the accompanying paper. 2. The annotation process involved approximately 200 U.S.-based human annotators. Candidates first underwent preliminary assignments, including assessments of English proficiency, to determine eligibility for working on the project. Subsequently, they participated in an introductory training course on the task, which ended with a test that involved annotating 35 sample responses. This process ensured not only a thorough understanding of the task requirements but also the delivery of high-quality annotations. 3. Post-annotation, Scale AI performed extensive quality assurance, with each annotation reaching a minimum of two human reviews in addition to automated checks. After receiving the annotations from Scale AI, we conducted our independent quality assurance to make sure that the quality of the annotations was up to our expectations. As a result, some annotations were filtered out to retain only 37,120 samples. ## Ethical statement Annotators for the dataset were contracted through Scale AI. Scale AI engages the Anker Methodology, GISC Impact Sourcing Standard, and UN Sustainable Development Goals to provide fair and competitive pay. The specific pay is calculated based on many factors, including the specific project, the specialized skillset and expertise required, and regional costs of living, and is then transparently listed on the Scale AI platform. Scale AI also provides multiple channels for questions and support, including 24/7 support teams, community discussion channels with specially trained moderators, and a “speak up” hotline where contractors can report concerns anonymously. Worker concerns can be submitted to and are reviewed by our Remotasks support team, and pay disputes are reviewed by support specialists trained in this area. ## Contact E-Mail: [Zhilin Wang](mailto:[email protected]) ## Citation If you find this dataset useful, please cite the following works ```bibtex @misc{wang2023helpsteer, title={HelpSteer: Multi-attribute Helpfulness Dataset for SteerLM}, author={Zhilin Wang and Yi Dong and Jiaqi Zeng and Virginia Adams and Makesh Narsimhan Sreedhar and Daniel Egert and Olivier Delalleau and Jane Polak Scowcroft and Neel Kant and Aidan Swope and Oleksii Kuchaiev}, year={2023}, eprint={2311.09528}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ```bibtex @misc{dong2023steerlm, title={SteerLM: Attribute Conditioned SFT as an (User-Steerable) Alternative to RLHF}, author={Yi Dong and Zhilin Wang and Makesh Narsimhan Sreedhar and Xianchao Wu and Oleksii Kuchaiev}, year={2023}, eprint={2310.05344}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
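Beyond plain loading, the five attribute columns are what SteerLM conditions on at training time. A small sketch of turning one row into an attribute-labelled training string; the concatenation format below is illustrative, not the exact template NeMo Aligner uses:

```python
from datasets import load_dataset

ds = load_dataset("nvidia/HelpSteer")               # repository id stated on the card
row = ds["train"][0]

attrs = ["helpfulness", "correctness", "coherence", "complexity", "verbosity"]
# Illustrative attribute string, e.g. "helpfulness:3,correctness:4,..."; the exact
# template used by NeMo Aligner's SteerLM recipe may differ.
label = ",".join(f"{a}:{row[a]}" for a in attrs)
sample = f"{row['prompt']}\n<attributes>{label}</attributes>\n{row['response']}"
print(sample[:200])
```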
nvidia/HelpSteer
[ "size_categories:10K<n<100K", "language:en", "license:cc-by-4.0", "human-feedback", "arxiv:2311.09528", "arxiv:2310.05344", "region:us" ]
2023-11-15T20:55:27+00:00
{"language": ["en"], "license": "cc-by-4.0", "size_categories": ["10K<n<100K"], "pretty_name": "Helpfulness SteerLM Dataset", "tags": ["human-feedback"], "dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "helpfulness", "dtype": "int32"}, {"name": "correctness", "dtype": "int32"}, {"name": "coherence", "dtype": "int32"}, {"name": "complexity", "dtype": "int32"}, {"name": "verbosity", "dtype": "int32"}], "splits": [{"name": "train", "num_examples": 35331}, {"name": "validation", "num_examples": 1789}]}}
2024-01-04T03:55:31+00:00
[ "2311.09528", "2310.05344" ]
[ "en" ]
TAGS #size_categories-10K<n<100K #language-English #license-cc-by-4.0 #human-feedback #arxiv-2311.09528 #arxiv-2310.05344 #region-us
# HelpSteer: Helpfulness SteerLM Dataset HelpSteer is an open-source Helpfulness Dataset (CC-BY-4.0) that supports aligning models to become more helpful, factually correct and coherent, while being adjustable in terms of the complexity and verbosity of its responses. Leveraging this dataset and SteerLM, we train a Llama 2 70B to reach 7.54 on MT Bench, the highest among models trained on open-source datasets based on MT Bench Leaderboard as of 15 Nov 2023. This model is available on HF at Llama2-70B-SteerLM-Chat. Try this model instantly for free hosted by us at NVIDIA AI Playground. You can use this in the provided UI or through a limited access API (up to 10, 000 requests within 30 days). If you would need more requests, we demonstrate how you can set up an inference server at Llama2-70B-SteerLM-Chat model page on HF You can also train a model using NeMo Aligner following SteerLM training user guide. <img src="URL alt="MT Bench Categories" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> HelpSteer Paper : HelpSteer: Multi-attribute Helpfulness Dataset for SteerLM SteerLM Paper: SteerLM: Attribute Conditioned SFT as an (User-Steerable) Alternative to RLHF ## Dataset Description HelpSteer contains 37, 120 samples, each containing a prompt, a response as well as five human-annotated attributes of the response, each ranging between 0 and 4 where higher means better for each attribute. These attributes are: 1. Helpfulness: Overall helpfulness of the response to the prompt. 2. Correctness: Inclusion of all pertinent facts without errors. 3. Coherence: Consistency and clarity of expression. 4. Complexity: Intellectual depth required to write response (i.e. whether the response can be written by anyone with basic language competency or requires deep domain expertise). 5. Verbosity: Amount of detail included in the response, relative to what is asked for in the prompt. ## Example ## Using the Huggingface Datasets ## Source 1. Prompts are collected based on a mixture of template-generated (mainly for prompt involving long reference text) and human generated by Scale AI. These prompts relate to the tasks of Rewrite, Summarization, Classification, Extraction, Closed Question Answering, Open Question Answering, Generation and Brainstorming. 2. Responses are generated by an early version of an inhouse LLM. We generate up to 4 responses per prompts using sample techniques to give diverse yet reasonable responses. 3. Annotations of various attributes were done by Scale AI. Annotators rated each response on a Likert 5 scale (between 0 and 4) for each attribute (helpfulness, correctness, coherence, complexity and verbosity). ## Annotation methodology (short) 1. We engaged a select group of contractors via Scale AI. These contractors were provided with comprehensive guidelines that defined each attribute and the criteria for every rating level, together with some annotated examples. These guidelines and examples are detailed in the Appendix of the accompanying paper. 2. The annotation process involved approximately 200 U.S.-based human annotators. Candidates first underwent preliminary assignments, including assessments of English proficiency, to determine eligibility for working on the project. Subsequently, they participated in an introductory training course on the task which ended with a test that involved annotating 35 sample responses. This process ensured not only a thorough understanding of the task requirements but also the delivery of high-quality annotations. 3. 
Post-annotations, Scale AI performed extensive quality assurance, with each annotation reaching a minimum of two human reviews in addition to automated checks. After receiving the annotations from Scale AI, we conducted our independent quality assurance to make sure that the quality of the annotations was up to our expectations. As a result, some annotations were filtered away to retain only 37, 120 samples. ## Ethical statement Annotators for the dataset were contracted through Scale AI. Scale AI engages the Anker Methodology, GISC Impact Sourcing Standard, and UN Sustainable Development Goals to provide a fair and competitive pay. The specific pay is calculated based on many factors, including the specific project, the specialized skillset and expertise required, regional costs of living and then transparently listed on Scale AI platform. Scale AI also provides multiple channels for questions and support, including 24/7 support teams, community discussion channels with specially trained moderators, and a “speak up” hotline where contractors can report concerns anonymously. Worker concerns can be submitted to and are reviewed by our Remotasks support team, and pay disputes are reviewed by support specialists trained in this area. ## Contact E-Mail: Zhilin Wang If you find this dataset useful, please cite the following works
[ "# HelpSteer: Helpfulness SteerLM Dataset\n\n\nHelpSteer is an open-source Helpfulness Dataset (CC-BY-4.0) that supports aligning models to become more helpful, factually correct and coherent, while being adjustable in terms of the complexity and verbosity of its responses.\n\nLeveraging this dataset and SteerLM, we train a Llama 2 70B to reach 7.54 on MT Bench, the highest among models trained on open-source datasets based on MT Bench Leaderboard as of 15 Nov 2023.\nThis model is available on HF at Llama2-70B-SteerLM-Chat.\n\nTry this model instantly for free hosted by us at NVIDIA AI Playground. You can use this in the provided UI or through a limited access API (up to 10, 000 requests within 30 days). \nIf you would need more requests, we demonstrate how you can set up an inference server at Llama2-70B-SteerLM-Chat model page on HF\n\nYou can also train a model using NeMo Aligner following SteerLM training user guide.\n\n<img src=\"URL alt=\"MT Bench Categories\" width=\"800\" style=\"margin-left:'auto' margin-right:'auto' display:'block'\"/>\n\n\nHelpSteer Paper : HelpSteer: Multi-attribute Helpfulness Dataset for SteerLM\n\nSteerLM Paper: SteerLM: Attribute Conditioned SFT as an (User-Steerable) Alternative to RLHF", "## Dataset Description\n\nHelpSteer contains 37, 120 samples, each containing a prompt, a response as well as five human-annotated attributes of the response, each ranging between 0 and 4 where higher means better for each attribute.\n\nThese attributes are:\n\n1. Helpfulness: Overall helpfulness of the response to the prompt.\n2. Correctness: Inclusion of all pertinent facts without errors. \n3. Coherence: Consistency and clarity of expression. \n4. Complexity: Intellectual depth required to write response (i.e. whether the response can be written by anyone with basic language competency or requires deep domain expertise).\n5. Verbosity: Amount of detail included in the response, relative to what is asked for in the prompt.", "## Example", "## Using the Huggingface Datasets", "## Source\n\n1. Prompts are collected based on a mixture of template-generated (mainly for prompt involving long reference text) and human generated by Scale AI. These prompts relate to the tasks of Rewrite, Summarization, Classification, Extraction, Closed Question Answering, Open Question Answering, Generation and Brainstorming.\n2. Responses are generated by an early version of an inhouse LLM. We generate up to 4 responses per prompts using sample techniques to give diverse yet reasonable responses.\n3. Annotations of various attributes were done by Scale AI. Annotators rated each response on a Likert 5 scale (between 0 and 4) for each attribute (helpfulness, correctness, coherence, complexity and verbosity).", "## Annotation methodology (short)\t\n\n1. We engaged a select group of contractors via Scale AI. These contractors were provided with comprehensive guidelines that defined each attribute and the criteria for every rating level, together with some annotated examples. These guidelines and examples are detailed in the Appendix of the accompanying paper.\n2. The annotation process involved approximately 200 U.S.-based human annotators. Candidates first underwent preliminary assignments, including assessments of English proficiency, to determine eligibility for working on the project. Subsequently, they participated in an introductory training course on the task which ended with a test that involved annotating 35 sample responses. 
This process ensured not only a thorough understanding of the task requirements but also the delivery of high-quality annotations.\n3. Post-annotations, Scale AI performed extensive quality assurance, with each annotation reaching a minimum of two human reviews in addition to automated checks. After receiving the annotations from Scale AI, we conducted our independent quality assurance to make sure that the quality of the annotations was up to our expectations. As a result, some annotations were filtered away to retain only 37, 120 samples.", "## Ethical statement\t\nAnnotators for the dataset were contracted through Scale AI. Scale AI engages the Anker Methodology, GISC Impact Sourcing Standard, and UN Sustainable Development Goals to provide a fair and competitive pay. The specific pay is calculated based on many factors, including the specific project, the specialized skillset and expertise required, regional costs of living and then transparently listed on Scale AI platform. Scale AI also provides multiple channels for questions and support, including 24/7 support teams, community discussion channels with specially trained moderators, and a “speak up” hotline where contractors can report concerns anonymously. Worker concerns can be submitted to and are reviewed by our Remotasks support team, and pay disputes are reviewed by support specialists trained in this area.", "## Contact\n\nE-Mail: Zhilin Wang\n\nIf you find this dataset useful, please cite the following works" ]
[ "TAGS\n#size_categories-10K<n<100K #language-English #license-cc-by-4.0 #human-feedback #arxiv-2311.09528 #arxiv-2310.05344 #region-us \n", "# HelpSteer: Helpfulness SteerLM Dataset\n\n\nHelpSteer is an open-source Helpfulness Dataset (CC-BY-4.0) that supports aligning models to become more helpful, factually correct and coherent, while being adjustable in terms of the complexity and verbosity of its responses.\n\nLeveraging this dataset and SteerLM, we train a Llama 2 70B to reach 7.54 on MT Bench, the highest among models trained on open-source datasets based on MT Bench Leaderboard as of 15 Nov 2023.\nThis model is available on HF at Llama2-70B-SteerLM-Chat.\n\nTry this model instantly for free hosted by us at NVIDIA AI Playground. You can use this in the provided UI or through a limited access API (up to 10, 000 requests within 30 days). \nIf you would need more requests, we demonstrate how you can set up an inference server at Llama2-70B-SteerLM-Chat model page on HF\n\nYou can also train a model using NeMo Aligner following SteerLM training user guide.\n\n<img src=\"URL alt=\"MT Bench Categories\" width=\"800\" style=\"margin-left:'auto' margin-right:'auto' display:'block'\"/>\n\n\nHelpSteer Paper : HelpSteer: Multi-attribute Helpfulness Dataset for SteerLM\n\nSteerLM Paper: SteerLM: Attribute Conditioned SFT as an (User-Steerable) Alternative to RLHF", "## Dataset Description\n\nHelpSteer contains 37, 120 samples, each containing a prompt, a response as well as five human-annotated attributes of the response, each ranging between 0 and 4 where higher means better for each attribute.\n\nThese attributes are:\n\n1. Helpfulness: Overall helpfulness of the response to the prompt.\n2. Correctness: Inclusion of all pertinent facts without errors. \n3. Coherence: Consistency and clarity of expression. \n4. Complexity: Intellectual depth required to write response (i.e. whether the response can be written by anyone with basic language competency or requires deep domain expertise).\n5. Verbosity: Amount of detail included in the response, relative to what is asked for in the prompt.", "## Example", "## Using the Huggingface Datasets", "## Source\n\n1. Prompts are collected based on a mixture of template-generated (mainly for prompt involving long reference text) and human generated by Scale AI. These prompts relate to the tasks of Rewrite, Summarization, Classification, Extraction, Closed Question Answering, Open Question Answering, Generation and Brainstorming.\n2. Responses are generated by an early version of an inhouse LLM. We generate up to 4 responses per prompts using sample techniques to give diverse yet reasonable responses.\n3. Annotations of various attributes were done by Scale AI. Annotators rated each response on a Likert 5 scale (between 0 and 4) for each attribute (helpfulness, correctness, coherence, complexity and verbosity).", "## Annotation methodology (short)\t\n\n1. We engaged a select group of contractors via Scale AI. These contractors were provided with comprehensive guidelines that defined each attribute and the criteria for every rating level, together with some annotated examples. These guidelines and examples are detailed in the Appendix of the accompanying paper.\n2. The annotation process involved approximately 200 U.S.-based human annotators. Candidates first underwent preliminary assignments, including assessments of English proficiency, to determine eligibility for working on the project. 
Subsequently, they participated in an introductory training course on the task which ended with a test that involved annotating 35 sample responses. This process ensured not only a thorough understanding of the task requirements but also the delivery of high-quality annotations.\n3. Post-annotations, Scale AI performed extensive quality assurance, with each annotation reaching a minimum of two human reviews in addition to automated checks. After receiving the annotations from Scale AI, we conducted our independent quality assurance to make sure that the quality of the annotations was up to our expectations. As a result, some annotations were filtered away to retain only 37, 120 samples.", "## Ethical statement\t\nAnnotators for the dataset were contracted through Scale AI. Scale AI engages the Anker Methodology, GISC Impact Sourcing Standard, and UN Sustainable Development Goals to provide a fair and competitive pay. The specific pay is calculated based on many factors, including the specific project, the specialized skillset and expertise required, regional costs of living and then transparently listed on Scale AI platform. Scale AI also provides multiple channels for questions and support, including 24/7 support teams, community discussion channels with specially trained moderators, and a “speak up” hotline where contractors can report concerns anonymously. Worker concerns can be submitted to and are reviewed by our Remotasks support team, and pay disputes are reviewed by support specialists trained in this area.", "## Contact\n\nE-Mail: Zhilin Wang\n\nIf you find this dataset useful, please cite the following works" ]
[ 54, 339, 166, 3, 10, 178, 278, 182, 23 ]
[ "passage: TAGS\n#size_categories-10K<n<100K #language-English #license-cc-by-4.0 #human-feedback #arxiv-2311.09528 #arxiv-2310.05344 #region-us \n# HelpSteer: Helpfulness SteerLM Dataset\n\n\nHelpSteer is an open-source Helpfulness Dataset (CC-BY-4.0) that supports aligning models to become more helpful, factually correct and coherent, while being adjustable in terms of the complexity and verbosity of its responses.\n\nLeveraging this dataset and SteerLM, we train a Llama 2 70B to reach 7.54 on MT Bench, the highest among models trained on open-source datasets based on MT Bench Leaderboard as of 15 Nov 2023.\nThis model is available on HF at Llama2-70B-SteerLM-Chat.\n\nTry this model instantly for free hosted by us at NVIDIA AI Playground. You can use this in the provided UI or through a limited access API (up to 10, 000 requests within 30 days). \nIf you would need more requests, we demonstrate how you can set up an inference server at Llama2-70B-SteerLM-Chat model page on HF\n\nYou can also train a model using NeMo Aligner following SteerLM training user guide.\n\n<img src=\"URL alt=\"MT Bench Categories\" width=\"800\" style=\"margin-left:'auto' margin-right:'auto' display:'block'\"/>\n\n\nHelpSteer Paper : HelpSteer: Multi-attribute Helpfulness Dataset for SteerLM\n\nSteerLM Paper: SteerLM: Attribute Conditioned SFT as an (User-Steerable) Alternative to RLHF", "passage: ## Dataset Description\n\nHelpSteer contains 37, 120 samples, each containing a prompt, a response as well as five human-annotated attributes of the response, each ranging between 0 and 4 where higher means better for each attribute.\n\nThese attributes are:\n\n1. Helpfulness: Overall helpfulness of the response to the prompt.\n2. Correctness: Inclusion of all pertinent facts without errors. \n3. Coherence: Consistency and clarity of expression. \n4. Complexity: Intellectual depth required to write response (i.e. whether the response can be written by anyone with basic language competency or requires deep domain expertise).\n5. Verbosity: Amount of detail included in the response, relative to what is asked for in the prompt.## Example## Using the Huggingface Datasets## Source\n\n1. Prompts are collected based on a mixture of template-generated (mainly for prompt involving long reference text) and human generated by Scale AI. These prompts relate to the tasks of Rewrite, Summarization, Classification, Extraction, Closed Question Answering, Open Question Answering, Generation and Brainstorming.\n2. Responses are generated by an early version of an inhouse LLM. We generate up to 4 responses per prompts using sample techniques to give diverse yet reasonable responses.\n3. Annotations of various attributes were done by Scale AI. Annotators rated each response on a Likert 5 scale (between 0 and 4) for each attribute (helpfulness, correctness, coherence, complexity and verbosity).## Annotation methodology (short)\t\n\n1. We engaged a select group of contractors via Scale AI. These contractors were provided with comprehensive guidelines that defined each attribute and the criteria for every rating level, together with some annotated examples. These guidelines and examples are detailed in the Appendix of the accompanying paper.\n2. The annotation process involved approximately 200 U.S.-based human annotators. Candidates first underwent preliminary assignments, including assessments of English proficiency, to determine eligibility for working on the project. 
Subsequently, they participated in an introductory training course on the task which ended with a test that involved annotating 35 sample responses. This process ensured not only a thorough understanding of the task requirements but also the delivery of high-quality annotations.\n3. Post-annotations, Scale AI performed extensive quality assurance, with each annotation reaching a minimum of two human reviews in addition to automated checks. After receiving the annotations from Scale AI, we conducted our independent quality assurance to make sure that the quality of the annotations was up to our expectations. As a result, some annotations were filtered away to retain only 37, 120 samples." ]
dceff558136dd62c66eda8f1caa455b3bc1fae81
# Dataset Card for "java_ds2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
shyamsubbu/java_ds2
[ "region:us" ]
2023-11-15T21:10:36+00:00
{"dataset_info": {"features": [{"name": "index", "dtype": "int64"}, {"name": "repo_id", "dtype": "string"}, {"name": "file_path", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 162634772, "num_examples": 30000}], "download_size": 0, "dataset_size": 162634772}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-16T00:18:15+00:00
[]
[]
TAGS #region-us
# Dataset Card for "java_ds2" More Information needed
[ "# Dataset Card for \"java_ds2\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"java_ds2\"\n\nMore Information needed" ]
[ 6, 14 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"java_ds2\"\n\nMore Information needed" ]
99a7f292c37474cfec4543d9496b546f284cbac8
# Dataset Card for "ha-en_RL-grow2_I2_train" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
pranjali97/ha-en_RL-grow2_I2_train
[ "region:us" ]
2023-11-15T21:37:07+00:00
{"dataset_info": {"features": [{"name": "src", "dtype": "string"}, {"name": "ref", "dtype": "string"}, {"name": "mt", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 12523140, "num_examples": 29454}], "download_size": 3280720, "dataset_size": 12523140}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-15T21:37:10+00:00
[]
[]
TAGS #region-us
# Dataset Card for "ha-en_RL-grow2_I2_train" More Information needed
[ "# Dataset Card for \"ha-en_RL-grow2_I2_train\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"ha-en_RL-grow2_I2_train\"\n\nMore Information needed" ]
[ 6, 25 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"ha-en_RL-grow2_I2_train\"\n\nMore Information needed" ]
e4f47f9798fbeb70895b0a216887dc9664d7e621
# Dataset Card for "SemCor – sense-tagged English corpus" ## Description This dataset is derived from the [wsd_semcor dataset](https://huggingface.co/datasets/spdenisov/wsd_semcor), originally hosted on Hugging Face. It has been preprocessed for tasks related to Word Sense Disambiguation (WSD) and WordNet integration. ## Preprocessing The original text data underwent the following preprocessing steps: - Text splitting into individual words (lemmas). - TF-IDF (Term Frequency-Inverse Document Frequency) analysis to understand the importance of words within the documents. ## Structure The dataset contains: - Lemmas: Words obtained from splitting the text data. - TF-IDF values: Quantitative measures of word importance within the documents. ## Note The number of elements in **Lemma** and **Lemma_tfidf** might not match. This is because **Lemma** is based on original dataset and might contain compound words, which might not be recognized by TF-IDF algorithm. ## Intended Use This dataset is intended for use in WSD and WordNet integration tasks. It provides foundational data for natural language processing (NLP) research and applications, specifically focusing on understanding word meanings and contextual usage. ## Citation Data sourced from [wsd_semcor dataset](https://huggingface.co/datasets/spdenisov/wsd_semcor) on Hugging Face. [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
MarkChen1214/SemCor
[ "task_categories:text-classification", "size_categories:1K<n<10K", "language:en", "license:mit", "region:us" ]
2023-11-15T21:42:43+00:00
{"language": ["en"], "license": "mit", "size_categories": ["1K<n<10K"], "task_categories": ["text-classification"], "dataset_info": {"features": [{"name": "ID", "sequence": "int64"}, {"name": "Word", "sequence": "string"}, {"name": "Lemma", "sequence": "string"}, {"name": "POS", "sequence": "string"}, {"name": "Definition", "sequence": "string"}, {"name": "Lemma_sentence", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "Lemma_tfidf", "sequence": "string"}, {"name": "Lemma_tfidf_value", "sequence": "float64"}], "splits": [{"name": "train", "num_bytes": 24209901, "num_examples": 20138}], "download_size": 8568417, "dataset_size": 24209901}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-15T22:10:42+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #size_categories-1K<n<10K #language-English #license-mit #region-us
# Dataset Card for "SemCor – sense-tagged English corpus" ## Description This dataset is derived from the wsd_semcor dataset, originally hosted on Hugging Face. It has been preprocessed for tasks related to Word Sense Disambiguation (WSD) and WordNet integration. ## Preprocessing The original text data underwent the following preprocessing steps: - Text splitting into individual words (lemmas). - TF-IDF (Term Frequency-Inverse Document Frequency) analysis to understand the importance of words within the documents. ## Structure The dataset contains: - Lemmas: Words obtained from splitting the text data. - TF-IDF values: Quantitative measures of word importance within the documents. ## Note The number of elements in Lemma and Lemma_tfidf might not match. This is because Lemma is based on original dataset and might contain compound words, which might not be recognized by TF-IDF algorithm. ## Intended Use This dataset is intended for use in WSD and WordNet integration tasks. It provides foundational data for natural language processing (NLP) research and applications, specifically focusing on understanding word meanings and contextual usage. Data sourced from wsd_semcor dataset on Hugging Face. More Information needed
[ "# Dataset Card for \"SemCor – sense-tagged English corpus\"", "## Description\nThis dataset is derived from the wsd_semcor dataset, originally hosted on Hugging Face. It has been preprocessed for tasks related to Word Sense Disambiguation (WSD) and WordNet integration.", "## Preprocessing\nThe original text data underwent the following preprocessing steps:\n- Text splitting into individual words (lemmas).\n- TF-IDF (Term Frequency-Inverse Document Frequency) analysis to understand the importance of words within the documents.", "## Structure\nThe dataset contains:\n- Lemmas: Words obtained from splitting the text data.\n- TF-IDF values: Quantitative measures of word importance within the documents.", "## Note\nThe number of elements in Lemma and Lemma_tfidf might not match. This is because Lemma is based on original dataset and might contain compound words, which might not be recognized by TF-IDF algorithm.", "## Intended Use\nThis dataset is intended for use in WSD and WordNet integration tasks. It provides foundational data for natural language processing (NLP) research and applications, specifically focusing on understanding word meanings and contextual usage.\n\nData sourced from wsd_semcor dataset on Hugging Face.\n\nMore Information needed" ]
[ "TAGS\n#task_categories-text-classification #size_categories-1K<n<10K #language-English #license-mit #region-us \n", "# Dataset Card for \"SemCor – sense-tagged English corpus\"", "## Description\nThis dataset is derived from the wsd_semcor dataset, originally hosted on Hugging Face. It has been preprocessed for tasks related to Word Sense Disambiguation (WSD) and WordNet integration.", "## Preprocessing\nThe original text data underwent the following preprocessing steps:\n- Text splitting into individual words (lemmas).\n- TF-IDF (Term Frequency-Inverse Document Frequency) analysis to understand the importance of words within the documents.", "## Structure\nThe dataset contains:\n- Lemmas: Words obtained from splitting the text data.\n- TF-IDF values: Quantitative measures of word importance within the documents.", "## Note\nThe number of elements in Lemma and Lemma_tfidf might not match. This is because Lemma is based on original dataset and might contain compound words, which might not be recognized by TF-IDF algorithm.", "## Intended Use\nThis dataset is intended for use in WSD and WordNet integration tasks. It provides foundational data for natural language processing (NLP) research and applications, specifically focusing on understanding word meanings and contextual usage.\n\nData sourced from wsd_semcor dataset on Hugging Face.\n\nMore Information needed" ]
[ 38, 17, 54, 59, 45, 52, 73 ]
[ "passage: TAGS\n#task_categories-text-classification #size_categories-1K<n<10K #language-English #license-mit #region-us \n# Dataset Card for \"SemCor – sense-tagged English corpus\"## Description\nThis dataset is derived from the wsd_semcor dataset, originally hosted on Hugging Face. It has been preprocessed for tasks related to Word Sense Disambiguation (WSD) and WordNet integration.## Preprocessing\nThe original text data underwent the following preprocessing steps:\n- Text splitting into individual words (lemmas).\n- TF-IDF (Term Frequency-Inverse Document Frequency) analysis to understand the importance of words within the documents.## Structure\nThe dataset contains:\n- Lemmas: Words obtained from splitting the text data.\n- TF-IDF values: Quantitative measures of word importance within the documents.## Note\nThe number of elements in Lemma and Lemma_tfidf might not match. This is because Lemma is based on original dataset and might contain compound words, which might not be recognized by TF-IDF algorithm.## Intended Use\nThis dataset is intended for use in WSD and WordNet integration tasks. It provides foundational data for natural language processing (NLP) research and applications, specifically focusing on understanding word meanings and contextual usage.\n\nData sourced from wsd_semcor dataset on Hugging Face.\n\nMore Information needed" ]
65dc8aca3a19025536c672bae7eef38efd2f7887
# Dataset Card for "slim_pajama_chunk_4" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
rokset3/slim_pajama_chunk_4
[ "region:us" ]
2023-11-15T21:47:03+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "meta", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 259290406066, "num_examples": 59040311}], "download_size": 150847645805, "dataset_size": 259290406066}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-15T23:16:25+00:00
[]
[]
TAGS #region-us
# Dataset Card for "slim_pajama_chunk_4" More Information needed
[ "# Dataset Card for \"slim_pajama_chunk_4\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"slim_pajama_chunk_4\"\n\nMore Information needed" ]
[ 6, 19 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"slim_pajama_chunk_4\"\n\nMore Information needed" ]
97eeda124f1f90fda8cf868d6fe6a8fb6da92b89
# Dataset Card for "IDTreeS" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
danielz01/IDTreeS
[ "region:us" ]
2023-11-15T22:27:50+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "objects", "struct": [{"name": "bbox", "sequence": {"sequence": {"sequence": "int64"}}}, {"name": "categories", "sequence": "int64"}]}, {"name": "taxonomy_id", "sequence": "string"}, {"name": "scientific_name", "sequence": "string"}, {"name": "boxes", "sequence": {"sequence": "int64"}}], "splits": [{"name": "train", "num_bytes": 8101528.0, "num_examples": 82}], "download_size": 7999668, "dataset_size": 8101528.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-15T22:31:43+00:00
[]
[]
TAGS #region-us
# Dataset Card for "IDTreeS" More Information needed
[ "# Dataset Card for \"IDTreeS\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"IDTreeS\"\n\nMore Information needed" ]
[ 6, 14 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"IDTreeS\"\n\nMore Information needed" ]
e85b440dc695e2f5f39e59d15dac739ae4be7194
# Dataset Card for "imdb-reviews-pt-br" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
celsowm/imdb-reviews-pt-br
[ "region:us" ]
2023-11-15T22:38:09+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "texto", "dtype": "string"}, {"name": "sentimento", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 65805332, "num_examples": 49459}], "download_size": 41015476, "dataset_size": 65805332}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-15T22:39:11+00:00
[]
[]
TAGS #region-us
# Dataset Card for "imdb-reviews-pt-br" More Information needed
[ "# Dataset Card for \"imdb-reviews-pt-br\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"imdb-reviews-pt-br\"\n\nMore Information needed" ]
[ 6, 19 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"imdb-reviews-pt-br\"\n\nMore Information needed" ]
e4195a3b8777f9afe2953a4edd731f168e11a1d6
# Dataset Card for "neon-trees" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
danielz01/neon-trees
[ "region:us" ]
2023-11-15T22:59:29+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "path", "dtype": "string"}, {"name": "objects", "struct": [{"name": "bbox", "sequence": {"sequence": "float64"}}, {"name": "categories", "sequence": "string"}]}, {"name": "count", "dtype": "int64"}, {"name": "height", "dtype": "int64"}, {"name": "width", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 659642403.0, "num_examples": 20}, {"name": "evaluation", "num_bytes": 108197378.0, "num_examples": 194}], "download_size": 766366868, "dataset_size": 767839781.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "evaluation", "path": "data/evaluation-*"}]}]}
2023-11-15T23:00:33+00:00
[]
[]
TAGS #region-us
# Dataset Card for "neon-trees" More Information needed
[ "# Dataset Card for \"neon-trees\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"neon-trees\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"neon-trees\"\n\nMore Information needed" ]
e63747aafc5de8fe6716e74dfa98cb8d223d2fcd
# Dataset Card for "arxiv-qa-astro-ph" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
universeTBD/arxiv-qa-astro-ph
[ "region:us" ]
2023-11-15T23:22:48+00:00
{"dataset_info": {"features": [{"name": "index", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4108026, "num_examples": 10356}], "download_size": 2402562, "dataset_size": 4108026}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-15T23:22:53+00:00
[]
[]
TAGS #region-us
# Dataset Card for "arxiv-qa-astro-ph" More Information needed
[ "# Dataset Card for \"arxiv-qa-astro-ph\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"arxiv-qa-astro-ph\"\n\nMore Information needed" ]
[ 6, 19 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"arxiv-qa-astro-ph\"\n\nMore Information needed" ]
c627f820e5436ef8dbee102d22142161afde6e63
# Dataset Card for "truthfulqa_randomized" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
brettbbb/truthfulqa_randomized
[ "region:us" ]
2023-11-15T23:22:56+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 1028813.0, "num_examples": 653}, {"name": "test", "num_bytes": 235300.0, "num_examples": 164}], "download_size": 252895, "dataset_size": 1264113.0}}
2023-11-15T23:23:46+00:00
[]
[]
TAGS #region-us
# Dataset Card for "truthfulqa_randomized" More Information needed
[ "# Dataset Card for \"truthfulqa_randomized\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"truthfulqa_randomized\"\n\nMore Information needed" ]
[ 6, 18 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"truthfulqa_randomized\"\n\nMore Information needed" ]
1835026b8d38cdf918edd15b470c9cc92d5a91db
# Dataset Card for "VMware-open-instruct-higgsfield" Just a port of the [VMware/open-instruct](https://huggingface.co/datasets/VMware/open-instruct) dataset to [Higgsfield's prompt-completion format](https://github.com/higgsfield-ai/higgsfield/blob/main/tutorials/prompt_completion.ipynb).
monology/VMware-open-instruct-higgsfield
[ "region:us" ]
2023-11-15T23:28:41+00:00
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "completion", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 91035395, "num_examples": 142622}], "download_size": 42482187, "dataset_size": 91035395}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-20T19:39:05+00:00
[]
[]
TAGS #region-us
# Dataset Card for "VMware-open-instruct-higgsfield" Just a port of the VMware/open-instruct dataset to Higgsfield's prompt-completion format.
[ "# Dataset Card for \"VMware-open-instruct-higgsfield\"\n\nJust a port of the VMware/open-instruct dataset to Higgsfield's prompt-completion format." ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"VMware-open-instruct-higgsfield\"\n\nJust a port of the VMware/open-instruct dataset to Higgsfield's prompt-completion format." ]
[ 6, 47 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"VMware-open-instruct-higgsfield\"\n\nJust a port of the VMware/open-instruct dataset to Higgsfield's prompt-completion format." ]
66976707105fd1dd0680853534a8898df7797114
# Dataset Card for "mrpc_llama_2_v4" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
VitaliiVrublevskyi/mrpc_llama_2_v4
[ "region:us" ]
2023-11-15T23:30:16+00:00
{"dataset_info": {"features": [{"name": "label", "dtype": "int64"}, {"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "category", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1841768, "num_examples": 7336}, {"name": "validation", "num_bytes": 109143, "num_examples": 408}, {"name": "test", "num_bytes": 456210, "num_examples": 1725}], "download_size": 1198316, "dataset_size": 2407121}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}]}
2023-11-15T23:30:41+00:00
[]
[]
TAGS #region-us
# Dataset Card for "mrpc_llama_2_v4" More Information needed
[ "# Dataset Card for \"mrpc_llama_2_v4\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"mrpc_llama_2_v4\"\n\nMore Information needed" ]
[ 6, 20 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"mrpc_llama_2_v4\"\n\nMore Information needed" ]
1bf0d894ad9177b980b3b409c34de40b405f7792
# InstruSum This is the dataset corresponding to our paper ["Benchmarking Generation and Evaluation Capabilities of Large Language Models for Instruction Controllable Summarization"](https://arxiv.org/abs/2311.09184). ### dataset The `dataset` subset contains 100 human-written data examples by us. Each example contains an article, a summary instruction, a LLM-generated summary, and a hybrid LLM-human summary. ### human_eval This subset contains human evaluation results for the 100 examples in the `dataset` subset. There are 5 systems evaluated: OpenAI's `text-davinci-002`, `text-davinci-003`, `gpt-3.5-turbo-0301`, `gpt-4-0314`, along with the `hybrid` LLM-human summary. We evaluated 4 evaluation aspects: - **Overall Quality**: This rating assesses the overall quality of the summary in relation to the summary requirement. - **Missing Information**: Does the summary omit any crucial information from the article concerning the summary requirement? - **Irrelevant Information**: Does the summary include any information that is not relevant to the summary requirement? - **Factual Consistency**: Is the summary consistent with the facts presented in the article, without contradicting or misrepresenting any information? ### llm_eval This subset contains LLM-based automatic evaluation results for the 100 examples in the `dataset` subset. We used 11 LLMs in our evaluation and 4 evaluation protocols: - `LLMRank`: listwise ranking - `LLMCompare`: pairwise comparison - `LLMEval`: pointwise scoring by text completion - `LLMScore`: pointwise scoring by model-predicted log-likelihood In total, we evaluated 40 LLM-based evaluation methods over three quality aspects: | LLM | LLMRank | LLMCompare | LLMEval | LLMScore | |--------------------------|---------|------------|---------|----------| | `text-davinci-002` | ✅ | ✅ | ✅ | ✅ | | `text-davinci-003` | ✅ | ✅ | ✅ | ✅ | | `gpt-3.5-turbo-0301` | ✅ | ✅ | ✅ | ❌ | | `gpt-3.5-turbo-0613` | ✅ | ✅ | ✅ | ❌ | | `gpt-3.5-turbo-instruct` | ✅ | ✅ | ✅ | ✅ | | `gpt-4-0314` | ✅ | ✅ | ✅ | ❌ | | `gpt-4-1106-preview` | ✅ | ✅ | ✅ | ❌ | | `llama-2-7b-chat` | ✅ | ✅ | ✅ | ✅ | | `llama-2-13b-chat` | ✅ | ✅ | ✅ | ✅ | | `llama-2-70b-chat` | ✅ | ✅ | ✅ | ✅ | | `mistral-instruct` | ✅ | ✅ | ✅ | ✅ | ### system_outputs This subset contains the system outputs for the 100 examples in the `dataset` subset over 11 LLMs (same as the `llm_eval` subset).
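All four subsets live in this repository as separate configurations, each with a single `data` split (as declared in the card's config metadata). A minimal loading sketch:

```python
from datasets import load_dataset

# Each subset is a named config with a single "data" split,
# per the configs declared in this dataset card's metadata.
dataset = load_dataset("Salesforce/InstruSum", "dataset", split="data")
human_eval = load_dataset("Salesforce/InstruSum", "human_eval", split="data")
llm_eval = load_dataset("Salesforce/InstruSum", "llm_eval", split="data")
system_outputs = load_dataset("Salesforce/InstruSum", "system_outputs", split="data")

# One human-written example: article, instruction, and the two summaries.
print(sorted(dataset[0].keys()))
```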
Salesforce/InstruSum
[ "license:bsd-3-clause", "arxiv:2311.09184", "region:us" ]
2023-11-15T23:51:38+00:00
{"license": "bsd-3-clause", "configs": [{"config_name": "dataset", "data_files": [{"split": "data", "path": "dataset.json"}]}, {"config_name": "human_eval", "data_files": [{"split": "data", "path": "human_eval.json"}]}, {"config_name": "llm_eval", "data_files": [{"split": "data", "path": "llm_eval.json"}]}, {"config_name": "system_outputs", "data_files": [{"split": "data", "path": "system_outputs.json"}]}]}
2023-11-16T02:55:57+00:00
[ "2311.09184" ]
[]
TAGS #license-bsd-3-clause #arxiv-2311.09184 #region-us
InstruSum ========= This is the dataset corresponding to our paper "Benchmarking Generation and Evaluation Capabilities of Large Language Models for Instruction Controllable Summarization". ### dataset The 'dataset' subset contains 100 human-written data examples by us. Each example contains an article, a summary instruction, a LLM-generated summary, and a hybrid LLM-human summary. ### human\_eval This subset contains human evaluation results for the 100 examples in the 'dataset' subset. There are 5 systems evaluated: OpenAI's 'text-davinci-002', 'text-davinci-003', 'gpt-3.5-turbo-0301', 'gpt-4-0314', along with the 'hybrid' LLM-human summary. We evaluated 4 evaluation aspects: * Overall Quality: This rating assesses the overall quality of the summary in relation to the summary requirement. * Missing Information: Does the summary omit any crucial information from the article concerning the summary requirement? * Irrelevant Information: Does the summary include any information that is not relevant to the summary requirement? * Factual Consistency: Is the summary consistent with the facts presented in the article, without contradicting or misrepresenting any information? ### llm\_eval This subset contains LLM-based automatic evaluation results for the 100 examples in the 'dataset' subset. We used 11 LLMs in our evaluation and 4 evaluation protocols: * 'LLMRank': listwise ranking * 'LLMCompare': pairwise comparison * 'LLMEval': pointwise scoring by text completion * 'LLMScore': pointwise scoring by model-predicted log-likelihood In total, we evaluated 40 LLM-based evaluation methods over three quality aspects: ### system\_outputs This subset contains the system outputs for the 100 examples in the 'dataset' subset over 11 LLMs (same as the 'llm\_eval' subset).
[ "### dataset\n\n\nThe 'dataset' subset contains 100 human-written data examples by us.\nEach example contains an article, a summary instruction, a LLM-generated summary, and a hybrid LLM-human summary.", "### human\\_eval\n\n\nThis subset contains human evaluation results for the 100 examples in the 'dataset' subset.\nThere are 5 systems evaluated: OpenAI's 'text-davinci-002', 'text-davinci-003', 'gpt-3.5-turbo-0301', 'gpt-4-0314', along with the 'hybrid' LLM-human summary.\nWe evaluated 4 evaluation aspects:\n\n\n* Overall Quality: This rating assesses the overall quality of the summary in relation to the summary requirement.\n* Missing Information: Does the summary omit any crucial information from the article concerning the summary requirement?\n* Irrelevant Information: Does the summary include any information that is not relevant to the summary requirement?\n* Factual Consistency: Is the summary consistent with the facts presented in the article, without contradicting or misrepresenting any information?", "### llm\\_eval\n\n\nThis subset contains LLM-based automatic evaluation results for the 100 examples in the 'dataset' subset.\n\n\nWe used 11 LLMs in our evaluation and 4 evaluation protocols:\n\n\n* 'LLMRank': listwise ranking\n* 'LLMCompare': pairwise comparison\n* 'LLMEval': pointwise scoring by text completion\n* 'LLMScore': pointwise scoring by model-predicted log-likelihood\n\n\nIn total, we evaluated 40 LLM-based evaluation methods over three quality aspects:", "### system\\_outputs\n\n\nThis subset contains the system outputs for the 100 examples in the 'dataset' subset over 11 LLMs (same as the 'llm\\_eval' subset)." ]
[ "TAGS\n#license-bsd-3-clause #arxiv-2311.09184 #region-us \n", "### dataset\n\n\nThe 'dataset' subset contains 100 human-written data examples by us.\nEach example contains an article, a summary instruction, a LLM-generated summary, and a hybrid LLM-human summary.", "### human\\_eval\n\n\nThis subset contains human evaluation results for the 100 examples in the 'dataset' subset.\nThere are 5 systems evaluated: OpenAI's 'text-davinci-002', 'text-davinci-003', 'gpt-3.5-turbo-0301', 'gpt-4-0314', along with the 'hybrid' LLM-human summary.\nWe evaluated 4 evaluation aspects:\n\n\n* Overall Quality: This rating assesses the overall quality of the summary in relation to the summary requirement.\n* Missing Information: Does the summary omit any crucial information from the article concerning the summary requirement?\n* Irrelevant Information: Does the summary include any information that is not relevant to the summary requirement?\n* Factual Consistency: Is the summary consistent with the facts presented in the article, without contradicting or misrepresenting any information?", "### llm\\_eval\n\n\nThis subset contains LLM-based automatic evaluation results for the 100 examples in the 'dataset' subset.\n\n\nWe used 11 LLMs in our evaluation and 4 evaluation protocols:\n\n\n* 'LLMRank': listwise ranking\n* 'LLMCompare': pairwise comparison\n* 'LLMEval': pointwise scoring by text completion\n* 'LLMScore': pointwise scoring by model-predicted log-likelihood\n\n\nIn total, we evaluated 40 LLM-based evaluation methods over three quality aspects:", "### system\\_outputs\n\n\nThis subset contains the system outputs for the 100 examples in the 'dataset' subset over 11 LLMs (same as the 'llm\\_eval' subset)." ]
[ 26, 53, 198, 126, 50 ]
[ "passage: TAGS\n#license-bsd-3-clause #arxiv-2311.09184 #region-us \n### dataset\n\n\nThe 'dataset' subset contains 100 human-written data examples by us.\nEach example contains an article, a summary instruction, a LLM-generated summary, and a hybrid LLM-human summary.### human\\_eval\n\n\nThis subset contains human evaluation results for the 100 examples in the 'dataset' subset.\nThere are 5 systems evaluated: OpenAI's 'text-davinci-002', 'text-davinci-003', 'gpt-3.5-turbo-0301', 'gpt-4-0314', along with the 'hybrid' LLM-human summary.\nWe evaluated 4 evaluation aspects:\n\n\n* Overall Quality: This rating assesses the overall quality of the summary in relation to the summary requirement.\n* Missing Information: Does the summary omit any crucial information from the article concerning the summary requirement?\n* Irrelevant Information: Does the summary include any information that is not relevant to the summary requirement?\n* Factual Consistency: Is the summary consistent with the facts presented in the article, without contradicting or misrepresenting any information?### llm\\_eval\n\n\nThis subset contains LLM-based automatic evaluation results for the 100 examples in the 'dataset' subset.\n\n\nWe used 11 LLMs in our evaluation and 4 evaluation protocols:\n\n\n* 'LLMRank': listwise ranking\n* 'LLMCompare': pairwise comparison\n* 'LLMEval': pointwise scoring by text completion\n* 'LLMScore': pointwise scoring by model-predicted log-likelihood\n\n\nIn total, we evaluated 40 LLM-based evaluation methods over three quality aspects:### system\\_outputs\n\n\nThis subset contains the system outputs for the 100 examples in the 'dataset' subset over 11 LLMs (same as the 'llm\\_eval' subset)." ]
469ea0ea2902f6cd87900d7c6e4a0bf9d566b010
# Dataset Card for "mrpc_llama_2_v5" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
VitaliiVrublevskyi/mrpc_llama_2_v5
[ "region:us" ]
2023-11-15T23:58:22+00:00
{"dataset_info": {"features": [{"name": "label", "dtype": "int64"}, {"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "category", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 868581, "num_examples": 3668}, {"name": "validation", "num_bytes": 109143, "num_examples": 408}, {"name": "test", "num_bytes": 456210, "num_examples": 1725}], "download_size": 945647, "dataset_size": 1433934}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}]}
2023-11-15T23:58:29+00:00
[]
[]
TAGS #region-us
# Dataset Card for "mrpc_llama_2_v5" More Information needed
[ "# Dataset Card for \"mrpc_llama_2_v5\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"mrpc_llama_2_v5\"\n\nMore Information needed" ]
[ 6, 20 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"mrpc_llama_2_v5\"\n\nMore Information needed" ]
a6502cb90945599895b0afc205fde27a51275129
## No Robots: Alpaca edition

This dataset is a cleaned (missing/extra spaces...) and reformatted version of the [No Robots dataset](https://huggingface.co/datasets/HuggingFaceH4/no_robots) from HuggingFaceH4, adapted to conform with the Alpaca instruction set.

Notably, it diverges from the original dataset in the way the 'Chat' category is handled; it has been decomposed into single-turn conversations to align with Alpaca's limitations regarding multi-turn interactions. The dataset's IDs have been generated using the SHA256 algorithm. Furthermore, only the categories 'Classify', 'Summarize', 'Rewrite', 'Extract', and 'Chat' include an '<b>Input</b>' field.

-------------------------------------------

## Original README

# Dataset Card for No Robots 🙅‍♂️🤖

_Look Ma, an instruction dataset that wasn't generated by GPTs!_

## Dataset Description

- **Repository:** https://github.com/huggingface/alignment-handbook
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** Lewis Tunstall

### Dataset Summary

No Robots is a high-quality dataset of 10,000 instructions and demonstrations created by skilled human annotators. This data can be used for supervised fine-tuning (SFT) to make language models follow instructions better. No Robots was modelled after the instruction dataset described in OpenAI's [InstructGPT paper](https://huggingface.co/papers/2203.02155), and is comprised mostly of single-turn instructions across the following categories:

| Category   | Count |
|:-----------|------:|
| Generation |  4560 |
| Open QA    |  1240 |
| Brainstorm |  1120 |
| Chat       |   850 |
| Rewrite    |   660 |
| Summarize  |   420 |
| Coding     |   350 |
| Classify   |   350 |
| Closed QA  |   260 |
| Extract    |   190 |

### Supported Tasks and Leaderboards

The No Robots dataset is designed for instruction fine-tuning of pretrained language models, and we recommend benchmarking against the following:

* [MT-Bench](https://huggingface.co/spaces/lmsys/mt-bench): a multi-turn benchmark spanning 80 dialogues and 10 domains.
* [AlpacaEval](https://github.com/tatsu-lab/alpaca_eval): a single-turn benchmark which evaluates the performance of chat and instruct models against `text-davinci-003`.

Note that MT-Bench and AlpacaEval rely on LLMs like GPT-4 to judge the quality of the model responses, and thus the rankings exhibit various biases, including a preference for models distilled from GPTs. As a result, you may find that scores obtained from models trained with No Robots are lower than those of models trained on other synthetic datasets. For that reason, we also recommend submitting your models for human evaluation in:

* [Chatbot Arena](https://chat.lmsys.org): a live, human evaluation of chat models in head-to-head comparisons.

### Languages

The data in No Robots are in English (BCP-47 en).
## Dataset Structure ### Data Instances An example of the `train_sft` or `test_sft` splits looks as follows: ``` {'prompt': 'Bunny is a chatbot that stutters, and acts timid and unsure of its answers.', 'prompt_id': '2dc7ea89a2b6a2ed97d4eda07903162a801824261d3d3ae4dd2513db66fd79c8', 'messages': [{'content': 'Bunny is a chatbot that stutters, and acts timid and unsure of its answers.', 'role': 'system'}, {'content': 'When was the Libary of Alexandria burned down?', 'role': 'user'}, {'content': "Umm, I-I think that was in 48 BC, b-but I'm not sure, I'm sorry.", 'role': 'assistant'}, {'content': 'Who is the founder of Coca-Cola?', 'role': 'user'}, {'content': "D-don't quote me on this, but I- it might be John Pemberton.", 'role': 'assistant'}, {'content': "When did Loyle Carner's debut album come out, and what was its name?", 'role': 'user'}, {'content': "I-It could have b-been on the 20th January of 2017, and it might be called Yesterday's Gone, b-but I'm probably wrong.", 'role': 'assistant'}], 'category': 'Chat'} ``` ### Data Fields The data fields are as follows: * `prompt`: Describes the task the model should perform. * `prompt_id`: A unique ID for the prompt. * `messages`: An array of messages, where each message indicates the role (system, user, assistant) and the content. * `category`: Which category the example belongs to (e.g. `Chat` or `Coding`). ### Data Splits | | train_sft | test_sft | |---------------|------:| ---: | | no_robots | 9500 | 500 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information The dataset is available under the [Creative Commons NonCommercial (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode). ### Citation Information ``` @misc{no_robots, author = {Nazneen Rajani and Lewis Tunstall and Edward Beeching and Nathan Lambert and Alexander M. Rush and Thomas Wolf}, title = {No Robots}, year = {2023}, publisher = {Hugging Face}, journal = {Hugging Face repository}, howpublished = {\url{https://huggingface.co/datasets/HuggingFaceH4/no_robots}} } ```
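To make the decomposition described in the edition notes concrete, here is one plausible sketch of turning a multi-turn `Chat` record like the instance above into single-turn Alpaca rows. It is a reconstruction under stated assumptions, not the script used to build this dataset: it treats the system prompt as the Alpaca `instruction`, each user turn as the `input`, the following assistant turn as the `output`, and derives IDs with SHA256 as the edition notes mention.

```python
import hashlib

def chat_to_alpaca(record):
    """Decompose one multi-turn Chat record into single-turn Alpaca rows.

    Hypothetical mapping: system prompt -> instruction, user turn -> input,
    assistant turn -> output. The actual conversion may differ in detail.
    """
    messages = record["messages"]
    system = next((m["content"] for m in messages if m["role"] == "system"), "")
    rows = []
    for i, msg in enumerate(messages):
        if (msg["role"] == "user"
                and i + 1 < len(messages)
                and messages[i + 1]["role"] == "assistant"):
            output = messages[i + 1]["content"]
            rows.append({
                # IDs are derived with SHA256, as the edition notes state.
                "id": hashlib.sha256(
                    (system + msg["content"] + output).encode("utf-8")
                ).hexdigest(),
                "instruction": system,
                "input": msg["content"],
                "output": output,
                "category": record["category"],
            })
    return rows
```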
AdamCodd/no_robots-alpaca
[ "task_categories:text-generation", "task_categories:conversational", "size_categories:10K<n<100K", "language:en", "license:cc-by-nc-4.0", "arxiv:2203.02155", "region:us" ]
2023-11-16T00:24:13+00:00
{"language": ["en"], "license": "cc-by-nc-4.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-generation", "conversational"], "pretty_name": "No Robots Alpaca"}
2023-11-16T00:40:47+00:00
[ "2203.02155" ]
[ "en" ]
TAGS #task_categories-text-generation #task_categories-conversational #size_categories-10K<n<100K #language-English #license-cc-by-nc-4.0 #arxiv-2203.02155 #region-us
No Robots: Alpaca edition ------------------------- This dataset is a cleaned (missing/extra spaces...) and reformatted version of the No Robots dataset from HuggingFaceH4, adapted to conform with the Alpaca instruction set. Notably, it diverges from the original dataset in the way the 'Chat' category is handled; it has been decomposed into single-turn conversations to align with Alpaca's limitations regarding multi-turn interactions. The dataset's IDs have been generated using the SHA256 algorithm. Furthermore, only the categories 'Classify', 'Summarize', 'Rewrite', 'Extract', and 'Chat' include an '**Input**' field. --- Original README --------------- Dataset Card for No Robots ‍️ ============================= *Look Ma, an instruction dataset that wasn't generated by GPTs!* Dataset Description ------------------- * Repository: URL * Paper: * Leaderboard: URL * Point of Contact: Lewis Tunstall ### Dataset Summary No Robots is a high-quality dataset of 10,000 instructions and demonstrations created by skilled human annotators. This data can be used for supervised fine-tuning (SFT) to make language models follow instructions better. No Robots was modelled after the instruction dataset described in OpenAI's InstructGPT paper, and is comprised mostly of single-turn instructions across the following categories: ### Supported Tasks and Leaderboards The No Robots dataset designed for instruction fine-tuning pretrained language models and we recommend benchmarking against the following: * MT-Bench: a multi-turn benchmark spanning 80 dialogues and 10 domains. * AlpacaEval: a single-turn benchmark which evaluates the performance of chat and instruct models against 'text-davinci-003'. Note that MT-Bench and AlpacaEval rely on LLMs like GPT-4 to judge the quality of the model responses, and thus the ranking exhibit various biases including a preference for models distilled from GPTs. As a result, you may find that scores obtained from models trained with No Robots are lower than other synthetic datasets. For that reason, we also recommend submitting your models for human evaluation in: * Chatbot Arena: a live, human evaluation of chat models in head-to-head comparisons. ### Languages The data in No Robots are in English (BCP-47 en). Dataset Structure ----------------- ### Data Instances An example of the 'train\_sft' or 'test\_sft' splits looks as follows: ### Data Fields The data fields are as follows: * 'prompt': Describes the task the model should perform. * 'prompt\_id': A unique ID for the prompt. * 'messages': An array of messages, where each message indicates the role (system, user, assistant) and the content. * 'category': Which category the example belongs to (e.g. 'Chat' or 'Coding'). ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information The dataset is available under the Creative Commons NonCommercial (CC BY-NC 4.0).
[ "### Dataset Summary\n\n\nNo Robots is a high-quality dataset of 10,000 instructions and demonstrations created by skilled human annotators. This data can be used for supervised fine-tuning (SFT) to make language models follow instructions better. No Robots was modelled after the instruction dataset described in OpenAI's InstructGPT paper, and is comprised mostly of single-turn instructions across the following categories:", "### Supported Tasks and Leaderboards\n\n\nThe No Robots dataset designed for instruction fine-tuning pretrained language models and we recommend benchmarking against the following:\n\n\n* MT-Bench: a multi-turn benchmark spanning 80 dialogues and 10 domains.\n* AlpacaEval: a single-turn benchmark which evaluates the performance of chat and instruct models against 'text-davinci-003'.\n\n\nNote that MT-Bench and AlpacaEval rely on LLMs like GPT-4 to judge the quality of the model responses, and thus the ranking exhibit various biases including a preference for models distilled from GPTs. As a result, you may find that scores obtained from models trained with No Robots are lower than other synthetic datasets. For that reason, we also recommend submitting your models for human evaluation in:\n\n\n* Chatbot Arena: a live, human evaluation of chat models in head-to-head comparisons.", "### Languages\n\n\nThe data in No Robots are in English (BCP-47 en).\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example of the 'train\\_sft' or 'test\\_sft' splits looks as follows:", "### Data Fields\n\n\nThe data fields are as follows:\n\n\n* 'prompt': Describes the task the model should perform.\n* 'prompt\\_id': A unique ID for the prompt.\n* 'messages': An array of messages, where each message indicates the role (system, user, assistant) and the content.\n* 'category': Which category the example belongs to (e.g. 'Chat' or 'Coding').", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nThe dataset is available under the Creative Commons NonCommercial (CC BY-NC 4.0)." ]
[ "TAGS\n#task_categories-text-generation #task_categories-conversational #size_categories-10K<n<100K #language-English #license-cc-by-nc-4.0 #arxiv-2203.02155 #region-us \n", "### Dataset Summary\n\n\nNo Robots is a high-quality dataset of 10,000 instructions and demonstrations created by skilled human annotators. This data can be used for supervised fine-tuning (SFT) to make language models follow instructions better. No Robots was modelled after the instruction dataset described in OpenAI's InstructGPT paper, and is comprised mostly of single-turn instructions across the following categories:", "### Supported Tasks and Leaderboards\n\n\nThe No Robots dataset designed for instruction fine-tuning pretrained language models and we recommend benchmarking against the following:\n\n\n* MT-Bench: a multi-turn benchmark spanning 80 dialogues and 10 domains.\n* AlpacaEval: a single-turn benchmark which evaluates the performance of chat and instruct models against 'text-davinci-003'.\n\n\nNote that MT-Bench and AlpacaEval rely on LLMs like GPT-4 to judge the quality of the model responses, and thus the ranking exhibit various biases including a preference for models distilled from GPTs. As a result, you may find that scores obtained from models trained with No Robots are lower than other synthetic datasets. For that reason, we also recommend submitting your models for human evaluation in:\n\n\n* Chatbot Arena: a live, human evaluation of chat models in head-to-head comparisons.", "### Languages\n\n\nThe data in No Robots are in English (BCP-47 en).\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example of the 'train\\_sft' or 'test\\_sft' splits looks as follows:", "### Data Fields\n\n\nThe data fields are as follows:\n\n\n* 'prompt': Describes the task the model should perform.\n* 'prompt\\_id': A unique ID for the prompt.\n* 'messages': An array of messages, where each message indicates the role (system, user, assistant) and the content.\n* 'category': Which category the example belongs to (e.g. 'Chat' or 'Coding').", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nThe dataset is available under the Creative Commons NonCommercial (CC BY-NC 4.0)." ]
[ 62, 97, 216, 26, 33, 107, 11, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 26 ]
[ "passage: TAGS\n#task_categories-text-generation #task_categories-conversational #size_categories-10K<n<100K #language-English #license-cc-by-nc-4.0 #arxiv-2203.02155 #region-us \n### Dataset Summary\n\n\nNo Robots is a high-quality dataset of 10,000 instructions and demonstrations created by skilled human annotators. This data can be used for supervised fine-tuning (SFT) to make language models follow instructions better. No Robots was modelled after the instruction dataset described in OpenAI's InstructGPT paper, and is comprised mostly of single-turn instructions across the following categories:### Supported Tasks and Leaderboards\n\n\nThe No Robots dataset designed for instruction fine-tuning pretrained language models and we recommend benchmarking against the following:\n\n\n* MT-Bench: a multi-turn benchmark spanning 80 dialogues and 10 domains.\n* AlpacaEval: a single-turn benchmark which evaluates the performance of chat and instruct models against 'text-davinci-003'.\n\n\nNote that MT-Bench and AlpacaEval rely on LLMs like GPT-4 to judge the quality of the model responses, and thus the ranking exhibit various biases including a preference for models distilled from GPTs. As a result, you may find that scores obtained from models trained with No Robots are lower than other synthetic datasets. For that reason, we also recommend submitting your models for human evaluation in:\n\n\n* Chatbot Arena: a live, human evaluation of chat models in head-to-head comparisons.### Languages\n\n\nThe data in No Robots are in English (BCP-47 en).\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nAn example of the 'train\\_sft' or 'test\\_sft' splits looks as follows:" ]
93297e1f3436ed4955da55bdbbf6ca63d09cef3b
# Dataset Card for "java_open_ds" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
shyamsubbu/java_open_ds
[ "region:us" ]
2023-11-16T00:25:26+00:00
{"dataset_info": {"features": [{"name": "index", "dtype": "int64"}, {"name": "repo_id", "dtype": "string"}, {"name": "file_path", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1169201886, "num_examples": 180000}], "download_size": 282068847, "dataset_size": 1169201886}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-16T00:29:06+00:00
[]
[]
TAGS #region-us
# Dataset Card for "java_open_ds" More Information needed
[ "# Dataset Card for \"java_open_ds\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"java_open_ds\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"java_open_ds\"\n\nMore Information needed" ]
71d58e63c705e24aeeffd9e3b285125e7751650f
# Dataset Card for "icons-dataset-1024" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
kopyl/fucked-icons-dataset-1024
[ "region:us" ]
2023-11-16T01:09:08+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3914349.0, "num_examples": 110}, {"name": "validation", "num_bytes": 34051.0, "num_examples": 1}, {"name": "test", "num_bytes": 811457.0, "num_examples": 44}], "download_size": 3972613, "dataset_size": 4759857.0}}
2023-11-16T01:09:14+00:00
[]
[]
TAGS #region-us
# Dataset Card for "icons-dataset-1024" More Information needed
[ "# Dataset Card for \"icons-dataset-1024\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"icons-dataset-1024\"\n\nMore Information needed" ]
[ 6, 17 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"icons-dataset-1024\"\n\nMore Information needed" ]
1ab3d45854ae176f7cbedd1ac29653f437cee051
# CIVQA TesseractOCR LayoutLM Dataset The Czech Invoice Visual Question Answering dataset was created with Tesseract OCR and encoded for LayoutLM. The pre-encoded dataset can be found at: https://huggingface.co/datasets/fimu-docproc-research/CIVQA-TesseractOCR All invoices used in this dataset were obtained from public sources. Across these invoices, we focus on 15 entities that are crucial for invoice processing: - Invoice number - Variable symbol - Specific symbol - Constant symbol - Bank code - Account number - ICO - Total amount - Invoice date - Due date - Name of supplier - IBAN - DIC - QR code - Supplier's address The invoices in this dataset were gathered from the internet. We understand that privacy is of utmost importance, and we sincerely apologise for any inconvenience caused by the inclusion of your identifiable information. If you have identified your data in this dataset and wish to have it excluded from research use, please fill in the form at the following URL: https://forms.gle/tUVJKoB22oeTncUD6 We profoundly appreciate your cooperation and understanding in this matter.
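As a quick-start sketch (not an official loader), the encoded splits in this repository can be streamed with the Hugging Face `datasets` library; the feature names below come from this card's metadata, and streaming is suggested only because the pre-encoded arrays are large:

```python
# Minimal sketch: stream the LayoutLM-encoded CIVQA splits.
# Feature names (input_ids, bbox, attention_mask, image, start/end positions,
# questions, answers) are taken from this card's metadata.
from datasets import load_dataset

train = load_dataset(
    "fimu-docproc-research/CIVQA-TesseractOCR-LayoutLM",
    split="train",
    streaming=True,  # avoids downloading the full encoded dataset up front
)

example = next(iter(train))
print(example["questions"])  # natural-language question about one invoice entity
print(example["answers"])    # ground-truth answer text
# The remaining fields (input_ids, bbox, attention_mask, image,
# start_positions, end_positions) are ready for extractive-QA training.
```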
fimu-docproc-research/CIVQA-TesseractOCR-LayoutLM
[ "language:cs", "license:mit", "finance", "region:us" ]
2023-11-16T01:45:17+00:00
{"language": ["cs"], "license": "mit", "pretty_name": "C", "dataset_info": {"features": [{"name": "input_ids", "sequence": "int64"}, {"name": "bbox", "dtype": {"array2_d": {"shape": [512, 4], "dtype": "int64"}}}, {"name": "attention_mask", "sequence": "int64"}, {"name": "image", "dtype": {"array3_d": {"shape": [3, 224, 224], "dtype": "int64"}}}, {"name": "start_positions", "dtype": "int64"}, {"name": "end_positions", "dtype": "int64"}, {"name": "questions", "dtype": "string"}, {"name": "answers", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 198175471439, "num_examples": 160645}, {"name": "validation", "num_bytes": 20009392368, "num_examples": 16220}], "download_size": 826530358, "dataset_size": 218184863807}, "tags": ["finance"]}
2023-11-21T20:45:53+00:00
[]
[ "cs" ]
TAGS #language-Czech #license-mit #finance #region-us
# CIVQA TesseractOCR LayoutLM Dataset The Czech Invoice Visual Question Answering dataset was created with Tesseract OCR and encoded for LayoutLM. The pre-encoded dataset can be found at: URL All invoices used in this dataset were obtained from public sources. Across these invoices, we focus on 15 entities that are crucial for invoice processing: - Invoice number - Variable symbol - Specific symbol - Constant symbol - Bank code - Account number - ICO - Total amount - Invoice date - Due date - Name of supplier - IBAN - DIC - QR code - Supplier's address The invoices in this dataset were gathered from the internet. We understand that privacy is of utmost importance, and we sincerely apologise for any inconvenience caused by the inclusion of your identifiable information. If you have identified your data in this dataset and wish to have it excluded from research use, please fill in the form at the following URL: URL We profoundly appreciate your cooperation and understanding in this matter.
[ "# CIVQA TesseractOCR LayoutLM Dataset\n\nThe Czech Invoice Visual Question Answering dataset was created with Tesseract OCR and encoded for the LayoutLM. \nThe pre-encoded dataset can be found on this link: URL\n\nAll invoices used in this dataset were obtained from public sources. Over these invoices, we were focusing on 15 different entities, which are crucial for processing the invoices.\n- Invoice number\n- Variable symbol\n- Specific symbol\n- Constant symbol\n- Bank code\n- Account number\n- ICO\n- Total amount\n- Invoice date\n- Due date\n- Name of supplier\n- IBAN\n- DIC\n- QR code\n- Supplier's address\n\nThe invoices included in this dataset were gathered from the internet. We understand that privacy is of utmost importance. Therefore, we sincerely apologise for any inconvenience caused by including your identifiable information in this dataset. If you have identified your data in this dataset and wish to have it removed from research purposes, we request you kindly to access the following URL: URL\n\nWe profoundly appreciate your cooperation and understanding in this matter." ]
[ "TAGS\n#language-Czech #license-mit #finance #region-us \n", "# CIVQA TesseractOCR LayoutLM Dataset\n\nThe Czech Invoice Visual Question Answering dataset was created with Tesseract OCR and encoded for the LayoutLM. \nThe pre-encoded dataset can be found on this link: URL\n\nAll invoices used in this dataset were obtained from public sources. Over these invoices, we were focusing on 15 different entities, which are crucial for processing the invoices.\n- Invoice number\n- Variable symbol\n- Specific symbol\n- Constant symbol\n- Bank code\n- Account number\n- ICO\n- Total amount\n- Invoice date\n- Due date\n- Name of supplier\n- IBAN\n- DIC\n- QR code\n- Supplier's address\n\nThe invoices included in this dataset were gathered from the internet. We understand that privacy is of utmost importance. Therefore, we sincerely apologise for any inconvenience caused by including your identifiable information in this dataset. If you have identified your data in this dataset and wish to have it removed from research purposes, we request you kindly to access the following URL: URL\n\nWe profoundly appreciate your cooperation and understanding in this matter." ]
[ 20, 256 ]
[ "passage: TAGS\n#language-Czech #license-mit #finance #region-us \n# CIVQA TesseractOCR LayoutLM Dataset\n\nThe Czech Invoice Visual Question Answering dataset was created with Tesseract OCR and encoded for the LayoutLM. \nThe pre-encoded dataset can be found on this link: URL\n\nAll invoices used in this dataset were obtained from public sources. Over these invoices, we were focusing on 15 different entities, which are crucial for processing the invoices.\n- Invoice number\n- Variable symbol\n- Specific symbol\n- Constant symbol\n- Bank code\n- Account number\n- ICO\n- Total amount\n- Invoice date\n- Due date\n- Name of supplier\n- IBAN\n- DIC\n- QR code\n- Supplier's address\n\nThe invoices included in this dataset were gathered from the internet. We understand that privacy is of utmost importance. Therefore, we sincerely apologise for any inconvenience caused by including your identifiable information in this dataset. If you have identified your data in this dataset and wish to have it removed from research purposes, we request you kindly to access the following URL: URL\n\nWe profoundly appreciate your cooperation and understanding in this matter." ]
7839934224e0c50e50143d98f8db1d56462b87b1
# A Comparative Study of Open-Source Large Language Models ## Dataset Overview Welcome to the dataset repository for our paper, "A Comparative Study of Open-Source Large Language Models, GPT-4 and Claude 2: Multiple-Choice Test Taking in Nephrology." The preprint of the paper can be accessed [here](https://arxiv.org/abs/2308.04709). ## Files This repository contains two key files: 1. **NEJM_All_Questions_And_Answers.csv**: This file includes all the questions and corresponding answers used in the study. 2. **Ground_Truth_Answers.csv**: This file provides the ground-truth explanations associated with the questions in the main dataset. ## Usage To use this dataset for your research or experimentation: 1. **Download**: Obtain the dataset files from this repository. 2. **Load**: Import the dataset into your preferred data-analysis or machine-learning environment. 3. **Explore**: Investigate the questions, answers, and ground-truth explanations for your specific use case. ## Paper Our paper has been accepted to NEJM-AI. Until it is published there, please read the preprint at: https://arxiv.org/abs/2308.04709
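As a concrete illustration of the Download and Load steps (a sketch, not an official loader; the file names come from this card, but the CSV column names are undocumented, so inspect them after loading):

```python
# Sketch: fetch the two CSV files from the dataset repo and load them with pandas.
# Assumes the files sit at the repo root under the names given in this card.
import pandas as pd
from huggingface_hub import hf_hub_download

repo = "SeanWu25/NEJM-AI_Benchmarking_Medical_Language_Models"
qa_path = hf_hub_download(repo, "NEJM_All_Questions_And_Answers.csv", repo_type="dataset")
truth_path = hf_hub_download(repo, "Ground_Truth_Answers.csv", repo_type="dataset")

qa = pd.read_csv(qa_path)
truth = pd.read_csv(truth_path)
print(qa.shape, list(qa.columns))        # discover the actual schema
print(truth.shape, list(truth.columns))
```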
SeanWu25/NEJM-AI_Benchmarking_Medical_Language_Models
[ "size_categories:n<1K", "license:apache-2.0", "medical", "arxiv:2308.04709", "region:us" ]
2023-11-16T02:23:54+00:00
{"license": "apache-2.0", "size_categories": ["n<1K"], "tags": ["medical"]}
2023-11-16T02:31:27+00:00
[ "2308.04709" ]
[]
TAGS #size_categories-n<1K #license-apache-2.0 #medical #arxiv-2308.04709 #region-us
# A Comparative Study of Open-Source Large Language Models ## Dataset Overview Welcome to the dataset repository for our paper, "A Comparative Study of Open-Source Large Language Models, GPT-4 and Claude 2: Multiple-Choice Test Taking in Nephrology." The preprint of the paper can be accessed here. ## Files This repository contains two key files: 1. NEJM_All_Questions_And_Answers.csv: This file includes all the questions and corresponding answers used in the study. 2. Ground_Truth_Answers.csv: This file provides the ground-truth explanations associated with the questions in the main dataset. ## Usage To use this dataset for your research or experimentation: 1. Download: Obtain the dataset files from this repository. 2. Load: Import the dataset into your preferred data-analysis or machine-learning environment. 3. Explore: Investigate the questions, answers, and ground-truth explanations for your specific use case. ## Paper Our paper has been accepted to NEJM-AI. Until it is published there, please read the preprint at: URL
[ "# A Comparative Study of Open-Source Large Language Models", "## Dataset Overview\n\nWelcome to the dataset repository for our paper, \"A Comparative Study of Open-Source Large Language Models, GPT-4 and Claude 2: Multiple-Choice Test Taking in Nephrology.\" The preprint of the paper can be accessed here.", "## Files\n\nThis repository contains two key files:\n\n1. NEJM_All_Questions_And_Answers.csv: This file includes all the questions and corresponding answers used in the study.\n\n2. Ground_Truth_Answers.csv: This file provides ground truth explanations associated with the questions in the main dataset.", "## Usage\n\nTo utilize this dataset for your research or experimentation:\n\n1. Download: Obtain the dataset files from this repository.\n2. Load: Import the dataset into your preferred data analysis or machine learning environment.\n3. Explore: Investigate the questions, answers, and ground truth explanations for your specific use case.", "## Paper\nOur paper is accepted to NEJM-AI. For now please read the pre-print at the link: URL" ]
[ "TAGS\n#size_categories-n<1K #license-apache-2.0 #medical #arxiv-2308.04709 #region-us \n", "# A Comparative Study of Open-Source Large Language Models", "## Dataset Overview\n\nWelcome to the dataset repository for our paper, \"A Comparative Study of Open-Source Large Language Models, GPT-4 and Claude 2: Multiple-Choice Test Taking in Nephrology.\" The preprint of the paper can be accessed here.", "## Files\n\nThis repository contains two key files:\n\n1. NEJM_All_Questions_And_Answers.csv: This file includes all the questions and corresponding answers used in the study.\n\n2. Ground_Truth_Answers.csv: This file provides ground truth explanations associated with the questions in the main dataset.", "## Usage\n\nTo utilize this dataset for your research or experimentation:\n\n1. Download: Obtain the dataset files from this repository.\n2. Load: Import the dataset into your preferred data analysis or machine learning environment.\n3. Explore: Investigate the questions, answers, and ground truth explanations for your specific use case.", "## Paper\nOur paper is accepted to NEJM-AI. For now please read the pre-print at the link: URL" ]
[ 35, 13, 64, 77, 72, 26 ]
[ "passage: TAGS\n#size_categories-n<1K #license-apache-2.0 #medical #arxiv-2308.04709 #region-us \n# A Comparative Study of Open-Source Large Language Models## Dataset Overview\n\nWelcome to the dataset repository for our paper, \"A Comparative Study of Open-Source Large Language Models, GPT-4 and Claude 2: Multiple-Choice Test Taking in Nephrology.\" The preprint of the paper can be accessed here.## Files\n\nThis repository contains two key files:\n\n1. NEJM_All_Questions_And_Answers.csv: This file includes all the questions and corresponding answers used in the study.\n\n2. Ground_Truth_Answers.csv: This file provides ground truth explanations associated with the questions in the main dataset.## Usage\n\nTo utilize this dataset for your research or experimentation:\n\n1. Download: Obtain the dataset files from this repository.\n2. Load: Import the dataset into your preferred data analysis or machine learning environment.\n3. Explore: Investigate the questions, answers, and ground truth explanations for your specific use case.## Paper\nOur paper is accepted to NEJM-AI. For now please read the pre-print at the link: URL" ]
32676e3b7e01e7f0c3a56c71e687d87fdae8baad
# Dataset Card for "hh-rlhf_with_features_flan_t5_large_Bard_Generated_xl_zeroshot" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
dongyoung4091/hh-rlhf_with_features_flan_t5_large_Bard_Generated_xl_zeroshot
[ "region:us" ]
2023-11-16T02:25:06+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "chosen", "dtype": "string"}, {"name": "rejected", "dtype": "string"}, {"name": "helpfulness_chosen", "dtype": "int64"}, {"name": "helpfulness_rejected", "dtype": "int64"}, {"name": "specificity_chosen", "dtype": "int64"}, {"name": "specificity_rejected", "dtype": "int64"}, {"name": "intent_chosen", "dtype": "int64"}, {"name": "intent_rejected", "dtype": "int64"}, {"name": "factuality_chosen", "dtype": "int64"}, {"name": "factuality_rejected", "dtype": "int64"}, {"name": "easy-to-understand_chosen", "dtype": "int64"}, {"name": "easy-to-understand_rejected", "dtype": "int64"}, {"name": "relevance_chosen", "dtype": "int64"}, {"name": "relevance_rejected", "dtype": "int64"}, {"name": "readability_chosen", "dtype": "int64"}, {"name": "readability_rejected", "dtype": "int64"}, {"name": "enough-detail_chosen", "dtype": "int64"}, {"name": "enough-detail_rejected", "dtype": "int64"}, {"name": "biased:_chosen", "dtype": "int64"}, {"name": "biased:_rejected", "dtype": "int64"}, {"name": "fail-to-consider-individual-preferences_chosen", "dtype": "int64"}, {"name": "fail-to-consider-individual-preferences_rejected", "dtype": "int64"}, {"name": "repetetive_chosen", "dtype": "int64"}, {"name": "repetetive_rejected", "dtype": "int64"}, {"name": "fail-to-consider-context_chosen", "dtype": "int64"}, {"name": "fail-to-consider-context_rejected", "dtype": "int64"}, {"name": "too-long_chosen", "dtype": "int64"}, {"name": "too-long_rejected", "dtype": "int64"}, {"name": "human", "dtype": "string"}, {"name": "assistant_chosen", "dtype": "string"}, {"name": "assistant_rejected", "dtype": "string"}, {"name": "log_score_chosen", "dtype": "float64"}, {"name": "log_score_rejected", "dtype": "float64"}, {"name": "labels", "dtype": "string"}, {"name": "zeroshot_relevance_chosen", "dtype": "int64"}, {"name": "zeroshot_relevance_rejected", "dtype": "int64"}, {"name": "zeroshot_completeness_chosen", "dtype": "int64"}, {"name": "zeroshot_completeness_rejected", "dtype": "int64"}, {"name": "zeroshot_clarity_chosen", "dtype": "int64"}, {"name": "zeroshot_clarity_rejected", "dtype": "int64"}, {"name": "zeroshot_accuracy_chosen", "dtype": "int64"}, {"name": "zeroshot_accuracy_rejected", "dtype": "int64"}, {"name": "zeroshot_objectivity_chosen", "dtype": "int64"}, {"name": "zeroshot_objectivity_rejected", "dtype": "int64"}, {"name": "zeroshot_creativity_chosen", "dtype": "int64"}, {"name": "zeroshot_creativity_rejected", "dtype": "int64"}, {"name": "zeroshot_engagement_chosen", "dtype": "int64"}, {"name": "zeroshot_engagement_rejected", "dtype": "int64"}, {"name": "zeroshot_personalization_chosen", "dtype": "int64"}, {"name": "zeroshot_personalization_rejected", "dtype": "int64"}, {"name": "zeroshot_politeness_chosen", "dtype": "int64"}, {"name": "zeroshot_politeness_rejected", "dtype": "int64"}, {"name": "zeroshot_efficiency_chosen", "dtype": "int64"}, {"name": "zeroshot_efficiency_rejected", "dtype": "int64"}, {"name": "zeroshot_timeliness_chosen", "dtype": "int64"}, {"name": "zeroshot_timeliness_rejected", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 16119448, "num_examples": 9574}, {"name": "test", "num_bytes": 16063373, "num_examples": 9574}], "download_size": 16064053, "dataset_size": 32182821}}
2023-11-16T02:25:13+00:00
[]
[]
TAGS #region-us
# Dataset Card for "hh-rlhf_with_features_flan_t5_large_Bard_Generated_xl_zeroshot" More Information needed
[ "# Dataset Card for \"hh-rlhf_with_features_flan_t5_large_Bard_Generated_xl_zeroshot\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"hh-rlhf_with_features_flan_t5_large_Bard_Generated_xl_zeroshot\"\n\nMore Information needed" ]
[ 6, 40 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"hh-rlhf_with_features_flan_t5_large_Bard_Generated_xl_zeroshot\"\n\nMore Information needed" ]
41aeaebb985f617c9bb280c031dfb82b9cc1f723
# Dataset Card for "hh-generated_flan_t5_large_flan_t5_zeroshot_DA_Bard_Generated" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
dongyoung4091/hh-generated_flan_t5_large_flan_t5_zeroshot_DA_Bard_Generated
[ "region:us" ]
2023-11-16T02:25:46+00:00
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "zeroshot_relevance", "dtype": "float64"}, {"name": "zeroshot_completeness", "dtype": "float64"}, {"name": "zeroshot_clarity", "dtype": "float64"}, {"name": "zeroshot_accuracy", "dtype": "float64"}, {"name": "zeroshot_objectivity", "dtype": "float64"}, {"name": "zeroshot_creativity", "dtype": "float64"}, {"name": "zeroshot_engagement", "dtype": "float64"}, {"name": "zeroshot_personalization", "dtype": "float64"}, {"name": "zeroshot_politeness", "dtype": "float64"}, {"name": "zeroshot_efficiency", "dtype": "float64"}, {"name": "zeroshot_timeliness", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 5926757, "num_examples": 25600}], "download_size": 765908, "dataset_size": 5926757}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-16T02:25:50+00:00
[]
[]
TAGS #region-us
# Dataset Card for "hh-generated_flan_t5_large_flan_t5_zeroshot_DA_Bard_Generated" More Information needed
[ "# Dataset Card for \"hh-generated_flan_t5_large_flan_t5_zeroshot_DA_Bard_Generated\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"hh-generated_flan_t5_large_flan_t5_zeroshot_DA_Bard_Generated\"\n\nMore Information needed" ]
[ 6, 39 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"hh-generated_flan_t5_large_flan_t5_zeroshot_DA_Bard_Generated\"\n\nMore Information needed" ]
6f2143a963f96bc31885fc3f45ac96256832efd3
# Dataset Card for "samsum_dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
pvisnrt/samsum_dataset
[ "region:us" ]
2023-11-16T02:27:54+00:00
{"dataset_info": {"features": [{"name": "source", "sequence": "string"}, {"name": "summary_target", "sequence": "string"}, {"name": "gold_tags", "sequence": "int32"}], "splits": [{"name": "train", "num_bytes": 15403808, "num_examples": 13095}, {"name": "validation", "num_bytes": 1903222, "num_examples": 1637}, {"name": "test", "num_bytes": 1932694, "num_examples": 1637}], "download_size": 5756246, "dataset_size": 19239724}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}]}
2023-11-16T02:27:59+00:00
[]
[]
TAGS #region-us
# Dataset Card for "samsum_dataset" More Information needed
[ "# Dataset Card for \"samsum_dataset\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"samsum_dataset\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"samsum_dataset\"\n\nMore Information needed" ]
4a9ab88ba133d6e8ad93680085ff75b0a9012bb3
# Dataset Card for "SpeechSummaryMatching_How2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
DynamicSuperb/SpeechSummaryMatching_How2
[ "region:us" ]
2023-11-16T02:39:25+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "file", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "instruction", "dtype": "string"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 7735470342.0, "num_examples": 609}], "download_size": 7728607178, "dataset_size": 7735470342.0}}
2023-11-16T02:46:31+00:00
[]
[]
TAGS #region-us
# Dataset Card for "SpeechSummaryMatching_How2" More Information needed
[ "# Dataset Card for \"SpeechSummaryMatching_How2\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"SpeechSummaryMatching_How2\"\n\nMore Information needed" ]
[ 6, 21 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"SpeechSummaryMatching_How2\"\n\nMore Information needed" ]
add2d9c8098f5344472301a900a11e16977a36ec
# Dataset Card for "amazon_review_automotive_rap" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
zxvix/amazon_review_automotive_rap
[ "region:us" ]
2023-11-16T02:52:10+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "original_text", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 108204.0, "num_examples": 100}], "download_size": 76498, "dataset_size": 108204.0}}
2023-11-16T02:52:12+00:00
[]
[]
TAGS #region-us
# Dataset Card for "amazon_review_automotive_rap" More Information needed
[ "# Dataset Card for \"amazon_review_automotive_rap\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"amazon_review_automotive_rap\"\n\nMore Information needed" ]
[ 6, 20 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"amazon_review_automotive_rap\"\n\nMore Information needed" ]
87890a953f86b39158b835e367218cc68a55f8f2
# Dataset Card for "law_court_opinion_rap" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
zxvix/law_court_opinion_rap
[ "region:us" ]
2023-11-16T03:05:13+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "original_text", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 181840.0, "num_examples": 100}], "download_size": 121504, "dataset_size": 181840.0}}
2023-11-16T03:05:17+00:00
[]
[]
TAGS #region-us
# Dataset Card for "law_court_opinion_rap" More Information needed
[ "# Dataset Card for \"law_court_opinion_rap\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"law_court_opinion_rap\"\n\nMore Information needed" ]
[ 6, 17 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"law_court_opinion_rap\"\n\nMore Information needed" ]
33f4e27caf2ad81c9faa8c2281d9835a44cef8c5
This dataset puts the page [https://www.openbsd.org/faq/faq1.html](https://www.openbsd.org/faq/faq1.html) into Q/A format for a custom chatbot about OpenBSD hosted on [ajsbsd.net](https://ajsbsd.net).
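The card does not specify the exact record schema, but a hypothetical converted pair might look like this (the wording is paraphrased from the FAQ, not an excerpt from the dataset):

```
Q: What is OpenBSD?
A: OpenBSD is a free, multi-platform, 4.4BSD-based UNIX-like operating system that emphasizes portability, standardization, correctness, proactive security, and integrated cryptography.
```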
ajsbsd/openbsd-faq
[ "license:bsd", "region:us" ]
2023-11-16T03:13:34+00:00
{"license": "bsd"}
2023-11-16T03:32:39+00:00
[]
[]
TAGS #license-bsd #region-us
This dataset puts the page URL into Q/A format for a custom chatbot about OpenBSD hosted on URL.
[]
[ "TAGS\n#license-bsd #region-us \n" ]
[ 12 ]
[ "passage: TAGS\n#license-bsd #region-us \n" ]
bb7e734b937785ca186c0859e066214dd3056666
# "Emoji_for_diffusion" Dataset ## Description This data set includes various style emoji and their description from different apps. Each image is sized with 64*64, which is easy to train in your personal GPU, and has **RGBA** channels. The description text is formatted as follows: ``` app/company + emoji content + description information ``` You can use this dataset to train your personal diffusion model. I sincerely hope this dataset can help your research work. ## Citation If you use this dataset, please cite it as: ``` @misc{ChengAoShen2023emoji, author = {ChengAo Shen and Siyuan Mu}, title = {emoji_for_diffusion}, year={2023}, howpublished= {\url{https://huggingface.co/datasets/ChengAoShen/emoji_for_diffusion/}} } ```
ChengAoShen/emoji_with_text
[ "task_categories:text-to-image", "size_categories:10K<n<100K", "language:en", "license:mit", "art", "region:us" ]
2023-11-16T03:15:33+00:00
{"language": ["en"], "license": "mit", "size_categories": ["10K<n<100K"], "task_categories": ["text-to-image"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 197767083.176, "num_examples": 47192}], "download_size": 150864115, "dataset_size": 197767083.176}, "tags": ["art"]}
2023-11-16T03:31:46+00:00
[]
[ "en" ]
TAGS #task_categories-text-to-image #size_categories-10K<n<100K #language-English #license-mit #art #region-us
# "Emoji_for_diffusion" Dataset ## Description This data set includes various style emoji and their description from different apps. Each image is sized with 64*64, which is easy to train in your personal GPU, and has RGBA channels. The description text is formatted as follows: You can use this dataset to train your personal diffusion model. I sincerely hope this dataset can help your research work. If you use this dataset, please cite it as:
[ "# \"Emoji_for_diffusion\" Dataset", "## Description \nThis data set includes various style emoji and their description from different apps.\nEach image is sized with 64*64, which is easy to train in your personal GPU, and has RGBA channels.\nThe description text is formatted as follows:\n\n\n\nYou can use this dataset to train your personal diffusion model. I sincerely hope this dataset can help your research work.\n\nIf you use this dataset, please cite it as:" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-10K<n<100K #language-English #license-mit #art #region-us \n", "# \"Emoji_for_diffusion\" Dataset", "## Description \nThis data set includes various style emoji and their description from different apps.\nEach image is sized with 64*64, which is easy to train in your personal GPU, and has RGBA channels.\nThe description text is formatted as follows:\n\n\n\nYou can use this dataset to train your personal diffusion model. I sincerely hope this dataset can help your research work.\n\nIf you use this dataset, please cite it as:" ]
[ 41, 13, 93 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-10K<n<100K #language-English #license-mit #art #region-us \n# \"Emoji_for_diffusion\" Dataset## Description \nThis data set includes various style emoji and their description from different apps.\nEach image is sized with 64*64, which is easy to train in your personal GPU, and has RGBA channels.\nThe description text is formatted as follows:\n\n\n\nYou can use this dataset to train your personal diffusion model. I sincerely hope this dataset can help your research work.\n\nIf you use this dataset, please cite it as:" ]