Records in this dump follow the schema below (column name, type, and min/max value length as reported by the dataset viewer):

| Column        | Type   | Min length | Max length |
|---------------|--------|------------|------------|
| sha           | string | 40         | 40         |
| text          | string | 0          | 13.4M      |
| id            | string | 2          | 117        |
| tags          | list   |            |            |
| created_at    | string | 25         | 25         |
| metadata      | string | 2          | 31.7M      |
| last_modified | string | 25         | 25         |
60517de6058c356db813b0969e3e22fe89965356
# Dataset Card for "ce-aesthetics-a2z" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
previsone/ce-aesthetics-a2z
[ "region:us" ]
2023-03-23T19:13:48+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 126654282.663, "num_examples": 3837}], "download_size": 86045928, "dataset_size": 126654282.663}}
2023-03-23T19:49:41+00:00
50cf8973a52af7b4444ebc5dcbb5522b4dda69e8
# Dataset Card for "gen-qm-17000" ### Dataset Summary Dataset for converting request into query and extracting model name. DEV/VAL/TEST: 90/10/10 SIZE: 17000 ### Supported Tasks and Leaderboards The tasks represented in GEN-QM cover a text2text generation for producing qureries based on request or extracting models. ### Languages The data in QM are in English. ## Dataset Structure ### Data Instances An example of "train" looks as follows: ```bash { 'answer': '$count(EventCategory.Children) $neq 1029', 'utterance': 'Instructions: Based on Request and Model Description generate query with represents requests filter. Generaly query statement consists of path to the models column on the left, operator of comparison in the middle started with $ and comparison value on the right. Also query can contain more than one statement combined with $and or $or operator.\nModel Description: CreatedByUserName as created by user name;ModifiedByUserName as modified by user name;CreatedOn as created on;ModifiedOn as modified on;EventCategory.IsApprovalRequired as is approval required of experience category;EventCategory.Name as name of experience category;EventCategory.Code as code of experience category;EventCategory.CreatedByUserName as created by user name of experience category;EventCategory.ModifiedByUserName as modified by user name of experience category;EventCategory.Priority as priority of experience category;EventCategory.CreatedOn as created on of experience category;EventCategory.ModifiedOn as modified on of experience category;EventCategory.EventInCategories as experience in categories of experience category,event in categories of event category;EventCategory.EventCategoryInTypes as event category in types of experience category,experience category in types of event category;EventCategory.Children as children of experience category,children categories of event category;EventCategoryType.Name as name of experience category type;EventCategoryType.CreatedByUserName as created by user name of experience category type;EventCategoryType.ModifiedByUserName as modified by user name of experience category type;EventCategoryType.CreatedOn as created on of experience category type;EventCategoryType.ModifiedOn as modified on of experience category type;EventCategoryType.EventCategoryInTypes as event category in types of experience category type,experience category in types of event category type\nRequest: select event category in type where count of children of experience category != one thousand and twenty-nine\nQuery:' } ``` ## Additional Information ### Licensing Information The dataset is released under Apache 2.0.
dkuntso/gen-qm-17000
[ "task_categories:text-generation", "size_categories:10K<n<100K", "language:en", "license:apache-2.0", "region:us" ]
2023-03-23T19:29:27+00:00
{"language": ["en"], "license": "apache-2.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-generation"], "pretty_name": "Generate Query/Model from Request 15000/1000/1000", "dataset_info": {"features": [{"name": "utterance", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 27449266, "num_examples": 14960}, {"name": "test", "num_bytes": 1929362, "num_examples": 1020}, {"name": "validation", "num_bytes": 1871516, "num_examples": 1020}], "download_size": 3761317, "dataset_size": 31250144}}
2023-03-24T21:51:38+00:00
23c07577ae7d98d696806b794289926673929de6
# Dataset Card for "rvl_cdip_100_examples_per_class" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
jordyvl/rvl_cdip_100_examples_per_class
[ "region:us" ]
2023-03-23T19:58:02+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "letter", "1": "form", "2": "email", "3": "handwritten", "4": "advertisement", "5": "scientific report", "6": "scientific publication", "7": "specification", "8": "file folder", "9": "news article", "10": "budget", "11": "invoice", "12": "presentation", "13": "questionnaire", "14": "resume", "15": "memo"}}}}], "splits": [{"name": "train", "num_bytes": 97000316.76, "num_examples": 800}, {"name": "test", "num_bytes": 48612840.21, "num_examples": 400}, {"name": "validation", "num_bytes": 48666549.76, "num_examples": 400}], "download_size": 180034173, "dataset_size": 194279706.73}}
2023-03-23T20:55:18+00:00
d427aa9782e70e6a9e5e31610469b5a06d9bab68
For LXMERT model fine-tuning on AirSim imagery.
mperic/lxmert-airsim
[ "region:us" ]
2023-03-23T21:44:25+00:00
{}
2023-03-23T21:48:42+00:00
de55b3c3b326225f113d20c649aa63287caf1d4a
# Dataset Card for "flores200_eng_scaffolding" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
hlillemark/flores200_eng_scaffolding
[ "region:us" ]
2023-03-23T21:45:22+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int32"}, {"name": "source_lang", "dtype": "string"}, {"name": "target_lang", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "target", "dtype": "string"}, {"name": "eng_source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5588764908, "num_examples": 10240000}], "download_size": 4223075178, "dataset_size": 5588764908}}
2023-03-24T00:49:42+00:00
62617daca7aeb4d38375c92f50369ff679789d13
# Dataset Card for "somos-alpaca-es" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
cariai/somos-alpaca-es
[ "region:us" ]
2023-03-23T22:20:12+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "null"}, {"name": "inputs", "struct": [{"name": "1-instruction", "dtype": "string"}, {"name": "2-input", "dtype": "string"}, {"name": "3-output", "dtype": "string"}]}, {"name": "prediction", "dtype": "null"}, {"name": "prediction_agent", "dtype": "null"}, {"name": "annotation", "dtype": "string"}, {"name": "annotation_agent", "dtype": "string"}, {"name": "vectors", "struct": [{"name": "input", "sequence": "float64"}, {"name": "instruction", "sequence": "float64"}, {"name": "output", "sequence": "float64"}]}, {"name": "multi_label", "dtype": "bool"}, {"name": "explanation", "dtype": "null"}, {"name": "id", "dtype": "string"}, {"name": "metadata", "dtype": "null"}, {"name": "status", "dtype": "string"}, {"name": "event_timestamp", "dtype": "timestamp[us]"}, {"name": "metrics", "struct": [{"name": "text_length", "dtype": "int64"}]}], "splits": [{"name": "train", "num_bytes": 985053979, "num_examples": 52002}], "download_size": 655032424, "dataset_size": 985053979}}
2023-04-09T23:01:20+00:00
5a7833ef23cd86563e42985dcb169cb10d41d1f6
# Dataset Card for "lstm-deep-usc-test" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
hieuhocnlp/lstm-deep-usc-test
[ "region:us" ]
2023-03-23T22:34:46+00:00
{"dataset_info": {"features": [{"name": "line", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 770852, "num_examples": 55043}], "download_size": 466754, "dataset_size": 770852}}
2023-03-23T22:34:49+00:00
e33428fdf88a9af3d0b725b317d8b511e1e2af9f
# Tree-disease

### Languages

The BCP-47 code for the dataset's language is unk.

## Dataset Structure

### Data Instances

A sample from this dataset looks as follows:

```json
[
  {
    "image": "<259x194 RGB PIL image>",
    "target": 1
  },
  {
    "image": "<275x183 RGB PIL image>",
    "target": 16
  }
]
```

### Dataset Fields

The dataset has the following fields (also called "features"):

```json
{
  "image": "Image(decode=True, id=None)",
  "target": "ClassLabel(names=['Agrilus planipennis \u6241\u8c46', 'Annosum Root Rot \u756a\u8354\u679d\u6839\u8150\u75c5', 'Anthracnose \u70ad\u75bd\u75c5', 'Black knot (lethal disease) \u9ed1\u7d50\uff08\u81f4\u547d\u75be\u75c5\uff09', 'Dendroctonus micans \u96f2\u6749', 'Dieback \u67af\u6b7b', 'Diffuse cankers\\xa0\u7030\u6f2b\u6027\u6f70\u760d', 'Fusiform rust \u68ad\u5f62\u92b9\u75c5', 'Hardwood Leaf Diseases\u786c\u6728\u8449\u75c5', 'Hymenoscyphus fraxineus \u767d\u881f\u87ec', 'Leaf Blister \u8449\u6ce1', 'Leaf Spots \u8449\u6591', 'Littleleaf Disease \u5c0f\u8449\u75c5', 'Loblolly Pine Decline \u706b\u70ac\u677e\u8870\u843d', 'Needle Blights \u91dd\u8449\u67af\u75c5', 'Needle Rusts \u91dd\u8449\u92b9\u75c5', 'Powdery Mildew \u767d\u7c89\u75c5', 'Root rots \u6839\u8150\u75c5', 'Rots and Decays \u8150\u721b', 'Stem decays \u8396\u8150\u721b', 'Tar Spot \u7126\u6cb9\u6591', 'Wilts \u67af\u840e\u75c5'], id=None)"
}
```

### Dataset Splits

This dataset is split into a train and validation split. The split sizes are as follows:

| Split name | Num samples |
| ---------- | ------------------- |
| train      | 105                 |
| valid      | 39                  |
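Since `target` is a `ClassLabel`, the integer targets in the samples above can be decoded back to disease names. A minimal sketch, assuming the train split is exposed as `train`:

```python
from datasets import load_dataset

# Minimal sketch: decode integer targets into the disease class names above.
ds = load_dataset("OttoYu/TreeDiseases", split="train")
names = ds.features["target"].names
sample = ds[0]
print(sample["image"].size, "->", names[sample["target"]])
```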
OttoYu/TreeDiseases
[ "task_categories:image-classification", "region:us" ]
2023-03-23T23:03:30+00:00
{"task_categories": ["image-classification"]}
2023-03-23T23:20:12+00:00
93776af453ebe85fe76156ebc3775fd7a75af659
# Speed dating

The [Speed dating dataset](https://www.openml.org/search?type=data&sort=nr_of_likes&status=active&id=40536) from OpenML.

# Configurations and tasks

| **Configuration** | **Task**               | **Description**    |
|-------------------|------------------------|--------------------|
| dating            | Binary classification  | Will the two date? |

# Usage

```python
from datasets import load_dataset

dataset = load_dataset("mstz/speeddating")["train"]
```

# Features

|**Features**                                       |**Type** |
|---------------------------------------------------|---------|
|`is_dater_male`                                    |`int8`   |
|`dater_age`                                        |`int8`   |
|`dated_age`                                        |`int8`   |
|`age_difference`                                   |`int8`   |
|`dater_race`                                       |`string` |
|`dated_race`                                       |`string` |
|`are_same_race`                                    |`int8`   |
|`same_race_importance_for_dater`                   |`float64`|
|`same_religion_importance_for_dater`               |`float64`|
|`attractiveness_importance_for_dated`              |`float64`|
|`sincerity_importance_for_dated`                   |`float64`|
|`intelligence_importance_for_dated`                |`float64`|
|`humor_importance_for_dated`                       |`float64`|
|`ambition_importance_for_dated`                    |`float64`|
|`shared_interests_importance_for_dated`            |`float64`|
|`attractiveness_score_of_dater_from_dated`         |`float64`|
|`sincerity_score_of_dater_from_dated`              |`float64`|
|`intelligence_score_of_dater_from_dated`           |`float64`|
|`humor_score_of_dater_from_dated`                  |`float64`|
|`ambition_score_of_dater_from_dated`               |`float64`|
|`shared_interests_score_of_dater_from_dated`       |`float64`|
|`attractiveness_importance_for_dater`              |`float64`|
|`sincerity_importance_for_dater`                   |`float64`|
|`intelligence_importance_for_dater`                |`float64`|
|`humor_importance_for_dater`                       |`float64`|
|`ambition_importance_for_dater`                    |`float64`|
|`shared_interests_importance_for_dater`            |`float64`|
|`self_reported_attractiveness_of_dater`            |`float64`|
|`self_reported_sincerity_of_dater`                 |`float64`|
|`self_reported_intelligence_of_dater`              |`float64`|
|`self_reported_humor_of_dater`                     |`float64`|
|`self_reported_ambition_of_dater`                  |`float64`|
|`reported_attractiveness_of_dated_from_dater`      |`float64`|
|`reported_sincerity_of_dated_from_dater`           |`float64`|
|`reported_intelligence_of_dated_from_dater`        |`float64`|
|`reported_humor_of_dated_from_dater`               |`float64`|
|`reported_ambition_of_dated_from_dater`            |`float64`|
|`reported_shared_interests_of_dated_from_dater`    |`float64`|
|`dater_interest_in_sports`                         |`float64`|
|`dater_interest_in_tvsports`                       |`float64`|
|`dater_interest_in_exercise`                       |`float64`|
|`dater_interest_in_dining`                         |`float64`|
|`dater_interest_in_museums`                        |`float64`|
|`dater_interest_in_art`                            |`float64`|
|`dater_interest_in_hiking`                         |`float64`|
|`dater_interest_in_gaming`                         |`float64`|
|`dater_interest_in_clubbing`                       |`float64`|
|`dater_interest_in_reading`                        |`float64`|
|`dater_interest_in_tv`                             |`float64`|
|`dater_interest_in_theater`                        |`float64`|
|`dater_interest_in_movies`                         |`float64`|
|`dater_interest_in_concerts`                       |`float64`|
|`dater_interest_in_music`                          |`float64`|
|`dater_interest_in_shopping`                       |`float64`|
|`dater_interest_in_yoga`                           |`float64`|
|`interests_correlation`                            |`float64`|
|`expected_satisfaction_of_dater`                   |`float64`|
|`expected_number_of_likes_of_dater_from_20_people` |`int8`   |
|`expected_number_of_dates_for_dater`               |`int8`   |
|`dater_liked_dated`                                |`float64`|
|`probability_dated_wants_to_date`                  |`float64`|
|`already_met_before`                               |`int8`   |
|`dater_wants_to_date`                              |`int8`   |
|`dated_wants_to_date`                              |`int8`   |
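As a follow-up to the usage snippet, a small sketch of getting the table into pandas for the binary task. The card does not state which column is the "Will the two date?" label, so the columns below are simply taken from the feature list, not a confirmed label definition:

```python
from datasets import load_dataset

ds = load_dataset("mstz/speeddating")["train"]
df = ds.to_pandas()
# Base rates for the two mutual-interest columns from the feature table above.
print(df[["dater_wants_to_date", "dated_wants_to_date"]].mean())
```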
mstz/speeddating
[ "task_categories:tabular-classification", "size_categories:1K<n<10K", "language:en", "speeddating", "tabular_classification", "binary_classification", "region:us" ]
2023-03-23T23:41:42+00:00
{"language": ["en"], "size_categories": ["1K<n<10K"], "task_categories": ["tabular-classification"], "pretty_name": "Speed dating", "tags": ["speeddating", "tabular_classification", "binary_classification"], "configs": ["dating"]}
2023-04-07T13:54:21+00:00
648b5910046f2ebdfd9a3821e396bba242560359
# Dataset Card for Swiss Court View Generation

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

Swiss Judgment Prediction is a multilingual, diachronic dataset of 329K Swiss Federal Supreme Court (FSCS) cases. This dataset is part of a challenging text generation task.

### Supported Tasks and Leaderboards

### Languages

Switzerland has four official languages, three of which (German, French, and Italian) are represented in this dataset. The decisions are written by the judges and clerks in the language of the proceedings.

| Language | Subset | Number of Documents Full |
|----------|--------|--------------------------|
| German   | **de** | 160K                     |
| French   | **fr** | 128K                     |
| Italian  | **it** | 41K                      |

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

```
- decision_id: unique identifier for the decision
- facts: facts section of the decision
- considerations: considerations section of the decision
- label: label of the decision
- law_area: area of law of the decision
- language: language of the decision
- year: year of the decision
- court: court of the decision
- chamber: chamber of the decision
- canton: canton of the decision
- region: region of the decision
```

### Data Splits

## Dataset Creation

### Curation Rationale

### Source Data

#### Initial Data Collection and Normalization

The original data are published by the Swiss Federal Supreme Court (https://www.bger.ch) in unprocessed formats (HTML). The documents were downloaded from the Entscheidsuche portal (https://entscheidsuche.ch) in HTML.

#### Who are the source language producers?

The decisions are written by the judges and clerks in the language of the proceedings.

### Annotations

#### Annotation process

#### Who are the annotators?

Metadata is published by the Swiss Federal Supreme Court (https://www.bger.ch).

### Personal and Sensitive Information

The dataset contains publicly available court decisions from the Swiss Federal Supreme Court. Personal or sensitive information has been anonymized by the court before publication according to the following guidelines: https://www.bger.ch/home/juridiction/anonymisierungsregeln.html.

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

We release the data under CC-BY-4.0, which complies with the court's licensing (https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf)

© Swiss Federal Supreme Court, 2002-2022

The copyright for the editorial content of this website and the consolidated texts, which is owned by the Swiss Federal Supreme Court, is licensed under the Creative Commons Attribution 4.0 International licence. This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made.

Source: https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf

### Citation Information

Please cite our [ArXiv-Preprint](https://arxiv.org/abs/2306.09237)

```
@misc{rasiah2023scale,
      title={SCALE: Scaling up the Complexity for Advanced Language Model Evaluation},
      author={Vishvaksenan Rasiah and Ronja Stern and Veton Matoshi and Matthias Stürmer and Ilias Chalkidis and Daniel E. Ho and Joel Niklaus},
      year={2023},
      eprint={2306.09237},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

### Contributions
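A minimal loading sketch for this card; note that the configuration names ("de", "fr", "it") are an assumption based on the subset column in the language table above, not confirmed by the card:

```python
from datasets import load_dataset

# Minimal sketch: load the (assumed) German configuration and peek at one decision.
ds = load_dataset("rcds/swiss_judgment_prediction_xl", "de", split="train")
row = ds[0]
print(row["facts"][:200])
print(row["considerations"][:200])
```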
rcds/swiss_judgment_prediction_xl
[ "task_categories:text-classification", "size_categories:100K<n<1M", "language:it", "language:de", "language:fr", "license:cc-by-sa-4.0", "arxiv:2306.09237", "region:us" ]
2023-03-23T23:42:15+00:00
{"language": ["it", "de", "fr"], "license": "cc-by-sa-4.0", "size_categories": ["100K<n<1M"], "task_categories": ["text-classification"], "pretty_name": "Swiss Judgment Prediction XL"}
2023-07-20T06:31:57+00:00
bd8c5efe1e0d1c1a3c296fde2d07059f91251faa
# Dataset Card for "somos-alpaca-es-validations" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
dvilasuero/somos-alpaca-es-validations
[ "region:us" ]
2023-03-23T23:42:57+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "null"}, {"name": "inputs", "struct": [{"name": "1-instruction", "dtype": "string"}, {"name": "2-input", "dtype": "string"}, {"name": "3-output", "dtype": "string"}]}, {"name": "prediction", "dtype": "null"}, {"name": "prediction_agent", "dtype": "null"}, {"name": "annotation", "dtype": "string"}, {"name": "annotation_agent", "dtype": "string"}, {"name": "vectors", "struct": [{"name": "input", "sequence": "float64"}, {"name": "instruction", "sequence": "float64"}, {"name": "output", "sequence": "float64"}]}, {"name": "multi_label", "dtype": "bool"}, {"name": "explanation", "dtype": "null"}, {"name": "id", "dtype": "string"}, {"name": "metadata", "dtype": "null"}, {"name": "status", "dtype": "string"}, {"name": "event_timestamp", "dtype": "timestamp[us]"}, {"name": "metrics", "struct": [{"name": "text_length", "dtype": "int64"}]}], "splits": [{"name": "train", "num_bytes": 946357, "num_examples": 50}], "download_size": 0, "dataset_size": 946357}}
2023-03-24T08:00:37+00:00
c45e69a0612c56c070a76759f6ae809252212c9c
# Dataset Card for "test-listeners-sync" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
dvilasuero/test-listeners-sync
[ "region:us" ]
2023-03-23T23:57:17+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "null"}, {"name": "inputs", "struct": [{"name": "1-instruction", "dtype": "string"}, {"name": "2-input", "dtype": "string"}, {"name": "3-output", "dtype": "string"}]}, {"name": "prediction", "dtype": "null"}, {"name": "prediction_agent", "dtype": "null"}, {"name": "annotation", "dtype": "string"}, {"name": "annotation_agent", "dtype": "string"}, {"name": "vectors", "struct": [{"name": "input", "sequence": "float64"}, {"name": "instruction", "sequence": "float64"}, {"name": "output", "sequence": "float64"}]}, {"name": "multi_label", "dtype": "bool"}, {"name": "explanation", "dtype": "null"}, {"name": "id", "dtype": "string"}, {"name": "metadata", "dtype": "null"}, {"name": "status", "dtype": "string"}, {"name": "event_timestamp", "dtype": "timestamp[us]"}, {"name": "metrics", "struct": [{"name": "text_length", "dtype": "int64"}]}], "splits": [{"name": "train", "num_bytes": 2079271, "num_examples": 110}], "download_size": 1853526, "dataset_size": 2079271}}
2023-03-24T00:17:09+00:00
78ba808f1e05e1346d98bba2885e33c5c3d1ceb2
# Dataset Card for "processed" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
spdenisov/processed
[ "region:us" ]
2023-03-23T23:58:58+00:00
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 548644785, "num_examples": 191025}], "download_size": 80309791, "dataset_size": 548644785}}
2023-03-23T23:59:24+00:00
96e8122530d31eecb964757dc9125062ffeffc33
# Dataset Card for "ms_marco_clean" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
0x70DA/ms_marco_clean
[ "region:us" ]
2023-03-24T00:04:06+00:00
{"dataset_info": {"features": [{"name": "query", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "validation", "num_bytes": 25195165.999419145, "num_examples": 7059}, {"name": "train", "num_bytes": 208554643.03976434, "num_examples": 58192}, {"name": "test", "num_bytes": 24321156.439980637, "num_examples": 6814}], "download_size": 135125755, "dataset_size": 258070965.47916412}}
2023-03-24T00:04:29+00:00
c353cc3d919d6557882d466443410aa8b7ccebab
# Dataset Card for "nuevods-listener" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
dvilasuero/nuevods-listener
[ "region:us" ]
2023-03-24T00:21:04+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "null"}, {"name": "inputs", "struct": [{"name": "1-instruction", "dtype": "string"}, {"name": "2-input", "dtype": "string"}, {"name": "3-output", "dtype": "string"}]}, {"name": "prediction", "dtype": "null"}, {"name": "prediction_agent", "dtype": "null"}, {"name": "annotation", "dtype": "string"}, {"name": "annotation_agent", "dtype": "string"}, {"name": "vectors", "struct": [{"name": "input", "sequence": "float64"}, {"name": "instruction", "sequence": "float64"}, {"name": "output", "sequence": "float64"}]}, {"name": "multi_label", "dtype": "bool"}, {"name": "explanation", "dtype": "null"}, {"name": "id", "dtype": "string"}, {"name": "metadata", "dtype": "null"}, {"name": "status", "dtype": "string"}, {"name": "event_timestamp", "dtype": "timestamp[us]"}, {"name": "metrics", "struct": [{"name": "text_length", "dtype": "int64"}]}], "splits": [{"name": "train", "num_bytes": 606609, "num_examples": 32}], "download_size": 0, "dataset_size": 606609}}
2023-03-24T10:02:23+00:00
5e6567b7235a2b246d968b87c3e9beee73c1d609
# Wine

The [Wine dataset](https://www.kaggle.com/datasets/ghassenkhaled/wine-quality-data) from Kaggle.
Classify wine as red or white.

# Configurations and tasks

| **Configuration** | **Task**               | **Description**   |
|-------------------|------------------------|-------------------|
| wine              | Binary classification  | Is this red wine? |

# Usage

```python
from datasets import load_dataset

dataset = load_dataset("mstz/wine")["train"]
```
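As a follow-up to the usage snippet, a small sketch of checking the class balance. The card does not list the schema, so the target column name `is_red` below is a hypothetical placeholder; adjust it to the actual feature names:

```python
from collections import Counter

from datasets import load_dataset

ds = load_dataset("mstz/wine")["train"]
# `is_red` is a hypothetical name for the red/white target column.
print(Counter(ds["is_red"]))
```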
mstz/wine
[ "task_categories:tabular-classification", "size_categories:1K<n<10K", "language:en", "license:cc", "wine", "tabular_classification", "binary_classification", "region:us" ]
2023-03-24T00:29:02+00:00
{"language": ["en"], "license": "cc", "size_categories": ["1K<n<10K"], "task_categories": ["tabular-classification"], "pretty_name": "Wine quality", "tags": ["wine", "tabular_classification", "binary_classification"], "configs": ["wine"]}
2023-04-07T14:11:56+00:00
4d938e13d1908c34c537a7035b782d1d719cff3b
# Diamonds

The [Diamonds dataset](https://www.kaggle.com/datasets/ulrikthygepedersen/diamonds) from Kaggle.
A dataset of cut-diamond properties used to determine the cut quality.

# Configurations and tasks

| **Configuration** | **Task**                   | **Description**                                                  |
|-------------------|----------------------------|------------------------------------------------------------------|
| encoding          |                            | Encoding dictionary showing original values of encoded features. |
| cut               | Multiclass classification  | Predict the cut quality of the diamond.                          |
| cut_binary        | Binary classification      | Is the cut quality at least very good?                           |

# Usage

```python
from datasets import load_dataset

dataset = load_dataset("mstz/diamonds", "cut")["train"]
```

# Features

|**Feature**                        |**Type**   |
|-----------------------------------|-----------|
|`carat`                            | `float32` |
|`color`                            | `string`  |
|`clarity`                          | `float32` |
|`depth`                            | `float32` |
|`table`                            | `float32` |
|`price`                            | `float32` |
|`observation_point_on_axis_x`      | `float32` |
|`observation_point_on_axis_y`      | `float32` |
|`observation_point_on_axis_z`      | `float32` |
|`cut`                              | `int8`    |
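The `encoding` configuration documented above can be used to interpret the integer-coded features; a minimal sketch under that assumption:

```python
from datasets import load_dataset

# Minimal sketch: load the encoding dictionary alongside the multiclass data.
encoding = load_dataset("mstz/diamonds", "encoding")["train"]
cut = load_dataset("mstz/diamonds", "cut")["train"]
print(encoding[0])    # original values behind an encoded feature
print(cut[0]["cut"])  # encoded cut-quality class (int8)
```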
mstz/diamonds
[ "task_categories:tabular-classification", "size_categories:10K<n<100K", "language:en", "license:cc", "student performance", "tabular_classification", "multiclass_classification", "UCI", "region:us" ]
2023-03-24T01:12:26+00:00
{"language": ["en"], "license": "cc", "size_categories": ["10K<n<100K"], "task_categories": ["tabular-classification"], "pretty_name": "Diamond", "tags": ["student performance", "tabular_classification", "multiclass_classification", "UCI"], "configs": ["encoding", "cut", "cut_binary"]}
2023-04-16T16:27:20+00:00
f7e0a12bbac4ee107be734d70b0d126621b0ea61
# Dataset Card for "GamePhysics Captions" AI generated captions for the [GamePhysics Dataset](https://huggingface.co/datasets/taesiri/GamePhysics). [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
asgaardlab/GamephysicsCaptions
[ "task_categories:image-to-text", "size_categories:1M<n<10M", "language:en", "license:openrail", "game", "region:us" ]
2023-03-24T01:22:38+00:00
{"language": ["en"], "license": "openrail", "size_categories": ["1M<n<10M"], "task_categories": ["image-to-text"], "pretty_name": "GamePhysics Captions", "dataset_info": {"features": [{"name": "video_id", "dtype": "string"}, {"name": "game_names", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "blip2-opt-6.7b-8bit", "dtype": "string"}, {"name": "blip2-opt-6.7b", "dtype": "string"}, {"name": "coca_ViT-L-14", "dtype": "string"}, {"name": "git-large-textcaps_captions", "dtype": "string"}], "splits": [{"name": "validation", "num_bytes": 17264648013.888, "num_examples": 1843968}], "download_size": 17050299277, "dataset_size": 17264648013.888}, "tags": ["game"]}
2023-03-24T02:41:51+00:00
9aeaed565fbec152d51f5ba2ab006976362b0464
# Dataset Card for "gen.1.celeba" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
lansinuote/gen.1.celeba
[ "region:us" ]
2023-03-24T03:36:48+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "5_o_Clock_Shadow", "dtype": "int64"}, {"name": "Arched_Eyebrows", "dtype": "int64"}, {"name": "Attractive", "dtype": "int64"}, {"name": "Bags_Under_Eyes", "dtype": "int64"}, {"name": "Bald", "dtype": "int64"}, {"name": "Bangs", "dtype": "int64"}, {"name": "Big_Lips", "dtype": "int64"}, {"name": "Big_Nose", "dtype": "int64"}, {"name": "Black_Hair", "dtype": "int64"}, {"name": "Blond_Hair", "dtype": "int64"}, {"name": "Blurry", "dtype": "int64"}, {"name": "Brown_Hair", "dtype": "int64"}, {"name": "Bushy_Eyebrows", "dtype": "int64"}, {"name": "Chubby", "dtype": "int64"}, {"name": "Double_Chin", "dtype": "int64"}, {"name": "Eyeglasses", "dtype": "int64"}, {"name": "Goatee", "dtype": "int64"}, {"name": "Gray_Hair", "dtype": "int64"}, {"name": "Heavy_Makeup", "dtype": "int64"}, {"name": "High_Cheekbones", "dtype": "int64"}, {"name": "Male", "dtype": "int64"}, {"name": "Mouth_Slightly_Open", "dtype": "int64"}, {"name": "Mustache", "dtype": "int64"}, {"name": "Narrow_Eyes", "dtype": "int64"}, {"name": "No_Beard", "dtype": "int64"}, {"name": "Oval_Face", "dtype": "int64"}, {"name": "Pale_Skin", "dtype": "int64"}, {"name": "Pointy_Nose", "dtype": "int64"}, {"name": "Receding_Hairline", "dtype": "int64"}, {"name": "Rosy_Cheeks", "dtype": "int64"}, {"name": "Sideburns", "dtype": "int64"}, {"name": "Smiling", "dtype": "int64"}, {"name": "Straight_Hair", "dtype": "int64"}, {"name": "Wavy_Hair", "dtype": "int64"}, {"name": "Wearing_Earrings", "dtype": "int64"}, {"name": "Wearing_Hat", "dtype": "int64"}, {"name": "Wearing_Lipstick", "dtype": "int64"}, {"name": "Wearing_Necklace", "dtype": "int64"}, {"name": "Wearing_Necktie", "dtype": "int64"}, {"name": "Young", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1474211218.427, "num_examples": 202599}], "download_size": 1396302346, "dataset_size": 1474211218.427}}
2023-03-24T03:46:24+00:00
1f6a5c585340dc797abbb3524749f71e05109d05
# Dataset Card for "DTD_parition1_test_google_flan_t5_xxl_mode_C_A_T_ns_200" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
CVasNLPExperiments/DTD_parition1_test_google_flan_t5_xxl_mode_C_A_T_ns_200
[ "region:us" ]
2023-03-24T03:42:50+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0_clip_tags_ViT_L_14_LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14_rices", "num_bytes": 68037, "num_examples": 200}], "download_size": 28182, "dataset_size": 68037}}
2023-03-24T03:55:38+00:00
5f173a0826ced8d9ed08c2d5b740525747188382
# Dataset Card for "DTD_parition1_test_google_flan_t5_xxl_mode_C_A_T_ns_1880" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
CVasNLPExperiments/DTD_parition1_test_google_flan_t5_xxl_mode_C_A_T_ns_1880
[ "region:us" ]
2023-03-24T04:10:06+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_1_clip_tags_ViT_L_14_LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14_rices", "num_bytes": 1236913, "num_examples": 1880}, {"name": "fewshot_3_clip_tags_ViT_L_14_LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14_rices", "num_bytes": 2420309, "num_examples": 1880}, {"name": "fewshot_5_clip_tags_ViT_L_14_LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14_rices", "num_bytes": 3604225, "num_examples": 1880}, {"name": "fewshot_0_clip_tags_ViT_L_14_LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14_rices", "num_bytes": 644439, "num_examples": 1880}, {"name": "fewshot_0_clip_tags_ViT_L_14_LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14_clip_tags_ViT_L_14_simple_specific_rices", "num_bytes": 543264, "num_examples": 1880}, {"name": "fewshot_0_clip_tags_ViT_L_14_Attributes_ViT_L_14_text_davinci_003_clip_tags_ViT_L_14_simple_specific_rices", "num_bytes": 969589, "num_examples": 1880}], "download_size": 2542967, "dataset_size": 9418739}}
2023-04-04T01:54:10+00:00
686b639bab478925e953b19d92afa46039a2840c
# Dataset Card for "DTD_parition1_test_google_flan_t5_xl_mode_C_A_T_ns_1880" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
CVasNLPExperiments/DTD_parition1_test_google_flan_t5_xl_mode_C_A_T_ns_1880
[ "region:us" ]
2023-03-24T04:46:12+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_1_clip_tags_ViT_L_14_LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14_rices", "num_bytes": 1236606, "num_examples": 1880}, {"name": "fewshot_3_clip_tags_ViT_L_14_LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14_rices", "num_bytes": 2420007, "num_examples": 1880}, {"name": "fewshot_5_clip_tags_ViT_L_14_LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14_rices", "num_bytes": 3603873, "num_examples": 1880}, {"name": "fewshot_3_clip_tags_ViT_L_14_LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14_clip_tags_ViT_L_14_simple_specific_rices", "num_bytes": 2495169, "num_examples": 1880}], "download_size": 2439789, "dataset_size": 9755655}}
2023-03-29T16:06:07+00:00
71c207d4c6a77b55d134a2338fbd2e7c78253cc0
logh/myself
[ "license:unknown", "region:us" ]
2023-03-24T05:30:23+00:00
{"license": "unknown"}
2023-04-09T11:03:06+00:00
c49ae65cb6544feeeee0627b44c2e0a0ebeb9203
# Dataset Card for SpaCE2021 ## Dataset Description - **Homepage:** http://ccl.pku.edu.cn:8084/SpaCE2021/ - **Repository:** https://github.com/2030NLP/SpaCE2021 - **Paper:** [詹卫东、孙春晖、岳朋雪、唐乾桐、秦梓巍,2022,空间语义理解能力评测任务设计的新思路——SpaCE2021数据集的研制,《语言文字应用》2022年第2期(总第122期),pp.99-110。](https://yyyy.cbpt.cnki.net/WKC/WebPublication/paperDigest.aspx?paperID=c66cca51-7783-430e-abf1-28f6c28c49f6) - **Leaderboard:** https://github.com/2030NLP/SpaCE2021 - **Point of Contact:** [email protected] ### Dataset Summary This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages Chinese ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
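The metadata below defines three configurations (task1, task2, task3), each with its own feature set; a minimal loading sketch based on those names:

```python
from datasets import load_dataset

# Minimal sketch: config names task1/task2/task3 come from the dataset metadata.
task1 = load_dataset("2030NLP/SpaCE2021", "task1")
print(task1["train"][0])  # fields: qID, context, judge1
```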
2030NLP/SpaCE2021
[ "task_categories:text-classification", "task_ids:acceptability-classification", "task_ids:natural-language-inference", "annotations_creators:crowdsourced", "annotations_creators:expert-generated", "annotations_creators:machine-generated", "size_categories:10K<n<100K", "source_datasets:ccl", "language:zh", "license:cc-by-nc-sa-4.0", "region:us" ]
2023-03-24T05:36:13+00:00
{"annotations_creators": ["crowdsourced", "expert-generated", "machine-generated"], "language": ["zh"], "license": "cc-by-nc-sa-4.0", "size_categories": ["10K<n<100K"], "source_datasets": ["ccl"], "task_categories": ["text-classification"], "task_ids": ["acceptability-classification", "natural-language-inference"], "pretty_name": "space21", "dataset_info": [{"config_name": "task1", "features": [{"name": "qID", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "judge1", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 1470413, "num_examples": 4237}, {"name": "validation", "num_bytes": 321061, "num_examples": 806}, {"name": "test", "num_bytes": 263854, "num_examples": 794}], "download_size": 2373041, "dataset_size": 2055328}, {"config_name": "task2", "features": [{"name": "qID", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "reason", "dtype": "string"}, {"name": "judge2", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 2586476, "num_examples": 5989}, {"name": "validation", "num_bytes": 712348, "num_examples": 2088}, {"name": "test", "num_bytes": 773393, "num_examples": 1952}], "download_size": 4607294, "dataset_size": 4072217}, {"config_name": "task3", "features": [{"name": "qID", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "reason", "dtype": "string"}, {"name": "judge1", "dtype": "bool"}, {"name": "judge2", "dtype": "bool"}], "splits": [{"name": "validation", "num_bytes": 539209, "num_examples": 1203}, {"name": "test", "num_bytes": 445760, "num_examples": 1167}], "download_size": 1110504, "dataset_size": 984969}]}
2023-04-03T16:38:28+00:00
a0f4e4451d4a3fc80ff825d6c0c292000f1ef00d
# Dataset Card for Dataset Name ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages ZH-CN-HANS ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
2030NLP/SpaCE2022
[ "task_categories:text-classification", "task_categories:feature-extraction", "size_categories:10K<n<100K", "language:zh", "spatial", "cognitive", "region:us" ]
2023-03-24T05:36:55+00:00
{"language": ["zh"], "size_categories": ["10K<n<100K"], "task_categories": ["text-classification", "feature-extraction"], "pretty_name": "SpaCE2022", "tags": ["spatial", "cognitive"], "dataset_info": [{"config_name": "task1", "features": [{"name": "qid", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "judge", "dtype": "int8"}], "splits": [{"name": "train", "num_bytes": 4018440, "num_examples": 10993}, {"name": "validation", "num_bytes": 599209, "num_examples": 1602}], "download_size": 4932714, "dataset_size": 4617649}, {"config_name": "task2", "features": [{"name": "qid", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "reasons", "sequence": [{"name": "fragments", "sequence": [{"name": "role", "dtype": {"class_label": {"names": {"0": "S", "1": "P", "2": "E", "3": "S1", "4": "P1", "5": "E1", "6": "S2", "7": "P2", "8": "E2", "9": "text1", "10": "text2"}}}}, {"name": "text", "dtype": "string"}, {"name": "idxes", "sequence": "int32"}]}, {"name": "type", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C"}}}}]}], "splits": [{"name": "train", "num_bytes": 2655240, "num_examples": 4966}, {"name": "validation", "num_bytes": 370883, "num_examples": 700}], "download_size": 3543914, "dataset_size": 3026123}]}
2023-12-28T11:56:21+00:00
797bfb9e89778e1591e9a6a48dac84463cb2a315
### Languages

The BCP-47 code for the dataset's language is unk.

## Dataset Structure

### Data Instances

A sample from this dataset looks as follows:

```json
[
  {
    "image": "<265x190 RGB PIL image>",
    "target": 10
  },
  {
    "image": "<800x462 RGB PIL image>",
    "target": 6
  }
]
```

### Dataset Fields

The dataset has the following fields (also called "features"):

```json
{
  "image": "Image(decode=True, id=None)",
  "target": "ClassLabel(names=['Burls \u7bc0\u7624', 'Canker \u6f70\u760d', 'Co-dominant branches \u7b49\u52e2\u679d', 'Co-dominant stems \u7b49\u52e2\u5e79', 'Cracks or splits \u88c2\u7e2b\u6216\u88c2\u958b', 'Crooks or abrupt bends \u4e0d\u5e38\u898f\u5f4e\u66f2', 'Cross branches \u758a\u679d', 'Dead surface roots \u8868\u6839\u67af\u840e ', 'Deadwood \u67af\u6728', 'Decay or cavity \u8150\u721b\u6216\u6a39\u6d1e', 'Fungal fruiting bodies \u771f\u83cc\u5b50\u5be6\u9ad4', 'Galls \u816b\u7624 ', 'Girdling root \u7e8f\u7e5e\u6839 ', 'Heavy lateral limb \u91cd\u5074\u679d', 'Included bark \u5167\u593e\u6a39\u76ae', 'Parasitic or epiphytic plants \u5bc4\u751f\u6216\u9644\u751f\u690d\u7269', 'Pest and disease \u75c5\u87f2\u5bb3', 'Poor taper \u4e0d\u826f\u6f38\u5c16\u751f\u9577', 'Root-plate movement \u6839\u57fa\u79fb\u4f4d ', 'Sap flow \u6ef2\u6db2', 'Trunk girdling \u7e8f\u7e5e\u6a39\u5e79 ', 'Wounds or mechanical injury \u50b7\u75d5\u6216\u6a5f\u68b0\u7834\u640d'], id=None)"
}
```

### Dataset Splits

This dataset is split into a train and validation split. The split sizes are as follows:

| Split name | Num samples |
| ---------- | ------------------- |
| train      | 225                 |
| valid      | 67                  |
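As in the sample format above, targets are `ClassLabel` integers; a small sketch of tallying the class distribution of the train split:

```python
from collections import Counter

from datasets import load_dataset

# Minimal sketch: count the most frequent tree-condition classes.
ds = load_dataset("OttoYu/TreeConditionHK", split="train")
names = ds.features["target"].names
print(Counter(names[t] for t in ds["target"]).most_common(5))
```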
OttoYu/TreeConditionHK
[ "task_categories:image-classification", "language:en", "doi:10.57967/hf/0476", "region:us" ]
2023-03-24T05:51:36+00:00
{"language": ["en"], "task_categories": ["image-classification"]}
2023-03-26T10:10:42+00:00
0695d3fb47ae37d3f393468f6f8d8600dcb7f199
# Dataset Card for "flores200_packed2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
bri25yu/flores200_packed2
[ "region:us" ]
2023-03-24T06:07:36+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int32"}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 15086599195.0, "num_examples": 10240000}, {"name": "val", "num_bytes": 3827042, "num_examples": 5000}, {"name": "test", "num_bytes": 7670994, "num_examples": 10000}], "download_size": 6552366058, "dataset_size": 15098097231.0}}
2023-03-24T07:09:13+00:00
299e744f3bdfb4948f4296181fd6e749b7e0ba85
# Dataset Card for "gen.2.chorales" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
lansinuote/gen.2.chorales
[ "region:us" ]
2023-03-24T06:13:01+00:00
{"dataset_info": {"features": [{"name": "data", "sequence": {"sequence": {"sequence": {"sequence": "float32"}}}}], "splits": [{"name": "train", "num_bytes": 9977988, "num_examples": 229}], "download_size": 33969, "dataset_size": 9977988}}
2023-03-24T06:13:10+00:00
86ea3fab5e18ec3783c454c08f630a613e65c9a0
# Dataset Card for "reward_model_anthropic" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Deojoandco/reward_model_anthropic
[ "region:us" ]
2023-03-24T06:26:26+00:00
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "chosen", "dtype": "string"}, {"name": "rejected", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "toxicity", "dtype": "float64"}, {"name": "severe_toxicity", "dtype": "float64"}, {"name": "obscene", "dtype": "float64"}, {"name": "identity_attack", "dtype": "float64"}, {"name": "insult", "dtype": "float64"}, {"name": "threat", "dtype": "float64"}, {"name": "sexual_explicit", "dtype": "float64"}, {"name": "max_toxity_key", "dtype": "string"}, {"name": "max_toxity_value", "dtype": "float64"}, {"name": "toxic", "dtype": "bool"}, {"name": "regard", "list": [{"name": "label", "dtype": "string"}, {"name": "score", "dtype": "float64"}]}, {"name": "regard_neutral", "dtype": "float64"}, {"name": "regard_negative", "dtype": "float64"}, {"name": "regard_positive", "dtype": "float64"}, {"name": "regard_other", "dtype": "float64"}, {"name": "bias_matches", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 16251153, "num_examples": 8552}, {"name": "train", "num_bytes": 304510601, "num_examples": 160800}], "download_size": 179966974, "dataset_size": 320761754}}
2023-03-24T14:19:15+00:00
fd17c74c746c632a9c595cdb90de967fdcfa2bef
# Dataset Card for "flores200_eng_input_scaffolding_mt5" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
bri25yu/flores200_eng_input_scaffolding_mt5
[ "region:us" ]
2023-03-24T06:33:57+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int32"}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 9706777237, "num_examples": 10240000}, {"name": "val", "num_bytes": 3827042, "num_examples": 5000}, {"name": "test", "num_bytes": 7670994, "num_examples": 10000}], "download_size": 4662177615, "dataset_size": 9718275273}}
2023-03-24T06:53:52+00:00
53c565fb81ed777c4b512542ba43a99787280a39
Jasper881108/api-zeroshot-summary
[ "license:openrail", "region:us" ]
2023-03-24T06:45:23+00:00
{"license": "openrail"}
2023-03-27T05:03:56+00:00
c0a72df79e08949b8b4490c64b8b86c95aa34da7
machineliker/Style1
[ "region:us" ]
2023-03-24T07:02:12+00:00
{}
2023-03-24T07:02:36+00:00
4f7d346729a96a71bd6a05fb68af48920b9c587c
# Dataset for Visual-Tactile Sensing for In-Hand Object Reconstruction

[**Paper**](https://arxiv.org/pdf/2303.14498.pdf) | [**Project Page**](https://sites.google.com/view/vtaco) <br>

This repository contains the dataset of the paper:

**Visual-Tactile Sensing for In-Hand Object Reconstruction**

Wenqiang Xu*, Zhenjun Yu*, Han Xue, Ruolin Ye, Siqiong Yao, Cewu Lu (* = Equal contribution)

**CVPR 2023**

## Download

Download the dataset into the repository of [VTacO](https://github.com/jeffsonyu/VTacO) under the folder './data'; it should look like:

```
VTacO
├── data
│   ├── VTacO_AKB_class
│   │   │── 001
│   │   │   |── $class_name
│   │   │   |── metadata.yaml
│   │   │── 002
│   │   │── ...
│   │   │── 007
│   ├── VTacO_YCB
│   │   │── 003
│   │   │── metadata.yaml
│   ├── VTacO_mesh
│   │   │── mesh_obj
│   │   │── depth_origin.txt
```

You can then begin training VTacO and VTacOH by following the instructions from [VTacO](https://github.com/jeffsonyu/VTacO).
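A small sanity-check sketch for the layout above. The folder names are read off the tree; that all three live directly under './data' is an assumption based on the download instructions:

```python
from pathlib import Path

# Minimal sketch: verify the expected folders exist after downloading into ./data.
data = Path("data")
for sub in ("VTacO_AKB_class", "VTacO_YCB", "VTacO_mesh"):
    print(sub, "found" if (data / sub).is_dir() else "missing")
```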
robotflow/vtaco
[ "arxiv:2303.14498", "region:us" ]
2023-03-24T07:02:36+00:00
{}
2023-07-18T09:25:05+00:00
7de4c0ca802d3b14ce88e2916505eabd2efe1dc5
# Dataset Card for "flores200_eng_output_scaffolding_mt5" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
hlillemark/flores200_eng_output_scaffolding_mt5
[ "region:us" ]
2023-03-24T07:29:36+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int32"}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 10718199156, "num_examples": 10240000}, {"name": "val", "num_bytes": 3827042, "num_examples": 5000}, {"name": "test", "num_bytes": 7670994, "num_examples": 10000}], "download_size": 4669652168, "dataset_size": 10729697192}}
2023-03-24T07:41:39+00:00
dc83be217d33e15568028b9a35ed6b22316d8fd4
# Dataset Card for "marathi_numbers-1-20" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
SameerMahajan/marathi_numbers-1-20
[ "region:us" ]
2023-03-24T07:44:03+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "labels", "sequence": {"class_label": {"names": {"0": 0, "1": 1, "2": 2, "3": 3, "4": 4, "5": 5, "6": 6, "7": 7, "8": 8, "9": 9, "10": 10, "11": 11, "12": 12, "13": 13, "14": 14, "15": 15, "16": 16, "17": 17, "18": 18, "19": 19}}}}, {"name": "number", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 79901585.38, "num_examples": 1020}], "download_size": 6503225, "dataset_size": 79901585.38}}
2023-03-25T04:40:31+00:00
49492782b8291515b6abef00b583bbd3e8a00850
levalencia/TwitterHateSpeech
[ "license:cc0-1.0", "region:us" ]
2023-03-24T07:48:01+00:00
{"license": "cc0-1.0"}
2023-03-24T07:49:29+00:00
8dc8e077705017f53019b464338b44e4df1fb1f9
# Dataset Card for "common_voice_10_1_th_clean_test" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
DylanonWic/common_voice_10_1_th_clean_test
[ "region:us" ]
2023-03-24T08:33:14+00:00
{"dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "labels", "sequence": "int64"}, {"name": "input_values", "sequence": "float32"}], "splits": [{"name": "validation", "num_bytes": 2996917184.000732, "num_examples": 10028}, {"name": "test", "num_bytes": 3187660791.5096064, "num_examples": 10160}], "download_size": 5739728701, "dataset_size": 6184577975.510338}}
2023-03-24T08:37:44+00:00
bbec738348f4ed3b6cb1b3e938e47fe6be28b69b
KoddaDuck/fleurs
[ "task_categories:automatic-speech-recognition", "size_categories:10M<n<100M", "language:zh", "license:apache-2.0", "region:us" ]
2023-03-24T08:48:34+00:00
{"language": ["zh"], "license": "apache-2.0", "size_categories": ["10M<n<100M"], "task_categories": ["automatic-speech-recognition"]}
2023-04-15T01:28:36+00:00
dc4179f1015c6c99463bfd91b9353e590599be04
liuyue2024/test_whisper
[ "license:apache-2.0", "region:us" ]
2023-03-24T08:58:23+00:00
{"license": "apache-2.0"}
2023-03-24T08:58:23+00:00
81932f1af6697b27f36cb2819d6ddf314beff7e4
## Vegetable Image Dataset

### Background

The initial experiments were conducted with 15 common vegetables found around the world: bean, bitter gourd, bottle gourd, brinjal (eggplant), broccoli, cabbage, capsicum, carrot, cauliflower, cucumber, papaya, potato, pumpkin, radish, and tomato. A total of 21,000 images across the 15 classes were used, with each class containing 1,400 images of size 224×224 in *.jpg format. 70% of the dataset is used for training, 15% for validation, and 15% for testing.

### Directory layout

This dataset contains three folders:

- train (15,000 images)
- test (3,000 images)
- validation (3,000 images)

### Data collection

The images in this dataset were collected from vegetable farms and markets for one of our projects.

### Generating the metadata files

Running the Python code below generates three CSV metadata files and one class-name file on the desktop (the class-name file needs to be placed into the data folder).

```python
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
1. Download the data file Vegetable Images.zip and extract it to the desktop.
2. Then run `python generate.py` to generate the three metadata files and the class-name file.
"""
import os
from pathlib import Path

category_dict = {
    'Bean': '豆类',
    'Bitter_Gourd': '苦瓜',
    'Bottle_Gourd': '葫芦',
    'Brinjal': '茄子',
    'Broccoli': '西兰花',
    'Cabbage': '卷心菜',
    'Capsicum': '辣椒',
    'Carrot': '胡萝卜',
    'Cauliflower': '花椰菜',
    'Cucumber': '黄瓜',
    'Papaya': '木瓜',
    'Potato': '土豆',
    'Pumpkin': '南瓜',
    'Radish': '萝卜',
    'Tomato': '番茄',
}

base_path = Path.home().joinpath('desktop')
# Note: relies on dicts preserving insertion order (guaranteed since Python 3.7)
data = '\n'.join((item for item in category_dict.values()))
base_path.joinpath('classname.txt').write_text(data, encoding='utf-8')


def create(filename):
    csv_path = base_path.joinpath(f'{filename}.csv')
    with csv_path.open('wt', encoding='utf-8', newline='') as csv:
        csv.writelines([f'image,category{os.linesep}'])
        data_path = base_path.joinpath('Vegetable Images', filename)
        batch = 0
        datas = []
        keys = list(category_dict.keys())
        for image_path in data_path.rglob('*.jpg'):
            batch += 1
            part1 = str(image_path).removeprefix(str(base_path)).replace('\\', '/')[1:]
            part2 = keys.index(image_path.parents[0].name)
            datas.append(f'{part1},{part2}{os.linesep}')
            if batch > 100:
                csv.writelines(datas)
                datas.clear()
                batch = 0  # reset the counter so rows are written in batches of ~100
        if datas:
            csv.writelines(datas)
    return csv_path.stat().st_size


if __name__ == '__main__':
    print(create('train'))
    print(create('test'))
    print(create('validation'))
```

### Acknowledgements

Many thanks to the original dataset provider: [Vegetable Image Dataset](https://www.kaggle.com/datasets/misrakahmed/vegetable-image-dataset).

### Cloning the data

```bash
git clone https://huggingface.co/datasets/cc92yy3344/vegetable.git
```
cc92yy3344/vegetable
[ "task_categories:image-classification", "task_ids:multi-class-image-classification", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:zh", "license:apache-2.0", "蔬菜", "图像分类", "region:us" ]
2023-03-24T09:02:43+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["zh"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["image-classification"], "task_ids": ["multi-class-image-classification"], "pretty_name": "15\u79cd\u852c\u83dc\u6570\u636e\u96c6", "tags": ["\u852c\u83dc", "\u56fe\u50cf\u5206\u7c7b"]}
2023-03-29T11:21:19+00:00
c0d46227bedf21fe9e07e66d590b62fb88ddfceb
# Dataset Card for "reentrancy_solidity_function" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
nguyenminh871/reentrancy_solidity_function
[ "region:us" ]
2023-03-24T09:36:23+00:00
{"dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "func", "dtype": "string"}, {"name": "target", "dtype": "bool"}, {"name": "project", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 840896, "num_examples": 3203}], "download_size": 156960, "dataset_size": 840896}}
2023-03-24T10:25:20+00:00
f9c23c3121a0b6454b390acffeeb1fe74793f140
# Dataset Card for "pengadilan_dataset_mp3_aug_preparedaa" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
yusufagung29/pengadilan_dataset_mp3_aug_preparedaa
[ "region:us" ]
2023-03-24T09:55:49+00:00
{"dataset_info": {"features": [{"name": "input_features", "sequence": {"sequence": "float32"}}, {"name": "input_length", "dtype": "float64"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 230528064, "num_examples": 240}, {"name": "test", "num_bytes": 57631400, "num_examples": 60}], "download_size": 49103627, "dataset_size": 288159464}}
2023-03-24T09:56:49+00:00
7d426231a55595457dbaecad5995d308af5fc193
# Dataset Card for "google_fleurs_plus_common_voice_11_ar" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
MohammadJamalaldeen/google_fleurs_plus_common_voice_11_ar
[ "region:us" ]
2023-03-24T09:57:03+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2582102605.04, "num_examples": 40880}, {"name": "test", "num_bytes": 304561718.0, "num_examples": 10440}], "download_size": 0, "dataset_size": 2886664323.04}}
2023-03-24T10:03:21+00:00
640ffc92e241f05fce87d4165d1c5f2d97b95b9d
# Dataset Card for "jxner" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
jarvisx17/Medical-NER-Dataset
[ "region:us" ]
2023-03-24T10:11:26+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-Medicine", "2": "I-Medicine", "3": "B-MedicalCondition", "4": "I-MedicalCondition", "5": "B-Pathogen", "6": "I-Pathogen"}}}}], "splits": [{"name": "train", "num_bytes": 21171, "num_examples": 16}, {"name": "validation", "num_bytes": 9008, "num_examples": 6}, {"name": "test", "num_bytes": 10686, "num_examples": 6}], "download_size": 23130, "dataset_size": 40865}}
2023-03-24T12:03:29+00:00
b4aac201f58c0c1614ee4526167438833ac2dae8
# Dataset Card for "common_voice_10_1_th_clean_split_0" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
DylanonWic/common_voice_10_1_th_clean_split_0_old
[ "region:us" ]
2023-03-24T10:23:18+00:00
{"dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "labels", "sequence": "int64"}, {"name": "input_values", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 13074645939.656857, "num_examples": 50670}], "download_size": 11878689391, "dataset_size": 13074645939.656857}}
2023-03-24T10:34:19+00:00
b448f622b078cf7a97bd54d11ae97ea1aabef690
# ParaDetox: Detoxification with Parallel Data (English). Content Task Results

This repository contains information about the **Content Task** markup from the [English ParaDetox dataset](https://huggingface.co/datasets/s-nlp/paradetox) collection pipeline. The original paper ["ParaDetox: Detoxification with Parallel Data"](https://aclanthology.org/2022.acl-long.469/) was presented at the ACL 2022 main conference.

## ParaDetox Collection Pipeline

The ParaDetox dataset was collected via the [Yandex.Toloka](https://toloka.yandex.com/) crowdsourcing platform, in three steps:
* *Task 1:* **Generation of Paraphrases**: The first crowdsourcing task asks users to eliminate toxicity in a given sentence while keeping the content.
* *Task 2:* **Content Preservation Check**: We show users the generated paraphrases along with their original variants and ask them to indicate if they have close meanings.
* *Task 3:* **Toxicity Check**: Finally, we check if the workers succeeded in removing toxicity.

Specifically, this repo contains the results of **Task 2: Content Preservation Check**. Only samples with markup confidence >= 90 are present. One text in each pair is toxic; the other is its (intended) non-toxic paraphrase. In total, the dataset contains 32,317 pairs, of which a minority (4,562 pairs) are negative examples.

## Citation

```
@inproceedings{logacheva-etal-2022-paradetox,
    title = "{P}ara{D}etox: Detoxification with Parallel Data",
    author = "Logacheva, Varvara and
      Dementieva, Daryna and
      Ustyantsev, Sergey and
      Moskovskiy, Daniil and
      Dale, David and
      Krotova, Irina and
      Semenov, Nikita and
      Panchenko, Alexander",
    booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.acl-long.469",
    pages = "6804--6818",
    abstract = "We present a novel pipeline for the collection of parallel data for the detoxification task. We collect non-toxic paraphrases for over 10,000 English toxic sentences. We also show that this pipeline can be used to distill a large existing corpus of paraphrases to get toxic-neutral sentence pairs. We release two parallel corpora which can be used for the training of detoxification models. To the best of our knowledge, these are the first parallel datasets for this task. We describe our pipeline in detail to make it fast to set up for a new language or domain, thus contributing to faster and easier development of new parallel resources. We train several detoxification models on the collected data and compare them with several baselines and state-of-the-art unsupervised approaches. We conduct both automatic and manual evaluations. All models trained on parallel data outperform the state-of-the-art unsupervised models by a large margin. This suggests that our novel datasets can boost the performance of detoxification systems.",
}
```

## Contacts

For any questions, please contact: Daryna Dementieva ([email protected])
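## Example Usage

A minimal loading sketch, assuming the data files load directly with `datasets`; since the exact column names are not documented in this card, the code inspects the schema rather than assuming it:

```python
from datasets import load_dataset

# Load the content preservation markup and inspect it first; the
# column layout below is not documented here, so print it to check.
dataset = load_dataset("s-nlp/en_paradetox_content", split="train")
print(dataset.features)  # actual column names and types
print(dataset[0])        # one (toxic text, paraphrase, label) record
```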
s-nlp/en_paradetox_content
[ "task_categories:text-classification", "language:en", "license:openrail++", "region:us" ]
2023-03-24T11:07:04+00:00
{"language": ["en"], "license": "openrail++", "task_categories": ["text-classification"]}
2023-09-08T07:38:03+00:00
a2f395d964219b82e3c0be363a083487e37f3790
# Dataset Card for "tib_03" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
gigant/tib_03
[ "region:us" ]
2023-03-24T11:10:32+00:00
{"dataset_info": {"features": [{"name": "doi", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "video_url", "dtype": "string"}, {"name": "license", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "genre", "dtype": "string"}, {"name": "release_year", "dtype": "string"}, {"name": "author", "dtype": "string"}, {"name": "contributors", "dtype": "string"}, {"name": "abstract", "dtype": "string"}, {"name": "transcript", "dtype": "string"}, {"name": "transcript_segments", "sequence": [{"name": "id", "dtype": "int32"}, {"name": "seek", "dtype": "int32"}, {"name": "start", "dtype": "float32"}, {"name": "end", "dtype": "float32"}, {"name": "text", "dtype": "string"}, {"name": "tokens", "sequence": "int32"}, {"name": "temperature", "dtype": "float32"}, {"name": "avg_logprob", "dtype": "float32"}, {"name": "compression_ratio", "dtype": "float32"}, {"name": "no_speech_prob", "dtype": "float32"}]}, {"name": "keyframes", "sequence": [{"name": "slide", "dtype": "string"}, {"name": "frames", "sequence": "int32"}, {"name": "timestamp", "sequence": "float32"}]}, {"name": "language", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 825021028.0243876, "num_examples": 7282}, {"name": "test", "num_bytes": 103212600.45732176, "num_examples": 911}, {"name": "valid", "num_bytes": 103099304.51829067, "num_examples": 910}], "download_size": 502108840, "dataset_size": 1031332933.0}}
2023-03-24T11:12:47+00:00
72487c7a8a6061637fd24c84a14079cbe38e04da
# Dataset Card for unarXive IMRaD classification

## Dataset Description

* **Homepage:** [https://github.com/IllDepence/unarXive](https://github.com/IllDepence/unarXive)
* **Paper:** [unarXive 2022: All arXiv Publications Pre-Processed for NLP, Including Structured Full-Text and Citation Network](https://arxiv.org/abs/2303.14957)

### Dataset Summary

The unarXive IMRaD classification dataset contains 530k paragraphs from computer science papers and the IMRaD section they originate from. The paragraphs are derived from [unarXive](https://github.com/IllDepence/unarXive).

The dataset can be used as follows.

```
from datasets import load_dataset

imrad_data = load_dataset('saier/unarXive_imrad_clf')
imrad_data = imrad_data.class_encode_column('label')  # assign target label column
imrad_data = imrad_data.remove_columns('_id')         # remove sample ID column
```

## Dataset Structure

### Data Instances

Each data instance contains the paragraph’s text as well as one of the labels ('i', 'm', 'r', 'd', 'w' — for Introduction, Methods, Results, Discussion, and Related Work). An example is shown below.

```
{'_id': '789f68e7-a1cc-4072-b07d-ecffc3e7ca38',
 'label': 'm',
 'text': 'To link the mentions encoded by BERT to the KGE entities, we define '
         'an entity linking loss as cross-entropy between self-supervised '
         'entity labels and similarities obtained from the linker in KGE '
         'space:\n'
         '\\(\\mathcal {L}_{EL}=\\sum -\\log \\dfrac{\\exp (h_m^{proj}\\cdot '
         '\\textbf {e})}{\\sum _{\\textbf {e}_j\\in \\mathcal {E}} \\exp '
         '(h_m^{proj}\\cdot \\textbf {e}_j)}\\) \n'}
```

### Data Splits

The data is split into training, development, and testing data as follows.

* Training: 520,053 instances
* Development: 5000 instances
* Testing: 5001 instances

## Dataset Creation

### Source Data

The paragraph texts are extracted from the dataset [unarXive](https://github.com/IllDepence/unarXive).

#### Who are the source language producers?

The paragraphs were written by the authors of the arXiv papers. In the file `license_info.jsonl`, author and text licensing information can be found for all samples. An example is shown below.

```
{'authors': 'Yusuke Sekikawa, Teppei Suzuki',
 'license': 'http://creativecommons.org/licenses/by/4.0/',
 'paper_arxiv_id': '2011.09852',
 'sample_ids': ['cc375518-347c-43d0-bfb2-f88564d66df8',
                '18dc073e-a48e-488e-b34c-e5fc3cb8a4ca',
                '0c2e89b3-d863-4bc2-9e11-8f6c48d867cb',
                'd85e46cf-b11d-49b6-801b-089aa2dd037d',
                '92915cea-17ab-4a98-aad2-417f6cdd53d2',
                'e88cb422-47b7-4f69-9b0b-fbddf8140d98',
                '4f5094a4-0e6e-46ae-a34d-e15ce0b9803c',
                '59003494-096f-4a7c-ad65-342b74eed561',
                '6a99b3f5-217e-4d3d-a770-693483ef8670']}
```

### Annotations

Class labels were automatically determined ([see implementation](https://github.com/IllDepence/unarXive/blob/master/src/utility_scripts/ml_tasks_prep_data.py)).

## Considerations for Using the Data

### Discussion and Biases

Because only paragraphs unambiguously assignable to one of the IMRaD classes were used, a certain selection bias is to be expected in the data.

### Other Known Limitations

Depending on authors’ writing styles as well as LaTeX processing quirks, paragraphs can vary significantly in length.

## Additional Information

### Licensing information

The dataset is released under the Creative Commons Attribution-ShareAlike 4.0 license.
### Citation Information ``` @inproceedings{Saier2023unarXive, author = {Saier, Tarek and Krause, Johan and F\"{a}rber, Michael}, title = {{unarXive 2022: All arXiv Publications Pre-Processed for NLP, Including Structured Full-Text and Citation Network}}, booktitle = {Proceedings of the 23rd ACM/IEEE Joint Conference on Digital Libraries}, year = {2023}, series = {JCDL '23} } ```
saier/unarXive_imrad_clf
[ "task_categories:text-classification", "task_ids:multi-class-classification", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:extended|10.5281/zenodo.7752615", "language:en", "license:cc-by-sa-4.0", "arXiv.org", "arXiv", "IMRaD", "publication", "paper", "preprint", "section", "physics", "mathematics", "computer science", "cs", "arxiv:2303.14957", "region:us" ]
2023-03-24T11:30:56+00:00
{"annotations_creators": ["machine-generated"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["extended|10.5281/zenodo.7752615"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification"], "pretty_name": "unarXive IMRaD classification", "tags": ["arXiv.org", "arXiv", "IMRaD", "publication", "paper", "preprint", "section", "physics", "mathematics", "computer science", "cs"], "dataset_info": {"features": [{"name": "_id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 451908280, "num_examples": 520053}, {"name": "test", "num_bytes": 4650429, "num_examples": 5000}, {"name": "validation", "num_bytes": 4315597, "num_examples": 5001}], "download_size": 482376743, "dataset_size": 460874306}}
2023-04-01T23:56:43+00:00
c303392c0cf5df2d57b08c274ff7ae961b87ca8a
# Dataset Card for "google_fleurs_ar" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
MohammadJamalaldeen/google_fleurs_ar
[ "region:us" ]
2023-03-24T11:49:18+00:00
{"dataset_info": {"features": [{"name": "input_features", "sequence": {"sequence": "float32"}}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 2021551248, "num_examples": 2104}, {"name": "test", "num_bytes": 411235560, "num_examples": 428}], "download_size": 901727231, "dataset_size": 2432786808}}
2023-03-24T11:51:18+00:00
d255a28374cfa2b671ce9cc5f2ae9b241c6365d9
BlodyTraveler/4x-UltraSharp
[ "license:unknown", "region:us" ]
2023-03-24T12:12:34+00:00
{"license": "unknown"}
2023-03-24T12:14:03+00:00
df769ff9525094b4fe6700582805a856f5b21256
# Dataset Card for unarXive citation recommendation

## Dataset Description

* **Homepage:** [https://github.com/IllDepence/unarXive](https://github.com/IllDepence/unarXive)
* **Paper:** [unarXive 2022: All arXiv Publications Pre-Processed for NLP, Including Structured Full-Text and Citation Network](https://arxiv.org/abs/2303.14957)

### Dataset Summary

The unarXive citation recommendation dataset contains 2.5 million paragraphs from computer science papers, each with an annotated citation marker. The paragraphs and citation information are derived from [unarXive](https://github.com/IllDepence/unarXive).

Note that citation information is only given as the [OpenAlex](https://openalex.org/) ID of the cited paper. An important consideration for models is therefore whether the data is used *as is*, or whether additional information about the cited papers (metadata, abstracts, full-text, etc.) is used.

The dataset can be used as follows.

```
from datasets import load_dataset

citrec_data = load_dataset('saier/unarXive_citrec')
citrec_data = citrec_data.class_encode_column('label')  # assign target label column
citrec_data = citrec_data.remove_columns('_id')         # remove sample ID column
```

## Dataset Structure

### Data Instances

Each data instance contains the paragraph’s text as well as information on one of the contained citation markers, in the form of a label (cited document OpenAlex ID), citation marker, and citation marker offset. An example is shown below.

```
{'_id': '7c1464bb-1f0f-4b38-b1a3-85754eaf6ad1',
 'label': 'https://openalex.org/W3115081393',
 'marker': '[1]',
 'marker_offsets': [[316, 319]],
 'text': 'Data: For sentiment analysis on Hindi-English CM tweets, we used the '
         'dataset provided by the organizers of Task 9 at SemEval-2020.\n'
         'The training dataset consists of 14 thousand tweets.\n'
         'Whereas, the validation dataset as well as the test dataset contain '
         '3 thousand tweets each.\n'
         'The details of the dataset are given in [1]}.\n'
         'For this task, we did not use any external dataset.\n'}
```

### Data Splits

The data is split into training, development, and testing data as follows.

* Training: 2,043,192 instances
* Development: 225,084 instances
* Testing: 225,348 instances

## Dataset Creation

### Source Data

The paragraph texts are extracted from the dataset [unarXive](https://github.com/IllDepence/unarXive).

#### Who are the source language producers?

The paragraphs were written by the authors of the arXiv papers. In the file `license_info.jsonl`, author and text licensing information can be found for all samples. An example is shown below.

```
{'authors': 'Yusuke Sekikawa, Teppei Suzuki',
 'license': 'http://creativecommons.org/licenses/by/4.0/',
 'paper_arxiv_id': '2011.09852',
 'sample_ids': ['cc375518-347c-43d0-bfb2-f88564d66df8',
                '18dc073e-a48e-488e-b34c-e5fc3cb8a4ca',
                '0c2e89b3-d863-4bc2-9e11-8f6c48d867cb',
                'd85e46cf-b11d-49b6-801b-089aa2dd037d',
                '92915cea-17ab-4a98-aad2-417f6cdd53d2',
                'e88cb422-47b7-4f69-9b0b-fbddf8140d98',
                '4f5094a4-0e6e-46ae-a34d-e15ce0b9803c',
                '59003494-096f-4a7c-ad65-342b74eed561',
                '6a99b3f5-217e-4d3d-a770-693483ef8670']}
```

### Annotations

Citation information in unarXive is automatically determined ([see implementation](https://github.com/IllDepence/unarXive/blob/master/src/match_references_openalex.py)).

<!--
## Considerations for Using the Data

### Discussion and Biases

TODO

### Other Known Limitations

TODO
-->

## Additional Information

### Licensing information

The dataset is released under the Creative Commons Attribution-ShareAlike 4.0 license.
### Citation Information ``` @inproceedings{Saier2023unarXive, author = {Saier, Tarek and Krause, Johan and F\"{a}rber, Michael}, title = {{unarXive 2022: All arXiv Publications Pre-Processed for NLP, Including Structured Full-Text and Citation Network}}, booktitle = {Proceedings of the 23rd ACM/IEEE Joint Conference on Digital Libraries}, year = {2023}, series = {JCDL '23} } ```
saier/unarXive_citrec
[ "task_categories:text-classification", "task_ids:multi-class-classification", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:extended|10.5281/zenodo.7752615", "language:en", "license:cc-by-sa-4.0", "arXiv.org", "arXiv", "citation recommendation", "citation", "reference", "publication", "paper", "preprint", "section", "physics", "mathematics", "computer science", "cs", "arxiv:2303.14957", "region:us" ]
2023-03-24T12:13:20+00:00
{"annotations_creators": ["machine-generated"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["extended|10.5281/zenodo.7752615"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification"], "pretty_name": "unarXive citation recommendation", "tags": ["arXiv.org", "arXiv", "citation recommendation", "citation", "reference", "publication", "paper", "preprint", "section", "physics", "mathematics", "computer science", "cs"], "dataset_info": {"features": [{"name": "_id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "marker", "dtype": "string"}, {"name": "marker_offsets", "sequence": {"sequence": "int64"}}, {"name": "label", "dtype": "string"}], "config_name": ".", "splits": [{"name": "train", "num_bytes": 5457336094, "num_examples": 2043192}, {"name": "test", "num_bytes": 551012459, "num_examples": 225084}, {"name": "validation", "num_bytes": 586422261, "num_examples": 225348}], "download_size": 7005370567, "dataset_size": 6594770814}}
2023-04-02T00:28:05+00:00
6d057b492632240a3815c9f22a297539df060ee5
# Dataset Card for "common_voice_10_1_th_clean_split_1" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
DylanonWic/common_voice_10_1_th_clean_split_1_old
[ "region:us" ]
2023-03-24T12:22:25+00:00
{"dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "labels", "sequence": "int64"}, {"name": "input_values", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 13015571937.874308, "num_examples": 50300}], "download_size": 11817800447, "dataset_size": 13015571937.874308}}
2023-03-24T12:32:45+00:00
d3c87d65207b9f4bbf112b532977b43b9e72af8e
# ParaDetox: Detoxification with Parallel Data (English). Paraphrase Task Negative Results

This repository contains information about the **Paraphrase Task** markup from the [English ParaDetox dataset](https://huggingface.co/datasets/s-nlp/paradetox) collection pipeline. It contains the samples that were marked as *"cannot rewrite"*. The original paper ["ParaDetox: Detoxification with Parallel Data"](https://aclanthology.org/2022.acl-long.469/) was presented at the ACL 2022 main conference.

## ParaDetox Collection Pipeline

The ParaDetox dataset was collected via the [Yandex.Toloka](https://toloka.yandex.com/) crowdsourcing platform, in three steps:
* *Task 1:* **Generation of Paraphrases**: The first crowdsourcing task asks users to eliminate toxicity in a given sentence while keeping the content.
* *Task 2:* **Content Preservation Check**: We show users the generated paraphrases along with their original variants and ask them to indicate if they have close meanings.
* *Task 3:* **Toxicity Check**: Finally, we check if the workers succeeded in removing toxicity.

Specifically, this repo contains the results of **Task 1: Generation of Paraphrases**. The dataset contains 12,059 samples that annotators marked as impossible to detoxify. The reason can be one of the following:
* *non-toxic*: the text is simply non-toxic; it may carry negative sentiment, but contains no obscene or rude lexicon;
* *toxic content*: the text is passive-aggressive, sarcastic, or similar, so the insult is deeply embedded in the message; to detoxify it, the meaning would have to change dramatically;
* *unclear*: the text consists only of obscene lexicon, random words, or other token combinations that make the main content difficult to understand.

Annotators could select several options.

## Citation

```
@inproceedings{logacheva-etal-2022-paradetox,
    title = "{P}ara{D}etox: Detoxification with Parallel Data",
    author = "Logacheva, Varvara and
      Dementieva, Daryna and
      Ustyantsev, Sergey and
      Moskovskiy, Daniil and
      Dale, David and
      Krotova, Irina and
      Semenov, Nikita and
      Panchenko, Alexander",
    booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.acl-long.469",
    pages = "6804--6818",
    abstract = "We present a novel pipeline for the collection of parallel data for the detoxification task. We collect non-toxic paraphrases for over 10,000 English toxic sentences. We also show that this pipeline can be used to distill a large existing corpus of paraphrases to get toxic-neutral sentence pairs. We release two parallel corpora which can be used for the training of detoxification models. To the best of our knowledge, these are the first parallel datasets for this task. We describe our pipeline in detail to make it fast to set up for a new language or domain, thus contributing to faster and easier development of new parallel resources. We train several detoxification models on the collected data and compare them with several baselines and state-of-the-art unsupervised approaches. We conduct both automatic and manual evaluations. All models trained on parallel data outperform the state-of-the-art unsupervised models by a large margin. This suggests that our novel datasets can boost the performance of detoxification systems.",
}
```

## Contacts

For any questions, please contact: Daryna Dementieva ([email protected])
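## Example Usage

A rough usage sketch, assuming the data files load directly with `datasets`; the `reason` column name is a hypothetical stand-in for however the annotator options are actually stored, so inspect the real schema first:

```python
from collections import Counter
from datasets import load_dataset

dataset = load_dataset("s-nlp/en_non_detoxified", split="train")
print(dataset.features)  # check the real schema before relying on it

# "reason" is a hypothetical column name; annotators could select
# several options, so a value may hold multiple comma-separated labels.
reasons = Counter()
for sample in dataset:
    for label in str(sample["reason"]).split(","):
        reasons[label.strip()] += 1
print(reasons.most_common())
```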
s-nlp/en_non_detoxified
[ "task_categories:text-classification", "language:en", "license:openrail++", "region:us" ]
2023-03-24T13:06:46+00:00
{"language": ["en"], "license": "openrail++", "task_categories": ["text-classification"]}
2023-09-08T07:38:22+00:00
8b3fe67f8595f7123d970f1f69c8930bcc8bea8b
# Dataset Card for "somos-clean-alpaca-es"

This dataset is a Spanish translation of the Clean Alpaca dataset and serves as the reference for the collaborative effort to clean and improve the dataset during the [Hackathon Somos NLP 2023](https://somosnlp.org/hackathon).

*Note: participating in the hackathon is not required to contribute to this task.*

The more people and teams take part, the higher the quality of the final dataset, and therefore of the LLM we train. Join us! Here is how to participate:

> **[Explanatory video (10 mins) | Daniel @Argilla](https://www.youtube.com/watch?v=Q-2qsvOEgnA)**

> **[Article "Ayuda a mejorar los LLM de AI en español en 7 sencillos pasos" | Carlos @Platzi](https://platzi.com/blog/ayuda-a-mejorar-los-llm-en-espanol-en-7-sencillos-pasos/)**

We are available in the **[#alpaca-es channel](https://discord.com/invite/my8w7JUxZR)** of the Somos NLP Discord server.

## 🔥 The challenge

The steps and rules for participating are described below:

1. This dataset must be used as the starting point, keeping both the `ids` and the structure. This makes later cross-validation and programmatic improvements of the final dataset possible.
2. The dataset is in an Argilla-compatible format. Each team or person who wants to participate can work with their own Argilla instance. An easy way to start is to duplicate the Space we created for the challenge; the section below explains how.
3. Argilla can be used to validate and label manually, and via searches and semantic similarity from the UI. Examples of the search language will be posted on this page, but we recommend consulting [the usage guide](https://docs.argilla.io/en/latest/guides/query_datasets.html).
4. Human validation is necessary to guarantee the final quality, but programmatic clean-ups can also be done where they are more efficient. In any case, for the experiment to succeed, the proposed labels must be used, even if the dataset is modified programmatically.
5. Records must not be deleted from the dataset; if a record is invalid, mark it with a label (for example `BAD INPUT`) or with the `discard` status.
6. Before starting to annotate, it is necessary to read the [annotation guide](guia-de-anotacion.md) in full.

The result of the challenge will be one dataset per person or team containing the original dataset partially labeled, and optionally other versions/subsets of the dataset with corrected, improved, or augmented data. In those cases it is advisable to keep a separate dataset with the original ids. At the end, we will combine all the labeled versions to obtain a high-quality dataset.

## ✅ How to start labeling

To label the dataset you have to:

1. Launch your Argilla Space by following [this link](https://huggingface.co/spaces/somosnlp/somos-alpaca-es?duplicate=true). This will guide you through creating an Argilla instance on the Hub that automatically loads the dataset (see the screenshot below). **IMPORTANT**: the Space must be Public so the labeled data can be read from Python. Loading can take up to 10 minutes; check the logs to verify the data is being loaded.
2. **IMPORTANT:** if you want to sync the validated data with the Hub so annotations are not lost when the Space restarts, configure two secrets (in the Space Settings): `HF_TOKEN`, which is [your write token](https://huggingface.co/settings/tokens), and `HUB_DATASET_NAME`, the dataset where you want to save it; be sure to include the organization or user followed by a / and the dataset name, for example `juanmartinez/somos-clean-alpaca-es-validations` or `miempresa/somos-clean-alpaca-es-validations`.
3. The username and password are `argilla` / `1234`. While your Argilla Space loads the dataset, you can take the chance to read the annotation guides.
4. Although the annotated dataset should in principle be synced, we recommend opening Colab or a local notebook and periodically saving the dataset to a Hub dataset (it can be in your personal space or your organization). To do so, read the section on how to save the dataset to the Hub. Check the Space log to see whether there are errors when configuring the `HF_TOKEN` and `HUB_DATASET_NAME` secrets.

![Duplicate Space](duplicar-space.png)

## 🚀 Deploying Argilla locally or on a cloud server

For teams with the time who want to deploy a version with more compute capacity and stability than Spaces, [here is an explanatory guide](https://docs.argilla.io/en/latest/getting_started/installation/deployments/deployments.html). Once installed, the data must be uploaded with [this notebook](https://colab.research.google.com/drive/1KyikSFeJe6_lQNs-9cHveIOGM99ENha9#scrollTo=jbfdRoRVXTW6).

## ✍️ Annotation guides

Before starting to annotate, it is necessary to read the [annotation guide](guia-de-anotacion.md) in full.

## 💾 IMPORTANT: Save the dataset to the Hub periodically

Although the Space has been configured to sync with a Hub dataset of your choice, for extra safety we recommend saving a copy of the dataset to the Hub by running the following code. You need to log in with Python using `from huggingface_hub import notebook_login` or pass the token directly when calling push_to_hub:

```python
import argilla as rg

# use rg.init() to set the API_URL (the direct URL of your Argilla Space) and API_KEY
rg.init(
    api_url="https://tu-space-de-argilla.hf.space",
    api_key="team.apikey"
)

# read the dataset with validations from Argilla
rg_dataset = rg.load("somos-clean-alpaca-es-team", query="status:Validated")

# convert to datasets format
dataset = rg_dataset.to_datasets()

# push to the Hub; you can use any dataset name you choose
dataset.push_to_hub("somos-clean-alpaca-es", token="YOUR WRITE TOKEN FROM HUB SETTINGS. NOT NEEDED IF YOU HAVE LOGGED IN")
```

Once this is done, you can retrieve the dataset and load it back into Argilla with the "How to load the dataset in Argilla" notebook.

## 🔎 Example queries and labeling tricks

We recommend starting by exploring and labeling the dataset sequentially to understand the structure and identify patterns. Once that is done, we recommend combining it with the following tools:

### Using the search

With keywords, regular expressions, wildcards, and boolean expressions; see [the usage guide](https://docs.argilla.io/en/latest/guides/query_datasets.html). An interesting feature is the ability to search only in specific fields. To do so, use the following syntax: `inputs.field_name:"query"`. For example, `inputs.1-instruction:"Crear una página"` would find all records with this text in the instruction. This can also be combined with boolean expressions to search across several fields: `inputs.1-instruction:"Crear una página" AND inputs.3-output:"html"`. Another example: to find instruction sentences in English, `inputs.1-instruction:Edit the following sentence` finds more than 100 invalid instructions.

### Find similar

When you find interesting or erroneous patterns in a record and field, you can use the find similar button to retrieve similar examples thanks to embedding-based similarity search.

### Bulk labeling

If you find a very clear pattern, you can review the examples faster and annotate them in bulk using the top bar, below the search box. If there are many examples, you can increase the number of records per page. In any case, we recommend reviewing the examples.

## ✨ Hackathon Somos NLP 2023

- Participating in the hackathon is not required to join this collaborative task.
- Teams participating in the hackathon can use their labeled version of this dataset for their project.
- Labeled versions of this dataset will be eligible for the honorable mention for best labeled dataset.

## 🙌 Acknowledgements

Many thanks to `versae` from the BERTIN project for translating the dataset, to `dvilasuero` and `nataliaElv` from Argilla for creating the documentation and answering all the participants' questions, to `alarcon7a` from Platzi for writing the blog post, and to `mariagrandury` from Somos NLP for coordinating the challenge and integrating it into the hackathon. When we combine the versions and create the final dataset, we will credit everyone who took part in this effort 🤗
somosnlp/somos-clean-alpaca-es
[ "region:us" ]
2023-03-24T13:09:28+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "null"}, {"name": "inputs", "struct": [{"name": "1-instruction", "dtype": "string"}, {"name": "2-input", "dtype": "string"}, {"name": "3-output", "dtype": "string"}]}, {"name": "prediction", "list": [{"name": "label", "dtype": "string"}, {"name": "score", "dtype": "float64"}]}, {"name": "prediction_agent", "dtype": "null"}, {"name": "annotation", "dtype": "null"}, {"name": "annotation_agent", "dtype": "null"}, {"name": "vectors", "struct": [{"name": "input", "sequence": "float64"}, {"name": "instruction", "sequence": "float64"}, {"name": "output", "sequence": "float64"}]}, {"name": "multi_label", "dtype": "bool"}, {"name": "explanation", "dtype": "null"}, {"name": "id", "dtype": "string"}, {"name": "metadata", "struct": [{"name": "tr-flag-1-instruction", "dtype": "bool"}, {"name": "tr-flag-2-input", "dtype": "bool"}, {"name": "tr-flag-3-output", "dtype": "bool"}]}, {"name": "status", "dtype": "string"}, {"name": "event_timestamp", "dtype": "timestamp[us]"}, {"name": "metrics", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 985217294, "num_examples": 51942}], "download_size": 651888026, "dataset_size": 985217294}}
2023-04-05T14:00:28+00:00
d7fd7c80810c7e5e04b92a2749505b28febd8b04
# Dataset Card for "salt_m2e_15_3_instruction" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mekaneeky/salt_m2e_15_3_instruction
[ "region:us" ]
2023-03-24T13:24:59+00:00
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "instruction", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 357089384, "num_examples": 1736400}, {"name": "validation", "num_bytes": 384605, "num_examples": 2500}, {"name": "test", "num_bytes": 393284, "num_examples": 2500}], "download_size": 218590021, "dataset_size": 357867273}}
2023-03-24T13:40:53+00:00
42ef7a2c71619946295532c051e391a058f97c32
metaeval/mega-acceptability-v2
[ "license:apache-2.0", "region:us" ]
2023-03-24T13:53:00+00:00
{"license": "apache-2.0"}
2023-03-24T13:53:12+00:00
a25a71934cdb6382b165a3ee9a3baf6a766ad1c0
# Student performance The [Student performance dataset](https://www.kaggle.com/datasets/ulrikthygepedersen/student_performances) from Kaggle. | **Configuration** | **Task** | **Description** | |-------------------|---------------------------|-----------------------------------------------------------------| | encoding | | Encoding dictionary showing original values of encoded features.| | math | Binary classification | Has the student passed the math exam? | | writing | Binary classification | Has the student passed the writing exam? | | reading | Binary classification | Has the student passed the reading exam? | # Usage ```python from datasets import load_dataset dataset = load_dataset("mstz/student_performance", "math")["train"] ``` # Features |**Feature** |**Type** | |-----------------------------------|-----------| |`is_male` |`bool` | |`ethnicity` |`string` | |`parental_level_of_education` |`int8` | |`has_standard_lunch` |`bool` | |`has_completed_preparation_test` |`bool` | |`reading_score` |`int64` | |`writing_score` |`int64` | |`math_score` |`int64` |
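# Decoding encoded features

The `encoding` configuration can be combined with a task configuration to recover the original categorical values. A minimal sketch follows; the row layout of the `encoding` split is an assumption for illustration, so inspect it before relying on it:

```python
from datasets import load_dataset

math = load_dataset("mstz/student_performance", "math")["train"]
encoding = load_dataset("mstz/student_performance", "encoding")["train"]

# The row layout of the encoding dictionary is an assumption here;
# print it to see how encoded features map back to original values.
print(encoding[0])
print(math[0]["parental_level_of_education"])  # an integer code, per the table above
```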
mstz/student_performance
[ "task_categories:tabular-classification", "size_categories:n<1K", "language:en", "license:cc", "student performance", "tabular_classification", "binary_classification", "region:us" ]
2023-03-24T13:53:31+00:00
{"language": ["en"], "license": "cc", "size_categories": ["n<1K"], "task_categories": ["tabular-classification"], "pretty_name": "Student Performance", "tags": ["student performance", "tabular_classification", "binary_classification"], "configs": ["encoding", "math", "writing", "reading"]}
2023-04-07T13:54:45+00:00
b1e48da9fb0cc4ecc2d0430ad08fff183a4b8475
# Dataset Card for "marathi-numbers-test" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
sanchit-gandhi/marathi-numbers-test
[ "region:us" ]
2023-03-24T13:57:11+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "labels", "sequence": {"class_label": {"names": {"0": 0, "1": 1, "2": 2, "3": 3, "4": 4, "5": 5, "6": 6, "7": 7, "8": 8, "9": 9, "10": 10, "11": 11, "12": 12, "13": 13, "14": 14, "15": 15, "16": 16, "17": 17, "18": 18, "19": 19}}}}, {"name": "number", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 79901585.38, "num_examples": 1020}], "download_size": 6503225, "dataset_size": 79901585.38}}
2023-03-24T14:28:44+00:00
6807e0b22eca7f9a8a3903ea673b31a115837464
Redistribution of data from https://www.sciencebase.gov/catalog/item/573ccf18e4b0dae0d5e4b109. Some files renamed for consistency. Corrupted or missing files replaced with data from https://landsat.usgs.gov/landsat-7-cloud-cover-assessment-validation-data. Landsat Data Distribution Policy: https://www.usgs.gov/media/files/landsat-data-distribution-policy
torchgeo/l7irish
[ "task_categories:image-segmentation", "size_categories:n<1K", "license:cc0-1.0", "climate", "region:us" ]
2023-03-24T13:59:18+00:00
{"license": "cc0-1.0", "size_categories": ["n<1K"], "task_categories": ["image-segmentation"], "pretty_name": "L7 Irish", "tags": ["climate"]}
2023-06-14T12:58:43+00:00
80449344fc00162741de39bceee8fce0d250eb33
# AutoTrain Dataset for project: leaf

## Dataset Description

This dataset has been automatically processed by AutoTrain for project leaf.

### Languages

The BCP-47 code for the dataset's language is unk.

## Dataset Structure

### Data Instances

A sample from this dataset looks as follows:

```json
[
  {
    "image": "<256x256 RGB PIL image>",
    "target": 4
  },
  {
    "image": "<256x256 RGB PIL image>",
    "target": 1
  }
]
```

### Dataset Fields

The dataset has the following fields (also called "features"):

```json
{
  "image": "Image(decode=True, id=None)",
  "target": "ClassLabel(names=['Bacteria', 'Fungi', 'Nematodes', 'Normal', 'Virus'], id=None)"
}
```

### Dataset Splits

This dataset is split into a train and validation split. The split sizes are as follows:

| Split name   | Num samples         |
| ------------ | ------------------- |
| train        | 191                 |
| valid        | 48                  |
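## Usage

A minimal loading sketch based on the fields and splits listed above, assuming the repository loads directly with `datasets` under the split names shown:

```python
from datasets import load_dataset

dataset = load_dataset("OttoYu/LeafCondition")
names = dataset["train"].features["target"].names  # ['Bacteria', 'Fungi', ...]

sample = dataset["train"][0]
print(sample["image"].size)     # (256, 256) PIL image, per the card above
print(names[sample["target"]])  # class name for this sample
```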
OttoYu/LeafCondition
[ "task_categories:image-classification", "region:us" ]
2023-03-24T14:02:52+00:00
{"task_categories": ["image-classification"]}
2023-03-24T14:29:18+00:00
3ef5c9f461f700d129cfee00efe5a580b5950ddf
# LibriSpeech-Finetuning for VALL-E

Included is a dataset I've prepared for training with [my fork of a VALL-E implementation](https://git.ecker.tech/mrq/vall-e), sourced from [LibriSpeech-Finetuning](https://dl.fbaipublicfiles.com/librilight/data/librispeech_finetuning.tgz).

> What makes this different?

I've trimmed the clips down to better train against them, as too large a piece of data will increase VRAM use drastically:

* I re-transcribed using [m-bain/WhisperX](https://github.com/m-bain/whisperX/)'s large-v2 model with the VAD filter to get near-perfect timestamps.
* I then bias the starts by -0.05 seconds and the ends by +0.05 seconds.
* Very short segments are merged with preceding ones to avoid fragmenting too much.
* The source audio is then sliced according to each segment, and each segment gets phonemized using [bootphon/phonemizer](https://github.com/bootphon/phonemizer/) (espeak backend).
* Finally, the sliced audio is quantized using Encodec, for VALL-E's use.

This helps alleviate problems from the default `max_phoneme` length ignoring a large chunk of the dataset, and distributes segment lengths relatively evenly.
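For illustration, a rough sketch of the segment post-processing described above (the 0.05-second boundary bias and the merging of very short segments). The dict layout follows WhisperX-style segments, and the minimum-length threshold is an assumed value, not necessarily the one actually used:

```python
def postprocess_segments(segments, bias=0.05, min_len=0.6):
    """Bias WhisperX-style segment boundaries outward and merge very
    short segments into the preceding one (min_len is an assumption)."""
    out = []
    for seg in segments:
        start = max(0.0, seg["start"] - bias)  # bias start by -0.05 s
        end = seg["end"] + bias                # bias end by +0.05 s
        if out and (end - start) < min_len:
            # too short on its own: fold into the preceding segment
            out[-1]["end"] = end
            out[-1]["text"] += " " + seg["text"]
        else:
            out.append({"start": start, "end": end, "text": seg["text"]})
    return out
```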
ecker/libritts-small
[ "region:us" ]
2023-03-24T14:05:18+00:00
{}
2023-03-24T14:24:16+00:00
7fc4689a33c9e1c1605125841e8164e03b910c56
# Dataset Card for "somos-alpaca-es-2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
cariai/somos-alpaca-es-2
[ "region:us" ]
2023-03-24T14:09:48+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "null"}, {"name": "inputs", "struct": [{"name": "1-instruction", "dtype": "string"}, {"name": "2-input", "dtype": "string"}, {"name": "3-output", "dtype": "string"}]}, {"name": "prediction", "dtype": "null"}, {"name": "prediction_agent", "dtype": "null"}, {"name": "annotation", "dtype": "string"}, {"name": "annotation_agent", "dtype": "string"}, {"name": "vectors", "struct": [{"name": "input", "sequence": "float64"}, {"name": "instruction", "sequence": "float64"}, {"name": "output", "sequence": "float64"}]}, {"name": "multi_label", "dtype": "bool"}, {"name": "explanation", "dtype": "null"}, {"name": "id", "dtype": "string"}, {"name": "metadata", "dtype": "null"}, {"name": "status", "dtype": "string"}, {"name": "event_timestamp", "dtype": "timestamp[us]"}, {"name": "metrics", "struct": [{"name": "text_length", "dtype": "int64"}]}], "splits": [{"name": "train", "num_bytes": 985053979, "num_examples": 52002}], "download_size": 0, "dataset_size": 985053979}}
2023-04-09T23:00:58+00:00
fc1a6656d6930eebe98beffc6321f7f07b34f77f
# Dataset Card for "common_voice_10_1_th_clean_split_2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
DylanonWic/common_voice_10_1_th_clean_split_2_old
[ "region:us" ]
2023-03-24T14:11:17+00:00
{"dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "labels", "sequence": "int64"}, {"name": "input_values", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 13050526359.943853, "num_examples": 50594}], "download_size": 11872946207, "dataset_size": 13050526359.943853}}
2023-03-24T14:22:22+00:00
b6cda903fe8791659a0eff9aebf43d6a34fe9c9f
ztla/M4singer
[ "license:cc-by-nc-sa-4.0", "region:us" ]
2023-03-24T14:19:50+00:00
{"license": "cc-by-nc-sa-4.0"}
2023-03-24T14:19:50+00:00
01c05c8bb326b3c166648723d073bb0724ef458d
# ParaDetox: Detoxification with Parallel Data (English). Toxicity Task Results

This repository contains information about the **Toxicity Task** markup from the [English ParaDetox dataset](https://huggingface.co/datasets/s-nlp/paradetox) collection pipeline. The original paper ["ParaDetox: Detoxification with Parallel Data"](https://aclanthology.org/2022.acl-long.469/) was presented at the ACL 2022 main conference.

## ParaDetox Collection Pipeline

The ParaDetox dataset was collected via the [Yandex.Toloka](https://toloka.yandex.com/) crowdsourcing platform, in three steps:
* *Task 1:* **Generation of Paraphrases**: The first crowdsourcing task asks users to eliminate toxicity in a given sentence while keeping the content.
* *Task 2:* **Content Preservation Check**: We show users the generated paraphrases along with their original variants and ask them to indicate if they have close meanings.
* *Task 3:* **Toxicity Check**: Finally, we check if the workers succeeded in removing toxicity.

Specifically, this repo contains the results of **Task 3: Toxicity Check**. Only samples with markup confidence >= 90 are present. The input is a text, and the label indicates whether it is toxic. In total, the dataset contains 26,507 samples, of which a minority (4,009 samples) are toxic.

## Citation

```
@inproceedings{logacheva-etal-2022-paradetox,
    title = "{P}ara{D}etox: Detoxification with Parallel Data",
    author = "Logacheva, Varvara and
      Dementieva, Daryna and
      Ustyantsev, Sergey and
      Moskovskiy, Daniil and
      Dale, David and
      Krotova, Irina and
      Semenov, Nikita and
      Panchenko, Alexander",
    booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.acl-long.469",
    pages = "6804--6818",
    abstract = "We present a novel pipeline for the collection of parallel data for the detoxification task. We collect non-toxic paraphrases for over 10,000 English toxic sentences. We also show that this pipeline can be used to distill a large existing corpus of paraphrases to get toxic-neutral sentence pairs. We release two parallel corpora which can be used for the training of detoxification models. To the best of our knowledge, these are the first parallel datasets for this task. We describe our pipeline in detail to make it fast to set up for a new language or domain, thus contributing to faster and easier development of new parallel resources. We train several detoxification models on the collected data and compare them with several baselines and state-of-the-art unsupervised approaches. We conduct both automatic and manual evaluations. All models trained on parallel data outperform the state-of-the-art unsupervised models by a large margin. This suggests that our novel datasets can boost the performance of detoxification systems.",
}
```

## Contacts

For any questions, please contact: Daryna Dementieva ([email protected])
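## Example Usage

A minimal loading sketch, assuming the files load directly with `datasets`; the `toxic` label column name is a hypothetical placeholder for whatever the actual schema uses, so check the printed features first:

```python
from datasets import load_dataset

dataset = load_dataset("s-nlp/en_paradetox_toxicity", split="train")
print(dataset.features)  # check the real text/label column names

# "toxic" is a hypothetical label column name used for illustration.
toxic = dataset.filter(lambda s: s["toxic"])
print(len(toxic), "toxic samples out of", len(dataset))
```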
s-nlp/en_paradetox_toxicity
[ "task_categories:text-classification", "language:en", "license:openrail++", "region:us" ]
2023-03-24T14:24:58+00:00
{"language": ["en"], "license": "openrail++", "task_categories": ["text-classification"]}
2023-09-08T07:37:06+00:00
dcfd1e35d1f7d5a9848c25cd4e729e879b7c3eea
# Dataset Card for "pip-external" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
open-source-metrics/pip-external
[ "region:us" ]
2023-03-24T14:32:07+00:00
{"dataset_info": {"features": [{"name": "day", "dtype": "string"}, {"name": "num_downloads", "dtype": "int64"}], "splits": [{"name": "pytorch", "num_bytes": 35596, "num_examples": 1618}, {"name": "langchain", "num_bytes": 10516, "num_examples": 478}, {"name": "tensorflow", "num_bytes": 35596, "num_examples": 1618}, {"name": "openai", "num_bytes": 26422, "num_examples": 1201}], "download_size": 63147, "dataset_size": 108130}, "configs": [{"config_name": "default", "data_files": [{"split": "langchain", "path": "data/langchain-*"}, {"split": "pytorch", "path": "data/pytorch-*"}, {"split": "tensorflow", "path": "data/tensorflow-*"}]}]}
2024-02-15T11:18:42+00:00
ee225ebd3a436ba51e3f1d29df4bd156790e5de7
# Heart failure

The [Heart failure dataset](https://www.kaggle.com/datasets/andrewmvd/heart-failure-clinical-data) from Kaggle. Predict patient death from heart failure given personal medical data.

# Configurations and tasks

| **Configuration** | **Task**                  | **Description**                                                  |
|-------------------|---------------------------|------------------------------------------------------------------|
| death             | Binary classification     | Did the patient die?                                              |

# Usage

```python
from datasets import load_dataset

dataset = load_dataset("mstz/heart_failure", "death")["train"]
```

# Features

|**Feature**                                         |**Type**   |
|----------------------------------------------------|-----------|
|`age`                                               |`int8`     |
|`has_anaemia`                                       |`int8`     |
|`creatinine_phosphokinase_concentration_in_blood`   |`float64`  |
|`has_diabetes`                                      |`int8`     |
|`heart_ejection_fraction`                           |`float64`  |
|`has_high_blood_pressure`                           |`int8`     |
|`platelets_concentration_in_blood`                  |`float64`  |
|`serum_creatinine_concentration_in_blood`           |`float64`  |
|`serum_sodium_concentration_in_blood`               |`float64`  |
|`sex`                                               |`int8`     |
|`is_smoker`                                         |`int8`     |
|`days_in_study`                                     |`int64`    |
mstz/heart_failure
[ "task_categories:tabular-classification", "size_categories:n<1K", "language:en", "license:cc", "heart failure", "tabular_classification", "binary_classification", "UCI", "region:us" ]
2023-03-24T14:32:59+00:00
{"language": ["en"], "license": "cc", "size_categories": ["n<1K"], "task_categories": ["tabular-classification"], "pretty_name": "Heart failure", "tags": ["heart failure", "tabular_classification", "binary_classification", "UCI"], "configs": ["death"]}
2023-04-16T16:31:15+00:00
40d5a36d5d0eb4dfeffb43bb5e84cd82727f6bf1
# ParaDetox: Detoxification with Parallel Data (Russian). Paraphrase Task Negative Results

This repository contains information about the **Paraphrase Task** markup from the [Russian ParaDetox dataset](https://huggingface.co/datasets/s-nlp/ru_paradetox) collection pipeline.

## ParaDetox Collection Pipeline

The ParaDetox dataset was collected via the [Yandex.Toloka](https://toloka.yandex.com/) crowdsourcing platform, in three steps:
* *Task 1:* **Generation of Paraphrases**: The first crowdsourcing task asks users to eliminate toxicity in a given sentence while keeping the content.
* *Task 2:* **Content Preservation Check**: We show users the generated paraphrases along with their original variants and ask them to indicate if they have close meanings.
* *Task 3:* **Toxicity Check**: Finally, we check if the workers succeeded in removing toxicity.

Specifically, this repo contains the results of **Task 1: Generation of Paraphrases**. The dataset contains 11,446 samples that annotators marked as impossible to detoxify. The reason can be one of the following:
* *non-toxic*: the text is simply non-toxic; it may carry negative sentiment, but contains no obscene or rude lexicon;
* *toxic content*: the text is passive-aggressive, sarcastic, or similar, so the insult is deeply embedded in the message; to detoxify it, the meaning would have to change dramatically;
* *unclear*: the text consists only of obscene lexicon, random words, or other token combinations that make the main content difficult to understand.

Annotators could select several options.

## Citation

```
@inproceedings{logacheva-etal-2022-study,
    title = "A Study on Manual and Automatic Evaluation for Text Style Transfer: The Case of Detoxification",
    author = "Logacheva, Varvara and
      Dementieva, Daryna and
      Krotova, Irina and
      Fenogenova, Alena and
      Nikishina, Irina and
      Shavrina, Tatiana and
      Panchenko, Alexander",
    booktitle = "Proceedings of the 2nd Workshop on Human Evaluation of NLP Systems (HumEval)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.humeval-1.8",
    doi = "10.18653/v1/2022.humeval-1.8",
    pages = "90--101",
    abstract = "It is often difficult to reliably evaluate models which generate text. Among them, text style transfer is particularly difficult to evaluate, because its success depends on a number of parameters. We conduct an evaluation of a large number of models on a detoxification task. We explore the relations between the manual and automatic metrics and find that there is only weak correlation between them, which is dependent on the type of model which generated text. Automatic metrics tend to be less reliable for better-performing models. However, our findings suggest that ChrF and BertScore metrics can be used as a proxy for human evaluation of text detoxification to some extent.",
}
```

## Contacts

For any questions, please contact: Daryna Dementieva ([email protected])
s-nlp/ru_non_detoxified
[ "task_categories:text-classification", "language:ru", "license:openrail++", "region:us" ]
2023-03-24T14:48:00+00:00
{"language": ["ru"], "license": "openrail++", "task_categories": ["text-classification"]}
2023-09-08T07:36:46+00:00
bacf3a00490d8b4fc127dc1915339f097364c34b
emrevoid/test
[ "license:gpl-3.0", "region:us" ]
2023-03-24T14:57:49+00:00
{"license": "gpl-3.0"}
2023-03-24T14:57:49+00:00
a522a64432f8389cd7f55065f092ce9e30eef6e3
# ParaDetox: Detoxification with Parallel Data (Russian). Content Task Results

This repository contains information about the **Content Task** markup from the [Russian ParaDetox dataset](https://huggingface.co/datasets/s-nlp/ru_paradetox) collection pipeline.

## ParaDetox Collection Pipeline

The ParaDetox dataset was collected via the [Yandex.Toloka](https://toloka.yandex.com/) crowdsourcing platform, in three steps:
* *Task 1:* **Generation of Paraphrases**: The first crowdsourcing task asks users to eliminate toxicity in a given sentence while keeping the content.
* *Task 2:* **Content Preservation Check**: We show users the generated paraphrases along with their original variants and ask them to indicate if they have close meanings.
* *Task 3:* **Toxicity Check**: Finally, we check if the workers succeeded in removing toxicity.

Specifically, this repo contains the results of **Task 2: Content Preservation Check**. Only samples with markup confidence >= 90 are present. One text in each pair is toxic; the other is its (intended) non-toxic paraphrase. In total, the dataset contains 10,975 pairs, of which a minority (2,812 pairs) are negative examples.

## Citation

```
@inproceedings{logacheva-etal-2022-study,
    title = "A Study on Manual and Automatic Evaluation for Text Style Transfer: The Case of Detoxification",
    author = "Logacheva, Varvara and
      Dementieva, Daryna and
      Krotova, Irina and
      Fenogenova, Alena and
      Nikishina, Irina and
      Shavrina, Tatiana and
      Panchenko, Alexander",
    booktitle = "Proceedings of the 2nd Workshop on Human Evaluation of NLP Systems (HumEval)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.humeval-1.8",
    doi = "10.18653/v1/2022.humeval-1.8",
    pages = "90--101",
    abstract = "It is often difficult to reliably evaluate models which generate text. Among them, text style transfer is particularly difficult to evaluate, because its success depends on a number of parameters. We conduct an evaluation of a large number of models on a detoxification task. We explore the relations between the manual and automatic metrics and find that there is only weak correlation between them, which is dependent on the type of model which generated text. Automatic metrics tend to be less reliable for better-performing models. However, our findings suggest that ChrF and BertScore metrics can be used as a proxy for human evaluation of text detoxification to some extent.",
}
```

## Contacts

For any questions, please contact: Daryna Dementieva ([email protected])
s-nlp/ru_paradetox_content
[ "task_categories:text-classification", "language:ru", "license:openrail++", "region:us" ]
2023-03-24T15:00:38+00:00
{"language": ["ru"], "license": "openrail++", "task_categories": ["text-classification"]}
2023-09-08T07:36:21+00:00
914af2bfc47bb6e0db69801a4e666e61b062eff8
# Dataset Card for "moroccan_darija_wikipedia_dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
AbderrahmanSkiredj1/moroccan_darija_wikipedia_dataset
[ "region:us" ]
2023-03-24T15:05:57+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8104410, "num_examples": 4862}], "download_size": 3229966, "dataset_size": 8104410}}
2023-03-24T15:05:59+00:00
971247cb826b48d192b184dd33cd871ed5bdb8fe
# Dataset Card for "small-coco-wm_10" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
RIW/small-coco-wm_10
[ "region:us" ]
2023-03-24T15:07:14+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "caption", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "key", "dtype": "string"}, {"name": "status", "dtype": "string"}, {"name": "error_message", "dtype": "null"}, {"name": "width", "dtype": "int64"}, {"name": "height", "dtype": "int64"}, {"name": "original_width", "dtype": "int64"}, {"name": "original_height", "dtype": "int64"}, {"name": "exif", "dtype": "string"}, {"name": "sha256", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9528180597.872, "num_examples": 99652}, {"name": "validation", "num_bytes": 9091548317.436, "num_examples": 99694}], "download_size": 9948253256, "dataset_size": 18619728915.308}}
2023-03-24T16:26:33+00:00
73a8638db7da45c38343ae903897d1499e7e5d04
# ParaDetox: Detoxification with Parallel Data (Russian). Toxicity Task Results

This repository contains information about the **Toxicity Task** markup from the [Russian ParaDetox dataset](https://huggingface.co/datasets/s-nlp/ru_paradetox) collection pipeline.

## ParaDetox Collection Pipeline

The ParaDetox dataset was collected via the [Yandex.Toloka](https://toloka.yandex.com/) crowdsourcing platform, in three steps:
* *Task 1:* **Generation of Paraphrases**: The first crowdsourcing task asks users to eliminate toxicity in a given sentence while keeping the content.
* *Task 2:* **Content Preservation Check**: We show users the generated paraphrases along with their original variants and ask them to indicate if they have close meanings.
* *Task 3:* **Toxicity Check**: Finally, we check if the workers succeeded in removing toxicity.

Specifically, this repo contains the results of **Task 3: Toxicity Check**. Only samples with markup confidence >= 90 are present. The input is a text, and the label indicates whether it is toxic. In total, the dataset contains 6,354 samples, of which a minority (1,506 samples) are toxic.

## Citation

```
@inproceedings{logacheva-etal-2022-study,
    title = "A Study on Manual and Automatic Evaluation for Text Style Transfer: The Case of Detoxification",
    author = "Logacheva, Varvara and
      Dementieva, Daryna and
      Krotova, Irina and
      Fenogenova, Alena and
      Nikishina, Irina and
      Shavrina, Tatiana and
      Panchenko, Alexander",
    booktitle = "Proceedings of the 2nd Workshop on Human Evaluation of NLP Systems (HumEval)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.humeval-1.8",
    doi = "10.18653/v1/2022.humeval-1.8",
    pages = "90--101",
    abstract = "It is often difficult to reliably evaluate models which generate text. Among them, text style transfer is particularly difficult to evaluate, because its success depends on a number of parameters. We conduct an evaluation of a large number of models on a detoxification task. We explore the relations between the manual and automatic metrics and find that there is only weak correlation between them, which is dependent on the type of model which generated text. Automatic metrics tend to be less reliable for better-performing models. However, our findings suggest that ChrF and BertScore metrics can be used as a proxy for human evaluation of text detoxification to some extent.",
}
```

## Contacts

For any questions, please contact: Daryna Dementieva ([email protected])
s-nlp/ru_paradetox_toxicity
[ "task_categories:text-classification", "language:ru", "license:openrail++", "region:us" ]
2023-03-24T15:08:32+00:00
{"language": ["ru"], "license": "openrail++", "task_categories": ["text-classification"]}
2023-09-08T07:36:01+00:00
f76df19accce34d2acc1878d88b9491bc81f94c8
Redistribution of data from https://landsat.usgs.gov/landsat-8-cloud-cover-assessment-validation-data; the masks were modified to add georeferencing metadata. Landsat Data Distribution Policy: https://www.usgs.gov/media/files/landsat-data-distribution-policy
torchgeo/l8biome
[ "task_categories:image-segmentation", "size_categories:n<1K", "license:cc0-1.0", "climate", "region:us" ]
2023-03-24T15:36:04+00:00
{"license": "cc0-1.0", "size_categories": ["n<1K"], "task_categories": ["image-segmentation"], "pretty_name": "L8 Biome", "tags": ["climate"]}
2023-06-14T12:58:17+00:00
ced05f077a03a0267133a53c15217d04ba11037b
# Dataset Card for "OxfordPets_test_google_flan_t5_xxl_mode_C_A_T_ns_3669" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
CVasNLPExperiments/OxfordPets_test_google_flan_t5_xxl_mode_C_A_T_ns_3669
[ "region:us" ]
2023-03-24T15:47:45+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0_clip_tags_ViT_L_14_LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14_clip_tags_ViT_L_14_simple_specific_rices", "num_bytes": 1406056, "num_examples": 3669}, {"name": "fewshot_1_clip_tags_ViT_L_14_LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14_clip_tags_ViT_L_14_simple_specific_rices", "num_bytes": 2739146, "num_examples": 3669}, {"name": "fewshot_3_clip_tags_ViT_L_14_LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14_clip_tags_ViT_L_14_simple_specific_rices", "num_bytes": 5401710, "num_examples": 3669}, {"name": "fewshot_5_clip_tags_ViT_L_14_LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14_clip_tags_ViT_L_14_simple_specific_rices", "num_bytes": 8057857, "num_examples": 3669}], "download_size": 2983394, "dataset_size": 17604769}}
2023-03-24T17:03:07+00:00
abf6a8915d79b3f12b168b86f67ea69e0856a8d5
# Dataset Card for "common_voice_10_1_th_clean_split_3" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
DylanonWic/common_voice_10_1_th_clean_split_3_old
[ "region:us" ]
2023-03-24T16:00:29+00:00
{"dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "labels", "sequence": "int64"}, {"name": "input_values", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 13067825026.494074, "num_examples": 50545}], "download_size": 11882046988, "dataset_size": 13067825026.494074}}
2023-03-24T16:10:22+00:00
7bcf0decedb1dfe3e2c58198ba335030c97f7ec6
This ECSR-Shoes dataset contains 1,946 online shoe reviews, corresponding to 359 unique shoe products. It is the first ABSA dataset to contain ACOSI annotations, with annotators identifying the Aspect, Category, Opinion, Sentiment, and Implicit/Explicit opinion terms of each review.
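Since the card does not document the column schema, a cautious sketch is to load the repository and print what is actually there before relying on any ACOSI field names; everything beyond the repository id below is discovered at runtime rather than assumed.

```python
from datasets import load_dataset

# Repository id taken from this card; splits and columns are undocumented,
# so print the structure rather than assuming a schema.
ds = load_dataset("konniel/ecsr")
print(ds)
for split_name, split in ds.items():
    print(split_name, split.column_names)
    print(split[0])  # one review with its ACOSI annotation fields
    break
```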
konniel/ecsr
[ "region:us" ]
2023-03-24T16:10:32+00:00
{}
2023-04-10T19:10:38+00:00
62846883714ecbd1297a145474af377cdb88667e
# Dataset Card for PIEs corpus

### Dataset Summary
This corpus is a collection of 57,170 potentially idiomatic expressions (PIEs) based on the British National Corpus, prepared for the NER task. Each object comes with a contextual set of tokens, BIO tags, and a boolean label.

The data sources are:
* [MAGPIE corpus](https://github.com/hslh/magpie-corpus)
* [PIE corpus](https://github.com/hslh/pie-annotation)

The detailed data preparation pipeline can be found [here](https://github.com/Gooogr/Idioms_spotter)

### Supported Tasks and Leaderboards
Token classification (NER)

### Languages
English

## Dataset Structure

### Data Instances
Each instance contains a string with the target idiom, a word-tokenized text giving the context of the idiom's usage, the corresponding BIO tags, and a boolean label `is_pie`. This label indicates whether the collocation is considered an idiom in the given context. For the PIE corpus the choice was determined by the original PIE_label; for MAGPIE, a confidence threshold of 0.75 was chosen.

An example from the train set looks like the following:
```
{'idiom': "go public"
 'is_pie': True
 'tokens': [ "Private", "dealers", "in", "the", "States", "go", "public" ]
 'ner_tags': [ 0, 0, 0, 0, 0, 1, 2 ]
}
```
where the NER tag mapping is {0: 'O', 1: 'B-PIE', 2: 'I-PIE'}.

### Data Fields
* idiom: a string containing the original PIE
* is_pie: a boolean label determining whether the PIE can be considered an idiom in the given context
* tokens: a sequence of word tokens giving the PIE usage context
* ner_tags: the corresponding BIO tags for the word tokens

### Data Splits
The dataset has 3 splits: _train_, _validation_, and _test_.

| Dataset Split | Number of Instances in Split |
| ------------- | ---------------------------- |
| Train         | 45,736                       |
| Validation    | 5,717                        |
| Test          | 5,717                        |

## Dataset Creation

### Source Data

#### Initial Data Collection and Normalization
* [MAGPIE corpus](https://github.com/hslh/magpie-corpus)
* [PIE English corpus](https://github.com/hslh/pie-annotation)

## Additional Information

### Licensing Information
The corpus and its sources are licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

### Citation Information
[PIE Corpus](https://github.com/hslh/pie-annotation) (Haagsma, H. (Creator), Bos, J. (Contributor), Plank, B. (Contributor), University of Groningen.)<br>
[MAGPIE: A Large Corpus of Potentially Idiomatic Expressions](https://aclanthology.org/2020.lrec-1.35) (Haagsma et al., LREC 2020)
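As a usage note, here is a minimal sketch of loading the corpus and decoding the BIO tag ids back to their string labels. The field names follow the "Data Fields" section above, and the split name `train` matches the split table.

```python
from datasets import load_dataset

ds = load_dataset("Gooogr/pie_idioms", split="train")

# ner_tags is a Sequence of ClassLabel, so the id -> name mapping is stored
# in the feature itself: ['O', 'B-PIE', 'I-PIE'].
label_names = ds.features["ner_tags"].feature.names

example = ds[0]
for token, tag_id in zip(example["tokens"], example["ner_tags"]):
    print(f"{token}\t{label_names[tag_id]}")
print("is idiom in context:", example["is_pie"])
```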
Gooogr/pie_idioms
[ "task_categories:token-classification", "size_categories:10K<n<100K", "language:en", "license:cc-by-4.0", "PIE", "idioms", "region:us" ]
2023-03-24T16:17:22+00:00
{"language": ["en"], "license": "cc-by-4.0", "size_categories": ["10K<n<100K"], "task_categories": ["token-classification"], "pretty_name": "Corpus of potentially idiomatic expressions (PIEs)", "dataset_info": {"features": [{"name": "idiom", "dtype": "string"}, {"name": "is_pie", "dtype": "bool"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PIE", "2": "I-PIE"}}}}], "splits": [{"name": "train", "num_bytes": 82950018, "num_examples": 46090}, {"name": "validation", "num_bytes": 10420303, "num_examples": 5761}, {"name": "test", "num_bytes": 10376839, "num_examples": 5762}], "download_size": 19258913, "dataset_size": 103747160}, "tags": ["PIE", "idioms"]}
2023-07-19T11:22:56+00:00
0630f15d49d041498d4f72576a8c5d331f7768c7
# Dataset Card for "issues-external" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
open-source-metrics/issues-external
[ "region:us" ]
2023-03-24T16:20:35+00:00
{"dataset_info": {"features": [{"name": "dates", "dtype": "string"}, {"name": "type", "struct": [{"name": "authorAssociation", "dtype": "string"}, {"name": "comment", "dtype": "bool"}, {"name": "issue", "dtype": "bool"}]}], "splits": [{"name": "openai_python", "num_bytes": 125961, "num_examples": 3553}, {"name": "stable_diffusion_webui", "num_bytes": 1742350, "num_examples": 50207}, {"name": "langchain", "num_bytes": 1797645, "num_examples": 50632}, {"name": "pytorch", "num_bytes": 23009866, "num_examples": 608039}, {"name": "tensorflow", "num_bytes": 14220652, "num_examples": 399494}], "download_size": 11113948, "dataset_size": 40896474}, "configs": [{"config_name": "default", "data_files": [{"split": "stable_diffusion_webui", "path": "data/stable_diffusion_webui-*"}, {"split": "langchain", "path": "data/langchain-*"}, {"split": "pytorch", "path": "data/pytorch-*"}, {"split": "tensorflow", "path": "data/tensorflow-*"}]}]}
2024-01-10T17:24:23+00:00
3e641c1eed1a1a533c6257b5b159bb1f23b6cfc4
# Dataset Card for "MSAC_darija_sentiment_analysis" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
AbderrahmanSkiredj1/MSAC_darija_sentiment_analysis
[ "region:us" ]
2023-03-24T16:23:37+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 270447, "num_examples": 2000}], "download_size": 143604, "dataset_size": 270447}}
2023-03-24T16:23:40+00:00
f8bc3e693e453a8d91691a87b993ce397bf84508
# Dataset Card for "oa_tell_a_joke_100" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mikegarts/oa_tell_a_joke_100
[ "region:us" ]
2023-03-24T16:27:38+00:00
{"dataset_info": {"features": [{"name": "INSTRUCTION", "dtype": "string"}, {"name": "RESPONSE", "dtype": "string"}, {"name": "SOURCE", "dtype": "string"}, {"name": "METADATA", "struct": [{"name": "link", "dtype": "string"}, {"name": "nsfw", "dtype": "bool"}]}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 59031, "num_examples": 100}], "download_size": 0, "dataset_size": 59031}}
2023-03-24T16:39:43+00:00
b0b0b5776a78a4fb4941bf1106528e61c2bf800b
# Dataset Card for "IADD_darija_sentences" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
AbderrahmanSkiredj1/IADD_darija_sentences
[ "region:us" ]
2023-03-24T16:28:39+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 449890, "num_examples": 7213}], "download_size": 218476, "dataset_size": 449890}}
2023-03-24T16:28:41+00:00
f1506e39aff86f979915414e9b698e8c9d362cd9
# Dataset Card for "OxfordPets_test_google_flan_t5_xl_mode_C_A_T_ns_3669" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
CVasNLPExperiments/OxfordPets_test_google_flan_t5_xl_mode_C_A_T_ns_3669
[ "region:us" ]
2023-03-24T17:06:52+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0_clip_tags_ViT_L_14_LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14_clip_tags_ViT_L_14_simple_specific_rices", "num_bytes": 1405232, "num_examples": 3669}, {"name": "fewshot_1_clip_tags_ViT_L_14_LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14_clip_tags_ViT_L_14_simple_specific_rices", "num_bytes": 2738330, "num_examples": 3669}, {"name": "fewshot_3_clip_tags_ViT_L_14_LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14_clip_tags_ViT_L_14_simple_specific_rices", "num_bytes": 5401217, "num_examples": 3669}, {"name": "fewshot_5_clip_tags_ViT_L_14_LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14_clip_tags_ViT_L_14_simple_specific_rices", "num_bytes": 8057254, "num_examples": 3669}], "download_size": 2983338, "dataset_size": 17602033}}
2023-03-24T17:47:55+00:00
b5e3eae6c60a78fdb94c292775697c50737697e6
# Dataset Card for "labeled-multiple-choice" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
under-tree/labeled-multiple-choice
[ "region:us" ]
2023-03-24T17:13:52+00:00
{"dataset_info": {"features": [{"name": "formatted_question", "dtype": "string"}, {"name": "combinedfact", "dtype": "string"}, {"name": "answerKey", "dtype": "string"}, {"name": "topic", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9098435, "num_examples": 36503}], "download_size": 1292178, "dataset_size": 9098435}}
2023-03-24T17:13:59+00:00
aa70f811a25ec8f7758e3690342f87d0acbeb277
# Dataset Card for "stars-external" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
open-source-metrics/stars-external
[ "region:us" ]
2023-03-24T17:21:22+00:00
{"dataset_info": {"features": [{"name": "login", "dtype": "string"}, {"name": "dates", "dtype": "string"}], "splits": [{"name": "openai_python", "num_bytes": 659123, "num_examples": 17597}, {"name": "stable_diffusion_webui", "num_bytes": 4338444, "num_examples": 117121}, {"name": "langchain", "num_bytes": 2735343, "num_examples": 73498}, {"name": "pytorch", "num_bytes": 2765624, "num_examples": 74264}, {"name": "tensorflow", "num_bytes": 6684596, "num_examples": 179885}], "download_size": 10159277, "dataset_size": 17183130}, "configs": [{"config_name": "default", "data_files": [{"split": "stable_diffusion_webui", "path": "data/stable_diffusion_webui-*"}, {"split": "langchain", "path": "data/langchain-*"}, {"split": "pytorch", "path": "data/pytorch-*"}, {"split": "tensorflow", "path": "data/tensorflow-*"}]}]}
2024-01-10T19:35:35+00:00
21d151dfc2cbfa6c7ecc85053eb6d72cc5941b6d
# AutoTrain Dataset for project: paraphrases

## Dataset Description

This dataset has been automatically processed by AutoTrain for project paraphrases.

### Languages

The BCP-47 code for the dataset's language is unk.

## Dataset Structure

### Data Instances

A sample from this dataset looks as follows:

```json
[
  {
    "text": " I need to take a day off from school for a community service project.",
    "target": " I need to take a day off from school for a community service project"
  },
  {
    "text": " I have a funeral to attend.",
    "target": " I need to attend a funeral and will be absent from school."
  }
]
```

### Dataset Fields

The dataset has the following fields (also called "features"):

```json
{
  "text": "Value(dtype='string', id=None)",
  "target": "Value(dtype='string', id=None)"
}
```

### Dataset Splits

This dataset is split into a train and validation split. The split sizes are as follows:

| Split name | Num samples |
| ---------- | ----------- |
| train      | 61          |
| valid      | 16          |
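Assuming the repository loads directly with `load_dataset` (not confirmed by the card), a minimal sketch of iterating the documented fields for a text-to-text setup:

```python
from datasets import load_dataset

# Split names "train" and "valid" follow the split table above.
ds = load_dataset("deep1412/autotrain-data-paraphrases")

for row in ds["train"]:
    # "text" is the source sentence, "target" its paraphrase (per the card).
    print(row["text"].strip(), "->", row["target"].strip())
```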
deep1412/autotrain-data-paraphrases
[ "task_categories:summarization", "region:us" ]
2023-03-24T17:42:04+00:00
{"task_categories": ["summarization"]}
2023-03-24T17:44:32+00:00
a4dfa1c8689ce3afe8ef162acfc4554e4797d2e1
jxie/imagenet-100
[ "license:mit", "region:us" ]
2023-03-24T18:05:42+00:00
{"license": "mit", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "n01558993", "1": "n01692333", "2": "n01729322", "3": "n01735189", "4": "n01749939", "5": "n01773797", "6": "n01820546", "7": "n01855672", "8": "n01978455", "9": "n01980166", "10": "n01983481", "11": "n02009229", "12": "n02018207", "13": "n02085620", "14": "n02086240", "15": "n02086910", "16": "n02087046", "17": "n02089867", "18": "n02089973", "19": "n02090622", "20": "n02091831", "21": "n02093428", "22": "n02099849", "23": "n02100583", "24": "n02104029", "25": "n02105505", "26": "n02106550", "27": "n02107142", "28": "n02108089", "29": "n02109047", "30": "n02113799", "31": "n02113978", "32": "n02114855", "33": "n02116738", "34": "n02119022", "35": "n02123045", "36": "n02138441", "37": "n02172182", "38": "n02231487", "39": "n02259212", "40": "n02326432", "41": "n02396427", "42": "n02483362", "43": "n02488291", "44": "n02701002", "45": "n02788148", "46": "n02804414", "47": "n02859443", "48": "n02869837", "49": "n02877765", "50": "n02974003", "51": "n03017168", "52": "n03032252", "53": "n03062245", "54": "n03085013", "55": "n03259280", "56": "n03379051", "57": "n03424325", "58": "n03492542", "59": "n03494278", "60": "n03530642", "61": "n03584829", "62": "n03594734", "63": "n03637318", "64": "n03642806", "65": "n03764736", "66": "n03775546", "67": "n03777754", "68": "n03785016", "69": "n03787032", "70": "n03794056", "71": "n03837869", "72": "n03891251", "73": "n03903868", "74": "n03930630", "75": "n03947888", "76": "n04026417", "77": "n04067472", "78": "n04099969", "79": "n04111531", "80": "n04127249", "81": "n04136333", "82": "n04229816", "83": "n04238763", "84": "n04336792", "85": "n04418357", "86": "n04429376", "87": "n04435653", "88": "n04485082", "89": "n04493381", "90": "n04517823", "91": "n04589890", "92": "n04592741", "93": "n07714571", "94": "n07715103", "95": "n07753275", "96": "n07831146", "97": "n07836838", "98": "n13037406", "99": "n13040303"}}}}], "splits": [{"name": "train", "num_bytes": 17418307775.035, "num_examples": 126689}, {"name": "validation", "num_bytes": 1517725690.0, "num_examples": 10000}], "download_size": 15838413847, "dataset_size": 18936033465.035}}
2023-03-24T21:18:13+00:00
12567cabf869d7c92e573c7c783905fc160e9639
# Dataset Card for Alpaca-Cleaned

- **Repository:** https://github.com/gururise/AlpacaDataCleaned

## Dataset Description

This is a cleaned version of the original Alpaca Dataset released by Stanford. The following issues have been identified in the original release and fixed in this dataset:

1. **Hallucinations:** Many entries in the original dataset contained instructions referencing data on the internet, which simply caused GPT-3 to hallucinate an answer.

```
"instruction":"Summarize the given article in 200 Words.",
"input": "https://www.bbc.com/news/world-51461830",
"output": "The recent protests in Belarus are part of a growing wave of discontent against the rule of Alexander Lukashenko, who has led the country since 1994. Demonstrations against his government have been ongoing since August, and come in the wake of an election widely seen as rigged. As police and security forces have cracked down on protesters, the movement has only grown, with hundreds of thousands of people taking to the streets to call for new leadership. Human rights activists have accused the government of using extreme violence in order prosecute protesters, leading to an international outcry and sanctions from the European Union. As the situation in Belarus continues to unfold, the international community is monitoring the situation closely - hoping for a peaceful resolution, while also standing firm against the oppressive tactics employed by the Belarusian state."

"instruction": "Classify the following shape according to simple geometry.",
"input": "<Picture Attached>",
"output": "Triangle"

"instruction": "Given a recent news article, write a headline for it.",
"input": "https://www.news.example.com/article-1",
"output": "\"NASA Launches a Mars 2020 Mission to Search for Signs of Ancient Life\""
```

2. **Merged Instructions:** There were many instructions that were merged together in the original dataset for some reason.

```
"instruction": "Should I invest in stocks?",
"input": "",
"output": "It depends on your individual situation. Investing in stocks carries a certain amount of risk but can also offer potential rewards. Make sure to research the different types of stocks, understand the risks and rewards, and develop a plan that works for you.\n##\n18. Instruction: Identify the noun in the following sentence:\n18. Input:\nThe large apple was delicious.\n18. Output:\nApple"
```

3. **Empty outputs:** Some entries in the original dataset had empty outputs.

```
"instruction": "Calculate the area of the following shape in square centimeters.",
"input": "rectangle of size 4 cm x 5 cm",
"output": ""
```

4. **Empty code examples:** Some descriptions in the original dataset were missing code examples, making it difficult to understand the intended behavior of the code.

5. **Instructions to generate images:** Some descriptions in the original dataset included instructions to generate images, something obviously not possible.

```
"instruction": "Create a graphic or logo that visually represents the word \"courage\".",
"input": "",
"output": "<No Output>"
```

6. **N/A outputs:** Some code snippets in the original dataset had N/A outputs.

7. **Inconsistent input field:** The original dataset had inconsistent usage of the input field when it was supposed to be empty.

```
"input":"<no input>"
"input":"No input"
"input":"noinput"
"input":"<noinput>"
```

8. **Wrong answers:** Some instructions/questions in the original dataset had incorrect answers. About 80% of the math problems are estimated to have incorrect answers.
```
"instruction": "Calculate the median of the following data set.",
"input": "1, 2, 4, 5, 8, 9",
"output": "5"

"instruction": "Convert 25m to km.",
"input": "",
"output": "25km"
```

9. **Non-Sensical/Unclear instructions:** Many instructions are unclear; we try to clarify (or re-write) instructions that are non-sensical. Instructions that are slightly unclear, but where one could deduce the meaning, are not altered.

```
"instruction": "Freeze the following sample of yogurt for 10 minutes.",
"input": "Yogurt sample",
"output": "<noinput>"

"instruction": "Increase the font size to 12 points.",
"input": "",
"output": "The font size has been increased to 12 points."
```

10. **Extraneous escape and control characters:** The original dataset had several entries with extraneous escape and control characters.

### Original Alpaca Dataset Summary

Alpaca is a dataset of 52,000 instructions and demonstrations generated by OpenAI's `text-davinci-003` engine. This instruction data can be used to conduct instruction-tuning for language models and make the language model follow instructions better.

The authors built on the data generation pipeline from the [Self-Instruct framework](https://github.com/yizhongw/self-instruct) and made the following modifications:

- The `text-davinci-003` engine was used to generate the instruction data instead of `davinci`.
- A [new prompt](https://github.com/tatsu-lab/stanford_alpaca/blob/main/prompt.txt) was written that explicitly gave the requirement of instruction generation to `text-davinci-003`.
- Much more aggressive batch decoding was used, i.e., generating 20 instructions at once, which significantly reduced the cost of data generation.
- The data generation pipeline was simplified by discarding the difference between classification and non-classification instructions.
- Only a single instance was generated for each instruction, instead of 2 to 3 instances as in Self-Instruct.

This produced an instruction-following dataset with 52K examples obtained at a much lower cost (less than $500). In a preliminary study, the authors also found the 52K generated data to be much more diverse than the data released by [Self-Instruct](https://github.com/yizhongw/self-instruct/blob/main/data/seed_tasks.jsonl).

### Supported Tasks and Leaderboards

The Alpaca dataset is designed for instruction-tuning of pretrained language models.

### Languages

The data in Alpaca are in English (BCP-47 en).

## Dataset Structure

### Data Instances

An example of "train" looks as follows:

```json
{
    "instruction": "Create a classification task by clustering the given list of items.",
    "input": "Apples, oranges, bananas, strawberries, pineapples",
    "output": "Class 1: Apples, Oranges\nClass 2: Bananas, Strawberries\nClass 3: Pineapples",
    "text": "Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\nCreate a classification task by clustering the given list of items.\n\n### Input:\nApples, oranges, bananas, strawberries, pineapples\n\n### Response:\nClass 1: Apples, Oranges\nClass 2: Bananas, Strawberries\nClass 3: Pineapples",
}
```

### Data Fields

The data fields are as follows:

* `instruction`: describes the task the model should perform. Each of the 52K instructions is unique.
* `input`: optional context or input for the task. For example, when the instruction is "Summarize the following article", the input is the article. Around 40% of the examples have an input.
* `output`: the answer to the instruction as generated by `text-davinci-003`.
* `text`: the `instruction`, `input` and `output` formatted with the [prompt template](https://github.com/tatsu-lab/stanford_alpaca#data-release) used by the authors for fine-tuning their models (a reconstruction sketch is given at the end of this card).

### Data Splits

|        | train |
|--------|------:|
| alpaca | 52002 |

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

An excerpt from the [blog post](https://crfm.stanford.edu/2023/03/13/alpaca.html) accompanying the release of this dataset:

> We believe that releasing the above assets will enable the academic community to perform controlled scientific studies on instruction-following language models, resulting in better science and ultimately new techniques to address the existing deficiencies with these models. At the same time, any release carries some risk. First, we recognize that releasing our training recipe reveals the feasibility of certain capabilities. On one hand, this enables more people (including bad actors) to create models that could cause harm (either intentionally or not). On the other hand, this awareness might incentivize swift defensive action, especially from the academic community, now empowered by the means to perform deeper safety research on such models. Overall, we believe that the benefits for the research community outweigh the risks of this particular release. Given that we are releasing the training recipe, we believe that releasing the data, model weights, and training code incur minimal further risk, given the simplicity of the recipe. At the same time, releasing these assets has enormous benefits for reproducible science, so that the academic community can use standard datasets, models, and code to perform controlled comparisons and to explore extensions. Deploying an interactive demo for Alpaca also poses potential risks, such as more widely disseminating harmful content and lowering the barrier for spam, fraud, or disinformation. We have put into place two risk mitigation strategies. First, we have implemented a content filter using OpenAI’s content moderation API, which filters out harmful content as defined by OpenAI’s usage policies. Second, we watermark all the model outputs using the method described in Kirchenbauer et al. 2023, so that others can detect (with some probability) whether an output comes from Alpaca 7B. Finally, we have strict terms and conditions for using the demo; it is restricted to non-commercial uses and to uses that follow LLaMA’s license agreement. We understand that these mitigation measures can be circumvented once we release the model weights or if users train their own instruction-following models. However, by installing these mitigations, we hope to advance the best practices and ultimately develop community norms for the responsible deployment of foundation models.

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

The `alpaca` data is generated by a language model (`text-davinci-003`) and inevitably contains some errors or biases.
We encourage users to use this data with caution and propose new methods to filter or improve the imperfections. ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information The dataset is available under the [Creative Commons NonCommercial (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode). ### Citation Information ``` @misc{alpaca, author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto }, title = {Stanford Alpaca: An Instruction-following LLaMA model}, year = {2023}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}}, } ``` ### Contributions [More Information Needed]
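As a usage note, here is a minimal sketch of reconstructing the `text` field from `instruction`, `input`, and `output`. The with-input wording is confirmed by the `text` example shown above; the no-input variant is an assumption based on the upstream Stanford Alpaca template.

```python
PROMPT_WITH_INPUT = (
    "Below is an instruction that describes a task, paired with an input that "
    "provides further context. Write a response that appropriately completes "
    "the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n{output}"
)

# Assumed wording for entries without an input (roughly 60% of the data).
PROMPT_NO_INPUT = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n{output}"
)

def build_text(example: dict) -> str:
    """Rebuild the `text` field from the other three fields."""
    template = PROMPT_WITH_INPUT if example.get("input") else PROMPT_NO_INPUT
    return template.format(**example)
```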
yahma/alpaca-cleaned
[ "task_categories:text-generation", "language:en", "license:cc-by-4.0", "instruction-finetuning", "region:us" ]
2023-03-24T18:27:58+00:00
{"language": ["en"], "license": "cc-by-4.0", "task_categories": ["text-generation"], "pretty_name": "Alpaca-Cleaned", "tags": ["instruction-finetuning"]}
2023-04-10T19:29:06+00:00
3ea4cb02f35aa0a436e560019b8eb809b7e8392c
# Dataset Card for Instruct Based on Alpaca's instruction finetuning. ``` "Below is an instruction that describes a task, paired with an input that provides further context.\n" "Write a response that appropriately completes the request\n" "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:" ```
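A minimal sketch of assembling the quoted template in Python; the literal strings mirror the fragments above, so any formatting beyond what is quoted (including the example arguments) is an assumption.

```python
def build_prompt(instruction: str, input_text: str) -> str:
    # Concatenation of the template fragments quoted above.
    return (
        "Below is an instruction that describes a task, paired with an input "
        "that provides further context.\n"
        "Write a response that appropriately completes the request\n"
        f"### Instruction:\n{instruction}\n\n"
        f"### Input:\n{input_text}\n\n"
        "### Response:"
    )

print(build_prompt("Translate to French.", "Hello, world!"))
```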
niv-al/instruct
[ "task_categories:question-answering", "task_categories:text-generation", "task_categories:text2text-generation", "task_categories:table-question-answering", "size_categories:10M<n<100M", "language:en", "license:openrail", "region:us" ]
2023-03-24T18:50:18+00:00
{"language": ["en"], "license": "openrail", "size_categories": ["10M<n<100M"], "task_categories": ["question-answering", "text-generation", "text2text-generation", "table-question-answering"], "pretty_name": "Instruct"}
2023-03-24T19:12:36+00:00
683c751ac20d9015f364ed219a084536a0b099f8
# Dataset Card for "somos-clean-alpaca-es-validations" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
nataliaElv/somos-clean-alpaca-es-validations
[ "region:us" ]
2023-03-24T18:57:05+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "null"}, {"name": "inputs", "struct": [{"name": "1-instruction", "dtype": "string"}, {"name": "2-input", "dtype": "string"}, {"name": "3-output", "dtype": "string"}]}, {"name": "prediction", "dtype": "null"}, {"name": "prediction_agent", "dtype": "null"}, {"name": "annotation", "dtype": "string"}, {"name": "annotation_agent", "dtype": "string"}, {"name": "vectors", "struct": [{"name": "input", "sequence": "float64"}, {"name": "instruction", "sequence": "float64"}, {"name": "output", "sequence": "float64"}]}, {"name": "multi_label", "dtype": "bool"}, {"name": "explanation", "dtype": "null"}, {"name": "id", "dtype": "string"}, {"name": "metadata", "dtype": "null"}, {"name": "status", "dtype": "string"}, {"name": "event_timestamp", "dtype": "timestamp[us]"}, {"name": "metrics", "struct": [{"name": "text_length", "dtype": "int64"}]}], "splits": [{"name": "train", "num_bytes": 1241408, "num_examples": 66}], "download_size": 0, "dataset_size": 1241408}}
2023-05-16T19:49:11+00:00
d6c27fc6def3f59092a027331d26bdb5ece2828d
# Dataset Card for "flores200_scaffold_output_mix_mt5" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
hlillemark/flores200_eng_output_scaffolding_mix_mt5
[ "region:us" ]
2023-03-24T19:02:21+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int32"}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 9171714465, "num_examples": 10240000}, {"name": "val", "num_bytes": 3827042, "num_examples": 5000}, {"name": "test", "num_bytes": 7670994, "num_examples": 10000}], "download_size": 4216144161, "dataset_size": 9183212501}}
2023-03-24T19:07:13+00:00
2b8f6016982ca14cdd7c313c20d7783080ac3e73
# Dataset Card for "flores200_eng_input_scaffolding_mix_mt5" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
hlillemark/flores200_eng_input_scaffolding_mix_mt5
[ "region:us" ]
2023-03-24T19:33:17+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int32"}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 8665985185, "num_examples": 10240000}, {"name": "val", "num_bytes": 3827042, "num_examples": 5000}, {"name": "test", "num_bytes": 7670994, "num_examples": 10000}], "download_size": 4220835761, "dataset_size": 8677483221}}
2023-03-24T19:38:15+00:00
81a3cb2f4274bdfeff20f4733098f3d0e5d157ec
# Dataset Card for "my-image-captioning-dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
BigBri/my-image-captioning-dataset
[ "region:us" ]
2023-03-24T20:16:20+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 79926.0, "num_examples": 10}], "download_size": 77946, "dataset_size": 79926.0}}
2023-03-24T20:16:23+00:00
5222a8771488d32951a6ff19577634a66ec57d23
# Dataset Card for "Caltech101_with_background_test_google_flan_t5_xxl_mode_C_A_T_ns_6084" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
CVasNLPExperiments/Caltech101_with_background_test_google_flan_t5_xxl_mode_C_A_T_ns_6084
[ "region:us" ]
2023-03-24T20:35:29+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0_clip_tags_ViT_L_14_LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14_clip_tags_ViT_L_14_simple_specific_rices", "num_bytes": 2221244, "num_examples": 6084}], "download_size": 596374, "dataset_size": 2221244}}
2023-03-24T20:35:31+00:00
2cb9dc681b4aac3627d9d4b1badd0c22df5255f4
# Dataset Card for "EuroSAT" ## Dataset Description - **Paper** [Eurosat: A novel dataset and deep learning benchmark for land use and land cover classification](https://ieeexplore.ieee.org/iel7/4609443/8789745/08736785.pdf) - **Paper** [Introducing EuroSAT: A Novel Dataset and Deep Learning Benchmark for Land Use and Land Cover Classification](https://ieeexplore.ieee.org/iel7/8496405/8517275/08519248.pdf) - **GitHub** [EuroSAT](https://github.com/phelber/EuroSAT) - **Data** [Zenodo](https://zenodo.org/record/7711810#.ZCcA9uzMLJx) ### Licensing Information MIT. ## Citation Information [Eurosat: A novel dataset and deep learning benchmark for land use and land cover classification](https://ieeexplore.ieee.org/iel7/4609443/8789745/08736785.pdf) [Introducing EuroSAT: A Novel Dataset and Deep Learning Benchmark for Land Use and Land Cover Classification](https://ieeexplore.ieee.org/iel7/8496405/8517275/08519248.pdf) ``` @article{helber2019eurosat, title = {Eurosat: A novel dataset and deep learning benchmark for land use and land cover classification}, author = {Helber, Patrick and Bischke, Benjamin and Dengel, Andreas and Borth, Damian}, year = 2019, journal = {IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing}, publisher = {IEEE} } @inproceedings{helber2018introducing, title = {Introducing EuroSAT: A Novel Dataset and Deep Learning Benchmark for Land Use and Land Cover Classification}, author = {Helber, Patrick and Bischke, Benjamin and Dengel, Andreas and Borth, Damian}, year = 2018, booktitle = {IGARSS 2018-2018 IEEE International Geoscience and Remote Sensing Symposium}, pages = {204--207}, organization = {IEEE} } ```
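A minimal sketch of loading this repository and decoding the integer labels into the ten land-cover class names; the feature layout follows the repository metadata, which documents a single `train` split with an `image` and a `label` column.

```python
from datasets import load_dataset

ds = load_dataset("jonathan-roberts1/EuroSAT", split="train")

# The label feature is a ClassLabel, so id -> name decoding is built in
# ('annual crop', 'forest', ..., 'sea or lake').
class_names = ds.features["label"].names

sample = ds[0]
print(class_names[sample["label"]], sample["image"].size)
```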
jonathan-roberts1/EuroSAT
[ "task_categories:image-classification", "task_categories:zero-shot-image-classification", "license:mit", "region:us" ]
2023-03-24T20:42:32+00:00
{"license": "mit", "task_categories": ["image-classification", "zero-shot-image-classification"], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "annual crop", "1": "forest", "2": "herbaceous vegetation", "3": "highway", "4": "industrial", "5": "pasture", "6": "permanent crop", "7": "residential", "8": "river", "9": "sea or lake"}}}}], "splits": [{"name": "train", "num_bytes": 88391109, "num_examples": 27000}], "download_size": 88591771, "dataset_size": 88391109}}
2024-01-08T15:18:27+00:00
d47ecdc32118cfc2a53c78a5cc78cf90a55d877c
ronybot/weightlifting
[ "license:cc-by-nc-3.0", "region:us" ]
2023-03-24T20:58:35+00:00
{"license": "cc-by-nc-3.0"}
2023-03-24T21:05:48+00:00
2209bb03bf158be26df23c6380af97687c18f128
# Kirundi/Ikirundi/Rundi

This repository contains a natural language processing (NLP) dataset for Kirundi, a Bantu language spoken primarily in Burundi. It can be used for machine translation from Kirundi to English and vice versa.
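Since the card does not document the file layout or column names, a cautious sketch is to load the repository and inspect it before wiring up a translation pipeline; everything beyond the repository id here is an assumption.

```python
from datasets import load_dataset

# Repository id from this card; splits and columns are undocumented,
# so print them rather than assuming a schema.
ds = load_dataset("juwiragiye/ikirundi")
print(ds)
for name, split in ds.items():
    print(name, split.column_names, split[0])
    break
```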
juwiragiye/ikirundi
[ "size_categories:n<1K", "language:kir", "language:eng", "language:fre", "license:mit", "kirundi, rundi, burundi", "region:us" ]
2023-03-24T21:02:37+00:00
{"language": ["kir", "eng", "fre"], "license": "mit", "size_categories": ["n<1K"], "pretty_name": "ikirundi", "tags": ["kirundi, rundi, burundi"], "description": "This repository contains natural language processing (NLP) resources for Kirundi, a Bantu language spoken primarily in Burundi.", "output-file": "index.html", "title": "Ikirundi"}
2023-03-24T21:12:28+00:00
f47609a64d2988ac7cb40ee5a82c88dabf594337
# Dataset Card for "FGVC_Aircraft_test_google_flan_t5_xxl_mode_C_A_T_ns_3333" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
CVasNLPExperiments/FGVC_Aircraft_test_google_flan_t5_xxl_mode_C_A_T_ns_3333
[ "region:us" ]
2023-03-24T21:04:41+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0_clip_tags_ViT_L_14_LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14_clip_tags_ViT_L_14_simple_specific_rices", "num_bytes": 1096294, "num_examples": 3333}, {"name": "fewshot_1_clip_tags_ViT_L_14_LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14_clip_tags_ViT_L_14_simple_specific_rices", "num_bytes": 2101864, "num_examples": 3333}, {"name": "fewshot_3_clip_tags_ViT_L_14_LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14_clip_tags_ViT_L_14_simple_specific_rices", "num_bytes": 4112966, "num_examples": 3333}, {"name": "fewshot_5_clip_tags_ViT_L_14_LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14_clip_tags_ViT_L_14_simple_specific_rices", "num_bytes": 6122793, "num_examples": 3333}], "download_size": 2520731, "dataset_size": 13433917}}
2023-03-24T23:46:24+00:00
a143ebcca9e1d04df98df6f465f33a7a2697ac66
# Dataset Card for "deep-research" ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Who are the source language producers?](#who-are-the-source-language-producers) - [Annotations](#annotations) - [Annotation process](#annotation-process) - [Who are the annotators?](#who-are-the-annotators) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Deep USC Research](http://deep.usc.edu/) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [Multimodal Phased Transformer for Sentiment Analysis](https://aclanthology.org/2021.emnlp-main.189.pdf) - **Point of Contact:** [Iordanis Fostiropoulos](mailto:[email protected]) ### Dataset Summary Briefly summarize the dataset... ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances. #### train.json - **Size of downloaded dataset files:** 181.42 MB - **Size of the generated dataset:** 522.66 MB - **Total amount of disk used:** 704.07 MB An example of 'train' looks as follows. ``` This example was too long and was cropped: {'id': '5733be284776f41900661182', 'title': 'University_of_Notre_Dame', 'context': 'Architecturally, the school has a Catholic character. Atop the Main Building\'s gold dome is a golden statue of the Virgin Mary...', 'question': 'To whom did the Virgin Mary allegedly appear in 1858 in Lourdes France?', 'answers': {'text': ['Saint Bernadette Soubirous'], 'answer_start': [515]} } ``` #### dev.json - **Size of downloaded dataset files:** 183.09 MB - **Size of the generated dataset:** 523.97 MB - **Total amount of disk used:** 707.06 MB An example of 'devepopment' looks as follows. ``` This example was too long and was cropped: {'id': '5733be284776f41900661182', 'title': 'University_of_Notre_Dame', 'context': 'Architecturally, the school has a Catholic character. 
Atop the Main Building\'s gold dome is a golden statue of the Virgin Mary...', 'question': 'To whom did the Virgin Mary allegedly appear in 1858 in Lourdes France?', 'answers': {'text': ['Saint Bernadette Soubirous'], 'answer_start': [515]} } ``` ### Data Fields - `id`: ID of the context, question unit - `title`: Title of the question ... ### Data Splits | | train | development | test | |-------------------------|------:|------------:|-----:| | Input Sentences | | | | | Average Sentence Length | | | | ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information Provide the [BibTex](http://www.bibtex.org/)-formatted reference for the dataset. For example: ``` @inproceedings{cheng-etal-2021-multimodal, title = "Multimodal Phased Transformer for Sentiment Analysis", author = "Cheng, Junyan and Fostiropoulos, Iordanis and Boehm, Barry and Soleymani, Mohammad", booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.189", doi = "10.18653/v1/2021.emnlp-main.189", pages = "2447--2458", abstract = "Multimodal Transformers achieve superior performance in multimodal learning tasks. However, the quadratic complexity of the self-attention mechanism in Transformers limits their deployment in low-resource devices and makes their inference and training computationally expensive. We propose multimodal Sparse Phased Transformer (SPT) to alleviate the problem of self-attention complexity and memory footprint. SPT uses a sampling function to generate a sparse attention matrix and compress a long sequence to a shorter sequence of hidden states. SPT concurrently captures interactions between the hidden states of different modalities at every layer. To further improve the efficiency of our method, we use Layer-wise parameter sharing and Factorized Co-Attention that share parameters between Cross Attention Blocks, with minimal impact on task performance. We evaluate our model with three sentiment analysis datasets and achieve comparable or superior performance compared with the existing methods, with a 90{\%} reduction in the number of parameters. We conclude that (SPT) along with parameter sharing can capture multimodal interactions with reduced model size and improved sample efficiency.", } ``` ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
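As a usage note, here is a minimal sketch of recovering the gold answer span from a record shaped like the examples above. The field names follow the "Data Fields" section, and the offset check encodes the assumption that `answer_start` indexes into `context`.

```python
def extract_answer(example: dict) -> str:
    """Return the first gold answer from a SQuAD-style record."""
    start = example["answers"]["answer_start"][0]
    text = example["answers"]["text"][0]
    # Assumed invariant: the stored offset points at the answer in context.
    assert example["context"][start:start + len(text)] == text
    return text
```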
hieuhocnlp/deep-research
[ "language_creators:found", "multilinguality:monolingual", "size_categories:10B<n<100B", "continual learning", "region:us" ]
2023-03-24T21:53:41+00:00
{"annotations_creators": [], "language_creators": ["found"], "language": [], "license": [], "multilinguality": ["monolingual"], "size_categories": ["10B<n<100B"], "source_datasets": [], "task_categories": [], "task_ids": [], "pretty_name": "Deep Research USC", "tags": ["continual learning"]}
2023-04-03T17:16:33+00:00
362ce73c2ec899f1c69fc86a805e14edff967ef6
# Dataset Card for "preprocessed_stars" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
open-source-metrics/preprocessed_stars
[ "region:us" ]
2023-03-24T22:41:01+00:00
{"dataset_info": {"features": [{"name": "diffusers", "dtype": "int64"}, {"name": "accelerate", "dtype": "int64"}, {"name": "chat_ui", "dtype": "int64"}, {"name": "optimum", "dtype": "int64"}, {"name": "pytorch_image_models", "dtype": "int64"}, {"name": "tokenizers", "dtype": "int64"}, {"name": "evaluate", "dtype": "int64"}, {"name": "candle", "dtype": "int64"}, {"name": "text_generation_inference", "dtype": "int64"}, {"name": "safetensors", "dtype": "int64"}, {"name": "gradio", "dtype": "int64"}, {"name": "transformers", "dtype": "int64"}, {"name": "datasets", "dtype": "int64"}, {"name": "hub_docs", "dtype": "int64"}, {"name": "peft", "dtype": "int64"}, {"name": "huggingface_hub", "dtype": "int64"}, {"name": "pytorch", "dtype": "int64"}, {"name": "langchain", "dtype": "int64"}, {"name": "openai_python", "dtype": "int64"}, {"name": "stable_diffusion_webui", "dtype": "int64"}, {"name": "tensorflow", "dtype": "int64"}, {"name": "day", "dtype": "string"}], "splits": [{"name": "raw", "num_bytes": 142411940, "num_examples": 732195}, {"name": "wow", "num_bytes": 630584, "num_examples": 3242}], "download_size": 15520039, "dataset_size": 143042524}, "configs": [{"config_name": "default", "data_files": [{"split": "raw", "path": "data/raw-*"}, {"split": "wow", "path": "data/wow-*"}]}]}
2024-02-15T14:36:42+00:00