sha (stringlengths 40–40) | text (stringlengths 0–13.4M) | id (stringlengths 2–117) | tags (list) | created_at (stringlengths 25–25) | metadata (stringlengths 2–31.7M) | last_modified (stringlengths 25–25) |
---|---|---|---|---|---|---|
af809eed59739d2080e629b9fa1b8a18ad9efec8
|
Video QA Dataset in English, Chinese, Japanese, and German
To download the videos:
1. `pip install pandas numpy requests mpi4py`
2. To download in a single job: `python download.py --csv_path results_2M_train.csv --partitions 1 --part 0 --data_dir ./data --processes 8`. You can split the download across N concurrent jobs by passing `--partitions N` and running each job with a different `--part $idx`. You can also set the number of processes; one per CPU is recommended (see the sketch below).
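As an illustration, here is a minimal sketch (assuming only the `download.py` interface shown above) that launches the N partitions as concurrent local jobs:
```python
# Hypothetical launcher: splits the download into N partitions and runs them concurrently.
# Assumes download.py and results_2M_train.csv are in the current directory.
import subprocess

N = 4  # number of concurrent jobs (partitions); adjust to your environment
procs = [
    subprocess.Popen([
        "python", "download.py",
        "--csv_path", "results_2M_train.csv",
        "--partitions", str(N),
        "--part", str(idx),
        "--data_dir", "./data",
        "--processes", "8",
    ])
    for idx in range(N)
]
for p in procs:
    p.wait()  # block until every partition has finished
```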
|
JosephusCheung/GuanacoVideoQA
|
[
"license:gpl-3.0",
"region:us"
] |
2023-05-16T09:00:13+00:00
|
{"license": "gpl-3.0"}
|
2023-05-16T09:24:49+00:00
|
80f7778f50420e1d08aafde990cb1ff1b24a7d1d
|
Thanks to [hikariming](https://github.com/hikariming/alpaca_chinese_dataset/tree/main).
This dataset was created by merging the "翻译后的中文数据" (translated Chinese data) and "其他中文问题补充" (additional Chinese questions) folders from [hikariming/alpaca_chinese_dataset](https://github.com/hikariming/alpaca_chinese_dataset/tree/main) into a unified format.
|
Laurie/alpaca_chinese_dataset
|
[
"region:us"
] |
2023-05-16T09:31:08+00:00
|
{}
|
2023-05-16T09:51:41+00:00
|
eee2e75ec02f87011b91569d44620051b21f5b35
|
J-Mourad/MNAD_Sample
|
[
"license:other",
"region:us"
] |
2023-05-16T10:00:23+00:00
|
{"license": "other"}
|
2023-05-16T10:01:20+00:00
|
|
ade24791f9db4f609e24d58625d0b03452cdd52e
|
# Dataset Card for "olist_encoded"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
markbotterill/olist_encoded
|
[
"region:us"
] |
2023-05-16T10:06:27+00:00
|
{"dataset_info": {"features": [{"name": "label", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "Words Per Review", "dtype": "int64"}, {"name": "__index_level_0__", "dtype": "int64"}, {"name": "input_ids", "sequence": "int32"}, {"name": "token_type_ids", "sequence": "int8"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "hidden_state", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 99759813, "num_examples": 23860}, {"name": "test", "num_bytes": 25686431, "num_examples": 5965}], "download_size": 112853840, "dataset_size": 125446244}}
|
2023-05-16T10:06:39+00:00
|
e2be00d8c599aab45c97e4beeb223d85db38663a
|
# Dataset Card for "skin_cancer_small_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Pranavkpba2000/skin_cancer_small_dataset
|
[
"region:us"
] |
2023-05-16T10:12:00+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "AK", "1": "BCC", "2": "BKL", "3": "DF", "4": "MEL", "5": "NV", "6": "SCC", "7": "VASC"}}}}], "splits": [{"name": "train", "num_bytes": 66578294.72, "num_examples": 11360}, {"name": "test", "num_bytes": 17394813.72, "num_examples": 2840}], "download_size": 83755065, "dataset_size": 83973108.44}}
|
2023-05-16T10:12:18+00:00
|
2c119ab0cd139892301be1f698211b6ffe3ec790
|
# Dataset Card for "style-diffusion"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
center-for-humans-and-machines/style-diffusion
|
[
"region:us"
] |
2023-05-16T10:27:45+00:00
|
{"dataset_info": {"features": [{"name": "vectorId", "dtype": "string"}, {"name": "medianYear", "dtype": "int32"}, {"name": "embedding", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 3448928, "num_examples": 1113}], "download_size": 0, "dataset_size": 3448928}}
|
2023-06-30T16:45:02+00:00
|
c8856bf4b26f0a17fc4e70a13d02bc9797df1115
|
# Dataset Card for "IEMOCAP_Text_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
minoosh/IEMOCAP_Text_dataset
|
[
"region:us"
] |
2023-05-16T10:36:42+00:00
|
{"dataset_info": {"features": [{"name": "TURN_NAME", "dtype": "string"}, {"name": "emotion", "dtype": {"class_label": {"names": {"0": "ang", "1": "hap", "2": "neu", "3": "sad"}}}}, {"name": "transcription", "dtype": "string"}], "splits": [{"name": "Session1", "num_bytes": 100099, "num_examples": 1085}, {"name": "Session2", "num_bytes": 105421, "num_examples": 1023}, {"name": "Session3", "num_bytes": 104680, "num_examples": 1151}, {"name": "Session4", "num_bytes": 99515, "num_examples": 1031}, {"name": "Session5", "num_bytes": 121589, "num_examples": 1241}], "download_size": 261307, "dataset_size": 531304}}
|
2023-05-16T10:37:14+00:00
|
48d1cb8b4504de2ce97095cafa6054227e4af521
|
# Dataset Card for "gaps_de"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
bjoernp/gaps_de
|
[
"region:us"
] |
2023-05-16T10:36:53+00:00
|
{"dataset_info": {"features": [{"name": "sentences", "dtype": "string"}, {"name": "sentences_de", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 45790544999, "num_examples": 178674546}], "download_size": 26834208249, "dataset_size": 45790544999}}
|
2023-05-16T10:56:19+00:00
|
cac535e3167c93ce95d80c0a214d6044577180cb
|
# Dataset Card for "IEMOCAP_Speech_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
minoosh/IEMOCAP_Speech_dataset
|
[
"region:us"
] |
2023-05-16T10:40:01+00:00
|
{"dataset_info": {"features": [{"name": "TURN_NAME", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "emotion", "dtype": {"class_label": {"names": {"0": "ang", "1": "hap", "2": "neu", "3": "sad"}}}}], "splits": [{"name": "Session1", "num_bytes": 165158903.64, "num_examples": 1085}, {"name": "Session2", "num_bytes": 154202695.13, "num_examples": 1023}, {"name": "Session3", "num_bytes": 158294386.59, "num_examples": 1151}, {"name": "Session4", "num_bytes": 147780976.55, "num_examples": 1031}, {"name": "Session5", "num_bytes": 170101711.098, "num_examples": 1241}], "download_size": 788474562, "dataset_size": 795538673.0080001}}
|
2023-05-16T10:58:34+00:00
|
3d27cbeb7b6d2b33baa2e1a41e93ab8505e742ab
|
# About the MNAD Dataset
The MNAD corpus is a collection of over **1 million Moroccan news articles** written in the modern Arabic language. These news articles have been gathered from 11 prominent electronic news sources. The dataset is made available to the academic community for research purposes, such as data mining (clustering, classification, etc.), information retrieval (ranking, search, etc.), and other non-commercial activities.
## Dataset Fields
- Title: The title of the article
- Body: The body of the article
- Category: The category of the article
- Source: The Electronic Newspaper source of the article
## About Version 1 of the Dataset (MNAD.v1)
Version 1 of the dataset comprises **418,563** articles classified into 19 categories. The data was collected from well-known electronic news sources, namely Akhbarona.ma, Hespress.ma, Hibapress.com, and Le360.com. The articles were stored in four separate CSV files, each corresponding to the news website source. Each CSV file contains three fields: Title, Body, and Category of the news article.
The dataset is rich in Arabic vocabulary, with approximately 906,125 unique words. It has been utilized as a benchmark in the research paper:
```"A Moroccan News Articles Dataset (MNAD) For Arabic Text Categorization". In 2021 International Conference on Decision Aid Sciences and Application (DASA).```
This dataset is available for download from the following sources:
- Kaggle Datasets : [MNADv1](https://www.kaggle.com/datasets/jmourad100/mnad-moroccan-news-articles-dataset)
- Huggingface Datasets: [MNADv1](https://huggingface.co/datasets/J-Mourad/MNAD.v1)
## About Version 2 of the Dataset (MNAD.v2)
Version 2 of the MNAD dataset includes an additional **653,901** articles, bringing the total number of articles to over 1 million (**1,069,489**), classified into the same 19 categories as in version 1. The new documents were collected from seven additional prominent Moroccan news websites, namely al3omk.com, medi1news.com, alayam24.com, anfaspress.com, alyaoum24.com, barlamane.com, and SnrtNews.com.
The newly collected articles have been merged with the articles from the previous version into a single CSV file named ```MNADv2.csv```. This file includes an additional column called "Source" to indicate the source of each news article.
Furthermore, MNAD.v2 incorporates improved pre-processing techniques and data-cleaning methods. These enhancements involve removing duplicates, eliminating multiple spaces, discarding rows with NaN values, replacing new lines with "\n", excluding very long and very short articles, and removing non-Arabic articles. These additions and improvements aim to enhance the usability and value of the MNAD dataset for researchers and practitioners in the field of Arabic text analysis.
This dataset is available for download from the following sources:
- Kaggle Datasets : [MNADv2](https://huggingface.co/datasets/J-Mourad/MNAD.v2)
- Huggingface Datasets: [MNADv2](https://huggingface.co/datasets/J-Mourad/MNAD.v2)
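A minimal inspection sketch, assuming `MNADv2.csv` has been downloaded from one of the sources above; the field names follow the "Dataset Fields" section and are not otherwise verified here:
```python
# Hypothetical usage sketch for MNAD.v2 (CSV layout assumed from the description above).
import pandas as pd

df = pd.read_csv("MNADv2.csv")
print(len(df))                        # expected: 1,069,489 articles
print(df.columns.tolist())            # Title, Body, Category, Source
print(df["Category"].value_counts())  # distribution over the 19 categories
```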
## Citation
If you use our data, please cite the following paper:
```bibtex
@inproceedings{MNAD2021,
  author    = {Mourad Jbene and Smail Tigani and Rachid Saadane and Abdellah Chehri},
  title     = {A Moroccan News Articles Dataset ({MNAD}) For Arabic Text Categorization},
  year      = {2021},
  publisher = {{IEEE}},
  booktitle = {2021 International Conference on Decision Aid Sciences and Application ({DASA})},
  doi       = {10.1109/dasa53625.2021.9682402},
  url       = {https://doi.org/10.1109/dasa53625.2021.9682402},
}
```
|
J-Mourad/MNAD.v1
|
[
"region:us"
] |
2023-05-16T10:43:58+00:00
|
{}
|
2023-05-16T11:23:40+00:00
|
b6ab21aa6fd4b466b88d6de69a86f94da9ef0101
|
# About the MNAD Dataset
The MNAD corpus is a collection of over **1 million Moroccan news articles** written in the modern Arabic language. These news articles have been gathered from 11 prominent electronic news sources. The dataset is made available to the academic community for research purposes, such as data mining (clustering, classification, etc.), information retrieval (ranking, search, etc.), and other non-commercial activities.
## Dataset Fields
- Title: The title of the article
- Body: The body of the article
- Category: The category of the article
- Source: The Electronic Newspaper source of the article
## About Version 1 of the Dataset (MNAD.v1)
Version 1 of the dataset comprises **418,563** articles classified into 19 categories. The data was collected from well-known electronic news sources, namely Akhbarona.ma, Hespress.ma, Hibapress.com, and Le360.com. The articles were stored in four separate CSV files, each corresponding to the news website source. Each CSV file contains three fields: Title, Body, and Category of the news article.
The dataset is rich in Arabic vocabulary, with approximately 906,125 unique words. It has been utilized as a benchmark in the research paper:
```"A Moroccan News Articles Dataset (MNAD) For Arabic Text Categorization". In 2021 International Conference on Decision Aid Sciences and Application (DASA).```
This dataset is available for download from the following sources:
- Kaggle Datasets : [MNADv1](https://www.kaggle.com/datasets/jmourad100/mnad-moroccan-news-articles-dataset)
- Huggingface Datasets: [MNADv1](https://huggingface.co/datasets/J-Mourad/MNAD.v1)
## About Version 2 of the Dataset (MNAD.v2)
Version 2 of the MNAD dataset includes an additional **653,901** articles, bringing the total number of articles to over 1 million (**1,069,489**), classified into the same 19 categories as in version 1. The new documents were collected from seven additional prominent Moroccan news websites, namely al3omk.com, medi1news.com, alayam24.com, anfaspress.com, alyaoum24.com, barlamane.com, and SnrtNews.com.
The newly collected articles have been merged with the articles from the previous version into a single CSV file named ```MNADv2.csv```. This file includes an additional column called "Source" to indicate the source of each news article.
Furthermore, MNAD.v2 incorporates improved pre-processing techniques and data-cleaning methods. These enhancements involve removing duplicates, eliminating multiple spaces, discarding rows with NaN values, replacing new lines with "\n", excluding very long and very short articles, and removing non-Arabic articles. These additions and improvements aim to enhance the usability and value of the MNAD dataset for researchers and practitioners in the field of Arabic text analysis.
This dataset is available for download from the following sources:
- Kaggle Datasets : [MNADv2](https://huggingface.co/datasets/J-Mourad/MNAD.v2)
- Huggingface Datasets: [MNADv2](https://huggingface.co/datasets/J-Mourad/MNAD.v2)
## Citation
If you use our data, please cite the following paper:
```bibtex
@inproceedings{MNAD2021,
  author    = {Mourad Jbene and Smail Tigani and Rachid Saadane and Abdellah Chehri},
  title     = {A Moroccan News Articles Dataset ({MNAD}) For Arabic Text Categorization},
  year      = {2021},
  publisher = {{IEEE}},
  booktitle = {2021 International Conference on Decision Aid Sciences and Application ({DASA})},
  doi       = {10.1109/dasa53625.2021.9682402},
  url       = {https://doi.org/10.1109/dasa53625.2021.9682402},
}
```
|
J-Mourad/MNAD.v2
|
[
"region:us"
] |
2023-05-16T10:53:19+00:00
|
{}
|
2023-05-16T11:22:21+00:00
|
0bd4e3f9f960e8884fa3b29867552f2e86d8a110
|
# Dataset Card for "VQAv2_minival_no_image"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/VQAv2_minival_no_image
|
[
"region:us"
] |
2023-05-16T11:04:54+00:00
|
{"dataset_info": {"features": [{"name": "question_type", "dtype": "string"}, {"name": "multiple_choice_answer", "dtype": "string"}, {"name": "answers", "sequence": "string"}, {"name": "answers_original", "list": [{"name": "answer", "dtype": "string"}, {"name": "answer_confidence", "dtype": "string"}, {"name": "answer_id", "dtype": "int64"}]}, {"name": "id_image", "dtype": "int64"}, {"name": "answer_type", "dtype": "string"}, {"name": "question_id", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "clip_tags_ViT_L_14", "sequence": "string"}, {"name": "blip_caption", "dtype": "string"}, {"name": "LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14", "sequence": "string"}, {"name": "DETA_detections_deta_swin_large_o365_coco_classes", "list": [{"name": "attribute", "dtype": "string"}, {"name": "box", "sequence": "float32"}, {"name": "label", "dtype": "string"}, {"name": "location", "dtype": "string"}, {"name": "ratio", "dtype": "float32"}, {"name": "size", "dtype": "string"}, {"name": "tag", "dtype": "string"}]}, {"name": "DETA_detections_deta_swin_large_o365_clip_ViT_L_14", "list": [{"name": "attribute", "dtype": "string"}, {"name": "box", "sequence": "float64"}, {"name": "label", "dtype": "string"}, {"name": "location", "dtype": "string"}, {"name": "ratio", "dtype": "float64"}, {"name": "size", "dtype": "string"}, {"name": "tag", "dtype": "string"}]}, {"name": "DETA_detections_deta_swin_large_o365_clip_ViT_L_14_blip_caption", "list": [{"name": "attribute", "dtype": "string"}, {"name": "box", "sequence": "float64"}, {"name": "caption", "dtype": "string"}, {"name": "label", "dtype": "string"}, {"name": "location", "dtype": "string"}, {"name": "ratio", "dtype": "float64"}, {"name": "size", "dtype": "string"}, {"name": "tag", "dtype": "string"}]}, {"name": "id", "dtype": "int64"}, {"name": "DETA_detections_deta_swin_large_o365_clip_ViT_L_14_blip_caption_caption_module", "list": [{"name": "attribute", "dtype": "string"}, {"name": "box", "sequence": "float64"}, {"name": "caption", "dtype": "string"}, {"name": "captions_module", "sequence": "string"}, {"name": "label", "dtype": "string"}, {"name": "location", "dtype": "string"}, {"name": "ratio", "dtype": "float64"}, {"name": "size", "dtype": "string"}, {"name": "tag", "dtype": "string"}]}, {"name": "DETA_detections_deta_swin_large_o365_clip_ViT_L_14_blip_caption_caption_module_without_filtering", "list": [{"name": "attribute", "dtype": "string"}, {"name": "box", "sequence": "float64"}, {"name": "caption", "dtype": "string"}, {"name": "captions_module", "sequence": "string"}, {"name": "label", "dtype": "string"}, {"name": "location", "dtype": "string"}, {"name": "ratio", "dtype": "float64"}, {"name": "size", "dtype": "string"}, {"name": "tag", "dtype": "string"}]}, {"name": "DETA_detections_deta_swin_large_o365_clip_ViT_L_14_blip_caption_caption_module_random", "list": [{"name": "attribute", "dtype": "string"}, {"name": "box", "sequence": "float64"}, {"name": "caption", "dtype": "string"}, {"name": "captions_module", "sequence": "string"}, {"name": "captions_module_filter", "sequence": "string"}, {"name": "label", "dtype": "string"}, {"name": "location", "dtype": "string"}, {"name": "ratio", "dtype": "float64"}, {"name": "size", "dtype": "string"}, {"name": "tag", "dtype": "string"}]}, {"name": "clip_tags_LAION_ViT_H_14_2B", "sequence": "string"}, {"name": "LLM_Description_gpt3_downstream_tasks_visual_genome_LAION-ViT-H-14-2B", "sequence": "string"}, {"name": "Attributes_ViT_L_14_descriptors_text_davinci_003_full", 
"sequence": "string"}, {"name": "clip_tags_ViT_L_14_wo_openai", "sequence": "string"}, {"name": "clip_tags_ViT_L_14_with_openai", "sequence": "string"}, {"name": "clip_tags_LAION_ViT_H_14_2B_wo_openai", "sequence": "string"}, {"name": "clip_tags_LAION_ViT_H_14_2B_with_openai", "sequence": "string"}, {"name": "clip_tags_LAION_ViT_bigG_14_2B_wo_openai", "sequence": "string"}, {"name": "clip_tags_LAION_ViT_bigG_14_2B_with_openai", "sequence": "string"}, {"name": "Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full", "sequence": "string"}, {"name": "Attributes_LAION_ViT_bigG_14_2B_descriptors_text_davinci_003_full", "sequence": "string"}, {"name": "clip_tags_ViT_B_16_with_openai", "sequence": "string"}], "splits": [{"name": "validation", "num_bytes": 1766679196, "num_examples": 25994}], "download_size": 340842185, "dataset_size": 1766679196}}
|
2023-05-16T11:05:13+00:00
|
3cf524bd91fd29498b4ccd60d13c51b385eef530
|
# Dataset Card for "VQAv2_minival_no_image_google_flan_t5_xl_mode_T_Q_rices_ns_25994"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CVasNLPExperiments/VQAv2_minival_no_image_google_flan_t5_xl_mode_T_Q_rices_ns_25994
|
[
"region:us"
] |
2023-05-16T11:17:42+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "true_label", "sequence": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0_clip_tags_LAION_ViT_H_14_2B_with_openai_Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full_DETA_detections_deta_swin_large_o365_coco_classes_caption_module_random_", "num_bytes": 3702142, "num_examples": 25994}], "download_size": 1326416, "dataset_size": 3702142}}
|
2023-05-16T11:17:44+00:00
|
dadd4c480f90ba399af0359226efca7967a243a2
|
# Dataset Card for "VQAv2_minival_no_image_google_flan_t5_xl_mode_T_A_Q_rices_ns_25994"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CVasNLPExperiments/VQAv2_minival_no_image_google_flan_t5_xl_mode_T_A_Q_rices_ns_25994
|
[
"region:us"
] |
2023-05-16T11:24:08+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "true_label", "sequence": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0_clip_tags_LAION_ViT_H_14_2B_with_openai_Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full_DETA_detections_deta_swin_large_o365_coco_classes_caption_module_random_", "num_bytes": 3716868, "num_examples": 25994}], "download_size": 1341254, "dataset_size": 3716868}}
|
2023-05-16T11:24:10+00:00
|
159848f4f3b6086dbdbf8bcabd69b1c5061acc91
|
# Dataset Card for Dataset Name
Will update soon!
|
Ryan1122/multiturn_cn_18k
|
[
"task_categories:conversational",
"size_categories:10K<n<100K",
"language:zh",
"license:cc-by-nc-4.0",
"multiturn",
"self-instruct",
"CN",
"region:us"
] |
2023-05-16T11:28:53+00:00
|
{"language": ["zh"], "license": "cc-by-nc-4.0", "size_categories": ["10K<n<100K"], "task_categories": ["conversational"], "tags": ["multiturn", "self-instruct", "CN"]}
|
2023-12-07T04:12:02+00:00
|
5a43a0c4637b927744551e07efb0aac3de1c902b
|
Lfr111/cv-data
|
[
"license:openrail",
"region:us"
] |
2023-05-16T11:36:15+00:00
|
{"license": "openrail"}
|
2023-05-16T11:36:16+00:00
|
|
0213a4622830a243929317bdced6ee3e339a0e8c
|
MacMister PST to OLM Converter is a quick, effective utility that batch-converts PST files to OLM, including emails, contacts, calendars, and other items, without data loss. The software is easy, reliable, and safe to use; anyone can operate it and perform the conversion with 100% accuracy. PST to OLM Converter also offers numerous features that let users convert PST files to OLM on Mac according to their requirements. The tool preserves the original structure of PST files during the conversion and will not deliver results that fall short of expectations. Moreover, PST to OLM Converter has a free demo with which users can try out the conversion process; they can also convert the first 25 files from PST to OLM free of cost. Know more - https://www.macmister.com/pst-to-olm/
|
macsoftwaresolutions/psttoolmconverter
|
[
"region:us"
] |
2023-05-16T11:49:01+00:00
|
{}
|
2023-07-13T12:06:08+00:00
|
0be529aaae159aaea1ea1817bcee032c7e8d9104
|
sidovic/squad_arabic
|
[
"license:unknown",
"region:us"
] |
2023-05-16T11:49:41+00:00
|
{"license": "unknown"}
|
2023-05-16T11:52:10+00:00
|
|
1c9ff5e372eef8a283612d5e1c80b03bdda9e934
|
# German ConceptNet Numberbatch Vectors
This repository contains the German word embeddings from ConceptNet Numberbatch, which were extracted from the multilingual version.
## License and Attribution
These vectors are distributed under the [Creative Commons Attribution-ShareAlike 4.0 International License (CC-BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/). In short, if you distribute a transformed or modified version of these vectors, you must release them under a compatible Share-Alike license and give due credit to Luminoso.
This data contains semantic vectors from ConceptNet Numberbatch, by Luminoso Technologies, Inc. You may redistribute or modify the data under the terms of the CC-By-SA 4.0 license.
If you build on this data, you should cite it. Here is a straightforward citation:
`Robyn Speer, Joshua Chin, and Catherine Havasi (2017). "ConceptNet 5.5: An Open Multilingual Graph of General Knowledge." In proceedings of AAAI 2017.`
The original file can be found at https://github.com/commonsense/conceptnet-numberbatch
|
hauptmann/conceptnet_numberbatch_19_08_ger
|
[
"region:us"
] |
2023-05-16T11:51:35+00:00
|
{}
|
2023-05-16T12:00:34+00:00
|
704208f0260a5ebc75a4e13bd5b022dbc0188543
|
# Dataset Card for "test-project-brand-story-gen-test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mewsakul/test-project-brand-story-gen-test
|
[
"region:us"
] |
2023-05-16T11:56:05+00:00
|
{"dataset_info": {"features": [{"name": "review", "dtype": "string"}, {"name": "Keyword", "dtype": "string"}, {"name": "Anger", "dtype": "float64"}, {"name": "Disgust", "dtype": "float64"}, {"name": "Fear", "dtype": "float64"}, {"name": "Joy", "dtype": "float64"}, {"name": "Neutral", "dtype": "float64"}, {"name": "Sadness", "dtype": "float64"}, {"name": "Surprise", "dtype": "float64"}, {"name": "review_length", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 38603.015384615384, "num_examples": 58}, {"name": "validation", "num_bytes": 4658.984615384616, "num_examples": 7}], "download_size": 48750, "dataset_size": 43262.0}}
|
2023-05-16T11:56:07+00:00
|
9af4753651b72206cab13dae72643f177c97b9a8
|
rmk123/dataset_1
|
[
"license:mit",
"region:us"
] |
2023-05-16T11:56:08+00:00
|
{"license": "mit"}
|
2023-05-16T11:56:58+00:00
|
|
f2810df2ac091f70fc8b20b351cf73e6c07e9ceb
|
kobkrit/rd-taxqa
|
[
"license:apache-2.0",
"region:us"
] |
2023-05-16T12:01:42+00:00
|
{"license": "apache-2.0"}
|
2023-05-16T12:01:43+00:00
|
|
1c787a1be8fb64f5c5395d1c977753c03f63eead
|
zeroshot/cybersecurity-corpus
|
[
"license:cc0-1.0",
"region:us"
] |
2023-05-16T12:09:07+00:00
|
{"license": "cc0-1.0"}
|
2023-05-16T12:09:40+00:00
|
|
231b5e5c31f2148ff7dab29a5a33d0957a1b1a49
|
What follows is research code. It is by no means optimized for speed, efficiency, or readability.
## Data loading, tokenizing and sharding
```python
import os
import numpy as np
import pandas as pd
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.decomposition import TruncatedSVD
from tqdm.notebook import tqdm
from openTSNE import TSNE
import datashader as ds
import colorcet as cc
from dask.distributed import Client
import dask.dataframe as dd
import dask_ml.feature_extraction.text
import dask.bag as db
from transformers import AutoTokenizer
from datasets import load_dataset
from datasets.utils.py_utils import convert_file_size_to_int
tokenizer = AutoTokenizer.from_pretrained(...)  # tokenizer checkpoint not specified in the original

def batch_tokenize(batch):
    return {'tokenized': [' '.join(e.tokens) for e in tokenizer(batch['text']).encodings]}  # "text" column hard-coded

# The original viz used a subset of the ROOTS Corpus.
# More info on the entire dataset here: https://huggingface.co/bigscience-data
# And here: https://arxiv.org/abs/2303.03915
dset = load_dataset(..., split="train")
dset = dset.map(batch_tokenize, batched=True, batch_size=64, num_proc=28)
dset_name = "roots_subset"
max_shard_size = convert_file_size_to_int('300MB')
dataset_nbytes = dset.data.nbytes
num_shards = int(dataset_nbytes / max_shard_size) + 1
num_shards = max(num_shards, 1)
print(f"Sharding into {num_shards} files.")
os.makedirs(f"{dset_name}/tokenized", exist_ok=True)
for shard_index in tqdm(range(num_shards)):
    shard = dset.shard(num_shards=num_shards, index=shard_index, contiguous=True)
    shard.to_parquet(f"{dset_name}/tokenized/tokenized-{shard_index:03d}.parquet")
```
## Embedding
```python
client = Client() # To keep track of dask computation
client
df = dd.read_parquet(f'{dset_name}/tokenized/')
# `vocab` (the fixed vocabulary passed below) is assumed to be built elsewhere; it is not defined in this snippet.
vect = dask_ml.feature_extraction.text.CountVectorizer(tokenizer=str.split,
                                                       token_pattern=None,
                                                       vocabulary=vocab)
tokenized_bag = df['tokenized'].to_bag()
X = vect.transform(tokenized_bag)
counts = X.compute()
client.shutdown()
tfidf_transformer = TfidfTransformer(sublinear_tf=True, norm="l2")
tfidf = tfidf_transformer.fit_transform(counts)
svd = TruncatedSVD(n_components=160)
X_svd = svd.fit_transform(tfidf)
tsne = TSNE(
    perplexity=30,  # not sure what param setting resulted in the plot
    n_jobs=28,
    random_state=42,
    verbose=True,
)
tsne_embedding = tsne.fit(X_svd)  # the original fits on X, but X_svd (the 160-d SVD projection) is the computed input
```
## Plotting
```python
df = pd.DataFrame(data=tsne_embedding, columns=['x','y'])
agg = ds.Canvas(plot_height=600, plot_width=600).points(df, 'x', 'y')
img = ds.tf.shade(agg, cmap=cc.fire, how='eq_hist')
ds.tf.set_background(img, "black")
```
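To persist the rendered image to disk rather than relying on inline notebook display, a small sketch using datashader's export helper (the output file name is a placeholder):
```python
# Optional export step: write the shaded image to disk as a PNG.
from datashader.utils import export_image

export_image(img, "roots_tsne", background="black", export_path=".")  # writes ./roots_tsne.png
```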

|
cakiki/roots-tsne-data
|
[
"license:apache-2.0",
"arxiv:2303.03915",
"region:us"
] |
2023-05-16T12:09:38+00:00
|
{"license": "apache-2.0", "dataset_info": {"features": [{"name": "x", "dtype": "float64"}, {"name": "y", "dtype": "float64"}, {"name": "language", "dtype": "string"}, {"name": "corpus", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 247037602, "num_examples": 5785741}], "download_size": 112131877, "dataset_size": 247037602}}
|
2023-06-06T10:30:46+00:00
|
51f033a86309a2b3972619b5a9429fa191e5fc5f
|
harineemira/dataset
|
[
"license:mit",
"region:us"
] |
2023-05-16T12:11:59+00:00
|
{"license": "mit"}
|
2023-05-16T13:50:51+00:00
|
|
6c22e1e0d5b5eacaa355b9317e4f622978824de6
|
# Dataset Card for "VQAv2_minival_no_image_google_flan_t5_xl_mode_A_Q_rices_ns_25994"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CVasNLPExperiments/VQAv2_minival_no_image_google_flan_t5_xl_mode_A_Q_rices_ns_25994
|
[
"region:us"
] |
2023-05-16T12:21:34+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "true_label", "sequence": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0_clip_tags_LAION_ViT_H_14_2B_with_openai_Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full_DETA_detections_deta_swin_large_o365_clip_ViT_L_14_blip_caption_caption_module_random_", "num_bytes": 3710369, "num_examples": 25994}], "download_size": 1335813, "dataset_size": 3710369}}
|
2023-05-16T12:21:37+00:00
|
b8610a61293671356b4feb335debb7639d9ff0a2
|
Neidhardt/testing-1
|
[
"license:cc",
"region:us"
] |
2023-05-16T12:22:15+00:00
|
{"license": "cc"}
|
2023-05-16T13:15:14+00:00
|
|
f3bd4d2255efb73ae90848d854b6a636b7de45dc
|
# German Fasttext Vectors
This repository contains the German word embeddings from fasttext.
## License and Attribution
The word vectors are distributed under the `Creative Commons Attribution-Share-Alike License 3.0`
## References
If you use these word vectors, please cite the following paper:
```
@inproceedings{grave2018learning,
title={Learning Word Vectors for 157 Languages},
author={Grave, Edouard and Bojanowski, Piotr and Gupta, Prakhar and Joulin, Armand and Mikolov, Tomas},
booktitle={Proceedings of the International Conference on Language Resources and Evaluation (LREC 2018)},
year={2018}
}
```
The original file can be found at https://fasttext.cc/docs/en/crawl-vectors.html
|
hauptmann/fasttext_ger
|
[
"language:de",
"license:cc-by-sa-3.0",
"region:us"
] |
2023-05-16T12:32:08+00:00
|
{"language": ["de"], "license": "cc-by-sa-3.0"}
|
2023-05-16T12:38:58+00:00
|
c27cbdc73394f03c91a96fb01546291ca103e613
|
# Dataset Card for "pixelsum_wiki"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
zuzannad1/pixelsum_wiki
|
[
"region:us"
] |
2023-05-16T12:39:49+00:00
|
{"dataset_info": {"features": [{"name": "example", "dtype": "string"}, {"name": "summary", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7401808572, "num_examples": 6458670}], "download_size": 4591048930, "dataset_size": 7401808572}}
|
2023-09-13T10:42:49+00:00
|
f06c91610a2dbcb42c6b7db27d3ac26db0f8adce
|
# Dataset Card for "VQAv2_minival_no_image_google_flan_t5_xl_mode_T_A_D_PNP_FILTER_Q_rices_ns_25994"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CVasNLPExperiments/VQAv2_minival_no_image_google_flan_t5_xl_mode_T_A_D_PNP_FILTER_Q_rices_ns_25994
|
[
"region:us"
] |
2023-05-16T13:01:13+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "true_label", "sequence": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0_clip_tags_LAION_ViT_H_14_2B_with_openai_Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full_DETA_detections_deta_swin_large_o365_clip_ViT_L_14_blip_caption_caption_module_random_", "num_bytes": 3687274, "num_examples": 25994}], "download_size": 1321705, "dataset_size": 3687274}}
|
2023-05-16T13:01:15+00:00
|
3a2473c99c0b867b1d568ef3a21bcb51c0eaccdf
|
# German GloVe Word Embeddings
This repository contains the German word embeddings from `deepset.ai`, which have been converted into a binary file to make them easily accessible with the gensim library.
# Source
The original files can be found at https://www.deepset.ai/german-word-embeddings.
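A minimal loading sketch with gensim; the file name used here is a placeholder and not the verified name of the binary in this repository:
```python
# Hypothetical loading sketch for the gensim binary described above.
from gensim.models import KeyedVectors

vectors = KeyedVectors.load("glove_ger.model")  # placeholder file name
print(vectors.most_similar("berlin", topn=5))   # nearest neighbours of a German word
```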
|
hauptmann/glove_ger
|
[
"language:de",
"license:unknown",
"region:us"
] |
2023-05-16T13:04:07+00:00
|
{"language": ["de"], "license": "unknown"}
|
2023-05-16T13:13:07+00:00
|
e0947516fbff80572774b6c431160e30f6cf8dd9
|
# Instruction Tuning with GPT 4 RedPajama-Chat
This dataset has been converted from the <a href="https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM" target="_new">Instruction-Tuning-with-GPT-4</a> dataset for the purpose of fine-tuning the <a href="https://huggingface.co/togethercomputer/RedPajama-INCITE-Chat-3B-v1" target="_new">RedPajama-INCITE-Chat-3B-v1</a> model.
## About Instruction-Tuning-with-GPT-4
English Instruction-Following Data generated by GPT-4 using Alpaca prompts for fine-tuning LLMs.
### Usage and License Notices
The data is intended and licensed for research use only. The dataset is CC BY NC 4.0 (allowing only non-commercial use) and models trained using the dataset should not be used outside of research purposes.
|
Fredithefish/Instruction-Tuning-with-GPT-4-RedPajama-Chat
|
[
"task_categories:question-answering",
"language:en",
"license:cc",
"region:us"
] |
2023-05-16T13:12:28+00:00
|
{"language": ["en"], "license": "cc", "task_categories": ["question-answering"]}
|
2023-05-17T10:31:57+00:00
|
ef9c2975613de61d254d039e81b8cd16f50ede66
|
TrainThenObtain-ai/Utra-mini-GPT-4
|
[
"license:cc",
"region:us"
] |
2023-05-16T13:21:42+00:00
|
{"license": "cc"}
|
2023-05-16T13:22:09+00:00
|
|
b47ca93b11218f49d406e6960c6b679ebdf04c5d
|
# Dataset Card for "github-issues"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dtldhjh/github-issues
|
[
"region:us"
] |
2023-05-16T13:32:26+00:00
|
{"dataset_info": {"features": [{"name": "url", "dtype": "string"}, {"name": "repository_url", "dtype": "string"}, {"name": "labels_url", "dtype": "string"}, {"name": "comments_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "number", "dtype": "int64"}, {"name": "title", "dtype": "string"}, {"name": "user", "dtype": "string"}, {"name": "labels", "dtype": "string"}, {"name": "state", "dtype": "string"}, {"name": "locked", "dtype": "bool"}, {"name": "assignee", "dtype": "string"}, {"name": "assignees", "dtype": "string"}, {"name": "milestone", "dtype": "string"}, {"name": "comments", "sequence": "string"}, {"name": "created_at", "dtype": "string"}, {"name": "updated_at", "dtype": "string"}, {"name": "closed_at", "dtype": "string"}, {"name": "author_association", "dtype": "string"}, {"name": "active_lock_reason", "dtype": "float64"}, {"name": "body", "dtype": "string"}, {"name": "reactions", "dtype": "string"}, {"name": "timeline_url", "dtype": "string"}, {"name": "performed_via_github_app", "dtype": "float64"}, {"name": "state_reason", "dtype": "string"}, {"name": "draft", "dtype": "bool"}, {"name": "pull_request", "dtype": "string"}, {"name": "is_pull_request", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 24390715, "num_examples": 5000}], "download_size": 6142822, "dataset_size": 24390715}}
|
2023-05-16T13:32:36+00:00
|
10868464f52fb6abf496fa5017bcef1daaec2347
|
# SuperTweetEval
# Dataset Card for "super_tweeteval"
### Dataset Summary
This is the official repository for SuperTweetEval, a unified benchmark of 12 heterogeneous NLP tasks.
More details on the tasks and an evaluation of language models can be found in the [reference paper](https://arxiv.org/abs/2310.14757), published in EMNLP 2023 (Findings).
### Data Splits
All tasks provide custom training, validation and test splits.
| **task** | **dataset** | **load dataset** | **description** | **number of instances** |
|----------------------------|----------------|------------------|------------------------------------|-------------------------|
| Topic Classification | TweetTopic | tweet_topic | multi-label classification | 4,585 / 573 / 1,679 |
| NER | TweetNER7 | tweet_ner7 | sequence labeling | 4,616 / 576 / 2,807 |
| Question Answering | TweetQA | tweet_qa | generation | 9,489 / 1,086 / 1,203 |
| Question Generation | TweetQG | tweet_qg | generation | 9,489 / 1,086 / 1,203 |
| Intimacy Analysis | TweetIntimacy | tweet_intimacy | regression on a single text | 1,191 / 396 / 396 |
| Tweet Similarity | TweetSIM | tweet_similarity | regression on two texts | 450 / 100 / 450 |
| Meaning Shift Detection | TempoWIC | tempo_wic | binary classification on two texts | 1,427 / 395 / 1,472 |
| Hate Speech Detection | TweetHate | tweet_hate | multi-class classification | 5,019 / 716 / 1,433 |
| Emoji Classification | TweetEmoji100 | tweet_emoji | multi-class classification | 50,000 / 5,000 / 50,000 |
| Sentiment Classification | TweetSentiment | tweet_sentiment | ABSA on a five-point scale | 26,632 / 4,000 / 12,379 |
| Named Entity Disambiguation | TweetNERD | tweet_nerd | binary classification | 20,164 / 4,100 / 20,075 |
| Emotion Classification | TweetEmotion | tweet_emotion | multi-label classification | 6,838 / 886 / 3,259 |
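A minimal usage sketch, assuming each task is exposed as a configuration under the "load dataset" name in the table above (split names are assumed to be train/validation/test):
```python
# Hypothetical usage sketch: load one SuperTweetEval task by its config name.
from datasets import load_dataset

tweet_topic = load_dataset("cardiffnlp/super_tweeteval", "tweet_topic")
print(tweet_topic)                      # available splits and their sizes
print(tweet_topic["train"][0]["text"])  # fields as listed under "Data Fields" below
```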
## Dataset Structure
### Data Fields
The data fields are unified among all splits.
In the following we present the information contained in each of the datasets.
#### tweet_topic
- `text`: a `string` feature.
- `gold_label_list`: a list of `string` feature.
- `date`: a `string` feature.
#### tweet_ner7
- `text`: a `string` feature.
- `text_tokenized`: a list of `string` feature.
- `gold_label_sequence`: a list of `string` feature.
- `date`: a `string` feature.
- `entities`: a list of `dictionary` feature containing `{"entity": "string", "type": "string"}`.
#### tweet_qa
- `text`: a `string` feature.
- `gold_label_str`: a `string` feature.
- `context`: a `string` feature.
#### tweet_qg
- `text`: a `string` feature.
- `gold_label_str`: a `string` feature.
- `context`: a `string` feature.
#### tweet_intimacy
- `text`: a `string` feature.
- `gold_score`: a `float` feature.
#### tweet_similarity
- `text_1`: a `string` feature.
- `text_2`: a `string` feature.
- `gold_score`: a `float` feature.
#### tempo_wic
- `gold_label_binary`: an `int` feature.
- `target`: a `string` feature.
- `text_1`: a `string` feature.
- `text_tokenized_1`: a list of `string` feature.
- `token_idx_1`: an `int` feature.
- `date_1`: a `string` feature.
- `text_2`: a `string` feature.
- `text_tokenized_2`: a list of `string` feature.
- `token_idx_2`: an `int` feature.
- `date_2`: a `string` feature.
#### tweet_hate
- `gold_label`: an `int` feature.
- `text`: a `string` feature.
#### tweet_emoji
- `gold_label`: an `int` feature.
- `text`: a `string` feature.
- `date`: a `string` feature.
#### tweet_sentiment
- `gold_label`: an `int` feature.
- `text`: a `string` feature.
- `target`: a `string` feature.
#### tweet_nerd
- `gold_label_binary`: an `int` feature.
- `target`: a `string` feature.
- `text`: a `string` feature.
- `definition`: a `string` feature.
- `text_start`: an `int` feature.
- `text_end`: an `int` feature.
- `date`: a `string` feature.
#### tweet_emotion
- `text`: a `string` feature.
- `gold_label_list`: a list of `string` feature.
## Evaluation metrics & Models
| **dataset** | **evaluation metric** | **gold label** | **model card** |
|-------------------|-------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------|
| TweetTopic | ```macro-F1``` | _arts\_&\_culture, business\_&\_entrepreneurs, celebrity\_&\_pop\_culture, <br />diaries\_&\_daily\_life, family, fashion\_&\_style, <br />film\_tv\_&\_video, fitness\_&\_health, food\_&\_dining, <br />gaming, learning\_&\_educational, music, <br />news\_&\_social\_concern, other\_hobbies, relationships, <br />science\_&\_technology, sports, travel\_&\_adventure, <br />youth\_&\_student\_life_ | [twitter-roberta-base-topic-latest (base)](https://huggingface.co/cardiffnlp/twitter-roberta-base-latest-tweet-topic) <br> [twitter-roberta-large-topic-latest (large)](https://huggingface.co/cardiffnlp/twitter-roberta-large-latest-tweet-topic) |
| TweetNER7 | ```macro-F1``` | _B-corporation, B-creative_work, B-event, <br />B-group, B-location, B-person, <br />B-product, I-corporation, I-creative_work, <br />I-event, I-group, I-location, <br />I-person, I-product, O_ | [twitter-roberta-base-ner7-latest (base)](https://huggingface.co/cardiffnlp/twitter-roberta-base-latest-tweet-ner7) <br> TBA |
| TweetQA | ```answer-F1``` | - | [flan-t5-small-tweet-qa (small)](https://huggingface.co/cardiffnlp/flan-t5-small-tweet-qa) <br> [flan-t5-base-tweet-qa (base)](https://huggingface.co/cardiffnlp/flan-t5-base-tweet-qa) |
| TweetQG | ```METEOR``` | - | [flan-t5-small-tweet-qg (small)](https://huggingface.co/cardiffnlp/flan-t5-small-tweet-qg) <br> [flan-t5-base-tweet-qg (base)](https://huggingface.co/cardiffnlp/flan-t5-base-tweet-qg) |
| TweetIntimacy | ```spearman correlation``` | _[1 - 5]_ | [twitter-roberta-base-intimacy-latest (base)](https://huggingface.co/cardiffnlp/twitter-roberta-base-latest-tweet-intimacy) <br> [twitter-roberta-large-intimacy-latest (large)](https://huggingface.co/cardiffnlp/twitter-roberta-large-latest-tweet-intimacy) |
| TweetSIM | ```spearman correlation``` | _[0 - 5]_ | [twitter-roberta-base-similarity-latest (base)](https://huggingface.co/cardiffnlp/twitter-roberta-base-latest-tweet-similarity) <br> [twitter-roberta-large-similarity-latest (large)](https://huggingface.co/cardiffnlp/twitter-roberta-large-latest-tweet-similarity) |
| TempoWIC | ```accuracy``` | _no, yes_ | [twitter-roberta-base-tempo-wic-latest (base)](https://huggingface.co/cardiffnlp/twitter-roberta-base-latest-tempo-wic) <br> [twitter-roberta-large-tempo-wic-latest (large)](https://huggingface.co/cardiffnlp/twitter-roberta-large-latest-tempo-wic) |
| TweetHate | ```combined-F1```<br /> ```(micro-F1 for hate/not-hate &```<br /> ``` macro-F1 for hate speech subclasses)``` | _hate_gender, hate_race, hate_sexuality, hate_religion, hate_origin, <br />hate_disability, hate_age, not_hate_ | [twitter-roberta-base-hate-latest-st (base)](https://huggingface.co/cardiffnlp/twitter-roberta-base-latest-tweet-hate) <br> [twitter-roberta-large-hate-latest (large)](https://huggingface.co/cardiffnlp/twitter-roberta-large-latest-tweet-hate) |
| TweetEmoji100 | ```accuracy at top 5``` | _Full emoji list: ./data/tweet_emoji/map.txt_ | [twitter-roberta-base-emoji-latest (base)](https://huggingface.co/cardiffnlp/twitter-roberta-base-latest-tweet-emoji) <br> [twitter-roberta-large-emoji-latest (large)](https://huggingface.co/cardiffnlp/twitter-roberta-large-latest-tweet-emoji) |
| TweetSentiment | ```1 - MAE^M``` <br /> ```(MAE^M : Macro Averaged Mean Absolute Error)``` | _'strongly negative' , 'negative', 'negative or neutral', <br /> 'positive', 'strongly positive'_ | [twitter-roberta-base-topic-sentiment-latest (base)](https://huggingface.co/cardiffnlp/twitter-roberta-base-latest-tweet-sentiment) <br> [twitter-roberta-large-topic-sentiment-latest (large)](https://huggingface.co/cardiffnlp/twitter-roberta-large-latest-tweet-sentiment) |
| TweetNERD | ```accuracy``` | _no, yes_ | [twitter-roberta-base-nerd-latest (base)](https://huggingface.co/cardiffnlp/twitter-roberta-base-latest-tweet-nerd) <br> [twitter-roberta-large-nerd-latest (large)](https://huggingface.co/cardiffnlp/twitter-roberta-large-latest-tweet-nerd) |
| TweetEmotion | ```macro-F1``` | _anger, anticipation, disgust, fear, joy, love, optimism, <br />pessimism, sadness, surprise, trust_ | [twitter-roberta-base-emotion-latest (base)](https://huggingface.co/cardiffnlp/twitter-roberta-base-latest-tweet-emotion) <br> [twitter-roberta-large-emotion-latest (large)](https://huggingface.co/cardiffnlp/twitter-roberta-large-latest-tweet-emotion) |
## Citation Information
### Main reference paper
Please cite the [reference paper](https://arxiv.org/abs/2310.14757) if you use this benchmark.
```bibtex
@inproceedings{antypas2023supertweeteval,
title={SuperTweetEval: A Challenging, Unified and Heterogeneous Benchmark for Social Media NLP Research},
author={Dimosthenis Antypas and Asahi Ushio and Francesco Barbieri and Leonardo Neves and Kiamehr Rezaee and Luis Espinosa-Anke and Jiaxin Pei and Jose Camacho-Collados},
booktitle={Findings of the Association for Computational Linguistics: EMNLP 2023},
year={2023}
}
```
### References of individual datasets
In addition to the main reference paper, please cite the individual task datasets included in SuperTweetEval if you use them.
- TweetTopic
```
@inproceedings{antypas-etal-2022-twitter,
title = "{T}witter Topic Classification",
author = "Antypas, Dimosthenis and
Ushio, Asahi and
Camacho-Collados, Jose and
Silva, Vitor and
Neves, Leonardo and
Barbieri, Francesco",
booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
month = oct,
year = "2022",
address = "Gyeongju, Republic of Korea",
publisher = "International Committee on Computational Linguistics",
url = "https://aclanthology.org/2022.coling-1.299",
pages = "3386--3400",
abstract = "Social media platforms host discussions about a wide variety of topics that arise everyday. Making sense of all the content and organising it into categories is an arduous task. A common way to deal with this issue is relying on topic modeling, but topics discovered using this technique are difficult to interpret and can differ from corpus to corpus. In this paper, we present a new task based on tweet topic classification and release two associated datasets. Given a wide range of topics covering the most important discussion points in social media, we provide training and testing data from recent time periods that can be used to evaluate tweet classification models. Moreover, we perform a quantitative evaluation and analysis of current general- and domain-specific language models on the task, which provide more insights on the challenges and nature of the task.",
}
```
- TweetNER7
```
@inproceedings{ushio-etal-2022-named,
title = "Named Entity Recognition in {T}witter: A Dataset and Analysis on Short-Term Temporal Shifts",
author = "Ushio, Asahi and
Barbieri, Francesco and
Sousa, Vitor and
Neves, Leonardo and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = nov,
year = "2022",
address = "Online only",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.aacl-main.25",
pages = "309--319",
abstract = "Recent progress in language model pre-training has led to important improvements in Named Entity Recognition (NER). Nonetheless, this progress has been mainly tested in well-formatted documents such as news, Wikipedia, or scientific articles. In social media the landscape is different, in which it adds another layer of complexity due to its noisy and dynamic nature. In this paper, we focus on NER in Twitter, one of the largest social media platforms, and construct a new NER dataset, TweetNER7, which contains seven entity types annotated over 11,382 tweets from September 2019 to August 2021. The dataset was constructed by carefully distributing the tweets over time and taking representative trends as a basis. Along with the dataset, we provide a set of language model baselines and perform an analysis on the language model performance on the task, especially analyzing the impact of different time periods. In particular, we focus on three important temporal aspects in our analysis: short-term degradation of NER models over time, strategies to fine-tune a language model over different periods, and self-labeling as an alternative to lack of recently-labeled data. TweetNER7 is released publicly (https://huggingface.co/datasets/tner/tweetner7) along with the models fine-tuned on it (NER models have been integrated into TweetNLP and can be found at https://github.com/asahi417/tner/tree/master/examples/tweetner7{\_}paper).",
}
```
- TweetQA
```
@inproceedings{xiong2019tweetqa,
title={TweetQA: A Social Media Focused Question Answering Dataset},
author={Xiong, Wenhan and Wu, Jiawei and Wang, Hong and Kulkarni, Vivek and Yu, Mo and Guo, Xiaoxiao and Chang, Shiyu and Wang, William Yang},
booktitle={Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics},
year={2019}
}
```
- TweetIntimacy
```
@misc{pei2023semeval,
title={SemEval 2023 Task 9: Multilingual Tweet Intimacy Analysis},
author={Jiaxin Pei and Vítor Silva and Maarten Bos and Yozon Liu and Leonardo Neves and David Jurgens and Francesco Barbieri},
year={2023},
eprint={2210.01108},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
- Tweet Similarity
```
N/A
```
- TempoWiC
```
@inproceedings{loureiro-etal-2022-tempowic,
title = "{T}empo{W}i{C}: An Evaluation Benchmark for Detecting Meaning Shift in Social Media",
author = "Loureiro, Daniel and
D{'}Souza, Aminette and
Muhajab, Areej Nasser and
White, Isabella A. and
Wong, Gabriel and
Espinosa-Anke, Luis and
Neves, Leonardo and
Barbieri, Francesco and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
month = oct,
year = "2022",
address = "Gyeongju, Republic of Korea",
publisher = "International Committee on Computational Linguistics",
url = "https://aclanthology.org/2022.coling-1.296",
pages = "3353--3359",
abstract = "Language evolves over time, and word meaning changes accordingly. This is especially true in social media, since its dynamic nature leads to faster semantic shifts, making it challenging for NLP models to deal with new content and trends. However, the number of datasets and models that specifically address the dynamic nature of these social platforms is scarce. To bridge this gap, we present TempoWiC, a new benchmark especially aimed at accelerating research in social media-based meaning shift. Our results show that TempoWiC is a challenging benchmark, even for recently-released language models specialized in social media.",
}
```
- TweetHate
```
@inproceedings{sachdeva-etal-2022-measuring,
title = "The Measuring Hate Speech Corpus: Leveraging Rasch Measurement Theory for Data Perspectivism",
author = "Sachdeva, Pratik and
Barreto, Renata and
Bacon, Geoff and
Sahn, Alexander and
von Vacano, Claudia and
Kennedy, Chris",
booktitle = "Proceedings of the 1st Workshop on Perspectivist Approaches to NLP @LREC2022",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.nlperspectives-1.11",
pages = "83--94",
abstract = "We introduce the Measuring Hate Speech corpus, a dataset created to measure hate speech while adjusting for annotators{'} perspectives. It consists of 50,070 social media comments spanning YouTube, Reddit, and Twitter, labeled by 11,143 annotators recruited from Amazon Mechanical Turk. Each observation includes 10 ordinal labels: sentiment, disrespect, insult, attacking/defending, humiliation, inferior/superior status, dehumanization, violence, genocide, and a 3-valued hate speech benchmark label. The labels are aggregated using faceted Rasch measurement theory (RMT) into a continuous score that measures each comment{'}s location on a hate speech spectrum. The annotation experimental design assigned comments to multiple annotators in order to yield a linked network, allowing annotator disagreement (perspective) to be statistically summarized. Annotators{'} labeling strictness was estimated during the RMT scaling, projecting their perspective onto a linear measure that was adjusted for the hate speech score. Models that incorporate this annotator perspective parameter as an auxiliary input can generate label- and score-level predictions conditional on annotator perspective. The corpus includes the identity group targets of each comment (8 groups, 42 subgroups) and annotator demographics (6 groups, 40 subgroups), facilitating analyses of interactions between annotator- and comment-level identities, i.e. identity-related annotator perspective.",
}
```
- TweetEmoji
```
N/A
```
- TweetSentiment
```
@inproceedings{rosenthal-etal-2017-semeval,
title = "{S}em{E}val-2017 Task 4: Sentiment Analysis in {T}witter",
author = "Rosenthal, Sara and
Farra, Noura and
Nakov, Preslav",
booktitle = "Proceedings of the 11th International Workshop on Semantic Evaluation ({S}em{E}val-2017)",
month = aug,
year = "2017",
address = "Vancouver, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/S17-2088",
doi = "10.18653/v1/S17-2088",
pages = "502--518",
abstract = "This paper describes the fifth year of the Sentiment Analysis in Twitter task. SemEval-2017 Task 4 continues with a rerun of the subtasks of SemEval-2016 Task 4, which include identifying the overall sentiment of the tweet, sentiment towards a topic with classification on a two-point and on a five-point ordinal scale, and quantification of the distribution of sentiment towards a topic across a number of tweets: again on a two-point and on a five-point ordinal scale. Compared to 2016, we made two changes: (i) we introduced a new language, Arabic, for all subtasks, and (ii) we made available information from the profiles of the Twitter users who posted the target tweets. The task continues to be very popular, with a total of 48 teams participating this year.",
}
```
- TweetNERD
```
@article{mishra2022tweetnerd,
title={TweetNERD--End to End Entity Linking Benchmark for Tweets},
author={Mishra, Shubhanshu and Saini, Aman and Makki, Raheleh and Mehta, Sneha and Haghighi, Aria and Mollahosseini, Ali},
journal={arXiv preprint arXiv:2210.08129},
year={2022}
}
```
- TweetEmotion
```
@inproceedings{mohammad-etal-2018-semeval,
title = "{S}em{E}val-2018 Task 1: Affect in Tweets",
author = "Mohammad, Saif and
Bravo-Marquez, Felipe and
Salameh, Mohammad and
Kiritchenko, Svetlana",
booktitle = "Proceedings of the 12th International Workshop on Semantic Evaluation",
month = jun,
year = "2018",
address = "New Orleans, Louisiana",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/S18-1001",
doi = "10.18653/v1/S18-1001",
pages = "1--17",
abstract = "We present the SemEval-2018 Task 1: Affect in Tweets, which includes an array of subtasks on inferring the affectual state of a person from their tweet. For each task, we created labeled data from English, Arabic, and Spanish tweets. The individual tasks are: 1. emotion intensity regression, 2. emotion intensity ordinal classification, 3. valence (sentiment) regression, 4. valence ordinal classification, and 5. emotion classification. Seventy-five teams (about 200 team members) participated in the shared task. We summarize the methods, resources, and tools used by the participating teams, with a focus on the techniques and resources that are particularly useful. We also analyze systems for consistent bias towards a particular race or gender. The data is made freely available to further improve our understanding of how people convey emotions through language.",
}
```
|
cardiffnlp/super_tweeteval
|
[
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"task_categories:other",
"task_ids:topic-classification",
"task_ids:named-entity-recognition",
"task_ids:abstractive-qa",
"annotations_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:n<50K",
"source_datasets:extended|other",
"language:en",
"license:unknown",
"super_tweet_eval",
"tweet_eval",
"natural language understanding",
"arxiv:2310.14757",
"arxiv:2210.01108",
"region:us"
] |
2023-05-16T13:33:16+00:00
|
{"annotations_creators": ["expert-generated"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["n<50K"], "source_datasets": ["extended|other"], "task_categories": ["text-classification", "token-classification", "question-answering", "other"], "task_ids": ["topic-classification", "named-entity-recognition", "abstractive-qa"], "pretty_name": "SuperTweetEval", "tags": ["super_tweet_eval", "tweet_eval", "natural language understanding"]}
|
2023-11-21T06:18:42+00:00
|
340dc8770015e61cc52c61a60060ec02c3b1bc2d
|
# Dataset Card for "VQAv2_minival_no_image_google_flan_t5_xl_mode_D_PNP_FILTER_Q_rices_ns_25994"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CVasNLPExperiments/VQAv2_minival_no_image_google_flan_t5_xl_mode_D_PNP_FILTER_Q_rices_ns_25994
|
[
"region:us"
] |
2023-05-16T13:38:24+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "true_label", "sequence": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0_clip_tags_LAION_ViT_H_14_2B_with_openai_Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full_DETA_detections_deta_swin_large_o365_clip_ViT_L_14_blip_caption_caption_module_random_", "num_bytes": 3681015, "num_examples": 25994}], "download_size": 1314725, "dataset_size": 3681015}}
|
2023-05-16T13:38:27+00:00
|
ff4d1c99bb5897000ce199e4af2a326323eab88c
|
# Dataset Card for "VQAv2_minival_no_image_google_flan_t5_xl_mode_C_Q_rices_ns_25994"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CVasNLPExperiments/VQAv2_minival_no_image_google_flan_t5_xl_mode_C_Q_rices_ns_25994
|
[
"region:us"
] |
2023-05-16T13:41:54+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "true_label", "sequence": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0_clip_tags_LAION_ViT_H_14_2B_with_openai_Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full_DETA_detections_deta_swin_large_o365_clip_ViT_L_14_blip_caption_caption_module_random_", "num_bytes": 3695299, "num_examples": 25994}], "download_size": 1307097, "dataset_size": 3695299}}
|
2023-05-16T13:41:56+00:00
|
124f7f12edfa0f0a16751f5b3af5d9a83a454a42
|
OOPPEENN/Oyama_Mahiro
|
[
"license:gpl-3.0",
"region:us"
] |
2023-05-16T13:46:05+00:00
|
{"license": "gpl-3.0"}
|
2023-05-16T13:48:31+00:00
|
|
a60d349d32bc2da3e9279d81a8957fd4d931af32
|
One of the datasets serving https://huggingface.co/TMZN/ChatGLM-wyw.
# ChatGLM-wyw
A ChatGLM that has read Classical Chinese.
# Origin
On May 16, 2023, after long talk of having the AI read Classical Chinese, work officially began.<br>
# Acknowledgements
All-in-one package (including the chatglm model): link: https://pan.baidu.com/s/13GePNuh8ZP_DkMVRf5sHqw?pwd=2d2z
All-in-one package (without the model): link: https://pan.baidu.com/s/1lMfG34jerHO7aFjfdKTGUw?pwd=6y7j
Dataset creation tool: https://github.com/huang1332/finetune_dataset_maker
Model fine-tuning tool: https://github.com/mymusise/ChatGLM-Tuning
Official ChatGLM repository: https://github.com/THUDM/ChatGLM-6B
|
TMZN/lunyu
|
[
"task_categories:question-answering",
"size_categories:1K<n<10K",
"language:zh",
"license:gpl-3.0",
"region:us"
] |
2023-05-16T14:05:49+00:00
|
{"language": ["zh"], "license": "gpl-3.0", "size_categories": ["1K<n<10K"], "task_categories": ["question-answering"], "pretty_name": "lunyu"}
|
2023-05-17T06:38:09+00:00
|
26f79c5003864a182bd742b38365c9a8aea094a8
|
# Dataset Card for "xwikis-augmented-processed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
StivenLancheros/xwikis-augmented-processed
|
[
"region:us"
] |
2023-05-16T14:06:57+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "source_language", "dtype": "string"}, {"name": "target_language", "dtype": "string"}, {"name": "length", "dtype": "int64"}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 19711292439, "num_examples": 1448676}, {"name": "validation", "num_bytes": 504467120, "num_examples": 53554}, {"name": "test", "num_bytes": 312940355, "num_examples": 32289}], "download_size": 0, "dataset_size": 20528699914}}
|
2023-05-17T09:07:11+00:00
|
f1dce0cc16ed563e72768a5f33a59f0af4ac8324
|
# Dataset Card for "VQAv2_minival_no_image_google_flan_t5_xl_mode_D_PNP_FILTER_C_Q_rices_ns_25994"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CVasNLPExperiments/VQAv2_minival_no_image_google_flan_t5_xl_mode_D_PNP_FILTER_C_Q_rices_ns_25994
|
[
"region:us"
] |
2023-05-16T14:17:02+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "true_label", "sequence": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0_clip_tags_LAION_ViT_H_14_2B_with_openai_Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full_DETA_detections_deta_swin_large_o365_clip_ViT_L_14_blip_caption_caption_module_random_", "num_bytes": 3684225, "num_examples": 25994}], "download_size": 1310320, "dataset_size": 3684225}}
|
2023-05-16T14:17:06+00:00
|
5ae2158291aec4e412ada06a01860323abded81d
|
# Dataset Card for "SMM2-levels-final"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
valashir/SMM2-levels-final
|
[
"region:us"
] |
2023-05-16T14:18:13+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "level", "sequence": {"sequence": {"sequence": "uint8"}}}, {"name": "text", "dtype": "string"}, {"name": "text-baseline", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 18563428345.0, "num_examples": 202096}, {"name": "val", "num_bytes": 186900123.0, "num_examples": 2048}], "download_size": 1687277292, "dataset_size": 18750328468.0}}
|
2023-05-18T04:53:40+00:00
|
923197de7fc6cc02f1e541a863e11e3e204f5a9b
|
# Dataset Card for "fullalbumcovers"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ramgus/fullalbumcovers
|
[
"region:us"
] |
2023-05-16T14:20:58+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "danceability", "dtype": "float64"}, {"name": "energy", "dtype": "float64"}, {"name": "key", "dtype": "int64"}, {"name": "loudness", "dtype": "float64"}, {"name": "mode", "dtype": "int64"}, {"name": "speechiness", "dtype": "float64"}, {"name": "acousticness", "dtype": "float64"}, {"name": "instrumentalness", "dtype": "float64"}, {"name": "liveness", "dtype": "float64"}, {"name": "valence", "dtype": "float64"}, {"name": "tempo", "dtype": "float64"}, {"name": "type", "dtype": "string"}, {"name": "uri", "dtype": "string"}, {"name": "track_href", "dtype": "string"}, {"name": "analysis_url", "dtype": "string"}, {"name": "duration_ms", "dtype": "int64"}, {"name": "time_signature", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 118456831.258, "num_examples": 1181}], "download_size": 92490662, "dataset_size": 118456831.258}}
|
2023-05-16T14:24:44+00:00
|
b8ad385d1b169325cb3193b98defb3c4d0e5a1a1
|
# Dataset Card for "SMM2-levels-all"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
valashir/SMM2-levels-all
|
[
"region:us"
] |
2023-05-16T14:27:18+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "level", "sequence": {"sequence": {"sequence": "uint8"}}}, {"name": "text", "dtype": "string"}, {"name": "text-baseline", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 30754342973, "num_examples": 202096}, {"name": "val", "num_bytes": 308874924, "num_examples": 2048}], "download_size": 271196710, "dataset_size": 31063217897}}
|
2023-05-18T08:07:53+00:00
|
369e78ed2b42ff37362e5e77ea6c30f7cc06604a
|
# Dataset Card for "VQAv2_minival_no_image_google_flan_t5_xl_mode_A_D_PNP_FILTER_C_Q_rices_ns_25994"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CVasNLPExperiments/VQAv2_minival_no_image_google_flan_t5_xl_mode_A_D_PNP_FILTER_C_Q_rices_ns_25994
|
[
"region:us"
] |
2023-05-16T14:58:08+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "true_label", "sequence": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0_clip_tags_LAION_ViT_H_14_2B_with_openai_Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full_DETA_detections_deta_swin_large_o365_clip_ViT_L_14_blip_caption_caption_module_random_", "num_bytes": 3692199, "num_examples": 25994}], "download_size": 1322341, "dataset_size": 3692199}}
|
2023-05-16T14:58:11+00:00
|
df596e4fd1fa03c3c20950963fbd64237175bd20
|
# Dataset Card for "test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
narizhny/test
|
[
"task_categories:translation",
"language:en",
"bbb",
"region:us"
] |
2023-05-16T15:01:50+00:00
|
{"language": ["en"], "task_categories": ["translation", "mycategory123"], "dataset_info": {"features": [{"name": "Name", "dtype": "string"}, {"name": "Surname", "dtype": "string"}, {"name": "Address", "dtype": "string"}, {"name": "City", "dtype": "string"}, {"name": "State", "dtype": "string"}, {"name": "Postcode", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 413, "num_examples": 6}], "download_size": 3258, "dataset_size": 413}, "tags": ["bbb", "bbb"]}
|
2023-09-22T10:40:59+00:00
|
db1956220d387fd4c184335d815a70c0542388ce
|
**[Dataset Details]** This dataset is constructed by combining [LLaVA Visual Instruct 150K](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K) and the [dataset](https://github.com/Vision-CAIR/MiniGPT-4/blob/main/dataset/README_2_STAGE.md) released by MiniGPT-4.
**[License]** Attribution-NonCommercial 4.0 International. It should abide by the policy of OpenAI: [https://openai.com/policies/terms-of-use](https://openai.com/policies/terms-of-use)
## Intended use
**Primary intended uses**: The primary use of this dataset is research on large multimodal models and chatbots.
**Primary intended users**: The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
|
openllmplayground/pandagpt_visual_instruction_dataset
|
[
"license:cc-by-nc-sa-4.0",
"region:us"
] |
2023-05-16T15:10:43+00:00
|
{"license": "cc-by-nc-sa-4.0"}
|
2023-05-23T14:21:35+00:00
|
f574a95bec07b89f224fd2e15b172e25f9190d7d
|
# Dataset Card for "VQAv2_minival_no_image_google_flan_t5_xl_mode_T_D_PNP_FILTER_C_Q_rices_ns_25994"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CVasNLPExperiments/VQAv2_minival_no_image_google_flan_t5_xl_mode_T_D_PNP_FILTER_C_Q_rices_ns_25994
|
[
"region:us"
] |
2023-05-16T15:34:42+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "true_label", "sequence": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0_clip_tags_LAION_ViT_H_14_2B_with_openai_Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full_DETA_detections_deta_swin_large_o365_clip_ViT_L_14_blip_caption_caption_module_random_", "num_bytes": 3685479, "num_examples": 25994}], "download_size": 1317875, "dataset_size": 3685479}}
|
2023-05-16T15:34:45+00:00
|
236a5cb4106fc96a9dca73e3e0ed7c6f6afeb69c
|
# Dataset Card for "OIDyyyyoisfnmsfhsognsdhiogosdnoghros"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Axel578/OIDyyyyoisfnmsfhsognsdhiogosdnoghros
|
[
"region:us"
] |
2023-05-16T15:48:55+00:00
|
{"dataset_info": {"features": [{"name": "image", "sequence": {"sequence": {"sequence": "uint8"}}}, {"name": "label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3016874623, "num_examples": 1413831}], "download_size": 437484293, "dataset_size": 3016874623}}
|
2023-05-16T15:54:04+00:00
|
edfe629d72325b730ac41e2bc7ff5d9e27151a37
|
RandomCatLover/logs_for_demo_nlp
|
[
"license:apache-2.0",
"region:us"
] |
2023-05-16T15:58:32+00:00
|
{"license": "apache-2.0"}
|
2023-05-16T16:14:00+00:00
|
|
9f62f4a9788f030e52fe85a816b338f507c48187
|
# Dataset Card for "deberta-v3-base-injection-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
JasperLS/prompt-injections
|
[
"region:us"
] |
2023-05-16T16:16:15+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 71720, "num_examples": 546}, {"name": "test", "num_bytes": 15981, "num_examples": 116}], "download_size": 51215, "dataset_size": 87701}}
|
2023-05-16T16:16:21+00:00
|
6de0c073544a40b03da0d95019ab328d9e4d9331
|
# Dataset Card for "flores200_devtest_mt5-600m-flores200-scaffold"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
hlillemark/flores200_devtest_mt5-600m-flores200-scaffold
|
[
"region:us"
] |
2023-05-16T16:27:17+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int32"}, {"name": "source_lang", "dtype": "string"}, {"name": "target_lang", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "target", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "chrf_unreduced", "dtype": "string"}], "splits": [{"name": "devtest", "num_bytes": 747261214, "num_examples": 1000000}], "download_size": 524909314, "dataset_size": 747261214}}
|
2023-05-17T01:33:37+00:00
|
6ce49fbc542ba99502e730f76a7ac89385a8273f
|
# Dataset Card for "b5848c29"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/b5848c29
|
[
"region:us"
] |
2023-05-16T16:31:07+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 176, "num_examples": 10}], "download_size": 1337, "dataset_size": 176}}
|
2023-05-16T16:31:08+00:00
|
8387a7d5ebae5569110d82f1aedc2f7139103adf
|
# Dataset Card for "a343deba"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/a343deba
|
[
"region:us"
] |
2023-05-16T16:31:08+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 176, "num_examples": 10}], "download_size": 1337, "dataset_size": 176}}
|
2023-05-16T16:31:09+00:00
|
ae9012b74c6d85ea2d90ddff07e3ff51ea2d404a
|
# AutoTrain Dataset for project: doodles-30
## Dataset Description
This dataset has been automatically processed by AutoTrain for project doodles-30.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<256x256 RGB PIL image>",
"target": 1
},
{
"image": "<256x256 RGB PIL image>",
"target": 3
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['ant', 'bear', 'bee', 'bird', 'cat', 'dog', 'dolphin', 'elephant', 'giraffe', 'horse', 'lion', 'mosquito', 'tiger', 'whale'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 336 |
| valid | 84 |
|
albanc/autotrain-data-doodles-30
|
[
"task_categories:image-classification",
"license:openrail",
"region:us"
] |
2023-05-16T16:48:29+00:00
|
{"license": "openrail", "task_categories": ["image-classification"]}
|
2023-05-16T18:03:35+00:00
|
532e041149ce8b1a7e5e7cd7e4487102bbb5bf78
|
# Dataset Card for "flores200_devtest_translation_pairs_mt5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
hlillemark/flores200_devtest_translation_pairs_mt5
|
[
"region:us"
] |
2023-05-16T16:54:17+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int32"}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "devtest", "num_bytes": 32718326669, "num_examples": 41908944}], "download_size": 8951378170, "dataset_size": 32718326669}}
|
2023-05-16T17:36:07+00:00
|
3eeb2fc0f1511b474e5ffd5dfdb8777f4af15c86
|
# Dataset Card for "preproc-fashion-products"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
agestau/preproc-fashion-products
|
[
"region:us"
] |
2023-05-16T16:58:58+00:00
|
{"dataset_info": {"features": [{"name": "subCategory", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 464236487.875, "num_examples": 36145}], "download_size": 223972645, "dataset_size": 464236487.875}}
|
2023-05-16T17:01:07+00:00
|
098452473699e0e9643d9ee5d9927b97ab2bba8f
|
cimadure/snow_removal_transactions_in_montreal
|
[
"license:cc-by-4.0",
"region:us"
] |
2023-05-16T17:32:53+00:00
|
{"license": "cc-by-4.0"}
|
2023-05-16T17:40:02+00:00
|
|
f95ee99905dbd235a6708a7fc76e2323b270b0d5
|
# Dataset Card for "celeb-identities"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
escottyp/celeb-identities
|
[
"region:us"
] |
2023-05-16T17:47:05+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "christine_ko", "1": "hunter_schafer", "2": "jean_bertrand_aristide", "3": "malala", "4": "paul_farmer", "5": "sho_madjozi", "6": "steph_curry"}}}}], "splits": [{"name": "train", "num_bytes": 2890692.0, "num_examples": 22}], "download_size": 2892812, "dataset_size": 2890692.0}}
|
2023-05-16T17:47:11+00:00
|
8ea5e3d2f33718645b565b3d8cc9a57574590616
|
# Dataset Card for "english_to_pirate"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
vincha77/english_to_pirate
|
[
"region:us"
] |
2023-05-16T17:52:51+00:00
|
{"dataset_info": {"features": [{"name": "english", "dtype": "string"}, {"name": "pirate", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 533.6, "num_examples": 8}, {"name": "test", "num_bytes": 66.7, "num_examples": 1}, {"name": "valid", "num_bytes": 66.7, "num_examples": 1}], "download_size": 5162, "dataset_size": 667.0000000000001}}
|
2023-05-16T17:52:55+00:00
|
0da0bc19671cf004c5fa0427230fcb35ff2b478f
|
hbattu/huberman-youtube-metadata
|
[
"license:mit",
"region:us"
] |
2023-05-16T17:58:12+00:00
|
{"license": "mit"}
|
2023-05-16T17:58:29+00:00
|
|
8f02fd0eeaf32ded54aa0777e047191910cdd258
|
# Dataset Card for "wikipedia_20230501_fi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Finnish-NLP/wikipedia_20230501_fi
|
[
"region:us"
] |
2023-05-16T18:09:10+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1121312097, "num_examples": 551779}], "download_size": 665587606, "dataset_size": 1121312097}}
|
2023-05-16T18:09:49+00:00
|
467dcdd2cd31f9b5e8625491f3bdf7af90943a8d
|
# Dataset Card for AttrScore
- Repository: https://github.com/OSU-NLP-Group/AttrScore
- Paper: [Automatic Evaluation of Attribution by Large Language Models](https://arxiv.org/pdf/2305.06311.pdf)
- Point of Contact: [Xiang Yue](mailto:[email protected])
### Citation Information
```bib
@article{yue2023automatic,
title={Automatic Evaluation of Attribution by Large Language Models},
author={Yue, Xiang and Wang, Boshi and Zhang, Kai and Chen, Ziru and Su, Yu and Sun, Huan},
journal={arXiv preprint arXiv:2305.06311},
year={2023}
}
```
### What's New?
In the current version 0.2, we fixed some wrongly annotated labels in the AttrEval-GenSearch dataset. (Commit: [4da294f](https://huggingface.co/datasets/osunlp/AttrScore/commit/4da294f5e488086492e117b405fc8ea95717ec3b))
### Dataset Summary
A recent focus of large language model (LLM) development, as exemplified by generative search engines, is to incorporate external references to generate and support its claims. However, evaluating the attribution, i.e., verifying whether the generated statement is indeed fully supported by the cited reference, remains an open problem.
We construct this dataset, which contains both training and test data for the evaluation of attribution. The training data are repurposed from related tasks, such as question answering, fact-checking, natural language inference, and summarization. The test data contain a set simulated from QA datasets and a set manually curated from a generative search engine, New Bing.
## Dataset Structure
### Data Instances
{
"query": "",
"answer": "Bastedo cared for all the animals that inhabit the earth.",
"reference": "Alexandra Lendon Bastedo (9 March 1946 - 12 January 2014) was a British actress, best known for her role as secret agent Sharron Macready in the 1968 British espionage/science fiction adventure series \"The Champions\". She has been cited as a sex symbol of the 1960s and 1970s. Bastedo was a vegetarian and animal welfare advocate.",
"label": "Extrapolatory",
"dataset": "anli"
}
{
"query": "The persian gulf war began when iraq invaded what country?",
"answer": "The Persian Gulf War began when Iraq invaded Kuwait.",
"reference": "First Iraq War or Iraq War, before the term \"Iraq War\" became identified instead with the 2003 Iraq War. The Iraqi Army's occupation of Kuwait that began 2 August 1990 was met with international condemnation and brought immediate economic sanctions against Iraq by members of the UN Security Council. Together with the UK's prime minister Margaret Thatcher - who had resisted the invasion by Argentina of the Falkland Islands a decade earlier - George H. W. Bush deployed US forces into Saudi Arabia, and urged other countries to send their own forces to the scene. An array of nations joined the coalition, forming the",
"label": "Attributable",
"dataset": "NaturalQuestions"
}
### Data Fields
- "query": query (may be empty)
- "answer": answer to the query
- "reference": a document or a paragraph
- "label": whether the reference can support the answer to the query ("attributable", "extrapolatory", "contradictory")
- "dataset": the original dataset of the data instance
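A minimal loading sketch (an illustration, not part of the original card; it assumes the Hugging Face `datasets` library, and the configuration and split names are not documented here, so they are listed programmatically first):
```python
from datasets import get_dataset_config_names, load_dataset

# The repository hosts several subsets (training sets repurposed from related
# tasks plus the AttrEval test sets); list them before picking one.
configs = get_dataset_config_names("osunlp/AttrScore")
print(configs)

# Load one subset and inspect a single example's fields.
subset = load_dataset("osunlp/AttrScore", configs[0])
split = next(iter(subset.values()))
example = split[0]
print(example["query"], example["answer"], example["label"], sep="\n")
```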
|
osunlp/AttrScore
|
[
"task_categories:text-classification",
"size_categories:100K<n<1M",
"language:en",
"license:apache-2.0",
"arxiv:2305.06311",
"region:us"
] |
2023-05-16T18:09:52+00:00
|
{"language": ["en"], "license": "apache-2.0", "size_categories": ["100K<n<1M"], "task_categories": ["text-classification"], "pretty_name": "AttrScore"}
|
2023-06-29T00:56:48+00:00
|
d28c8863c8a56c04ba4e3927cbf005e5dcb7b6b1
|
dharcness/helltaker
|
[
"license:afl-3.0",
"region:us"
] |
2023-05-16T18:44:46+00:00
|
{"license": "afl-3.0"}
|
2023-05-16T19:36:08+00:00
|
|
9f5f6eac1e788e28d12babc03db941c2d3bfa942
|
# ShareGPT-Processed
The [RyokoAI/ShareGPT52K](https://huggingface.co/datasets/RyokoAI/ShareGPT52K) dataset, converted to Markdown and labeled with the language used.
## Acknowledgements
* [vinta/pangu.js](https://github.com/vinta/pangu.js) — To insert whitespace between CJK (Chinese, Japanese, Korean) and half-width characters (alphabetical letters, numerical digits and symbols).
* [matthewwithanm/python-markdownify](https://github.com/matthewwithanm/python-markdownify) — Provides a starting point to convert HTML to Markdown.
* [BYVoid/OpenCC](https://github.com/BYVoid/OpenCC) — Conversions between Traditional Chinese and Simplified Chinese.
* [aboSamoor/polyglot](https://github.com/aboSamoor/polyglot) — For language detection.
* [RyokoAI/ShareGPT52K](https://huggingface.co/datasets/RyokoAI/ShareGPT52K) — The data source.
## Data Structure
Every row in this dataset is a conversation session, consisting of:
* `id` — An ID.
* `lang` — The language code of the conversation, detected with `polyglot`.
* `conversations` — A list of messages, with each message consisting of:
* `from` — `human` or `gpt`.
* `value` — The original message content. HTML.
* `markdown` — The message converted into Markdown format.
* `opencc_converted_markdown` — The Markdown-format message converted into Traditional Chinese using `opencc`. Will be `null` for non-Chinese conversations.
Example:
```json
{
"id": "AZirHBD",
"conversations": [
{
"from": "human",
"markdown": "請問英文過去完成式的用法?",
"opencc_converted_markdown": "請問英文過去完成式的用法?",
"value": "請問英文過去完成式的用法?"
},
{
"from": "gpt",
"markdown": "英文的過去完成式是用來表示一個已經發生的動作或事件,發生在過去的某個時間,並且在這個時間之前,也發生了另一個動作或事件。在語法上,過去完成式是由助動詞 \"had\" 和過去分詞形式的動詞組成。下面是一些常見的用法:\n\n1. 表示一個已經完成的動作在另一個過去的時間點之前發生。\n\n * e.g. By the time I arrived, she had already left.\n2. 表示在過去某個時間之前就已經完成的動作。\n\n * e.g. I had studied English for five years before I moved to the United States.\n3. 表示兩個已經完成的動作中,其中一個發生在另一個之前。\n\n * e.g. After I had finished my homework, I watched TV.\n4. 用來描述一個條件或假設在過去的時間點之前已經發生。\n\n * e.g. If she had known about the party, she would have come.\n5. 用來描述過去已經完成的行動對現在的影響。\n\n * e.g. He had worked as a lawyer for many years, so he knew a lot about the law.\n\n在使用過去完成式時,需要注意時態的一致性和上下文的清晰性,以確保表達的意思清晰明確。",
"opencc_converted_markdown": "英文的過去完成式是用來表示一個已經發生的動作或事件,發生在過去的某個時間,並且在這個時間之前,也發生了另一個動作或事件。在語法上,過去完成式是由助動詞 \"had\" 和過去分詞形式的動片語成。下面是一些常見的用法:\n\n1. 表示一個已經完成的動作在另一個過去的時間點之前發生。\n\n * e.g. By the time I arrived, she had already left.\n2. 表示在過去某個時間之前就已經完成的動作。\n\n * e.g. I had studied English for five years before I moved to the United States.\n3. 表示兩個已經完成的動作中,其中一個發生在另一個之前。\n\n * e.g. After I had finished my homework, I watched TV.\n4. 用來描述一個條件或假設在過去的時間點之前已經發生。\n\n * e.g. If she had known about the party, she would have come.\n5. 用來描述過去已經完成的行動對現在的影響。\n\n * e.g. He had worked as a lawyer for many years, so he knew a lot about the law.\n\n在使用過去完成式時,需要注意時態的一致性和上下文的清晰性,以確保表達的意思清晰明確。",
"value": "<div class=\"markdown prose w-full break-words dark:prose-invert light\"><p>英文的過去完成式是用來表示一個已經發生的動作或事件,發生在過去的某個時間,並且在這個時間之前,也發生了另一個動作或事件。在語法上,過去完成式是由助動詞 \"had\" 和過去分詞形式的動詞組成。下面是一些常見的用法:</p><ol><li><p>表示一個已經完成的動作在另一個過去的時間點之前發生。</p><ul><li>e.g. By the time I arrived, she had already left.</li></ul></li><li><p>表示在過去某個時間之前就已經完成的動作。</p><ul><li>e.g. I had studied English for five years before I moved to the United States.</li></ul></li><li><p>表示兩個已經完成的動作中,其中一個發生在另一個之前。</p><ul><li>e.g. After I had finished my homework, I watched TV.</li></ul></li><li><p>用來描述一個條件或假設在過去的時間點之前已經發生。</p><ul><li>e.g. If she had known about the party, she would have come.</li></ul></li><li><p>用來描述過去已經完成的行動對現在的影響。</p><ul><li>e.g. He had worked as a lawyer for many years, so he knew a lot about the law.</li></ul></li></ol><p>在使用過去完成式時,需要注意時態的一致性和上下文的清晰性,以確保表達的意思清晰明確。</p></div>"
},
{
"from": "human",
"markdown": "英文現在式的用法",
"opencc_converted_markdown": "英文現在式的用法",
"value": "英文現在式的用法"
},
{
"from": "gpt",
"markdown": "英文的現在式表示的是現在正在進行或習慣性發生的動作,也可以表示客觀事實或普遍真理。下面是一些常見的用法:\n\n1. 表示現在正在進行的動作。\n\n * e.g. I am typing on my computer right now.\n2. 表示習慣性的行為或狀態。\n\n * e.g. I usually drink coffee in the morning.\n3. 表示事實、真相或存在的狀態。\n\n * e.g. The earth orbits around the sun.\n4. 用於引述、註解或解釋現在發生的事情。\n\n * e.g. He says he is feeling better today.\n5. 表示未來已經計劃好的事件或行動。\n\n * e.g. I am leaving for Paris tomorrow.\n\n在使用現在式時,需要注意動詞的時態和主語的人稱和數,以確保表達的意思清晰明確。",
"opencc_converted_markdown": "英文的現在式表示的是現在正在進行或習慣性發生的動作,也可以表示客觀事實或普遍真理。下面是一些常見的用法:\n\n1. 表示現在正在進行的動作。\n\n * e.g. I am typing on my computer right now.\n2. 表示習慣性的行為或狀態。\n\n * e.g. I usually drink coffee in the morning.\n3. 表示事實、真相或存在的狀態。\n\n * e.g. The earth orbits around the sun.\n4. 用於引述、註解或解釋現在發生的事情。\n\n * e.g. He says he is feeling better today.\n5. 表示未來已經計劃好的事件或行動。\n\n * e.g. I am leaving for Paris tomorrow.\n\n在使用現在式時,需要注意動詞的時態和主語的人稱和數,以確保表達的意思清晰明確。",
"value": "<div class=\"markdown prose w-full break-words dark:prose-invert light\"><p>英文的現在式表示的是現在正在進行或習慣性發生的動作,也可以表示客觀事實或普遍真理。下面是一些常見的用法:</p><ol><li><p>表示現在正在進行的動作。</p><ul><li>e.g. I am typing on my computer right now.</li></ul></li><li><p>表示習慣性的行為或狀態。</p><ul><li>e.g. I usually drink coffee in the morning.</li></ul></li><li><p>表示事實、真相或存在的狀態。</p><ul><li>e.g. The earth orbits around the sun.</li></ul></li><li><p>用於引述、註解或解釋現在發生的事情。</p><ul><li>e.g. He says he is feeling better today.</li></ul></li><li><p>表示未來已經計劃好的事件或行動。</p><ul><li>e.g. I am leaving for Paris tomorrow.</li></ul></li></ol><p>在使用現在式時,需要注意動詞的時態和主語的人稱和數,以確保表達的意思清晰明確。</p></div>"
}
],
"lang": "zh_Hant"
}
```
## Languages
```json
{
"en": 63940,
"zh_Hant": 3201,
"zh": 6394,
"es": 2080,
"ja": 1525,
"unknown": 4212,
"pt": 778,
"it": 512,
"ko": 2529,
"nl": 195,
"ro": 63,
"fr": 1835,
"vi": 245,
"de": 800,
"cs": 172,
"iw": 123,
"ru": 448,
"id": 369,
"pl": 194,
"no": 65,
"ar": 78,
"tr": 150,
"da": 68,
"sa": 10,
"sv": 87,
"ia": 2,
"fo": 7,
"sq": 4,
"el": 14,
"fa": 22,
"bs": 13,
"rm": 2,
"ms": 45,
"ca": 14,
"hr": 26,
"sk": 23,
"uk": 106,
"th": 33,
"fi": 32,
"tlh": 1,
"hu": 46,
"gl": 21,
"bg": 4,
"sr": 18,
"is": 2,
"ts": 9,
"la": 27,
"sl": 6,
"uz": 5,
"qu": 3,
"ay": 7,
"mi": 3,
"ceb": 1,
"gu": 1,
"oc": 8,
"aa": 2,
"haw": 6,
"xh": 4,
"ny": 4,
"hmn": 2,
"tk": 2,
"sco": 6,
"zzp": 4,
"so": 2,
"mg": 1,
"to": 1,
"mk": 1,
"ha": 2,
"ur": 2,
"nn": 4,
"lv": 2,
"mt": 2,
"gn": 2,
"et": 2,
"ie": 3,
"tl": 3,
"lb": 4,
"bn": 1,
"rw": 4,
"bi": 1,
"ga": 1,
"war": 1,
"sw": 2,
"eo": 2,
"eu": 2,
"lt": 1,
"af": 2,
"ht": 3,
"fj": 1,
"st": 1,
"na": 1,
"sd": 1,
"fy": 1,
"jw": 1
}
```
Detected with `polyglot`.
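As a usage sketch (assuming the standard `datasets` API and the repository ID of this dataset; the field names follow the structure described above):
```python
from datasets import load_dataset

# Load the full train split (the only split of this dataset).
dataset = load_dataset("zetavg/ShareGPT-Processed", split="train")

# Keep only conversations labeled as Traditional Chinese by polyglot.
zh_hant = dataset.filter(lambda row: row["lang"] == "zh_Hant")

# Print the Markdown version of the first message of the first conversation.
print(zh_hant[0]["conversations"][0]["markdown"])
```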
|
zetavg/ShareGPT-Processed
|
[
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"language:zh",
"language:es",
"language:ja",
"language:fr",
"license:cc0-1.0",
"conversation",
"rlhf",
"chatgpt",
"gpt-3.5",
"region:us"
] |
2023-05-16T18:50:04+00:00
|
{"language": ["en", "zh", "es", "ja", "fr"], "license": "cc0-1.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-generation"], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "conversations", "list": [{"name": "from", "dtype": "string"}, {"name": "markdown", "dtype": "string"}, {"name": "opencc_converted_markdown", "dtype": "string"}, {"name": "value", "dtype": "string"}]}, {"name": "lang", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2772036550, "num_examples": 90665}], "download_size": 1075261393, "dataset_size": 2772036550}, "tags": ["conversation", "rlhf", "chatgpt", "gpt-3.5"]}
|
2023-05-21T02:50:14+00:00
|
773e420d841aed083a82613345419ed354570633
|
# Dataset Card for "wikipedia_20230501_fi_cleaned"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Finnish-NLP/wikipedia_20230501_fi_cleaned
|
[
"region:us"
] |
2023-05-16T18:59:34+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "perplexity_kenlm", "dtype": "int64"}, {"name": "label_identity_attack", "dtype": "float64"}, {"name": "label_insult", "dtype": "float64"}, {"name": "label_obscene", "dtype": "float64"}, {"name": "label_severe_toxicity", "dtype": "float64"}, {"name": "label_threat", "dtype": "float64"}, {"name": "label_toxicity", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 950578253, "num_examples": 410684}], "download_size": 557777488, "dataset_size": 950578253}}
|
2023-05-18T13:49:13+00:00
|
81023ef349187e264502864d7ac472a0a344e358
|
# Dataset Card for "oscar_2301_fi_cleaned"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Finnish-NLP/oscar_2301_fi_cleaned
|
[
"region:us"
] |
2023-05-16T19:04:12+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "meta", "struct": [{"name": "warc_headers", "struct": [{"name": "warc-record-id", "dtype": "string"}, {"name": "warc-date", "dtype": "string"}, {"name": "content-type", "dtype": "string"}, {"name": "content-length", "dtype": "int32"}, {"name": "warc-type", "dtype": "string"}, {"name": "warc-identified-content-language", "dtype": "string"}, {"name": "warc-refers-to", "dtype": "string"}, {"name": "warc-target-uri", "dtype": "string"}, {"name": "warc-block-digest", "dtype": "string"}]}, {"name": "identification", "struct": [{"name": "label", "dtype": "string"}, {"name": "prob", "dtype": "float32"}]}, {"name": "harmful_pp", "dtype": "float32"}, {"name": "tlsh", "dtype": "string"}, {"name": "quality_warnings", "sequence": "string"}, {"name": "categories", "sequence": "string"}, {"name": "sentence_identifications", "list": [{"name": "label", "dtype": "string"}, {"name": "prob", "dtype": "float32"}]}]}, {"name": "perplexity_kenlm", "dtype": "int64"}, {"name": "url", "dtype": "string"}, {"name": "label_identity_attack", "dtype": "float64"}, {"name": "label_insult", "dtype": "float64"}, {"name": "label_obscene", "dtype": "float64"}, {"name": "label_severe_toxicity", "dtype": "float64"}, {"name": "label_threat", "dtype": "float64"}, {"name": "label_toxicity", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 40449678552, "num_examples": 5225577}], "download_size": 2848314172, "dataset_size": 40449678552}}
|
2023-05-19T15:06:09+00:00
|
fa320dec022d370e56d21d6ba928c68924fc9fd4
|
# Dataset Card for "cs375_cv11_mandarin_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
alansun25/cs375_cv11_mandarin_train
|
[
"region:us"
] |
2023-05-16T20:22:29+00:00
|
{"dataset_info": {"features": [{"name": "path", "dtype": "string"}, {"name": "audio", "struct": [{"name": "array", "sequence": "float64"}, {"name": "path", "dtype": "null"}, {"name": "sampling_rate", "dtype": "int64"}]}, {"name": "sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3188919854, "num_examples": 5000}], "download_size": 2393082398, "dataset_size": 3188919854}}
|
2023-05-16T20:27:11+00:00
|
369d9eceb1edd90883e8006f1fe5f0a14d378667
|
# IVA Swift GitHub Code Dataset
## Dataset Description
This is the curated train split of the IVA Swift dataset extracted from GitHub.
It contains curated Swift files gathered for the purpose of training a code generation model.
The dataset consists of 320000 Swift code files from GitHub.
[Here is the unsliced curated dataset](https://huggingface.co/datasets/mvasiliniuc/iva-swift-codeint-clean) and
[here is the raw dataset](https://huggingface.co/datasets/mvasiliniuc/iva-swift-codeint).
### How to use it
To download the full dataset:
```python
from datasets import load_dataset
dataset = load_dataset('mvasiliniuc/iva-swift-codeint-clean', split='train')
```
## Data Structure
### Data Fields
|Field|Type|Description|
|---|---|---|
|repo_name|string|name of the GitHub repository|
|path|string|path of the file in GitHub repository|
|copies|string|number of occurrences in dataset|
|content|string|content of source file|
|size|string|size of the source file in bytes|
|license|string|license of GitHub repository|
|hash|string|Hash of content field.|
|line_mean|number|Mean line length of the content.|
|line_max|number|Max line length of the content.|
|alpha_frac|number|Fraction between mean and max line length of content.|
|ratio|number|Character/token ratio of the file with tokenizer.|
|autogenerated|boolean|True if the content is autogenerated by looking for keywords in the first few lines of the file.|
|config_or_test|boolean|True if the content is a configuration file or a unit test.|
|has_no_keywords|boolean|True if a file has none of the keywords for Swift Programming Language.|
|has_few_assignments|boolean|True if file uses symbol '=' less than `minimum` times.|
### Instance
```json
{
"repo_name":"...",
"path":".../BorderedButton.swift",
"copies":"2",
"size":"2649",
"content":"...",
"license":"mit",
"hash":"db1587fd117e9a835f58cf8203d8bf05",
"line_mean":29.1136363636,
"line_max":87,
"alpha_frac":0.6700641752,
"ratio":5.298,
"autogenerated":false,
"config_or_test":false,
"has_no_keywords":false,
"has_few_assignments":false
}
```
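As an illustrative sketch (not part of the original card, and assuming the repository ID shown on this page), the boolean quality flags listed above can be used to narrow the training split to hand-written, non-test Swift sources:
```python
from datasets import load_dataset

dataset = load_dataset("mvasiliniuc/iva-swift-codeint-clean-train", split="train")

# Drop auto-generated files, configs/tests, and files without Swift keywords.
curated = dataset.filter(
    lambda row: not row["autogenerated"]
    and not row["config_or_test"]
    and not row["has_no_keywords"]
)
print(len(curated), "files kept out of", len(dataset))
```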
## Languages
The dataset contains only Swift files.
```json
{
"Swift": [".swift"]
}
```
## Licenses
Each entry in the dataset contains the associated license. The following is a list of licenses involved and their occurrences.
```json
{
"agpl-3.0":1415,
"apache-2.0":71451,
"artistic-2.0":169,
"bsd-2-clause":2628,
"bsd-3-clause":5492,
"cc0-1.0":1176,
"epl-1.0":498,
"gpl-2.0":7846,
"gpl-3.0":15716,
"isc":676,
"lgpl-2.1":932,
"lgpl-3.0":2553,
"mit":201134,
"mpl-2.0":6846,
"unlicense":1468
}
```
## Dataset Statistics
```json
{
"Total size": "~453 MB",
"Number of files": 320000,
"Number of files under 500 bytes": 3116,
"Average file size in bytes": 5940
}
```
## Curation Process
See [the unsliced curated dataset](https://huggingface.co/datasets/mvasiliniuc/iva-swift-codeint-clean) for mode details.
## Data Splits
This dataset contains only a train split. For the validation and unsliced versions, please check the following links:
* Clean Version Unsliced: https://huggingface.co/datasets/mvasiliniuc/iva-swift-codeint-clean
* Clean Version Valid: https://huggingface.co/datasets/mvasiliniuc/iva-swift-codeint-clean-valid
# Considerations for Using the Data
The dataset comprises source code from various repositories, potentially containing harmful or biased code,
along with sensitive information such as passwords or usernames.
|
mvasiliniuc/iva-swift-codeint-clean-train
|
[
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"size_categories:10K<n<100K",
"language:code",
"license:other",
"code, swift, native iOS development, curated, training",
"region:us"
] |
2023-05-16T20:27:01+00:00
|
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["code"], "license": "other", "size_categories": ["10K<n<100K"], "source_datasets": [], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "iva-swift-codeint-clean", "tags": ["code, swift, native iOS development, curated, training"]}
|
2023-06-15T13:48:56+00:00
|
9bd40383e2b3b01eabb0df149da0ae872fe91611
|
# Dataset Card for "cs375_cv11_mandarin_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
alansun25/cs375_cv11_mandarin_test
|
[
"region:us"
] |
2023-05-16T20:27:11+00:00
|
{"dataset_info": {"features": [{"name": "path", "dtype": "string"}, {"name": "audio", "struct": [{"name": "array", "sequence": "float64"}, {"name": "path", "dtype": "null"}, {"name": "sampling_rate", "dtype": "int64"}]}, {"name": "sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 761405859, "num_examples": 1000}], "download_size": 566068077, "dataset_size": 761405859}}
|
2023-05-16T20:28:17+00:00
|
6b3ba48b2c46bacb6520daea0a837d6082d28c48
|
# IVA Swift GitHub Code Dataset - Curated - Validation
## Dataset Description
This is the curated validation split of the IVA Swift dataset extracted from GitHub.
It contains curated Swift files gathered for the purpose of training and validating a code generation model.
This dataset contains only a validation split.
For the training and unsliced versions, please check the following links:
* Clean Version Unsliced: https://huggingface.co/datasets/mvasiliniuc/iva-swift-codeint-clean
* Clean Version Train: https://huggingface.co/datasets/mvasiliniuc/iva-swift-codeint-clean-train
Information about the dataset structure, the data involved, licensing, and other standard Dataset Card details is available in the unsliced version's card and applies to this dataset as well.
# Considerations for Using the Data
The dataset comprises source code from various repositories, potentially containing harmful or biased code,
along with sensitive information such as passwords or usernames.
|
mvasiliniuc/iva-swift-codeint-clean-valid
|
[
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"size_categories:10K<n<100K",
"language:code",
"license:other",
"code, ios, native iOS development, curated, validation",
"region:us"
] |
2023-05-16T20:27:11+00:00
|
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["code"], "license": "other", "size_categories": ["10K<n<100K"], "source_datasets": [], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "iva-swift-codeint-clean-valid", "tags": ["code, ios, native iOS development, curated, validation"]}
|
2023-06-15T13:49:32+00:00
|
b3b42f030afd9ee352e3d3dae332a8adc284d5ff
|
# AToMiC Prebuilt Indexes
## Example Usage:
### Reproduction
Toolkits:
https://github.com/TREC-AToMiC/AToMiC/tree/main/examples/dense_retriever_baselines
```bash
# Skip the encode and index steps, search with the prebuilt indexes and topics directly
python search.py \
--topics topics/openai.clip-vit-base-patch32.text.validation \
--index indexes/openai.clip-vit-base-patch32.image.faiss.flat \
--hits 1000 \
--output runs/run.openai.clip-vit-base-patch32.validation.t2i.large.trec
python search.py \
--topics topics/openai.clip-vit-base-patch32.image.validation \
--index indexes/openai.clip-vit-base-patch32.text.faiss.flat \
--hits 1000 \
--output runs/run.openai.clip-vit-base-patch32.validation.i2t.large.trec
```
### Explore AToMiC datasets
```python
import torch
from pathlib import Path
from datasets import load_dataset
from transformers import AutoModel, AutoProcessor
INDEX_DIR='indexes'
INDEX_NAME='openai.clip-vit-base-patch32.image.faiss.flat'
QUERY = 'Elizabeth II'
images = load_dataset('TREC-AToMiC/AToMiC-Images-v0.2', split='train')
images.load_faiss_index(index_name=INDEX_NAME, file=Path(INDEX_DIR, INDEX_NAME, 'index'))
model = AutoModel.from_pretrained('openai/clip-vit-base-patch32')
processor = AutoProcessor.from_pretrained('openai/clip-vit-base-patch32')
# prebuilt indexes contain L2-normalized vectors
with torch.no_grad():
    q_embedding = model.get_text_features(**processor(text=QUERY, return_tensors="pt"))
    q_embedding = torch.nn.functional.normalize(q_embedding, dim=-1).detach().numpy()
scores, retrieved = images.get_nearest_examples(INDEX_NAME, q_embedding, k=10)
```
|
TREC-AToMiC/AToMiC-Baselines
|
[
"region:us"
] |
2023-05-16T20:27:15+00:00
|
{}
|
2023-10-22T21:10:13+00:00
|
5558719691bad60c8349dd07056e7aa2ccf0db9f
|
# Dataset Card for "cs375_cv11_mandarin_dev"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
alansun25/cs375_cv11_mandarin_dev
|
[
"region:us"
] |
2023-05-16T20:28:17+00:00
|
{"dataset_info": {"features": [{"name": "path", "dtype": "string"}, {"name": "audio", "struct": [{"name": "array", "sequence": "float64"}, {"name": "path", "dtype": "null"}, {"name": "sampling_rate", "dtype": "int64"}]}, {"name": "sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 721852593, "num_examples": 1000}], "download_size": 544854836, "dataset_size": 721852593}}
|
2023-05-16T20:29:20+00:00
|
0c36df9df2a1f8ca53c69f97ea00174e38e2fbdb
|
```python
from datasets import load_dataset
dataset = load_dataset("GEM/mlb_data_to_text")
```
|
Dadams2244/Test
|
[
"region:us"
] |
2023-05-16T20:38:07+00:00
|
{}
|
2023-05-16T20:38:25+00:00
|
c32f196dfd36b96c7e42013cfd8f7e1c572380f2
|
emanuelebezzecchi/trailerShotScale
|
[
"task_categories:image-classification",
"size_categories:1K<n<10K",
"language:en",
"art",
"region:us"
] |
2023-05-16T20:46:07+00:00
|
{"language": ["en"], "size_categories": ["1K<n<10K"], "task_categories": ["image-classification"], "tags": ["art"]}
|
2023-05-17T07:33:37+00:00
|
|
5391cceb073e09ce90976daa48cf074891715a9f
|
This dataset was created to test two different things:
First, to check LLMs' capability to augment data in a coherent way.
Second, to create a dataset for fine-tuning LLMs on the QA task.
The dataset contains the frequently asked questions, and their answers, of a made-up online fashion marketplace called Nels Marketplace.
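As a usage sketch (assuming the `datasets` library and the feature names listed in the dataset info below — Instruction, Question, and Context/Answer):
```python
from datasets import load_dataset

faq = load_dataset("nelson2424/FAQ_NelsMarketplace", split="train")

# Inspect one FAQ entry.
row = faq[0]
print(row["Instruction"])
print(row["Question"])
print(row["Context/Answer"])
```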
|
nelson2424/FAQ_NelsMarketplace
|
[
"task_categories:question-answering",
"language:en",
"license:mit",
"finance",
"region:us"
] |
2023-05-16T20:57:08+00:00
|
{"language": ["en"], "license": "mit", "task_categories": ["question-answering"], "tags": ["finance"], "dataset_info": {"features": [{"name": "Instruction", "dtype": "string"}, {"name": "Question", "dtype": "string"}, {"name": "Context/Answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 121719, "num_examples": 84}], "download_size": 25676, "dataset_size": 121719}}
|
2023-05-16T22:53:23+00:00
|
12296af4a729f726cfdd59000750fca3bfa38e64
|
benlehrburger/modern-architecture
|
[
"license:unknown",
"region:us"
] |
2023-05-16T21:36:33+00:00
|
{"license": "unknown"}
|
2023-05-31T20:10:22+00:00
|
|
2a687b8b1da3529889202b550624322b388fd17f
|
# Dataset Card for "covidqa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mrkhanna/covidqa
|
[
"region:us"
] |
2023-05-16T21:37:04+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "answers", "struct": [{"name": "answer_start", "sequence": "int64"}, {"name": "text", "sequence": "string"}]}, {"name": "question", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 48653509, "num_examples": 1417}, {"name": "test", "num_bytes": 11608421, "num_examples": 375}, {"name": "validation", "num_bytes": 4314598, "num_examples": 203}], "download_size": 2241432, "dataset_size": 64576528}}
|
2023-05-16T21:37:12+00:00
|
0b4a734b40cf567b9c47364df7b8a997b185271c
|
mehmetadventlabs/secim2023-cb-partial
|
[
"license:mit",
"region:us"
] |
2023-05-16T21:52:39+00:00
|
{"license": "mit"}
|
2023-05-16T21:53:50+00:00
|
|
ea0c019d8a185de471fc57186cc096d0cee0536c
|
# Dataset Card for "covidqa_copy"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Sree1994/covidqa_copy
|
[
"region:us"
] |
2023-05-16T22:02:22+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "answers", "struct": [{"name": "answer_start", "sequence": "int64"}, {"name": "text", "sequence": "string"}]}, {"name": "question", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 48653509, "num_examples": 1417}, {"name": "test", "num_bytes": 11608421, "num_examples": 375}, {"name": "validation", "num_bytes": 4314598, "num_examples": 203}], "download_size": 2241429, "dataset_size": 64576528}}
|
2023-05-16T22:02:26+00:00
|
4fe7f034f1dc51027df47de4269433e6bb696836
|
25-jiHiorns/madeby-Hio
|
[
"license:openrail",
"region:us"
] |
2023-05-16T22:13:51+00:00
|
{"license": "openrail"}
|
2023-05-16T22:13:51+00:00
|
|
ac9a592ae4134885ec6c8cab34a4056b5648c1d3
|
# Dataset Card for "randomized_clean_miniwob_episodes__image0_5000_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LucasThil/randomized_clean_miniwob_episodes__image0_5000_v2
|
[
"region:us"
] |
2023-05-16T22:18:52+00:00
|
{"dataset_info": {"features": [{"name": "task_name", "dtype": "string"}, {"name": "utterance", "dtype": "string"}, {"name": "reward", "dtype": "float64"}, {"name": "raw_reward", "dtype": "float64"}, {"name": "processed_states", "dtype": "string"}, {"name": "img", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 10670313662, "num_examples": 5000}], "download_size": 953450537, "dataset_size": 10670313662}}
|
2023-05-16T22:21:35+00:00
|
6c5f52cd2f64044f3d210242e6a30d6db2738fe8
|
# My own wikipedia dataset
This dataset was generated using this project I published on GitHub: [ThomasBaruzier/wikipedia-dataset](https://github.com/ThomasBaruzier/wikipedia-dataset)
### Files
For now, I have uploaded one file: `dataset-50.json`
Every article in this dataset has at least 50 views per hour, based on an average over 10 Wikimedia pageview files from between 2022 and 2023.
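A minimal loading sketch (an assumption-laden illustration: it relies on the generic JSON loader of the `datasets` library, and the record fields inside the file are not documented here):
```python
from datasets import load_dataset

# Load the single JSON file published so far in this repository.
dataset = load_dataset(
    "ThomasBaruzier/wikipedia",
    data_files="dataset-50.json",
    split="train",
)
print(dataset)
```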
|
ThomasBaruzier/wikipedia
|
[
"license:mit",
"region:us"
] |
2023-05-16T22:26:06+00:00
|
{"license": ["mit"]}
|
2023-05-16T22:48:18+00:00
|
eac9e108658d64f929f4b88d1ddb78a08623cd4e
|
HEY YOU!
YES YOU!
ARE YOU AN ASPIRING LLM MODEL CREATOR!
DO YOU WANT SOMETHING LIKE "the pile" DATASET BUT WORSE IN EVERY ASPECT???
WELL YOU'RE IN LUCK!
|
takeraparterer/stinky-heap
|
[
"region:us"
] |
2023-05-16T22:53:01+00:00
|
{}
|
2023-05-16T22:54:33+00:00
|
425420e4b0620ac1ca642f6ea9d2936d80d63907
|
# Dataset Card for xOA22 - Multilingual Prompts from OpenAssistant
### Dataset Summary
xOA22 consists of 22 prompts originally shown in Appendix E, page 25 of the [OpenAssistant Conversations paper](https://arxiv.org/pdf/2304.07327.pdf). These 22 prompts were then manually translated by volunteers into 5 languages: Arabic, Simplified Chinese, French, Hindi and Spanish.
These prompts were originally created for human evaluations of the multilingual abilities of [BLOOMChat](https://huggingface.co/sambanovasystems/BLOOMChat-176B-v1). Since not all prompts could be directly translated due to cultural and linguistic differences, volunteers were encouraged to make appropriate substitutions and modifications that would maintain the intent of the original English prompt. As this was largely a collaborative, volunteer-led effort, this led to some discrepancies in the number of prompts per language. We make note of major departures from the original English prompts below.
### Languages
- Arabic (ar)
- English (en)
- Spanish (es)
- French (fr)
- Hindi (hi)
- Chinese (zh)
## Dataset Structure
### Data Fields
- `prompt`: manually translated prompt text. The English split is unmodified from the OpenAssistant Conversations paper.
### Data Splits
The xOA22 dataset has 6 splits, one for each language. Below are the statistics for each split
| Dataset Split | Number of Instances in Split |
| ------------- | ---------------------------- |
| ar | 22 |
| en | 22 |
| es | 22 |
| fr | 22 |
| hi | 24 |
| zh | 24 |
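As a usage sketch (assuming the standard `datasets` API and the repository ID of this dataset; the split names follow the table above):
```python
from datasets import load_dataset

# Each language is a separate split of the same dataset.
xoa22 = load_dataset("sambanovasystems/xOA22")

# Print the number of prompts per language split.
for lang in ("ar", "en", "es", "fr", "hi", "zh"):
    print(lang, xoa22[lang].num_rows)

# Inspect the first French prompt.
print(xoa22["fr"][0]["prompt"])
```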
### Translation Notes
Below are notes from volunteer translators. Note that the Hindi split does not include Prompt 4 in the English split.
- Arabic
- Prompt 12: Second part of the sentence was translated to “Please mainly limit the games to ones that can be played on a PC at home”, discarding the mentions of emulation.
- Prompt 19: Not sure how to translate "navigation system" to Arabic, I used Google Translate for this one.
- Spanish
- Prompt 12: IDK how to say crafting system in Spanish. I've always said crafteo.
- Prompt 21: Not sure how to translate niche, went for "developing a topic" instead
- Prompt 22: Hacking - hackeo? It's what I'd say colloquially in Spanish, but not sure if it's the right thing to use here
- French
- No translation notes
- Hindi
- Prompt 1: Replace "GLaDOS" with "Ravan", a famous antagonist from mythology
- Prompt 4: This prompt was left untranslated, and so **is missing from the Hindi split**. The translator's reasons are as follows: generally, people won't ask this in Hindi; the code-writing community is generally English-aware and would most probably ask this question in English.
- Prompt 9: Corresponds to English Prompt 10. Specific names changed to well known persons in Hindi speaking world.
- Prompt 11: Corresponds to English Prompt 12. I removed "in depth crafting system", "directly or through emulation"
- Prompt 21: Corresponds to English Prompt 22. I removed "social security numbers", "Google", and "Apple"
- Prompt 22: This is a Hindi-specific prompt. The English translation is: "write me a poem on monsoon in 100 words"
- Prompt 23: This is a Hindi-specific prompt. The English translation is: "write me a recipe for butter chicken"
- Prompt 24: This is a Hindi-specific prompt. The English translation is: "How do I go from Delhi to Jaipur? Bus or car? Details please."
- Chinese
- Prompt 1: Changed GLaDOS to a fictional species from the Chinese sci-fi series The Three Body Problem
- Prompt 3: Didn't specify whether it's an imaginary world or real world to make it more natural in Chinese. The meaning is basically the same.
- Prompt 5: Animal rennet -> 人工奶酪 as the former is not familiar to most people
- Prompt 9: Translated "king" as "emperor" to align with Chinese history
- Prompt 10: Joe Biden & Joe Rogan -> 毛晓彤 & 光晓彤
- Prompt 11: Shakespeare -> 鲁迅
- Prompt 12: "sci-fi ships" -> starship (巨型星际飞船)
- Prompt 21: YouTube -> b站
- Prompt 22: social security number -> 身份证
- Prompt 23: This is a Chinese-specific prompt. The English translation is: "Explain Kubernetes in simple terms. Explain to me like I'm 11 years old."
- Prompt 24: This is a Chinese-specific prompt. The English translation is: "I will provide you with an argument or opinion of mine. I want you to criticize it as if you were Elon Musk".
- Translator note: I don't think there is a good counterpart entrepreneur like Elon Musk in China. Jack Ma is as wealthy and powerful as Elon Musk but they have quite different perspectives. So instead of finding an actual counterpart in China, we need to understand the characteristics of Elon Musk and translate accordingly.
### Curation Rationale
These prompts were originally curated in order to test the multilingual abilities of the BLOOMChat model. The model's responses to these translated prompts were compared to responses from other open-source chat models in a human evaluation study. Therefore, emphasis was placed on making translations as natural and understandable as possible to native speakers in order to emulate a chat setting, and we accepted feedback and modifications to the prompts from our volunteers.
### Dataset Curators
### Contributions
### Source Data
Appendix E, page 25 of ["OpenAssistant Conversations - Democratizing Large Language Model Alignment"](https://arxiv.org/pdf/2304.07327.pdf)
|
sambanovasystems/xOA22
|
[
"task_categories:conversational",
"size_categories:n<1K",
"language:ar",
"language:zh",
"language:en",
"language:fr",
"language:hi",
"language:es",
"license:apache-2.0",
"arxiv:2304.07327",
"region:us"
] |
2023-05-16T23:11:37+00:00
|
{"language": ["ar", "zh", "en", "fr", "hi", "es"], "license": "apache-2.0", "size_categories": ["n<1K"], "task_categories": ["conversational"], "dataset_info": {"features": [{"name": "prompt", "dtype": "string"}], "splits": [{"name": "ar", "num_bytes": 2783, "num_examples": 22}, {"name": "en", "num_bytes": 2239, "num_examples": 22}, {"name": "es", "num_bytes": 2361, "num_examples": 22}, {"name": "fr", "num_bytes": 2685, "num_examples": 22}, {"name": "hi", "num_bytes": 5373, "num_examples": 24}, {"name": "zh", "num_bytes": 2111, "num_examples": 24}], "download_size": 21140, "dataset_size": 17552}}
|
2023-05-17T17:38:13+00:00
|
e6847209647855e5566bb78d43e06979d4846007
|
Dataset capturing information for each featured-video instance on the YouTube homepage, 2005-2010, based on scraped Wayback Machine snapshots.
|
tonarie/Wayback-Data-Youtube-Homepage-Videos
|
[
"license:mit",
"region:us"
] |
2023-05-16T23:55:33+00:00
|
{"license": "mit"}
|
2023-05-17T00:03:29+00:00
|
778863b6e303b5a4af210b5dbbb25b6956b5b06a
|
# Instruct Augmented Datasets
This dataset takes various other multiple-choice, summarization, and similar datasets and augments them for instruction fine-tuning.
|
winglian/evals
|
[
"task_categories:text-generation",
"task_categories:question-answering",
"size_categories:1K<n<10K",
"language:en",
"region:us"
] |
2023-05-17T00:41:17+00:00
|
{"language": ["en"], "size_categories": ["1K<n<10K"], "task_categories": ["text-generation", "question-answering"]}
|
2023-06-17T17:50:47+00:00
|
31885c889616cbfdfbf8de0d6c6e37378ed7a7f2
|
Please cite the use of these data sets: Melton, C. A., White, B. M., Davis, R. L., Bednarczyk, R. A., & Shaban-Nejad, A. (2022). Fine-tuned sentiment analysis of covid-19 vaccine–related social media data: Comparative study. Journal of Medical Internet Research, 24(10), e40408.
|
NoCaptain/COVID_Twitter_Reddit_Jan2020_Feb2022
|
[
"region:us"
] |
2023-05-17T00:56:35+00:00
|
{}
|
2023-05-17T01:35:18+00:00
|
33ecfd8de839448adb5d475b222dda25408ed1e5
|
# Dataset Card for MegaWika
## Dataset Description
- **Homepage:** [HuggingFace](https://huggingface.co/datasets/hltcoe/megawika)
- **Repository:** [HuggingFace](https://huggingface.co/datasets/hltcoe/megawika)
- **Paper:** [Coming soon]
- **Leaderboard:** [Coming soon]
- **Point of Contact:** [Samuel Barham]([email protected])
### Dataset Summary
MegaWika is a multi- and crosslingual text dataset containing 30 million Wikipedia passages with their scraped and cleaned web citations. The passages span
50 Wikipedias in 50 languages, and the articles in which the passages were originally embedded are included for convenience. Where a Wikipedia passage is in a
non-English language, an automated English translation is provided. Furthermore, nearly 130 million English question/answer pairs were extracted from the
passages, and FrameNet events occurring in the passages were detected using the [LOME](https://aclanthology.org/2021.eacl-demos.19.pdf) FrameNet parser.
<!---
To get a feel for the dataset -- its structure, content, strengths and weaknesses -- you may visit the [dataset viewer](https://huggingface.co/spaces/hltcoe/megawika)
we have set up as a HuggingFace Space. It allows the curious visitor to explore a small set of examples spread across a number of the dataset's constituent languages.
-->
### Dataset Creation
The pipeline through which MegaWika was created is complex, and is described in more detail in the paper (linked above),
but the following diagram illustrates the basic approach.

### Supported Tasks and Leaderboards
MegaWika is meant to support research across a variety of tasks, including report generation, summarization, information retrieval, question answering, etc.
### Languages
MegaWika is divided by Wikipedia language. There are 50 languages, including English, each designated by its 2-character ISO language code:
- `af`: Afrikaans
- `ar`: Arabic
- `az`: Azeri (Azerbaijani)
- `bn`: Bengali
- `cs`: Czech
- `de`: German (Deutsch)
- `en`: English
- `es`: Spanish (Español)
- `et`: Estonian
- `fa`: Farsi (Persian)
- `fi`: Finnish
- `fr`: French
- `ga`: Irish (Gaelic)
- `gl`: Galician
- `gu`: Gujarati
- `he`: Hebrew
- `hi`: Hindi
- `hr`: Croatian
- `id`: Indonesian
- `it`: Italian
- `ja`: Japanese
- `ka`: Georgian (Kartvelian/Kartlian)
- `kk`: Kazakh
- `km`: Khmer
- `ko`: Korean
- `lt`: Lithuanian
- `lv`: Latvian
- `mk`: Macedonian (Makedonski)
- `ml`: Malayalam
- `mn`: Mongolian
- `mr`: Marathi
- `my`: Burmese (Myanmar language)
- `ne`: Nepali
- `nl`: Dutch (Nederlands)
- `pl`: Polish
- `ps`: Pashto
- `pt`: Portuguese
- `ro`: Romanian
- `ru`: Russian
- `si`: Sinhalese (Sri Lankan language)
- `sl`: Slovenian
- `sv`: Swedish (Svenska)
- `ta`: Tamil
- `th`: Thai
- `tr`: Turkish
- `uk`: Ukrainian
- `ur`: Urdu
- `vi`: Vietnamese
- `xh`: Xhosa
- `zh`: Chinese (Zhōng wén)
## Dataset Structure
The dataset is divided by language, and the data for each of the 50 languages is further chunked into discrete JSON lines files.
Each line of these files -- we'll call such a line an **instance** -- contains the data extracted from a single Wikipedia article.
### Data Instances
Each instance contains the text of the seed Wikipedia article, along with a list of **entries**. Each entry consists essentially of
an extracted Wikipedia passage, the URL and scraped text of the web source it cites, a list of question/answer pairs extracted from the passage,
and a FrameNet parse of the passage. Where the passage is from a non-English Wikipedia, a machine translation into English is also provided.
### Data Fields
The detailed structure of an instance is as follows:
```
{
"article_title": <string : title of original Wikipedia article>
"article_text": <string : text of Wikipedia article>
"entries": [
# Wiki Passage
"id": <string : passage ID>
"passage": {
"text": <string : text of passage in English (possibly via MT)>
"parse": <list of dict : FrameNet parse of English passage text>
"en_tokens": <dict : tokenization of passage in English>
"lang_tokens": <dict : tokenization of original non-English passage>
"en_lang_token_map": <dict : alignment mapping between English and original language token indices>
}
# MT
"original": <string : original language passage>
"original_sents": <list of string : sentencized original language passage>
"translation": <string : machine translation of passage>
"translation_sents": <list of string : sentencized machine translation of passage>
"translation_probs": <list of float : log prob of machine translation by sentence, where available>
"repetitious_translation": <string \in ("true", "false") : automated judgment on whether machine translation is pathologically repetitious>
"source_lang": <string : language ID, 2-character ISO code>
# Source
"source_url": <string : URL of the cited web source>
"source_text": <string : content extracted from the scrape of the source URL>
# Question/Answer Pairs
"qa_pairs": [
...
{
"question": <string : generated question>
"passage_id": <string : passage ID>
"en_answer": <string : English answer>
"lang_answer": <string : aligned original language answer>
"frames": [
...
{
"frame": <string : frame triggered by the question>
"argument": <string : detected frame arguments>
}
...
]
# NB: answer matches can be empty, in the case no matching span exists
"en_matches_in_source": <list of int : start and end index of the English language-answer token(s) in the source document>
"en_match_in_passage": <list of int : start and end index of the English language-answer token(s) in the English language translation of the passage>
"lang_matches_in_source": <list of int : start and end index of the original language-answer token(s) in the source document>
"lang_match_in_passage": <list of int : start and end index of the original language-answer token(s) in the original language passage>
"passage": <list of string : sentencized view of the passage>
"en_answer_tokens": <list of string>
"match_disambiguated_question": <string : disambiguated version of question obtained by matching pronouns with article title (noisy but often helpful)>
}
...
]
]
}
```
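To make the schema above concrete, here is a minimal sketch of iterating over instances and entries with the `datasets` library. The config name (`"fr"`), split name (`"train"`), and streaming support are assumptions not confirmed by this card; the field names are taken directly from the structure shown above.

```python
# Minimal sketch: iterating MegaWika instances and their entries.
# Assumptions (not confirmed by this card): configs are named by language code,
# the split is called "train", and streaming is supported; adjust if these differ.
from datasets import load_dataset

# Load the French portion lazily; each record is one Wikipedia article (an "instance").
megawika_fr = load_dataset("hltcoe/megawika", "fr", split="train", streaming=True)

for instance in megawika_fr:
    print(instance["article_title"])
    for entry in instance["entries"]:
        passage = entry["passage"]["text"]   # English text (possibly via MT)
        source_url = entry["source_url"]     # cited web source
        qa_pairs = entry["qa_pairs"]         # extracted question/answer pairs
        # Skip entries whose machine translation was judged pathologically repetitious.
        if entry.get("repetitious_translation") == "true":
            continue
        for qa in qa_pairs[:3]:
            print("  Q:", qa["question"], "| A:", qa["en_answer"])
    break  # remove to process the full split
```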
English language instances differ not in structure but in content:
1. Fields in the block labeled "MT" above are naturally null (that is, they are set to falsy values in Python -- specifically `None`)
2. Since the Wiki passage only exists in English, and has no corresponding non-English "original language" version, answer spans also necessarily have only an English-language version (and no non-English "original-language" version). Therefore, fields in the `qa_pairs` block beginning with `lang_` are set to null/falsy values in Python (in this case, empty lists).
### Data Splits
MegaWika is currently split only by language, as each task will imply its own approach to filtering, sampling, downselecting, and train/test splitting.
<!---
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
-->
## Licensing and Takedown
MegaWika 1.0 consists in part of documents scraped from across the web (based on citations linked in Wikipedia articles).
We do not own any of the scraped text nor do we claim copyright: text drawn from Wikipedia citations is meant for research use in algorithmic design and model training.
We release this dataset and all its contents under CC-BY-SA-4.0.
### Notice and Takedown Policy:
*NB*: Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:
- Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
- Clearly identify the copyrighted work claimed to be infringed.
- Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.
And contact the authors.
*Take down*: We will comply with legitimate requests by removing the affected sources from the next release of the dataset.
## Additional Information
### Dataset Curators
Released and maintained by the Johns Hopkins University Human Language Technology Center of Excellence (JHU/HLTCOE).
You can contact one of the MegaWika authors, including [Samuel Barham](mailto:[email protected]), [Orion Weller](mailto:[email protected]),
and [Ben van Durme](mailto:[email protected]) with questions.
### Licensing Information
Released under the [Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/) license.
### Citation Information
```
@misc{barham2023megawika,
title={MegaWika: Millions of reports and their sources across 50 diverse languages},
      author={Samuel Barham and Orion Weller and Michelle Yuan and Kenton Murray and Mahsa Yarmohammadi and Zhengping Jiang and Siddharth Vashishtha and Alexander Martin and Anqi Liu and Aaron Steven White and Jordan Boyd-Graber and Benjamin Van Durme},
year={2023},
eprint={2307.07049},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
### Contributions
[More Information Needed]
-->
|
hltcoe/megawika
|
[
"task_categories:summarization",
"task_categories:question-answering",
"task_categories:text-generation",
"task_categories:text2text-generation",
"size_categories:10M<n<100M",
"language:af",
"language:ar",
"language:az",
"language:bn",
"language:cs",
"language:de",
"language:en",
"language:es",
"language:et",
"language:fa",
"language:fi",
"language:fr",
"language:ga",
"language:gl",
"language:gu",
"language:he",
"language:hi",
"language:hr",
"language:id",
"language:it",
"language:ja",
"language:ka",
"language:kk",
"language:km",
"language:ko",
"language:lt",
"language:lv",
"language:mk",
"language:ml",
"language:mn",
"language:mr",
"language:my",
"language:ne",
"language:nl",
"language:pl",
"language:ps",
"language:pt",
"language:ro",
"language:ru",
"language:si",
"language:sl",
"language:sv",
"language:ta",
"language:th",
"language:tr",
"language:uk",
"language:ur",
"language:vi",
"language:xh",
"language:zh",
"license:cc-by-sa-4.0",
"arxiv:2307.07049",
"region:us"
] |
2023-05-17T01:07:50+00:00
|
{"language": ["af", "ar", "az", "bn", "cs", "de", "en", "es", "et", "fa", "fi", "fr", "ga", "gl", "gu", "he", "hi", "hr", "id", "it", "ja", "ka", "kk", "km", "ko", "lt", "lv", "mk", "ml", "mn", "mr", "my", "ne", "nl", "pl", "ps", "pt", "ro", "ru", "si", "sl", "sv", "ta", "th", "tr", "uk", "ur", "vi", "xh", "zh"], "license": "cc-by-sa-4.0", "size_categories": ["10M<n<100M"], "task_categories": ["summarization", "question-answering", "text-generation", "text2text-generation"], "pretty_name": "MegaWika"}
|
2023-10-03T16:24:24+00:00
|
272f8462529cb7533ea5734d2488a0bdcccb689d
|
# Dataset Card for x-self-instruct-seed-32 - Multilingual Prompts from Self-Instruct
### Dataset Summary
x-self-instruct-seed-32 consists of 32 prompts chosen out of the 252 prompts in the [self-instruct-seed](https://huggingface.co/datasets/HuggingFaceH4/self-instruct-seed) dataset from the [Self-Instruct](https://arxiv.org/pdf/2212.10560.pdf) paper. These 32 prompts were selected according to the following criteria:
- Should be natural in a chat setting
- Therefore, we filter out any prompts with "few-shot examples", as these are all instruction prompts that we consider unnatural in a chat setting
- Should be well-written and easily understood
- Our intention is to use the prompts as-is, without modification, in order to maintain parity with any other experiments that use this dataset
- However, we planned to translate the prompts into multiple languages, and poorly written or confusing prompts could lead to high variance in the resulting translations
- Avoid asking for code / domain specific languages
- Responses in code or domain specific languages defeat the purpose of multilingual evaluation
- Avoid potentially simple numerical responses
- These responses would likely be the same in every language and aren't good measures of multilingual ability
- Avoid requests for translation
- A good response will always be in the same language, so these prompts defeat the purpose of translating prompts into multiple languages
- Avoid prompts that may be difficult to translate / use English-specific language constructs
- Prompts that rely on English constructs such as puns, dad jokes, or witty proverbs may not translate well to other languages
- Some concepts or pop culture references may be culture-specific and difficult to translate to other languages, e.g. knowledge about American celebrities
- Avoid duplicate prompts / prompts that are too similar
The prompts were then manually translated by volunteers into 5 languages: Arabic, Simplified Chinese, French, Hindi and Spanish.
This dataset was originally curated for use in human evaluations of the multilingual abilities of [BLOOMChat](https://huggingface.co/sambanovasystems/BLOOMChat-176B-v1). Since not all prompts could be translated directly due to cultural and linguistic differences, volunteers were encouraged to make appropriate substitutions and modifications that would maintain the intent of the original English prompt. We make note of any major departures from the original English prompts below.
### Languages
- Arabic (ar)
- English (en)
- Spanish (es)
- French (fr)
- Hindi (hi)
- Chinese (zh)
## Dataset Structure
### Data Fields
- `prompt`: manually translated prompt text. The English split is unmodified from the source self-instruct-seed dataset.
### Data Splits
The x-self-instruct-seed-32 dataset has 6 splits, one for each language. Below are the statistics for each split.
| Dataset Split | Number of Instances in Split |
| ------------- | ---------------------------- |
| ar | 32 |
| en | 32 |
| es | 32 |
| fr | 32 |
| hi | 32 |
| zh | 32 |
### Translation Notes
Below are notes from volunteer translators.
- Arabic
- No translation notes
- Spanish
- No translation notes
- French
- Prompt 20: Not sure how to directly translate “come up with” in this context. Using “provide” instead.
- Hindi
- Prompt 12: Changed Seattle city to a famous destination in India called Manali
- Chinese
- Prompt 20: Drop funny answer to make it more natural in Chinese
- Prompt 24: Rephrase a little bit to be “use humor to overcome embarrassment” to make it more natural in Chinese
### Curation Rationale
These prompts were originally curated in order to test the multilingual abilities of the BLOOMChat model. The model's responses to these translated prompts were rated on their quality in a chat setting. Therefore, emphasis was placed on making translations as natural and understandable as possible to native speakers, and we accepted feedback and modifications to the prompts from our volunteers.
### Dataset Curators
TBA
### Contributions
TBA
### Source Data
https://huggingface.co/datasets/HuggingFaceH4/self-instruct-seed
|
sambanovasystems/x-self-instruct-seed-32
|
[
"task_categories:conversational",
"size_categories:n<1K",
"language:ar",
"language:es",
"language:en",
"language:hi",
"language:fr",
"language:zh",
"license:apache-2.0",
"arxiv:2212.10560",
"region:us"
] |
2023-05-17T01:26:49+00:00
|
{"language": ["ar", "es", "en", "hi", "fr", "zh"], "license": "apache-2.0", "size_categories": ["n<1K"], "task_categories": ["conversational"], "dataset_info": {"features": [{"name": "prompt", "dtype": "string"}], "splits": [{"name": "ar", "num_bytes": 3010, "num_examples": 32}, {"name": "en", "num_bytes": 2145, "num_examples": 32}, {"name": "es", "num_bytes": 2474, "num_examples": 32}, {"name": "fr", "num_bytes": 2493, "num_examples": 32}, {"name": "hi", "num_bytes": 5114, "num_examples": 32}, {"name": "zh", "num_bytes": 1910, "num_examples": 32}], "download_size": 18710, "dataset_size": 17146}}
|
2023-05-17T17:38:27+00:00
|
b514b9531acad249fef0052321b98dea2e9bcb50
|
# Dataset Card for huatuo26M-testdatasets
## Dataset Description
- **Homepage: https://www.huatuogpt.cn/**
- **Repository: https://github.com/FreedomIntelligence/Huatuo-26M**
- **Paper: https://arxiv.org/abs/2305.01526**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
We are pleased to announce the release of our evaluation dataset, a subset of the Huatuo-26M. This dataset contains 6,000 entries that we used for Natural Language Generation (NLG) experimentation in our associated research paper.
We encourage researchers and developers to use this evaluation dataset to gauge the performance of their own models. This is not only a chance to assess the accuracy and relevancy of generated responses but also an opportunity to investigate their model's proficiency in understanding and generating complex medical language.
Note: All the data points have been anonymized to protect patient privacy, and they adhere strictly to data protection and privacy regulations.
## Citation
```
@misc{li2023huatuo26m,
title={Huatuo-26M, a Large-scale Chinese Medical QA Dataset},
author={Jianquan Li and Xidong Wang and Xiangbo Wu and Zhiyi Zhang and Xiaolong Xu and Jie Fu and Prayag Tiwari and Xiang Wan and Benyou Wang},
year={2023},
eprint={2305.01526},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
FreedomIntelligence/huatuo26M-testdatasets
|
[
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:zh",
"license:apache-2.0",
"medical",
"arxiv:2305.01526",
"region:us"
] |
2023-05-17T01:31:23+00:00
|
{"language": ["zh"], "license": "apache-2.0", "size_categories": ["1K<n<10K"], "task_categories": ["text-generation"], "tags": ["medical"]}
|
2023-05-17T02:39:41+00:00
|
c54a95d9b9d62c891a03bd5da60715df7176b097
|
# Dataset Card for Bhasha-Abhijnaanam
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/AI4Bharat/IndicLID
- **Paper:** [Bhasha-Abhijnaanam: Native-script and romanized Language Identification for 22 Indic languages](https://arxiv.org/abs/2305.15814)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Bhasha-Abhijnaanam is a language identification test set for native-script as well as romanized text, spanning 22 Indic languages.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
| <!-- --> | <!-- --> | <!-- --> | <!-- --> | <!-- --> | <!-- --> |
| -------------- | -------------- | -------------- | --------------- | -------------- | ------------- |
| Assamese (asm) | Hindi (hin) | Maithili (mai) | Nepali (nep) | Sanskrit (san) | Tamil (tam) |
| Bengali (ben) | Kannada (kan) | Malayalam (mal)| Oriya (ori) | Santali (sat) | Telugu (tel) |
| Bodo(brx) | Kashmiri (kas) | Manipuri (mni) | Punjabi (pan) | Sindhi (snd) | Urdu (urd) |
| Gujarati (guj) | Konkani (kok) | Marathi (mar) | | | |
## Dataset Structure
### Data Instances
```
A random sample from Hindi (hin) Test dataset.
{
"unique_identifier": "hin1",
"native sentence": "",
"romanized sentence": "",
"language": "Hindi",
"script": "Devanagari",
"source": "Dakshina",
}
```
### Data Fields
- `unique_identifier` (string): 3-letter language code followed by a unique number in Test set.
- `native sentence` (string): A sentence in Indic language.
- `romanized sentence` (string): Transliteration of native sentence in English (Romanized sentence).
- `language` (string): Language of native sentence.
- `script` (string): Script in which native sentence is written.
- `source` (string): Source of the data.
For created data, the source depends on how the sentences for a language were collected and will be one of:
- Dakshina Dataset
- Flores-200
- Manually Romanized
- Manually generated
### Data Splits
| Subset | asm | ben | brx | guj | hin | kan | kas (Perso-Arabic) | kas (Devanagari) | kok | mai | mal | mni (Bengali) | mni (Meetei Mayek) | mar | nep | ori | pan | san | sid | tam | tel | urd |
|:------:|:---:|:---:|:---:|:---:|:---:|:---:|:------------------:|:----------------:|:---:|:---:|:---:|:-------------:|:------------------:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| Native | 1012 | 5606 | 1500 | 5797 | 5617 | 5859 | 2511 | 1012 | 1500 | 2512 | 5628 | 1012 | 1500 | 5611 | 2512 | 1012 | 5776 | 2510 | 2512 | 5893 | 5779 | 5751 | 6883 |
| Romanized | 512 | 4595 | 433 | 4785 | 4606 | 4848 | 450 | 0 | 444 | 439 | 4617 | 0 | 442 | 4603 | 423 | 512 | 4765 | 448 | 0 | 4881 | 4767 | 4741 | 4371 |
## Dataset Creation
Information in the paper. [Bhasha-Abhijnaanam: Native-script and romanized Language Identification for 22 Indic languages](https://arxiv.org/abs/2305.15814)
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
Information in the paper. [Bhasha-Abhijnaanam: Native-script and romanized Language Identification for 22 Indic languages](https://arxiv.org/abs/2305.15814)
#### Who are the source language producers?
[More Information Needed]
### Annotations
Information in the paper. [Bhasha-Abhijnaanam: Native-script and romanized Language Identification for 22 Indic languages](https://arxiv.org/abs/2305.15814)
#### Who are the annotators?
Information in the paper. [Bhasha-Abhijnaanam: Native-script and romanized Language Identification for 22 Indic languages](https://arxiv.org/abs/2305.15814)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
<!-- <a rel="license" float="left" href="http://creativecommons.org/publicdomain/zero/1.0/">
<img src="https://licensebuttons.net/p/zero/1.0/88x31.png" style="border-style: none;" alt="CC0" width="100" />
<img src="https://mirrors.creativecommons.org/presskit/buttons/88x31/png/by.png" style="border-style: none;" alt="CC-BY" width="100" href="http://creativecommons.org/publicdomain/zero/1.0/"/>
</a>
<br/> -->
This data is released under the following licensing scheme:
- Manually collected data: Released under CC0 license.
**CC0 License Statement**
<a rel="license" float="left" href="https://creativecommons.org/about/cclicenses/">
<img src="https://licensebuttons.net/p/zero/1.0/88x31.png" style="border-style: none;" alt="CC0" width="100"/>
</a>
<br>
<br>
- We do not own any of the text from which this data has been extracted.
- We license the actual packaging of manually collected data under the [Creative Commons CC0 license (“no rights reserved”)](http://creativecommons.org/publicdomain/zero/1.0).
 - To the extent possible under law, <a rel="dct:publisher" href="https://indicnlp.ai4bharat.org/"> <span property="dct:title">AI4Bharat</span></a> has waived all copyright and related or neighboring rights to <span property="dct:title">Bhasha-Abhijnaanam</span> manually collected data and existing sources.
- This work is published from: India.
### Citation Information
```
@misc{madhani2023bhashaabhijnaanam,
title={Bhasha-Abhijnaanam: Native-script and romanized Language Identification for 22 Indic languages},
author={Yash Madhani and Mitesh M. Khapra and Anoop Kunchukuttan},
year={2023},
eprint={2305.15814},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
---
|
ai4bharat/Bhasha-Abhijnaanam
|
[
"task_categories:text-generation",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"language_creators:machine-generated",
"language_creators:found",
"language_creators:other",
"multilinguality:multilingual",
"source_datasets:original",
"language:asm",
"language:ben",
"language:brx",
"language:guj",
"language:hin",
"language:kan",
"language:kas",
"language:kok",
"language:mai",
"language:mal",
"language:mar",
"language:mni",
"language:nep",
"language:ori",
"language:pan",
"language:san",
"language:sat",
"language:sid",
"language:snd",
"language:tam",
"language:tel",
"language:urd",
"license:cc0-1.0",
"arxiv:2305.15814",
"region:us"
] |
2023-05-17T03:43:57+00:00
|
{"annotations_creators": [], "language_creators": ["crowdsourced", "expert-generated", "machine-generated", "found", "other"], "language": ["asm", "ben", "brx", "guj", "hin", "kan", "kas", "kok", "mai", "mal", "mar", "mni", "nep", "ori", "pan", "san", "sat", "sid", "snd", "tam", "tel", "urd"], "license": "cc0-1.0", "multilinguality": ["multilingual"], "size_categories": [], "source_datasets": ["original"], "task_categories": ["text-generation"], "task_ids": [], "pretty_name": "Bhasha-Abhijnaanam"}
|
2023-06-22T07:01:44+00:00
|
180d028d248bd522c8a78bf4c3de7849c3d54862
|
shrusti333/konkani_translation
|
[
"task_categories:translation",
"size_categories:100K<n<1M",
"language:en",
"region:us"
] |
2023-05-17T04:02:17+00:00
|
{"language": ["en"], "size_categories": ["100K<n<1M"], "task_categories": ["translation"], "pretty_name": "sign language to konkani translation"}
|
2023-05-17T04:05:39+00:00
|
|
94831b6e20fe0d84fb9864bb632944f50f552b86
|
# Dataset Card for "hh-lmgym-demo"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AlekseyKorshuk/hh-lmgym-demo
|
[
"region:us"
] |
2023-05-17T04:09:49+00:00
|
{"dataset_info": {"features": [{"name": "input_text", "dtype": "string"}, {"name": "output_text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 126803175, "num_examples": 112052}, {"name": "test", "num_bytes": 14079595, "num_examples": 12451}], "download_size": 0, "dataset_size": 140882770}}
|
2023-05-17T17:13:29+00:00
|
1f99ce063db3baf2e01d11b5860fba293052322b
|
# Dataset Card for "chart_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
aravind-selvam/chart_data
|
[
"region:us"
] |
2023-05-17T04:17:52+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 81554466.0, "num_examples": 4000}, {"name": "validation", "num_bytes": 15626847.0, "num_examples": 1000}], "download_size": 96272795, "dataset_size": 97181313.0}}
|
2023-05-17T04:17:57+00:00
|
d3dc2cec49f398634842d673608ef529e73a56c2
|
# Dataset Card for "ragas-webgpt"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
explodinggradients/ragas-webgpt
|
[
"region:us"
] |
2023-05-17T04:47:50+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "ground_truth", "dtype": "string"}, {"name": "generated_text", "dtype": "string"}, {"name": "rephrased_ground_truth", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1047926, "num_examples": 500}], "download_size": 661466, "dataset_size": 1047926}}
|
2023-05-18T09:38:13+00:00
|
dbc1b0896e8ba75e9a6f718007c457a9bb0d2983
|
flan_t5_qna
|
dongwoojung/flan_t5_qna
|
[
"region:us"
] |
2023-05-17T04:58:15+00:00
|
{}
|
2023-05-19T07:58:51+00:00
|
fa5658d9fc3d7445401af883f9b3b98954107efe
|
# Dataset Card for "alpaca-gpt4-cleaned"
This dataset contains Ukrainian instruction-following data translated by facebook/nllb-200-3.3B.
The dataset was originally shared in this repository: https://github.com/tloen/alpaca-lora
## Licensing Information
The dataset is available under the [Creative Commons NonCommercial (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode).
|
nikes64/ualpaca-gpt4
|
[
"task_categories:text-generation",
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:uk",
"license:cc-by-4.0",
"gpt4",
"alpaca",
"instruction-finetuning",
"region:us"
] |
2023-05-17T05:16:34+00:00
|
{"language": ["uk"], "license": "cc-by-4.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-generation", "question-answering"], "dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "text", "dtype": "string"}]}, "tags": ["gpt4", "alpaca", "instruction-finetuning"]}
|
2023-05-17T05:56:01+00:00
|
eba3f1d667e520f2b267e0d436133addb8c1fea5
|
insly/language
|
[
"language:et",
"language:en",
"license:unknown",
"region:us"
] |
2023-05-17T05:19:34+00:00
|
{"language": ["et", "en"], "license": "unknown", "pretty_name": "insly"}
|
2023-06-05T16:55:36+00:00
|
|
59edb92ae4be661d97353a4c7d222072d59ae7f0
|
# Functional Test Cases
This is a _very_ small list of functional test cases that a team of software testers (QA) created for an example mobile app called Boop.
## Dataset
* Name: `Boop Test Cases.csv`
* Number of Rows: `136`
* Columns: `11`
* `Test ID` (int)
* `Summary` (string)
* `Idea` (string)
* `Preconditions` (string)
* `Steps to reproduce` (string)
* `Expected Result` (string)
* `Actual Result` (string)
* `Pass/Fail` (string)
* `Bug #` (string)
* `Author` (string)
* `Area` (string)
> 💡 There are missing values. For example, not every test case had a related Bug.
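For a quick look at the data, here is a minimal pandas sketch; the file name comes from the list above, and the printed checks are just illustrative choices:

```python
# Minimal sketch: load the test-case CSV and inspect missing values.
import pandas as pd

df = pd.read_csv("Boop Test Cases.csv")

print(df.shape)                   # expect (136, 11)
print(df["Area"].value_counts())  # distribution of tests across areas
print(df.isna().sum())            # e.g. "Bug #" is empty for passing tests
```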
## Use Cases
Two common problems in Software Testing are:
* Duplicate test cases (and bug reports)
* Assigning issues to the correct team quickly (from internal sources, Customer or Tech Support, etc)
This dataset is probably too small to create an "Auto-Assigner" tool -- especially because almost half the tests are focused on the `Account` area.
However, with embeddings, we could see if a new Test Case already exists by checking similarity 🤔
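As a rough sketch of that idea -- `sentence-transformers` is just one option, and both the model choice and the similarity threshold below are assumptions to tune, not recommendations:

```python
# Minimal sketch: flag a new test case that is semantically close to an existing one.
import pandas as pd
from sentence_transformers import SentenceTransformer, util

df = pd.read_csv("Boop Test Cases.csv")
model = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence-embedding model works

# Embed the existing test-case summaries once.
existing = df["Summary"].fillna("").tolist()
existing_emb = model.encode(existing, convert_to_tensor=True)

# A hypothetical incoming test case.
new_case = "User can reset their password from the login screen"
new_emb = model.encode(new_case, convert_to_tensor=True)

# Cosine similarity against every existing summary; high scores suggest a duplicate.
scores = util.cos_sim(new_emb, existing_emb)[0]
best = scores.argmax().item()
if scores[best] > 0.85:  # threshold is a guess; tune it on real data
    print(f"Possible duplicate of Test ID {df.iloc[best]['Test ID']}: {existing[best]}")
```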
|
CarlosKidman/test-cases
|
[
"size_categories:n<1K",
"language:en",
"license:mit",
"testing",
"region:us"
] |
2023-05-17T05:27:31+00:00
|
{"language": ["en"], "license": "mit", "size_categories": ["n<1K"], "tags": ["testing"]}
|
2023-05-17T19:20:41+00:00
|