sha | text | id | tags | created_at | metadata | last_modified |
---|---|---|---|---|---|---|
eb342671ce8e5abdbb856c2c06520ff7c3af50fa |
# Dataset Card for "lmqg/qag_jaquad"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is a question & answer generation dataset based on JaQuAD.
### Supported Tasks and Leaderboards
* `question-answer-generation`: This dataset is intended for training question & answer generation models.
Success on this task is typically measured by BLEU4, METEOR, ROUGE-L, BERTScore, and MoverScore (see our paper for details).
### Languages
Japanese (ja)
## Dataset Structure
An example of 'train' looks as follows.
```
{
"paragraph": "\"Nerdilinga\"は898年にカロリング朝の王領として初めて文献に記録されている。レーゲンスブルク司教の統治下でネルトリンゲンは市場町に成長していった。1215年にネルトリンゲンは皇帝フリードリヒ2世から都市権を与えられ、帝国自由都市となった。この年に最初の市壁が築かれた。その縄張りは現在も街の地図に見て取れる。1219年、ネルトリンゲンの聖霊降臨祭についての最も古い文献上の記録が遺されている。重要な交易路が交差するこの都市は穀物、家畜、織物、毛皮、金属製品の主要な集散地に発展していった。ネルトリンゲンはフランクフルトと並ぶドイツで最も重要な遠距離交易都市の一つとなったのである。",
"questions": [ "1215年にネルトリンゲンは誰から都市権を与えられ、帝国自由都市となったか。", "\"Nerdilinga\"の最初の記録は何年のものですか。" ],
"answers": [ "皇帝フリードリヒ2世", "898年" ],
"questions_answers": "question: 1215年にネルトリンゲンは誰から都市権を与えられ、帝国自由都市となったか。, answer: 皇帝フリードリヒ2世 | question: \"Nerdilinga\"の最初の記録は何年のものですか。, answer: 898年"
}
```
The data fields are the same among all splits.
- `questions`: a `list` of `string` features.
- `answers`: a `list` of `string` features.
- `paragraph`: a `string` feature.
- `questions_answers`: a `string` feature.
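The flattened `questions_answers` string can be split back into question–answer pairs. A minimal sketch of a parser for the format shown above (the helper name `parse_questions_answers` is ours, not part of the dataset):

```python
def parse_questions_answers(flat):
    """Split a 'question: ..., answer: ... | question: ...' string into (question, answer) pairs."""
    pairs = []
    for chunk in flat.split(" | "):
        # Each chunk is 'question: <q>, answer: <a>'.
        q_part, a_part = chunk.split(", answer: ", 1)
        pairs.append((q_part.removeprefix("question: "), a_part))
    return pairs

pairs = parse_questions_answers(
    "question: 1215年にネルトリンゲンは誰から都市権を与えられ、帝国自由都市となったか。, answer: 皇帝フリードリヒ2世"
    " | question: \"Nerdilinga\"の最初の記録は何年のものですか。, answer: 898年"
)
# pairs[0] is the first (question, answer) tuple
```

This assumes neither questions nor answers contain the literal separators `" | "` or `", answer: "`, which holds for typical samples.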
## Data Splits
|train|validation|test |
|----:|---------:|----:|
|9508| 1431 | 3050|
## Citation Information
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
``` | lmqg/qag_jaquad | [
"task_categories:text-generation",
"task_ids:language-modeling",
"multilinguality:monolingual",
"size_categories:1k<n<10K",
"source_datasets:lmqg/qg_jaquad",
"language:ja",
"license:cc-by-sa-4.0",
"question-generation",
"arxiv:2210.03992",
"region:us"
]
| 2022-12-18T07:05:33+00:00 | {"language": "ja", "license": "cc-by-sa-4.0", "multilinguality": "monolingual", "size_categories": "1k<n<10K", "source_datasets": "lmqg/qg_jaquad", "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "SQuAD for question generation", "tags": ["question-generation"]} | 2022-12-18T07:54:08+00:00 |
694439c98de5edcc2bc45eac0dda37b79ed2328e |
# Dataset Card for "lmqg/qag_ruquad"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is a question & answer generation dataset based on RUQuAD.
### Supported Tasks and Leaderboards
* `question-answer-generation`: This dataset is intended for training question & answer generation models.
Success on this task is typically measured by BLEU4, METEOR, ROUGE-L, BERTScore, and MoverScore (see our paper for details).
### Languages
Russian (ru)
## Dataset Structure
An example of 'train' looks as follows.
```
{
"paragraph": " Everybody , как и хотела Мадонна, выпускают синглом. При нулевом бюджете на раскрутку фото певицы решают не помещать на обложке, чтобы не отпугнуть цветную аудиторию якобы негритянской диско-соул-певицы . Everybody поднимается на 3-е место в чарте Hot Dance Club Songs, а потом на 107 место в основном, немного не дотянув до первой сотни Hot 100 журнала Billboard[91]. Менеджмент считает это отличным результатом, учитывая нулевые затраты на пиар, и хочет убедиться, что взлёт Everybody не случаен. По просьбе Мадонны вместо Каминса берут более опытного штатного аранжировщика Warner Bros. Records Регги Лукаса (англ.)русск.. Второй сингл Burning Up тоже достигает в чарте танцевальных хитов 3-го места, повторив успех Everybody . И только после этого Мадонне позволяют арендовать студию для записи первого альбома[91].",
"questions": [ "При каком бюджете на раскрутку фото певицы решают не помещать на обложке ?", "Какой альбом Мадонны выпускают синглом?", "Имя более опытного штатного аранжировщика берут по просьбе Мадонны вместо Каминсаболее ?", "Почему при нулевом бджете фото певицы решают не помещать на обложке ?", "На каое место Everybody поднимается в чарте Hot Dance Club Songs?" ],
"answers": [ "При нулевом", " Everybody ", "Warner Bros", "чтобы не отпугнуть цветную аудиторию якобы негритянской диско-соул-певицы ", "на 3-е место" ],
"questions_answers": "question: При каком бюджете на раскрутку фото певицы решают не помещать на обложке ?, answer: При нулевом | question: Какой альбом Мадонны выпускают синглом?, answer: Everybody | question: Имя более опытного штатного аранжировщика берут по просьбе Мадонны вместо Каминсаболее ?, answer: Warner Bros | question: Почему при нулевом бджете фото певицы решают не помещать на обложке ?, answer: чтобы не отпугнуть цветную аудиторию якобы негритянской диско-соул-певицы | question: На каое место Everybody поднимается в чарте Hot Dance Club Songs?, answer: на 3-е место"
}
```
The data fields are the same among all splits.
- `questions`: a `list` of `string` features.
- `answers`: a `list` of `string` features.
- `paragraph`: a `string` feature.
- `questions_answers`: a `string` feature.
## Data Splits
|train|validation|test |
|----:|---------:|----:|
|10407| 4079 | 4017|
## Citation Information
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
``` | lmqg/qag_ruquad | [
"task_categories:text-generation",
"task_ids:language-modeling",
"multilinguality:monolingual",
"size_categories:1k<n<10K",
"source_datasets:lmqg/qg_ruquad",
"language:ru",
"license:cc-by-sa-4.0",
"question-generation",
"arxiv:2210.03992",
"region:us"
]
| 2022-12-18T07:05:48+00:00 | {"language": "ru", "license": "cc-by-sa-4.0", "multilinguality": "monolingual", "size_categories": "1k<n<10K", "source_datasets": "lmqg/qg_ruquad", "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "SQuAD for question generation", "tags": ["question-generation"]} | 2022-12-18T07:59:33+00:00 |
7d46cefe86891484d70e0bf81d81a4a0d945d350 |
# Dataset Card for "lmqg/qag_esquad"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is a question & answer generation dataset based on ESQuAD.
### Supported Tasks and Leaderboards
* `question-answer-generation`: This dataset is intended for training question & answer generation models.
Success on this task is typically measured by BLEU4, METEOR, ROUGE-L, BERTScore, and MoverScore (see our paper for details).
### Languages
Spanish (es)
## Dataset Structure
An example of 'train' looks as follows.
```
{
"paragraph": "\"4 Minutes\" fue lanzado como el primer sencillo del álbum y alcanzó el número tres en el Billboard Hot 100. Fue el 37º hit top-ten de Madonna en la lista, empujando a Madonna más allá de Elvis Presley como el artista con más éxitos entre los diez primeros. En el Reino Unido mantuvo su récord de más sencillos número uno para una artista femenina; \"4 Minutes\" se convierte en su decimotercera. En el 23 Japan Gold Disc Awards, Madonna recibió su quinto trofeo de Artista del Año de la Recording Industry Association of Japan, la mayor cantidad para cualquier artista. Para promover aún más el álbum, Madonna se embarcó en el Sticky & Sweet Tour; Su primera gran empresa con Live Nation. Con una recaudación de $280 millones, se convirtió en la gira más taquillera de un artista en solitario entonces, superando el récord anterior que Madonna estableció con la gira Confessions Tour; Más tarde fue superado por The Wall Live de Roger Waters. Se amplió al año siguiente, añadiendo nuevas fechas europeas, y después de que terminó, la recaudación total fue de $408 millones.",
"questions": [ "¿Cuál es el nombre de la primera gira con Live Nation?", "4 minutos se convirtió en la canción número uno de Madonna en el Reino Unido.", "¿Cuál sencillo fue lanzado como el primer sencillo del álbum?", "¿Cuánto recaudaron Stick y Sweet Tour?", "Madonna superó a qué artista con más éxitos entre los diez primeros." ],
"answers": [ "Sticky & Sweet Tour", "decimotercera", "\"4 Minute", "$280 millones,", "Elvis Presley" ],
"questions_answers": "question: ¿Cuál es el nombre de la primera gira con Live Nation?, answer: Sticky & Sweet Tour | question: 4 minutos se convirtió en la canción número uno de Madonna en el Reino Unido., answer: decimotercera | question: ¿Cuál sencillo fue lanzado como el primer sencillo del álbum?, answer: \"4 Minute | question: ¿Cuánto recaudaron Stick y Sweet Tour?, answer: $280 millones, | question: Madonna superó a qué artista con más éxitos entre los diez primeros., answer: Elvis Presley"
}
```
The data fields are the same among all splits.
- `questions`: a `list` of `string` features.
- `answers`: a `list` of `string` features.
- `paragraph`: a `string` feature.
- `questions_answers`: a `string` feature.
## Data Splits
|train|validation|test |
|----:|---------:|----:|
|18829| 2067 | 8234|
## Citation Information
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
``` | lmqg/qag_esquad | [
"task_categories:text-generation",
"task_ids:language-modeling",
"multilinguality:monolingual",
"size_categories:1k<n<10K",
"source_datasets:lmqg/qg_esquad",
"language:es",
"license:cc-by-sa-4.0",
"question-generation",
"arxiv:2210.03992",
"region:us"
]
| 2022-12-18T07:06:04+00:00 | {"language": "es", "license": "cc-by-sa-4.0", "multilinguality": "monolingual", "size_categories": "1k<n<10K", "source_datasets": "lmqg/qg_esquad", "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "SQuAD for question generation", "tags": ["question-generation"]} | 2022-12-18T08:01:13+00:00 |
91aac1464f954786e828538233088fde1219768a | lo1206/Stable-Diffusion | [
"license:openrail",
"region:us"
]
| 2022-12-18T08:02:02+00:00 | {"license": "openrail"} | 2022-12-18T08:03:13+00:00 |
|
0460b6ccd0fcc44d904e71fd44a5d5cb43ab71e7 |
# Dataset Card for "lmqg/qag_itquad"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is a question & answer generation dataset based on ITQuAD.
### Supported Tasks and Leaderboards
* `question-answer-generation`: This dataset is intended for training question & answer generation models.
Success on this task is typically measured by BLEU4, METEOR, ROUGE-L, BERTScore, and MoverScore (see our paper for details).
### Languages
Italian (it)
## Dataset Structure
An example of 'train' looks as follows.
```
{
"paragraph": "\"4 Minuti\" è uscito come primo singolo dell' album e ha raggiunto il terzo posto sulla Billboard Hot 100. E' stato il 37° top-ten di Madonna che ha spinto Madonna oltre Elvis Presley come l' artista con i più top-ten hit. Nel Regno Unito ha mantenuto il suo record per il più numero uno single per una artista femminile; \"4 Minuti\" diventando il suo tredicesimo. Al 23° Japan Gold Disc Awards, Madonna ha ricevuto il suo quinto trofeo Artista dell' anno dalla Recording Industry Association of Japan, la più importante per qualsiasi artista. Per promuovere ulteriormente l' album, Madonna ha intrapreso il Sticky & Sweet Tour, la sua prima grande avventura con Live Nation. Con un lordo di 280 milioni di dollari, è diventato il tour più incassato di un artista solista, superando il precedente record di Madonna stabilito con il Confessions Tour; è stato poi superato da The Wall Live di Roger Waters. E' stato esteso al prossimo anno, aggiungendo nuove date europee, e dopo la fine, il totale lordo totale era di 408 milioni di dollari.",
"questions": [ "Qual è il nome del primo tour con Live Nation?", "4 minuti è diventato Madonna's che numero uno nel Regno Unito?", "Quanto ha incassato Stick e Sweet Tour?", "Madonna ha superato l' artista con i più alti dieci colpi?" ],
"answers": [ "Sticky & Sweet Tour", "tredicesimo", "280 milioni di dollari,", "Elvis Presley" ],
"questions_answers": "question: Qual è il nome del primo tour con Live Nation?, answer: Sticky & Sweet Tour | question: 4 minuti è diventato Madonna's che numero uno nel Regno Unito?, answer: tredicesimo | question: Quanto ha incassato Stick e Sweet Tour?, answer: 280 milioni di dollari, | question: Madonna ha superato l' artista con i più alti dieci colpi?, answer: Elvis Presley"
}
```
The data fields are the same among all splits.
- `questions`: a `list` of `string` features.
- `answers`: a `list` of `string` features.
- `paragraph`: a `string` feature.
- `questions_answers`: a `string` feature.
## Data Splits
|train|validation|test |
|----:|---------:|----:|
|16918 | 6280 | 1988|
## Citation Information
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
``` | lmqg/qag_itquad | [
"task_categories:text-generation",
"task_ids:language-modeling",
"multilinguality:monolingual",
"size_categories:1k<n<10K",
"source_datasets:lmqg/qg_itquad",
"language:it",
"license:cc-by-sa-4.0",
"question-generation",
"arxiv:2210.03992",
"region:us"
]
| 2022-12-18T08:05:18+00:00 | {"language": "it", "license": "cc-by-sa-4.0", "multilinguality": "monolingual", "size_categories": "1k<n<10K", "source_datasets": "lmqg/qg_itquad", "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "SQuAD for question generation", "tags": ["question-generation"]} | 2022-12-18T08:21:31+00:00 |
937c917e20ec926ab9a0e53bafbbf319c12ecb40 | iwaaaaa/coquito | [
"license:artistic-2.0",
"region:us"
]
| 2022-12-18T08:53:55+00:00 | {"license": "artistic-2.0"} | 2022-12-18T08:54:35+00:00 |
|
51fd75a81f7b31a9fed289ec82c6352980854d50 | # Dataset Card for "twentyquestions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | maximedb/twentyquestions | [
"region:us"
]
| 2022-12-18T09:01:42+00:00 | {"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "answer", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 1658790, "num_examples": 46566}, {"name": "validation", "num_bytes": 548147, "num_examples": 15403}, {"name": "test", "num_bytes": 603112, "num_examples": 16921}], "download_size": 1723095, "dataset_size": 2810049}} | 2022-12-18T09:02:18+00:00 |
09d86f82f054236847bb2e15cf254195e6845ffc | *This dataset was created by its owner, Shayo.*
Use this dataset for all bots. | Shaxo/Shayo_Application | [
"license:apache-2.0",
"region:us"
]
| 2022-12-18T09:47:00+00:00 | {"license": "apache-2.0"} | 2022-12-18T12:12:29+00:00 |
019e9040608fffc3168983d87dffcbd831db427f | This is an attempt! | Gprime1977/AnimeGirlz | [
"region:us"
]
| 2022-12-18T11:52:23+00:00 | {} | 2022-12-18T12:43:17+00:00 |
d5f268dfc993cd891cc89c21953a5316bd372c6e | Corky/BalkiaAi | [
"license:other",
"region:us"
]
| 2022-12-18T11:58:14+00:00 | {"license": "other"} | 2022-12-18T11:58:14+00:00 |
|
26a81ce6c3b7308296dd17406f76a463730ed9af | # Dataset Card for "salvadoran-news-ner"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | justinian336/salvadoran-news-ner | [
"region:us"
]
| 2022-12-18T12:11:41+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 73983057.36422747, "num_examples": 56025}], "download_size": 43634286, "dataset_size": 73983057.36422747}} | 2022-12-18T12:11:50+00:00 |
ae67baea224991acef61f34fd7a9a958d0e90cc7 | # Dataset Card for "vintage-blip-captions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Norod78/vintage-blip-captions | [
"region:us"
]
| 2022-12-18T14:52:31+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 593662581.43, "num_examples": 1370}], "download_size": 593666132, "dataset_size": 593662581.43}} | 2022-12-20T16:41:56+00:00 |
1041b4472ece1b0bda924f90ffeea15ab690fc30 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: srcocotero/bert-qa-en
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@abrar06](https://huggingface.co/abrar06) for evaluating this model. | autoevaluate/autoeval-eval-squad-plain_text-0a0542-2518877374 | [
"autotrain",
"evaluation",
"region:us"
]
| 2022-12-18T17:35:11+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "srcocotero/bert-qa-en", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-12-18T17:38:00+00:00 |
f30e1bd2f0a51535b493de36d3686cfcfff47abe | # Dataset Card for WebGPT Comparisons
## Dataset Description
In the [WebGPT paper](https://arxiv.org/abs/2112.09332), the authors trained a reward model from human feedback.
They used the reward model to train a long form question answering model to align with human preferences.
This is the dataset of all comparisons that were marked as suitable for reward modeling by the end of the WebGPT project.
There are 19,578 comparisons in total.
Each example in the dataset contains a pair of model answers for a question, and the associated metadata.
Each answer has a preference score from humans that can be used to determine which of the two answers is better.
Overall, an example has the following fields:
* `question`: The text of the question, together with the name of the dataset from which it was taken and a unique ID.
* `quotes_0`: The extracts that the model found while browsing for `answer_0`, together with the title of the page on which the extract was found, constructed from the HTML title and domain name of the page.
* `answer_0`: The final answer that the model composed using `quotes_0`.
* `tokens_0`: The prefix that would have been given to the model in the final step of the episode to create `answer_0`, and the completion given by the model or human. The prefix is made up of the question and the quotes, with some truncation, and the completion is simply the answer. Both are tokenized using the GPT-2 tokenizer. The concatenation of the prefix and completion is the input used for reward modeling.
* `score_0`: The strength of the preference for `answer_0` over `answer_1` as a number from −1 to 1. It sums to 0 with `score_1`, and an answer is preferred if and only if its score is positive. For reward modeling, we treat scores of 0 as soft 50% labels, and all other scores as hard labels (using only their sign).
* `quotes_1`: The counterpart to `quotes_0`.
* `answer_1`: The counterpart to `answer_0`.
* `tokens_1`: The counterpart to `tokens_0`.
* `score_1`: The counterpart to `score_0`.
This information was found in Appendix K of the WebGPT paper.
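The sign convention for `score_0`/`score_1` can be applied directly when preparing reward-modeling labels. A small sketch (the helper `preferred_answer` is ours, not part of the WebGPT codebase):

```python
def preferred_answer(score_0, score_1):
    """Return which answer is preferred, or None for a tie.

    Scores sum to 0, and an answer is preferred iff its score is positive.
    """
    assert abs(score_0 + score_1) < 1e-9, "scores should sum to 0"
    if score_0 > 0:
        return "answer_0"
    if score_1 > 0:
        return "answer_1"
    return None  # score 0: treated as a soft 50% label for reward modeling
```

For example, `preferred_answer(0.5, -0.5)` selects `answer_0`, while a `(0, 0)` pair is the soft-label tie case described above.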
## Citation Information
[https://arxiv.org/abs/2112.09332](https://arxiv.org/abs/2112.09332)
```
@inproceedings{nakano2021webgpt,
author = {Reiichiro Nakano and Jacob Hilton and Suchir Balaji and Jeff Wu and Long Ouyang and Christina Kim and Christopher Hesse and Shantanu Jain and Vineet Kosaraju and William Saunders and Xu Jiang and Karl Cobbe and Tyna Eloundou and Gretchen Krueger and Kevin Button and Matthew Knight and Benjamin Chess and John Schulman},
title = {WebGPT: Browser-assisted question-answering with human feedback},
booktitle = {arXiv},
year = 2021,
}
```
Dataset added to the Hugging Face Hub by [@Tristan](https://huggingface.co/Tristan) and [@natolambert](https://huggingface.co/natolambert) | openai/webgpt_comparisons | [
"arxiv:2112.09332",
"region:us"
]
| 2022-12-18T19:56:41+00:00 | {"pretty_name": "WebGPT Comparisons"} | 2022-12-19T17:55:29+00:00 |
dd2ede6faaea338ef6b1e2966f06808656975a23 |
# Dataset Card for CSFD movie reviews (Czech)
## Dataset Description
The dataset contains user reviews from the Czech/Slovak movie database website <https://csfd.cz>.
Each review contains text, rating, date, and basic information about the movie (or TV series).
The dataset has in total (train+validation+test) 30,000 reviews. The data is balanced - each rating has approximately the same frequency.
## Dataset Features
Each sample contains:
- `review_id`: unique string identifier of the review.
- `rating_str`: string representation of the rating (from "0/5" to "5/5")
- `rating_int`: integer representation of the rating (from 0 to 5)
- `date`: date of publishing the review (just date, no time nor timezone)
- `comment_language`: language of the review (always "cs")
- `comment`: the string of the review
- `item_title`: title of the reviewed item
- `item_year`: publishing year of the item (string, can also be a range)
- `item_kind`: kind of the item - either "film" or "seriál"
- `item_genres`: list of genres of the item
- `item_directors`: list of director names of the item
- `item_screenwriters`: list of screenwriter names of the item
- `item_cast`: list of actors and actresses in the item
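Note that `rating_str` and `rating_int` encode the same value, so one can be derived from the other. A minimal sketch (the helper name is ours, not part of the dataset):

```python
def rating_from_str(rating_str):
    """Parse a '3/5'-style rating string into its integer rating (0-5)."""
    value, scale = rating_str.split("/")
    assert scale == "5", "ratings in this dataset are on a 0-5 scale"
    return int(value)
```

This can serve as a consistency check when iterating over samples, e.g. `rating_from_str(sample["rating_str"]) == sample["rating_int"]`.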
## Dataset Source
The data was mined and sampled from the <https://csfd.cz> website.
Make sure to comply with the website operator's terms and conditions when using the data.
| fewshot-goes-multilingual/cs_csfd-movie-reviews | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:cs",
"license:cc-by-sa-4.0",
"movie reviews",
"rating prediction",
"region:us"
]
| 2022-12-18T20:05:15+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["cs"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification"], "pretty_name": "CSFD movie reviews (Czech)", "tags": ["movie reviews", "rating prediction"]} | 2022-12-18T21:30:56+00:00 |
23a20c659d868740ef9c54854de631fe19cd5c17 |
# Dataset Card for CSFD movie reviews (Slovak)
## Dataset Description
The dataset contains user reviews from the Czech/Slovak movie database website <https://csfd.cz>.
Each review contains text, rating, date, and basic information about the movie (or TV series).
The dataset has in total (train+validation+test) 30,000 reviews. The data is balanced - each rating has approximately the same frequency.
## Dataset Features
Each sample contains:
- `review_id`: unique string identifier of the review.
- `rating_str`: string representation of the rating (from "0/5" to "5/5")
- `rating_int`: integer representation of the rating (from 0 to 5)
- `date`: date of publishing the review (just date, no time nor timezone)
- `comment_language`: language of the review (always "sk")
- `comment`: the string of the review
- `item_title`: title of the reviewed item
- `item_year`: publishing year of the item (string, can also be a range)
- `item_kind`: kind of the item - either "film" or "seriál"
- `item_genres`: list of genres of the item
- `item_directors`: list of director names of the item
- `item_screenwriters`: list of screenwriter names of the item
- `item_cast`: list of actors and actresses in the item
## Dataset Source
The data was mined and sampled from the <https://csfd.cz> website.
Make sure to comply with the website operator's terms and conditions when using the data.
| fewshot-goes-multilingual/sk_csfd-movie-reviews | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:sk",
"license:cc-by-sa-4.0",
"movie reviews",
"rating prediction",
"region:us"
]
| 2022-12-18T21:28:17+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["sk"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification"], "pretty_name": "CSFD movie reviews (Slovak)", "tags": ["movie reviews", "rating prediction"]} | 2022-12-18T21:30:31+00:00 |
c963bed0544cf090214329c4b2e9fc0c84168115 | # Dataset Card for "twitter"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | hayleyg/twitter | [
"region:us"
]
| 2022-12-18T21:33:52+00:00 | {"dataset_info": {"features": [{"name": "label", "dtype": "int64"}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 332000, "num_examples": 2000}, {"name": "test", "num_bytes": 83000, "num_examples": 500}], "download_size": 128444, "dataset_size": 415000}} | 2022-12-18T21:34:22+00:00 |
e490d6a7e54b88291411f00565533043ea3150ca | # Dataset Card for "mbti-cleaned"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | echodpp/mbti-cleaned | [
"region:us"
]
| 2022-12-18T22:31:08+00:00 | {"dataset_info": {"features": [{"name": "label", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 51651122, "num_examples": 327828}, {"name": "test", "num_bytes": 12922409, "num_examples": 81957}], "download_size": 42684526, "dataset_size": 64573531}} | 2022-12-18T22:31:20+00:00 |
7e1fccd916c1a6cff958a94f1f8c3c49226210f2 | # Dataset Card for "yelp"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | hayleyg/yelp | [
"region:us"
]
| 2022-12-18T22:33:39+00:00 | {"dataset_info": {"features": [{"name": "label", "dtype": "int64"}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 332000, "num_examples": 2000}, {"name": "test", "num_bytes": 83000, "num_examples": 500}], "download_size": 174280, "dataset_size": 415000}} | 2022-12-18T22:34:02+00:00 |
59c8ab191f9f7140d0e7dad526d4bf9c3ce55e76 | # Dataset Card for "learning-segformer-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | nickponline/learning-segformer-dataset | [
"region:us"
]
| 2022-12-19T00:19:48+00:00 | {"dataset_info": {"features": [{"name": "pixel_values", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 402128.0, "num_examples": 100}], "download_size": 326407, "dataset_size": 402128.0}} | 2022-12-19T00:20:04+00:00 |
dc2ec07406a7c223385f323044a5dc2ad6d0c6b1 | # Quick, Draw! Dataset (per-row bin format)
This is the full 50M-row dataset from the [Quick, Draw! dataset](https://github.com/googlecreativelab/quickdraw-dataset). Each row contains a byte-encoded packed representation of one drawing and its metadata, which you can unpack using the following snippet:
```
from struct import unpack

def unpack_drawing(file_handle):
    # Fixed-size header: 8-byte key id, 2-byte country code,
    # 1-byte recognized flag, 4-byte timestamp, 2-byte stroke count.
    key_id, = unpack('Q', file_handle.read(8))
    country_code, = unpack('2s', file_handle.read(2))
    recognized, = unpack('b', file_handle.read(1))
    timestamp, = unpack('I', file_handle.read(4))
    n_strokes, = unpack('H', file_handle.read(2))
    image = []
    for i in range(n_strokes):
        # Each stroke: 2-byte point count, then the x coords, then the y coords (1 byte each).
        n_points, = unpack('H', file_handle.read(2))
        fmt = str(n_points) + 'B'
        x = unpack(fmt, file_handle.read(n_points))
        y = unpack(fmt, file_handle.read(n_points))
        image.append((x, y))
    result = {
        'key_id': key_id,
        'country_code': country_code,
        'recognized': recognized,
        'timestamp': timestamp,
        'image': image,
    }
    return result
```
The `image` field above is still in line-vector format. To render it as a raster image (I recommend doing this on-the-fly in a pre-processor):
```
import io

import cv2
import numpy as np
from PIL import Image

# packed bin -> RGB PIL image
def binToPIL(packed_drawing):
    padding = 8
    radius = 7
    scale = (224.0 - (2 * padding)) / 256  # map 0-255 coords into a padded 224x224 canvas
    unpacked = unpack_drawing(io.BytesIO(packed_drawing))
    image = np.full((224, 224), 255, np.uint8)  # white background
    for stroke in unpacked['image']:
        prevX = round(stroke[0][0] * scale)
        prevY = round(stroke[1][0] * scale)
        for i in range(1, len(stroke[0])):
            x = round(stroke[0][i] * scale)
            y = round(stroke[1][i] * scale)
            cv2.line(image, (padding + prevX, padding + prevY), (padding + x, padding + y), 0, radius, -1)
            prevX = x
            prevY = y
    pilImage = Image.fromarray(image).convert("RGB")
    return pilImage
``` | kmewhort/quickdraw-bins-50M | [
"region:us"
]
| 2022-12-19T03:43:02+00:00 | {"dataset_info": {"features": [{"name": "label", "dtype": {"class_label": {"names": {"0": "The Eiffel Tower", "1": "The Great Wall of China", "2": "The Mona Lisa", "3": "aircraft carrier", "4": "airplane", "5": "alarm clock", "6": "ambulance", "7": "angel", "8": "animal migration", "9": "ant", "10": "anvil", "11": "apple", "12": "arm", "13": "asparagus", "14": "axe", "15": "backpack", "16": "banana", "17": "bandage", "18": "barn", "19": "baseball", "20": "baseball bat", "21": "basket", "22": "basketball", "23": "bat", "24": "bathtub", "25": "beach", "26": "bear", "27": "beard", "28": "bed", "29": "bee", "30": "belt", "31": "bench", "32": "bicycle", "33": "binoculars", "34": "bird", "35": "birthday cake", "36": "blackberry", "37": "blueberry", "38": "book", "39": "boomerang", "40": "bottlecap", "41": "bowtie", "42": "bracelet", "43": "brain", "44": "bread", "45": "bridge", "46": "broccoli", "47": "broom", "48": "bucket", "49": "bulldozer", "50": "bus", "51": "bush", "52": "butterfly", "53": "cactus", "54": "cake", "55": "calculator", "56": "calendar", "57": "camel", "58": "camera", "59": "camouflage", "60": "campfire", "61": "candle", "62": "cannon", "63": "canoe", "64": "car", "65": "carrot", "66": "castle", "67": "cat", "68": "ceiling fan", "69": "cell phone", "70": "cello", "71": "chair", "72": "chandelier", "73": "church", "74": "circle", "75": "clarinet", "76": "clock", "77": "cloud", "78": "coffee cup", "79": "compass", "80": "computer", "81": "cookie", "82": "cooler", "83": "couch", "84": "cow", "85": "crab", "86": "crayon", "87": "crocodile", "88": "crown", "89": "cruise ship", "90": "cup", "91": "diamond", "92": "dishwasher", "93": "diving board", "94": "dog", "95": "dolphin", "96": "donut", "97": "door", "98": "dragon", "99": "dresser", "100": "drill", "101": "drums", "102": "duck", "103": "dumbbell", "104": "ear", "105": "elbow", "106": "elephant", "107": "envelope", "108": "eraser", "109": "eye", "110": "eyeglasses", "111": 
"face", "112": "fan", "113": "feather", "114": "fence", "115": "finger", "116": "fire hydrant", "117": "fireplace", "118": "firetruck", "119": "fish", "120": "flamingo", "121": "flashlight", "122": "flip flops", "123": "floor lamp", "124": "flower", "125": "flying saucer", "126": "foot", "127": "fork", "128": "frog", "129": "frying pan", "130": "garden", "131": "garden hose", "132": "giraffe", "133": "goatee", "134": "golf club", "135": "grapes", "136": "grass", "137": "guitar", "138": "hamburger", "139": "hammer", "140": "hand", "141": "harp", "142": "hat", "143": "headphones", "144": "hedgehog", "145": "helicopter", "146": "helmet", "147": "hexagon", "148": "hockey puck", "149": "hockey stick", "150": "horse", "151": "hospital", "152": "hot air balloon", "153": "hot dog", "154": "hot tub", "155": "hourglass", "156": "house", "157": "house plant", "158": "hurricane", "159": "ice cream", "160": "jacket", "161": "jail", "162": "kangaroo", "163": "key", "164": "keyboard", "165": "knee", "166": "knife", "167": "ladder", "168": "lantern", "169": "laptop", "170": "leaf", "171": "leg", "172": "light bulb", "173": "lighter", "174": "lighthouse", "175": "lightning", "176": "line", "177": "lion", "178": "lipstick", "179": "lobster", "180": "lollipop", "181": "mailbox", "182": "map", "183": "marker", "184": "matches", "185": "megaphone", "186": "mermaid", "187": "microphone", "188": "microwave", "189": "monkey", "190": "moon", "191": "mosquito", "192": "motorbike", "193": "mountain", "194": "mouse", "195": "moustache", "196": "mouth", "197": "mug", "198": "mushroom", "199": "nail", "200": "necklace", "201": "nose", "202": "ocean", "203": "octagon", "204": "octopus", "205": "onion", "206": "oven", "207": "owl", "208": "paint can", "209": "paintbrush", "210": "palm tree", "211": "panda", "212": "pants", "213": "paper clip", "214": "parachute", "215": "parrot", "216": "passport", "217": "peanut", "218": "pear", "219": "peas", "220": "pencil", "221": "penguin", "222": "piano", 
"223": "pickup truck", "224": "picture frame", "225": "pig", "226": "pillow", "227": "pineapple", "228": "pizza", "229": "pliers", "230": "police car", "231": "pond", "232": "pool", "233": "popsicle", "234": "postcard", "235": "potato", "236": "power outlet", "237": "purse", "238": "rabbit", "239": "raccoon", "240": "radio", "241": "rain", "242": "rainbow", "243": "rake", "244": "remote control", "245": "rhinoceros", "246": "rifle", "247": "river", "248": "roller coaster", "249": "rollerskates", "250": "sailboat", "251": "sandwich", "252": "saw", "253": "saxophone", "254": "school bus", "255": "scissors", "256": "scorpion", "257": "screwdriver", "258": "sea turtle", "259": "see saw", "260": "shark", "261": "sheep", "262": "shoe", "263": "shorts", "264": "shovel", "265": "sink", "266": "skateboard", "267": "skull", "268": "skyscraper", "269": "sleeping bag", "270": "smiley face", "271": "snail", "272": "snake", "273": "snorkel", "274": "snowflake", "275": "snowman", "276": "soccer ball", "277": "sock", "278": "speedboat", "279": "spider", "280": "spoon", "281": "spreadsheet", "282": "square", "283": "squiggle", "284": "squirrel", "285": "stairs", "286": "star", "287": "steak", "288": "stereo", "289": "stethoscope", "290": "stitches", "291": "stop sign", "292": "stove", "293": "strawberry", "294": "streetlight", "295": "string bean", "296": "submarine", "297": "suitcase", "298": "sun", "299": "swan", "300": "sweater", "301": "swing set", "302": "sword", "303": "syringe", "304": "t-shirt", "305": "table", "306": "teapot", "307": "teddy-bear", "308": "telephone", "309": "television", "310": "tennis racquet", "311": "tent", "312": "tiger", "313": "toaster", "314": "toe", "315": "toilet", "316": "tooth", "317": "toothbrush", "318": "toothpaste", "319": "tornado", "320": "tractor", "321": "traffic light", "322": "train", "323": "tree", "324": "triangle", "325": "trombone", "326": "truck", "327": "trumpet", "328": "umbrella", "329": "underwear", "330": "van", "331": 
"vase", "332": "violin", "333": "washing machine", "334": "watermelon", "335": "waterslide", "336": "whale", "337": "wheel", "338": "windmill", "339": "wine bottle", "340": "wine glass", "341": "wristwatch", "342": "yoga", "343": "zebra", "344": "zigzag"}}}}, {"name": "packed_drawing", "dtype": "binary"}], "splits": [{"name": "train", "num_bytes": 5196066788.157136, "num_examples": 40341012}, {"name": "test", "num_bytes": 1299016825.8428645, "num_examples": 10085254}], "download_size": 6290637578, "dataset_size": 6495083614.0}} | 2022-12-19T18:12:46+00:00 |
9415be17ea63d0b4dc471f8557a0817f3d7cfa9a |
Sentences from Wikipedia were used as facts, while fiction and news texts were used as negatives.
A model trained on this dataset: [Den4ikAI/ruBert_base_fact_detection](https://huggingface.co/Den4ikAI/ruBert_base_fact_detection)
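The files are pipe-separated (`delimiter='|'`), so a minimal loading sketch with pandas could look like the following. The column names `text` and `label` are assumptions for illustration; the actual columns in the dataset may differ.

```python
import io

import pandas as pd

# Hypothetical sample in the pipe-separated layout; real column names may differ.
sample = (
    "text|label\n"
    "Вода кипит при 100 градусах Цельсия.|1\n"
    "Он медленно шёл по тёмной улице.|0\n"
)

df = pd.read_csv(io.StringIO(sample), delimiter="|")
print(df.shape)          # (2, 2)
print(list(df.columns))  # ['text', 'label']
```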
delimiter='|' | Den4ikAI/fact_detection | [
"language:ru",
"license:mit",
"region:us"
]
| 2022-12-19T03:44:27+00:00 | {"language": ["ru"], "license": "mit"} | 2022-12-19T04:02:52+00:00 |
b614668eb410f35b3acda675116bbe6ae6ccff54 | # Dataset Card for "python_vul_cvefix_small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | EddieChen372/python_vul_cvefix_small | [
"region:us"
]
| 2022-12-19T05:08:10+00:00 | {"dataset_info": {"features": [{"name": "label", "dtype": {"class_label": {"names": {"0": "CWE-22", "1": "CWE-79", "2": "CWE-601"}}}}, {"name": "code_before", "dtype": "string"}, {"name": "code_after", "dtype": "string"}, {"name": "label_text", "dtype": "string"}, {"name": "deleted", "struct": [{"name": "code", "sequence": "string"}, {"name": "line_no", "sequence": "int64"}]}, {"name": "added", "struct": [{"name": "code", "sequence": "string"}, {"name": "line_no", "sequence": "int64"}]}, {"name": "normalized_code_before", "dtype": "string"}, {"name": "normalized_code_after", "dtype": "string"}, {"name": "before_doc_string_pos", "sequence": "int64"}, {"name": "after_doc_string_pos", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 10903264.596273292, "num_examples": 160}, {"name": "test", "num_bytes": 3609181.0, "num_examples": 41}], "download_size": 5041260, "dataset_size": 14512445.596273292}} | 2022-12-19T05:08:41+00:00 |
f8bd83e996e4392dbc0d5f643040e3128f0f9607 | stable diffusion
Azuki
| wheart/aiazuki | [
"license:openrail",
"region:us"
]
| 2022-12-19T07:44:46+00:00 | {"license": "openrail"} | 2022-12-19T07:59:47+00:00 |
ccbdbc3ff8ac98aa73be4581bc3d23f5345f3092 | # Dataset Card for "corgi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | lewtun/corgi | [
"region:us"
]
| 2022-12-19T08:44:51+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 5590698.0, "num_examples": 5}], "download_size": 5591635, "dataset_size": 5590698.0}} | 2022-12-19T08:45:20+00:00 |
0a94cf73e560e3376e14aaefba1e7ad16e6c4e94 |
# Dataset Card for The Stack Metadata
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Changelog](#changelog)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Usage Example](#usage-example)
- [Dataset Creation](#dataset-creation)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Additional Information](#additional-information)
- [Terms of Use for The Stack](#terms-of-use-for-the-stack)
## Dataset Description
- **Homepage:** https://www.bigcode-project.org/
- **Repository:** https://github.com/bigcode-project
- **Paper:** https://arxiv.org/abs/2211.15533
- **Leaderboard:** N/A
- **Point of Contact:** [email protected]
### Changelog
|Release|Description|
|-|-|
|v1.1| This is the first release of the metadata. It is for The Stack v1.1|
|v1.2| Metadata dataset matching The Stack v1.2|
### Dataset Summary
This is a set of additional information for the repositories used in The Stack. It contains file paths, detected licenses, and other repository-level information.
### Supported Tasks and Leaderboards
The main task is to recreate repository structure from the files of The Stack. Also, the set can be used for computing statistics and custom filtering or aggregation operations on The Stack.
## Dataset Structure
### Data Fields

The set is split into buckets by repository. There are 944 buckets. In addition to the fields in the image, `ri` contains `min_repo_event_datetime`, which is the earliest date and time of an event for a repo after Jan 1, 2015.

As an example of an aggregation operation on The Stack, the image above conceptually shows the selection of stars (as well as issue and PR counts) for a file. Each unique file can be part of multiple repositories, so The Stack releases unique files and aggregates meta information (e.g. stars) from all repositories a file belongs to. For example, for `max_stars_count` we take the maximum number of stars across all repositories the file is part of.
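That per-file aggregation can be illustrated with a toy table (not the real schema): the same file, identified by `hexsha`, appears in several repositories, and the released value is the per-file maximum.

```python
import pandas as pd

# Toy example: one row per (file, repository) pair with that repo's star count.
df = pd.DataFrame({
    "hexsha": ["aaa", "aaa", "bbb"],
    "stars":  [10, 250, 3],
})

# The aggregated value per unique file is the maximum over its repositories.
max_stars = df.groupby("hexsha")["stars"].max()
print(max_stars["aaa"])  # 250
print(max_stars["bbb"])  # 3
```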
The metadata allows you to reconstruct repository directory structures. To do this, for each repository from the `ri` table, take all of its files from the `fi` table, find them in The Stack by the file's `hexsha`, and save each file's content under its repository path from the `fi` table. For speed, it is preferable to index The Stack by `hexsha` first.
### Usage Example
Restore the folder structure for Python files in the numpy repository
```python
import datasets
from pathlib import Path
from tqdm.auto import tqdm
import pandas as pd
# assuming metadata is cloned into the local folder /data/hf_repos/the-stack-metadata
# the stack is cloned into the local folder /data/hf_repos/the-stack-v1.1
# destination folder is in /repo_workdir/numpy_restored
the_stack_meta_path = Path('/data/hf_repos/the-stack-metadata')
the_stack_path = Path('/data/hf_repos/the-stack-v1.1')
repo_dst_root = Path('/repo_workdir/numpy_restored')
repo_name = 'numpy/numpy'
# Get bucket with numpy repo info
# meta_bucket_path = None
#for fn in tqdm(list((the_stack_meta_path/'data').glob('*/ri.parquet'))):
# df = pd.read_parquet(fn)
# if any(df['name'] == repo_name):
# meta_bucket_path = fn
# break
meta_bucket_path = the_stack_meta_path / 'data/255_944'
# Get repository id from repo name
ri_id = pd.read_parquet(
meta_bucket_path / 'ri.parquet'
).query(
f'`name` == "{repo_name}"'
)['id'].to_list()[0]
# Get file information for the repository
files_info = pd.read_parquet(
meta_bucket_path / 'fi.parquet'
).query(
f'`ri_id` == {ri_id} and `size` != 0 and `is_deleted` == False'
)
# Convert DF with files information to a dictionary by language and then file hexsha
# there can be more than one file with the same hexsha in the repo so we gather
# all instances per unique hexsha
files_info_dict = {
k: v[['hexsha', 'path']].groupby('hexsha').apply(lambda x: list(x['path'])).to_dict()
for k, v in files_info.groupby('lang_ex')
}
# Load Python part of The Stack
ds = datasets.load_dataset(
str(the_stack_path/'data/python'),
num_proc=10, ignore_verifications=True
)
# Save the content of the Python files from the numpy repository to their appropriate locations
def save_file_content(example, files_info_dict, repo_dst_root):
if example['hexsha'] in files_info_dict:
for el in files_info_dict[example['hexsha']]:
path = repo_dst_root / el
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(example['content'])
ds.map(
save_file_content,
fn_kwargs={'files_info_dict': files_info_dict['Python'], 'repo_dst_root': repo_dst_root},
num_proc=10
)
```
## Dataset Creation
Please refer to [the section](https://huggingface.co/datasets/bigcode/the-stack#dataset-creation) in The Stack.
## Considerations for Using the Data
Please refer to [the section](https://huggingface.co/datasets/bigcode/the-stack#considerations-for-using-the-data) in The Stack.
## Additional Information
Please refer to [the section](https://huggingface.co/datasets/bigcode/the-stack#additional-information) in The Stack.
## Terms of Use for The Stack
Please refer to [the section](https://huggingface.co/datasets/bigcode/the-stack#terms-of-use-for-the-stack) in The Stack. | bigcode/the-stack-metadata | [
"task_categories:text-generation",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:unknown",
"language:code",
"license:other",
"arxiv:2211.15533",
"region:us"
]
| 2022-12-19T09:17:28+00:00 | {"annotations_creators": [], "language_creators": ["crowdsourced", "expert-generated"], "language": ["code"], "license": ["other"], "multilinguality": ["multilingual"], "size_categories": ["unknown"], "source_datasets": [], "task_categories": ["text-generation"], "task_ids": [], "pretty_name": "The-Stack-Metadata", "extra_gated_prompt": "## Terms of Use for The Stack\nThe Stack Metadata is a collection of additional information for and is part of The Stack dataset, - a collection of source code in over 300 programming languages. We ask that you read and acknowledge the following points before using the dataset:\n1. The Stack is a collection of source code from repositories with various licenses. Any use of all or part of the code gathered in The Stack must abide by the terms of the original licenses, including attribution clauses when relevant. We facilitate this by providing provenance information for each data point.\n2. The Stack is regularly updated to enact validated data removal requests. By clicking on \"Access repository\", you agree to update your own version of The Stack to the most recent usable version specified by the maintainers in [the following thread](https://huggingface.co/datasets/bigcode/the-stack/discussions/7). If you have questions about dataset versions and allowed uses, please also ask them in the dataset\u2019s [community discussions](https://huggingface.co/datasets/bigcode/the-stack/discussions/new). We will also notify users via email when the latest usable version changes.\n3. 
To host, share, or otherwise provide access to The Stack dataset, you must include [these Terms of Use](https://huggingface.co/datasets/bigcode/the-stack#terms-of-use-for-the-stack) and require users to agree to it.\n\nBy clicking on \"Access repository\" below, you accept that your contact information (email address and username) can be shared with the dataset maintainers as well.\n ", "extra_gated_fields": {"Email": "text", "I have read the License and agree with its terms": "checkbox"}} | 2023-03-16T13:58:24+00:00 |
c0d623053a40e9ca5c2df358f90ce874644611ef | wooltar/btc-price-history | [
"license:pddl",
"region:us"
]
| 2022-12-19T09:39:35+00:00 | {"license": "pddl"} | 2022-12-19T09:39:36+00:00 |
|
039c62346d9dccddac5ab66b9e168a71e3c678b7 | Glac1er/June | [
"license:unknown",
"region:us"
]
| 2022-12-19T12:21:40+00:00 | {"license": "unknown"} | 2022-12-19T13:21:17+00:00 |
|
08f6acc1dad21b10087cc654d5ac9ef19ed34ecc | # Dataset Card for "news_classification"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | phd411r1/news_classification | [
"region:us"
]
| 2022-12-19T12:27:30+00:00 | {"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 850939, "num_examples": 7997}, {"name": "test", "num_bytes": 178204, "num_examples": 1669}], "download_size": 551232, "dataset_size": 1029143}} | 2022-12-19T12:27:51+00:00 |
e773ddeb1edfeb8e4f04afc63e10473fd33a0542 | # Dataset Card for "xrays"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | lewtun/xrays | [
"region:us"
]
| 2022-12-19T13:03:53+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 1483315.0, "num_examples": 15}], "download_size": 1483649, "dataset_size": 1483315.0}} | 2022-12-19T13:04:07+00:00 |
7523293e36fa4f11d4fdb40f7ca1e6d9dbf44ef2 | # Dataset Card for "galaxies"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | lewtun/galaxies | [
"region:us"
]
| 2022-12-19T13:12:44+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 193327.0, "num_examples": 15}], "download_size": 193973, "dataset_size": 193327.0}} | 2022-12-19T13:12:57+00:00 |
56c0215d08fb430e30f8425ffc80ec0ad1682e0f |
# Dataset Card for "Hansel"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Splits](#data-splits)
- [Citation](#citation)
## Dataset Description
- **Homepage:** https://github.com/HITsz-TMG/Hansel
- **Paper:** https://arxiv.org/abs/2207.13005
Hansel is a high-quality human-annotated Chinese entity linking (EL) dataset, focusing on tail entities and emerging entities:
- The test set contains few-shot (FS) and zero-shot (ZS) slices, has 10K examples, and uses Wikidata as the corresponding knowledge base.
- The training and validation sets are from Wikipedia hyperlinks, useful for large-scale pretraining of Chinese EL systems.
Please see our [WSDM 2023](https://www.wsdm-conference.org/2023/) paper [**"Hansel: A Chinese Few-Shot and Zero-Shot Entity Linking Benchmark"**](https://dl.acm.org/doi/10.1145/3539597.3570418) to learn more about our dataset.
For models in the paper and our processed knowledge base, please see our [Github repository](https://github.com/HITsz-TMG/Hansel).
## Dataset Structure
### Data Instances
{"id": "hansel-eval-zs-1463",
"text": "1905电影网讯 已经筹备了十余年的吉尔莫·德尔·托罗的《匹诺曹》,在上个月顺利被网飞公司买下,成为了流媒体巨头旗下的新片。近日,这部备受关注的影片确定了自己的档期:2021年。虽然具体时间未定,但影片却已经实实在在地向前迈出了一步。",
"start": 29,
"end": 32,
"mention": "匹诺曹",
"gold_id": "Q73895818",
"source": "https://www.1905.com/news/20181107/1325389.shtml",
"domain": "news"
}
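Assuming the `start`/`end` offsets are Python-style character indices into `text` (consistent with the instance above), the mention span can be recovered by simple slicing. A minimal sanity-check sketch:

```python
# Example instance (text truncated), following the fields shown above.
example = {
    "text": "1905电影网讯 已经筹备了十余年的吉尔莫·德尔·托罗的《匹诺曹》,在上个月顺利被网飞公司买下,成为了流媒体巨头旗下的新片。",
    "start": 29,
    "end": 32,
    "mention": "匹诺曹",
}

# The mention is recovered by character slicing with the given offsets.
span = example["text"][example["start"]:example["end"]]
print(span)  # 匹诺曹
assert span == example["mention"]
```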
### Data Splits
| | # Mentions | # Entities | Domain |
| ---- | ---- | ---- | ---- |
| Train | 9,879,813 | 541,058 | Wikipedia |
| Validation | 9,674 | 6,320 | Wikipedia |
| Hansel-FS | 5,260 | 2,720 | News, Social Media |
| Hansel-ZS | 4,715 | 4,046 | News, Social Media, E-books, etc.|
## Citation
If you find our dataset useful, please cite us.
```bibtex
@inproceedings{xu2022hansel,
author = {Xu, Zhenran and Shan, Zifei and Li, Yuxin and Hu, Baotian and Qin, Bing},
title = {Hansel: A Chinese Few-Shot and Zero-Shot Entity Linking Benchmark},
year = {2023},
publisher = {Association for Computing Machinery},
url = {https://doi.org/10.1145/3539597.3570418},
booktitle = {Proceedings of the Sixteenth ACM International Conference on Web Search and Data Mining},
pages = {832–840}
}
```
| HIT-TMG/Hansel | [
"task_categories:text-retrieval",
"task_ids:entity-linking-retrieval",
"annotations_creators:crowdsourced",
"annotations_creators:found",
"language_creators:found",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:zh",
"license:cc-by-sa-4.0",
"arxiv:2207.13005",
"region:us"
]
| 2022-12-19T13:28:24+00:00 | {"annotations_creators": ["crowdsourced", "found"], "language_creators": ["found", "crowdsourced"], "language": ["zh"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M", "1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-retrieval"], "task_ids": ["entity-linking-retrieval"], "paperswithcode_id": "hansel", "pretty_name": "Hansel", "tags": [], "dataset_info": [{"config_name": "wiki", "features": [{"name": "id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "start", "dtype": "int64"}, {"name": "end", "dtype": "int64"}, {"name": "mention", "dtype": "string"}, {"name": "gold_id", "dtype": "string"}], "splits": [{"name": "train"}, {"name": "validation"}]}, {"config_name": "hansel-few-shot", "features": [{"name": "id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "start", "dtype": "int64"}, {"name": "end", "dtype": "int64"}, {"name": "mention", "dtype": "string"}, {"name": "gold_id", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "domain", "dtype": "string"}], "splits": [{"name": "test"}]}, {"config_name": "hansel-zero-shot", "features": [{"name": "id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "start", "dtype": "int64"}, {"name": "end", "dtype": "int64"}, {"name": "mention", "dtype": "string"}, {"name": "gold_id", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "domain", "dtype": "string"}], "splits": [{"name": "test"}]}]} | 2023-03-13T11:52:56+00:00 |
15420106f9ebc5a7596d0f5b95f3bfbc5db18219 | ---
pipeline_tag: text-generation
tags:
- code
model-index:
- name: VeriGen
results:
- task:
type: text-generation
dataset:
type:
name:
extra_gated_prompt: >-
## Model License Agreement
Please read the BigCode [OpenRAIL-M
license](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement)
agreement before accepting it.
extra_gated_fields:
I accept the above license agreement, and will use the Model complying with the set of use restrictions and sharing requirements: checkbox
---
# VeriGen
## Table of Contents
1. [Dataset Summary](#dataset-summary)
2. [Use](#use)
3. [Limitations](#limitations)
4. [License](#license)
5. [Citation](#citation)
## Dataset Summary
- The dataset comprises Verilog modules as entries. The entries were retrieved from the GitHub dataset on BigQuery.
- For training [models](https://huggingface.co/shailja/fine-tuned-codegen-2B-Verilog), we filtered out entries exceeding 20,000 characters, as well as duplicates (exact duplicates, ignoring whitespace).
- **Paper:** [ Benchmarking Large Language Models for Automated Verilog RTL Code Generation](https://arxiv.org/abs/2212.11140)
- **Point of Contact:** [contact@shailja](mailto:[email protected])
- **Languages:** Verilog (Hardware Description Language)
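The filtering described above (a length cap plus whitespace-insensitive exact deduplication) can be sketched as follows. This is an illustration of the idea, not the exact preprocessing script used for the dataset.

```python
def filter_modules(modules, max_chars=20000):
    """Drop overlong entries and exact duplicates ignoring whitespace."""
    seen = set()
    kept = []
    for code in modules:
        if len(code) > max_chars:
            continue
        # Normalize away all whitespace so formatting-only copies collide.
        key = "".join(code.split())
        if key in seen:
            continue
        seen.add(key)
        kept.append(code)
    return kept

modules = [
    "module m; endmodule",
    "module  m;\nendmodule",   # duplicate of the first, up to whitespace
    "x" * 30000,               # exceeds the character cap
]
print(len(filter_modules(modules)))  # 1
```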
### Data Splits
The dataset only contains a train split.
### Use
```python
# pip install datasets
from datasets import load_dataset
ds = load_dataset("shailja/Verilog_GitHub", streaming=True, split="train")
print(next(iter(ds)))
#OUTPUT:
```
### Intended Use
The dataset consists of source code from a range of GitHub repositories. As such, it can potentially include non-compilable, low-quality, and vulnerable code.
### Attribution & Other Requirements
The pretraining dataset of the model was not filtered for permissive licenses only. Nevertheless, the model can generate source code verbatim from the dataset. The code's license might require attribution and/or other specific requirements that must be respected.
# License
The dataset is licensed under the BigCode OpenRAIL-M v1 license agreement. You can find the full agreement [here](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement).
# Citation
```
@misc{https://doi.org/10.48550/arxiv.2212.11140,
doi = {10.48550/ARXIV.2212.11140},
url = {https://arxiv.org/abs/2212.11140},
author = {Thakur, Shailja and Ahmad, Baleegh and Fan, Zhenxing and Pearce, Hammond and Tan, Benjamin and Karri, Ramesh and Dolan-Gavitt, Brendan and Garg, Siddharth},
title = {Benchmarking Large Language Models for Automated Verilog RTL Code Generation},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
``` | shailja/Verilog_GitHub | [
"license:mit",
"arxiv:2212.11140",
"region:us"
]
| 2022-12-19T15:19:55+00:00 | {"license": "mit"} | 2023-09-20T16:14:18+00:00 |
1b7a925e3c7cb0ed9d0dd32b08f269f9585040a0 | # Dataset Card for "ps_news_2020_100K-sentences_processed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ihanif/ps_news_2020_100K-sentences_processed | [
"size_categories:10K<n<100K",
"language:ps",
"region:us"
]
| 2022-12-19T15:32:22+00:00 | {"language": ["ps"], "size_categories": ["10K<n<100K"], "pretty_name": "Pashto News 100K Sentences Cleaned", "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 20452491, "num_examples": 100000}], "download_size": 10143557, "dataset_size": 20452491}} | 2023-06-16T19:53:34+00:00 |
f1165fe5fee96f2540196dc7e9ab50053de56e91 | # Dataset Card for "hamburgers"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | lewtun/hamburgers | [
"region:us"
]
| 2022-12-19T15:53:09+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 22977927.0, "num_examples": 10}], "download_size": 22973038, "dataset_size": 22977927.0}} | 2022-12-19T15:53:42+00:00 |
0d7a5476f7420b21dc9f4807fdf766c821ad9cb1 | # Dataset Card for "alps"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | lewtun/alps | [
"region:us"
]
| 2022-12-19T17:30:05+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 27913166.0, "num_examples": 10}], "download_size": 27914963, "dataset_size": 27913166.0}} | 2022-12-19T17:30:44+00:00 |
09688bd198af1dea3646f3df4a7c75907ae8a15f | # Dataset Card for "turkishReviews-ds-mini"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ozz/turkishReviews-ds-mini | [
"region:us"
]
| 2022-12-19T18:31:32+00:00 | {"dataset_info": {"features": [{"name": "review", "dtype": "string"}, {"name": "review_length", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 134598991.2416305, "num_examples": 362520}, {"name": "validation", "num_bytes": 14955814.758369517, "num_examples": 40281}], "download_size": 95987466, "dataset_size": 149554806.0}} | 2022-12-19T18:33:37+00:00 |
5b6497c9dd8af48a66402524b2ceae58044c55fe | unknownX/Eddie_cartoon | [
"license:other",
"region:us"
]
| 2022-12-19T19:22:28+00:00 | {"license": "other"} | 2022-12-19T19:23:28+00:00 |
|
a34376c527c1ff080d71cd5a863ec4e6d696133f | # Dataset Card for "muppet-blip-captions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Norod78/muppet-blip-captions | [
"region:us"
]
| 2022-12-19T19:47:56+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 318180055.0, "num_examples": 976}], "download_size": 316787074, "dataset_size": 318180055.0}} | 2022-12-19T19:48:43+00:00 |
baa822d6986c4e5ea4c62f0b88d0717db3c2390f | Glac1er/Shades | [
"license:unknown",
"region:us"
]
| 2022-12-19T20:22:34+00:00 | {"license": "unknown"} | 2022-12-20T13:37:32+00:00 |
|
8c54c9f78834d91ebb4541fe55388c536735d1b6 |
# Dataset Card for IMDB Kurdish
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [http://ai.stanford.edu/~amaas/data/sentiment/](http://ai.stanford.edu/~amaas/data/sentiment/)
- **Repository:** [https://github.com/Hrazhan/IMDB_Kurdish/](https://github.com/Hrazhan/IMDB_Kurdish/)
- **Point of Contact:** [Razhan Hameed](https://twitter.com/RazhanHameed)
- **Paper:**
- **Leaderboard:**
### Dataset Summary
Central Kurdish translation of the famous IMDB movie reviews dataset.
The dataset contains 50K highly polar movie reviews, divided into two equal classes of positive and negative reviews. We can perform binary sentiment classification using this dataset.
The availability of datasets in Kurdish, such as the IMDB movie reviews dataset, can help researchers and developers train and evaluate machine learning models for Kurdish language processing.
However, it is important to note that machine learning algorithms can only be as accurate as the data they are trained on (in this case the quality of the translation), so the quality and relevance of the dataset will affect the performance of the resulting model.
For more information about the dataset, please go through the following link: http://ai.stanford.edu/~amaas/data/sentiment/
P.S. This dataset was translated with Google Translate.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Central Kurdish
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```
{
"label": 0,
"text": ""فیلمێکی زۆر باش، کە سەرنج دەخاتە سەر پرسێکی زۆر گرنگ. نەخۆشی کحولی کۆرپەلە کەموکوڕییەکی زۆر جددی لە لەدایکبوونە کە بە تەواوی دەتوانرێت ڕێگری لێبکرێت. ئەگەر خێزانە زیاترەکان ئەم فیلمە ببینن، ڕەنگە منداڵی زیاتر وەک ئادەم کۆتاییان نەهاتبێت. جیمی سمیس لە یەکێک لە باشترین ڕۆڵەکانیدا نمایش دەکات تا ئێستا. ئەمە فیلمێکی نایاب و باشە کە خێزانێکی زۆر تایبەت لەبەرچاو دەگرێت و پێویستییەکی زۆر گرنگی هەیە. ئەمەش جیاواز نییە لە هەزاران خێزان کە ئەمڕۆ لە ئەمریکا هەن. منداڵان هەن کە لەگەڵ ئەم جیهانەدا خەبات دەکەن. بەڕاستی خاڵە گرنگەکە لێرەدا ئەوەیە کە دەکرا ڕێگری لە هەموو شتێک بکرێت. خەڵکی زیاتر دەبێ ئەم فیلمە ببینن و ئەوەی کە هەیەتی بە جددی وەریبگرێت. بە باشی ئەنجام دراوە، بە پەیامی گرنگ، بە شێوەیەکی بەڕێزانە مامەڵەی لەگەڵ دەکرێت."
}
```
### Data Fields
plain_text
text: a string feature.
label: a classification label, with possible values including neg (0), pos (1).
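The integer labels decode back to class names as declared above. A trivial sketch of that mapping:

```python
# Label ids as declared in the dataset card: neg -> 0, pos -> 1.
LABEL_NAMES = {0: "neg", 1: "pos"}

example = {"text": "...", "label": 0}
print(LABEL_NAMES[example["label"]])  # neg
```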
### Data Splits
| name |train|test|
|----------|----:|----:|
|plain_text|24903|24692|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@InProceedings{maas-EtAl:2011:ACL-HLT2011,
author = {Maas, Andrew L. and Daly, Raymond E. and Pham, Peter T. and Huang, Dan and Ng, Andrew Y. and Potts, Christopher},
title = {Learning Word Vectors for Sentiment Analysis},
booktitle = {Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies},
month = {June},
year = {2011},
address = {Portland, Oregon, USA},
publisher = {Association for Computational Linguistics},
pages = {142--150},
url = {http://www.aclweb.org/anthology/P11-1015}
}
```
### Contributions
Thanks to [Razhan Hameed](https://twitter.com/RazhanHameed) for adding this dataset.
| razhan/imdb_ckb | [
"task_categories:text-classification",
"task_ids:sentiment-analysis",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|imdb",
"language:ckb",
"language:ku",
"license:other",
"central kurdish",
"kurdish",
"sorani",
"kurdi",
"region:us"
]
| 2022-12-19T20:31:55+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["crowdsourced"], "language": ["ckb", "ku"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|imdb"], "task_categories": ["text-classification"], "task_ids": ["sentiment-analysis", "sentiment-classification"], "pretty_name": "IMDB_CKB", "tags": ["central kurdish", "kurdish", "sorani", "kurdi"], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "neg", "1": "pos"}}}}], "config_name": "plain_text", "splits": [{"name": "train", "num_examples": 24903}, {"name": "test", "num_examples": 24692}]}} | 2023-01-13T17:41:39+00:00 |
280bfaff1399973dea85fe9a9fe9085ac7ca171f | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: BirdL/OLM-GPT2-Yannic
* Dataset: mathemakitten/winobias_antistereotype_dev
* Config: mathemakitten--winobias_antistereotype_dev
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@puffy310](https://huggingface.co/puffy310) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_dev-mathemakitte-398e1c-2536177709 | [
"autotrain",
"evaluation",
"region:us"
]
| 2022-12-19T22:03:24+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_dev"], "eval_info": {"task": "text_zero_shot_classification", "model": "BirdL/OLM-GPT2-Yannic", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_dev", "dataset_config": "mathemakitten--winobias_antistereotype_dev", "dataset_split": "validation", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-12-19T22:04:15+00:00 |
f888c2e82ccf3939b486e1ecd6b44b82b5f4594c | All the datasets from https://huggingface.co/Whispering-GPT concatenated together to fine-tune [OLM-GPT2](https://huggingface.co/Tristan/olm-gpt2-oct-2022) | BirdL/WhisperGPTFull | [
"license:apache-2.0",
"region:us"
]
| 2022-12-20T01:00:27+00:00 | {"license": "apache-2.0"} | 2022-12-20T01:04:24+00:00 |
5c3f19d29d36533e07f3cc599d902e7129be8932 | Leafly/GD_Level | [
"license:afl-3.0",
"region:us"
]
| 2022-12-20T02:19:15+00:00 | {"license": "afl-3.0"} | 2022-12-20T02:21:45+00:00 |
|
6af9b0e29d574b3d0a067c2c47ac75f8a9e256be | # Dataset Card for "DalleCatsAndDogs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | BirdL/DalleCatsAndDogs | [
"region:us"
]
| 2022-12-20T04:33:34+00:00 | {"dataset_info": {"features": [{"name": "Images", "dtype": "image"}, {"name": "class", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 49662722.0, "num_examples": 500}], "download_size": 49664703, "dataset_size": 49662722.0}} | 2022-12-20T04:50:34+00:00 |
dba2660b61c1a1f82372a0b1443aad3bd2922483 | # Dataset Card for "cathode-1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | AxuJI/cathode-1 | [
"region:us"
]
| 2022-12-20T06:25:00+00:00 | {"dataset_info": {"features": [{"name": "pixel_values", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 55347464.0, "num_examples": 56}], "download_size": 51606062, "dataset_size": 55347464.0}} | 2022-12-20T07:48:02+00:00 |
175c10ffdb77fd920f8ab86b735331282be1ee18 |
# CISLR: Corpus for Indian Sign Language Recognition
This repository contains the Indian Sign Language Dataset proposed in the following paper
> **Paper:** CISLR: Corpus for Indian Sign Language Recognition https://preview.aclanthology.org/emnlp-22-ingestion/2022.emnlp-main.707/
> **Authors:** Abhinav Joshi, Ashwani Bhat, Pradeep S, Priya Gole, Shashwat Gupta, Shreyansh Agarwal, Ashutosh Modi <br>
>
> **Abstract:** *Indian Sign Language, though used by a diverse community, still lacks well-annotated resources for developing systems that would enable sign language processing. In recent years researchers have actively worked for sign languages like American Sign Languages, however, Indian Sign language is still far from data-driven tasks like machine translation. To address this gap, in this paper, we introduce a new dataset CISLR (Corpus for Indian Sign Language Recognition) for word-level recognition in Indian Sign Language using videos. The corpus has a large vocabulary of around 4700 words covering different topics and domains. Further, we propose a baseline model for word recognition from sign language videos. To handle the low resource problem in the Indian Sign Language, the proposed model consists of a prototype-based one-shot learner that leverages resource rich American Sign Language to learn generalized features for improving predictions in Indian Sign Language. Our experiments show that gesture features learned in another sign language can help perform one-shot predictions in CISLR.*
## Directory Structure
```
.
├── dataset.csv           # list of all videos with categorical annotations
├── prototype.csv         # files used as prototypes
├── test.csv              # files used as testset
├── CISLR_v1.5-a_videos   # dataset videos
│   ├── __Rz2PaTB1c.mp4
│   ├── _2TlWc7fctg.mp4
│   ...
│   └── zZVuyuVTFW0.mp4
└── I3D_features.pkl      # extracted Inception3D features
```
## Citation
> Abhinav Joshi, Ashwani Bhat, Pradeep S, Priya Gole, Shashwat Gupta, Shreyansh Agarwal, and Ashutosh Modi. 2022. CISLR: Corpus for Indian Sign Language Recognition. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 10357–10366, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
## Acknowledgments
This project was a part of IIT Kanpur's [SURGE](https://surge.iitk.ac.in/) Initiative. | IIT-K/CISLR | [
"multilinguality:translation",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:sgn",
"license:afl-3.0",
"Indian Sign Language",
"Sign Language Recognition",
"region:us"
]
| 2022-12-20T07:42:08+00:00 | {"language": ["sgn"], "license": ["afl-3.0"], "multilinguality": ["translation"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "pretty_name": "CISLR", "tags": ["Indian Sign Language", "Sign Language Recognition"]} | 2022-12-20T08:39:26+00:00 |
f628640cea8e85f48246455adf44356b2fd45c32 |
# Dataset Card for "lmqg/qa_squadshifts_synthetic"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is a synthetic QA dataset generated with fine-tuned QG models over [`lmqg/qa_squadshifts`](https://huggingface.co/datasets/lmqg/qa_squadshifts), made for the question-answering-based evaluation (QAE) of question generation models proposed by [Zhang and Bansal, 2019](https://aclanthology.org/D19-1253/).
The test split is the original validation set of [`lmqg/qa_squadshifts`](https://huggingface.co/datasets/lmqg/qa_squadshifts), on which the model should be evaluated.
### Supported Tasks and Leaderboards
* `question-answering`
### Languages
English (en)
## Dataset Structure
### Data Fields
The data fields are the same among all splits.
#### plain_text
- `id`: a `string` feature of id
- `title`: a `string` feature of title of the paragraph
- `context`: a `string` feature of paragraph
- `question`: a `string` feature of question
- `answers`: a `json` feature of answers
### Data Splits
TBA
## Citation Information
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
``` | lmqg/qa_squadshifts_synthetic | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|wikipedia",
"language:en",
"license:cc-by-4.0",
"arxiv:2210.03992",
"region:us"
]
| 2022-12-20T08:31:18+00:00 | {"language": "en", "license": "cc-by-4.0", "multilinguality": "monolingual", "size_categories": "10K<n<100K", "source_datasets": ["extended|wikipedia"], "task_categories": ["question-answering"], "task_ids": ["extractive-qa"], "pretty_name": "Synthetic QA dataset on SQuADShifts."} | 2023-01-15T14:25:15+00:00 |
07bb67ebf8bdcb9d22a4067280ebfa74c26d019d | # Dataset Card for "Kor_Jpn_Translation_Dataset"
### Dataset Summary
This is a cleaned, easy-to-use version of the Korean-Japanese translation corpus data provided by AI-Hub (https://aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&aihubDataSe=realm&dataSetSn=127).
- Provider: AI-Hub (https://aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&aihubDataSe=realm&dataSetSn=127)
- Title: Korean-Japanese bilingual corpus in the culture domain
- Covered domains: cultural heritage/local culture/K-Food, K-POP (Korean Wave)/popular culture/performance content, IT/computers/mobile, finance/stock market, society/labor/welfare, education, patents/technology, automobiles
- Size: 1.5 million sentence pairs
- Applications: language models, machine translation
- Languages: source language - Korean, target language - Japanese
### Supported Tasks and Leaderboards
- Translation
### Languages
- Kor
- Jpan
## Dataset Structure
```yaml
features:
  - name: KOR
    dtype: string
  - name: JPN
    dtype: string
splits:
  - name: train
    num_bytes: 294787449
    num_examples: 840000
  - name: val
    num_bytes: 88406929
    num_examples: 252000
  - name: test
    num_bytes: 37964427
    num_examples: 108000
download_size: 289307354
dataset_size: 421158805
```
### Data Splits
```yaml
splits:
  - name: train
    num_bytes: 294787449
    num_examples: 840000
  - name: val
    num_bytes: 88406929
    num_examples: 252000
  - name: test
    num_bytes: 37964427
    num_examples: 108000
```
### Contributions
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | noahkim/Kor_Jpn_Translation_Dataset | [
"task_categories:translation",
"task_ids:language-modeling",
"annotations_creators:expert-generated",
"language_creators:other",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:kor",
"language:jpn",
"license:mit",
"region:us"
]
| 2022-12-20T11:19:57+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["other"], "language": ["kor", "jpn"], "license": ["mit"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["translation"], "task_ids": ["language-modeling"], "pretty_name": "Kor-Jpn-Translation"} | 2022-12-20T12:03:22+00:00 |
4d8aad0c2f2625bf60af171949633ad76ca0b921 |
# Dataset Card for ScandiReddit
## Dataset Description
- **Repository:** <https://github.com/alexandrainst/ScandiReddit>
- **Point of Contact:** [Dan Saattrup Nielsen](mailto:[email protected])
- **Size of downloaded dataset files:** 2341 MB
- **Size of the generated dataset:** 3594 MB
- **Total amount of disk used:** 5935 MB
### Dataset Summary
ScandiReddit is a filtered and post-processed corpus consisting of comments from [Reddit](https://reddit.com/).
All Reddit comments from December 2005 up until October 2022 were downloaded through [PushShift](https://files.pushshift.io/reddit/comments/), after which these were filtered based on the FastText language detection model. Any comment which was classified as Danish (`da`), Norwegian (`no`), Swedish (`sv`) or Icelandic (`is`) with a confidence score above 70% was kept.
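The keep/drop rule described above can be sketched in a few lines. This is a minimal illustration: in the real pipeline the `(language, confidence)` pair would come from a FastText language-identification model, which is not reproduced here.

```python
# Sketch of the ScandiReddit language filter: keep a comment only if it is
# classified as Danish, Norwegian, Swedish or Icelandic with a confidence
# score above 70%. The (language, confidence) pair is assumed to come from
# a FastText LID model; here it is passed in directly.

SCANDI_LANGS = {"da", "no", "sv", "is"}

def keep_comment(language: str, confidence: float) -> bool:
    """Return True if the comment passes the language filter."""
    return language in SCANDI_LANGS and confidence > 0.70

# Example predictions, as a language-identification model might return them:
print(keep_comment("da", 0.747))  # True  - Danish, confident enough
print(keep_comment("sv", 0.65))   # False - Swedish, but below threshold
print(keep_comment("en", 0.99))   # False - not a Scandinavian language
```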
The resulting comments were then deduplicated, removing roughly 438,000 comments. 5,000 comments written by Reddit bots were removed, and roughly 189,000 comments belonging to inappropriate subreddits (explicit and drug-related) were also removed.
Lastly, we removed roughly 40,000 near-duplicate comments from the resulting corpus, where near-duplicate here means that the comments have more than 80% of their word 5-grams in common.
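The near-duplicate criterion can be sketched as follows. This is an illustrative implementation, not the original filtering code; in particular, measuring the overlap against the smaller of the two 5-gram sets is an assumption of this sketch.

```python
# Sketch of the near-duplicate check: two comments count as near-duplicates
# when more than 80% of their word 5-grams are shared.

def word_ngrams(text: str, n: int = 5) -> set:
    """Set of word n-grams (as tuples) in the text."""
    words = text.split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def is_near_duplicate(a: str, b: str, threshold: float = 0.8) -> bool:
    grams_a, grams_b = word_ngrams(a), word_ngrams(b)
    if not grams_a or not grams_b:
        return False  # texts shorter than 5 words have no 5-grams
    shared = len(grams_a & grams_b)
    # Overlap measured against the smaller 5-gram set (an assumption here).
    return shared / min(len(grams_a), len(grams_b)) > threshold

print(is_near_duplicate("a b c d e f", "a b c d e f"))  # True  - identical
print(is_near_duplicate("a b c d e f", "u v w x y z"))  # False - no shared 5-grams
```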
### Supported Tasks and Leaderboards
Training language models is the intended task for this dataset. No leaderboard is active at this point.
### Languages
The dataset is available in Danish (`da`), Swedish (`sv`), Norwegian (`no`) and Icelandic (`is`).
## Dataset Structure
### Data Instances
- **Size of downloaded dataset files:** 2341 MB
- **Size of the generated dataset:** 3594 MB
- **Total amount of disk used:** 5935 MB
An example from the dataset looks as follows.
```
{
'doc': 'Bergen er ødelagt. Det er ikke moro mer.',
'subreddit': 'Norway',
'language': 'da',
'language_confidence': 0.7472341656684875
}
```
### Data Fields
The data fields are the same among all splits.
- `doc`: a `string` feature.
- `subreddit`: a `string` feature.
- `language`: a `string` feature.
- `language_confidence`: a `float64` feature.
### Language Distribution
| name | count |
|----------|---------:|
| sv | 6,967,420 |
| da | 4,965,195 |
| no | 1,340,470 |
| is | 206,689 |
| total | 13,479,774 |
### Top-50 Subreddit Distribution
| name | count |
|----------|--------:|
|sweden |4,881,483|
|Denmark |3,579,178|
|norge |1,281,655|
|svenskpolitik | 771,960|
|InfluencergossipDK | 649,910|
|swedishproblems | 339,683|
|Iceland | 183,488|
|dkfinance | 113,860|
|unket | 81,077|
|DanishEnts | 69,055|
|dankmark | 62,928|
|swedents | 58,576|
|scandinavia | 57,136|
|Allsvenskan | 56,006|
|Gothenburg | 54,395|
|stockholm | 51,016|
|ISKbets | 47,944|
|Sverige | 39,552|
|SWARJE | 34,691|
|GossipDK | 29,332|
|NorskFotball | 28,571|
|Superligaen | 23,641|
|Aarhus | 22,516|
|Svenska | 20,561|
|newsdk | 19,893|
|AskReddit | 16,672|
|copenhagen | 16,668|
|okpolarncp | 16,583|
|SwedditUniversalis | 15,990|
|Sveriges_politik | 15,058|
|intresseklubben | 13,246|
|Aktiemarknaden | 13,202|
|soccer | 12,637|
|teenagers | 10,845|
|Norway | 10,680|
|europe | 10,247|
|Matinbum | 9,792|
|oslo | 9,650|
|iksdagen | 9,232|
|Asksweddit | 8,851|
|Forsvaret | 8,641|
|Sverigesforsvarsmakt | 8,469|
|memes | 8,299|
|Danish | 8,268|
|DANMAG | 8,214|
|PewdiepieSubmissions | 7,800|
|sweddpolitik | 7,646|
|pinsamt | 7,318|
|arbetarrorelsen | 7,317|
|Ishockey | 6,824|
## Dataset Creation
### Curation Rationale
The Scandinavian languages do not have many open source social media datasets.
### Source Data
The raw Reddit data was collected through [PushShift](https://files.pushshift.io/reddit/comments/).
## Additional Information
### Dataset Curators
[Dan Saattrup Nielsen](https://saattrupdan.github.io/) from the [The Alexandra
Institute](https://alexandra.dk/) curated this dataset.
### Licensing Information
The dataset is licensed under the [CC BY 4.0
license](https://creativecommons.org/licenses/by/4.0/).
| alexandrainst/scandi-reddit | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"multilinguality:multilingual",
"size_categories:10M<n<100M",
"language:da",
"language:sv",
"language:no",
"language:is",
"license:cc-by-4.0",
"region:us"
]
| 2022-12-20T12:13:19+00:00 | {"language": ["da", "sv", false, "is"], "license": ["cc-by-4.0"], "multilinguality": ["multilingual"], "size_categories": ["10M<n<100M"], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["language-modeling"], "pretty_name": "ScandiReddit"} | 2022-12-21T17:54:31+00:00 |
1273da4d03c607109fe9575e6aaf6063cb044988 |
# About Dataset
Philosophers Quotes from azquotes.com
* Arthur Schopenhauer 400+ quotes
* Friedrich Nietzsche 200+ quotes
* Immanuel Kant 300+ quotes
* Aristotle 350+ quotes
* Plato 70+ quotes
* Sigmund Freud 400+ quotes
* Hegel 120+ quotes
* Jean Paul Sartre 320+ quotes
* Spinoza 120+ quotes
### COLLECTION METHODOLOGY
Python Web Scraping with Selenium | mertbozkurt/quotes_philosophers | [
"license:afl-3.0",
"region:us"
]
| 2022-12-20T13:13:09+00:00 | {"license": "afl-3.0"} | 2022-12-20T13:17:12+00:00 |
1454bcf9d92442c5d5d0dc9004315010950061e4 | # Dataset Card for "github-issues"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | threite/github-issues | [
"region:us"
]
| 2022-12-20T13:18:01+00:00 | {"dataset_info": {"features": [{"name": "url", "dtype": "string"}, {"name": "repository_url", "dtype": "string"}, {"name": "labels_url", "dtype": "string"}, {"name": "comments_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "number", "dtype": "int64"}, {"name": "title", "dtype": "string"}, {"name": "user", "struct": [{"name": "avatar_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, {"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "login", "dtype": "string"}, {"name": "node_id", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "url", "dtype": "string"}]}, {"name": "labels", "list": [{"name": "color", "dtype": "string"}, {"name": "default", "dtype": "bool"}, {"name": "description", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "name", "dtype": "string"}, {"name": "node_id", "dtype": "string"}, {"name": "url", "dtype": "string"}]}, {"name": "state", "dtype": "string"}, {"name": "locked", "dtype": "bool"}, {"name": "assignee", "struct": [{"name": "avatar_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, {"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "id", "dtype": 
"int64"}, {"name": "login", "dtype": "string"}, {"name": "node_id", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "url", "dtype": "string"}]}, {"name": "assignees", "list": [{"name": "avatar_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, {"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "login", "dtype": "string"}, {"name": "node_id", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "url", "dtype": "string"}]}, {"name": "milestone", "struct": [{"name": "closed_at", "dtype": "string"}, {"name": "closed_issues", "dtype": "int64"}, {"name": "created_at", "dtype": "string"}, {"name": "creator", "struct": [{"name": "avatar_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, {"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "login", "dtype": "string"}, {"name": "node_id", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, 
{"name": "site_admin", "dtype": "bool"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "url", "dtype": "string"}]}, {"name": "description", "dtype": "string"}, {"name": "due_on", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "labels_url", "dtype": "string"}, {"name": "node_id", "dtype": "string"}, {"name": "number", "dtype": "int64"}, {"name": "open_issues", "dtype": "int64"}, {"name": "state", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "updated_at", "dtype": "string"}, {"name": "url", "dtype": "string"}]}, {"name": "comments", "sequence": "null"}, {"name": "created_at", "dtype": "string"}, {"name": "updated_at", "dtype": "string"}, {"name": "closed_at", "dtype": "string"}, {"name": "author_association", "dtype": "string"}, {"name": "active_lock_reason", "dtype": "null"}, {"name": "draft", "dtype": "bool"}, {"name": "pull_request", "struct": [{"name": "diff_url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "merged_at", "dtype": "string"}, {"name": "patch_url", "dtype": "string"}, {"name": "url", "dtype": "string"}]}, {"name": "body", "dtype": "string"}, {"name": "reactions", "struct": [{"name": "+1", "dtype": "int64"}, {"name": "-1", "dtype": "int64"}, {"name": "confused", "dtype": "int64"}, {"name": "eyes", "dtype": "int64"}, {"name": "heart", "dtype": "int64"}, {"name": "hooray", "dtype": "int64"}, {"name": "laugh", "dtype": "int64"}, {"name": "rocket", "dtype": "int64"}, {"name": "total_count", "dtype": "int64"}, {"name": "url", "dtype": "string"}]}, {"name": "timeline_url", "dtype": "string"}, {"name": "performed_via_github_app", "dtype": "null"}, {"name": "state_reason", "dtype": "string"}, {"name": "is_pull_request", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 16275865, "num_examples": 5392}], "download_size": 3809038, "dataset_size": 
16275865}} | 2022-12-20T13:18:23+00:00 |
4c98780bd4228a273bf5d240e2ccee699dc41825 | # Dataset Card for "slue-voxceleb"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | qmeeus/slue-voxceleb | [
"region:us"
]
| 2022-12-20T13:24:19+00:00 | {"dataset_info": {"features": [{"name": "n_frames", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "sentiment", "dtype": {"class_label": {"names": {"0": "Negative", "1": "Neutral", "2": "Positive"}}}}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 756214858.625, "num_examples": 5729}, {"name": "dev", "num_bytes": 130698641.0, "num_examples": 954}], "download_size": 949197313, "dataset_size": 886913499.625}} | 2022-12-20T13:25:43+00:00 |
e2a26bd1bc6d1ff15a33ab9d2ed118054151a784 |
# Dataset Card for Czech Simple Question Answering Dataset 3.0
This is a processed and filtered adaptation of an existing dataset. For the raw and larger dataset, see the `Dataset Source` section.
## Dataset Description
The data contains questions and answers based on Czech Wikipedia articles.
Each question has an answer (or more) and a selected part of the context as the evidence.
A majority of the answers are extractive - i.e. they are present in the context in the exact form. The remaining cases are
- yes/no questions
- answer is almost in the exact form present in the text, but the form of words was changed to suit the question (declension, ...)
- answered in own words (should be rare, but is not)
All questions in the dataset are answerable from the context. Small minority of questions have multiple answers.
Sometimes it means that any of them is correct (e.g. either "Pacifik" or "Tichý oceán" are correct terms for Pacific Ocean)
and sometimes it means that all of them together are a correct answer (e.g., Who was Leonardo da Vinci? ["painter", "engineer"])
Total number of examples is around:
- 6,250 in train
- 570 in validation
- 850 in test.
## Dataset Features
Each example contains:
- `item_id`: string id of the example
- `context`: a "reasonably" big chunk (string) of the Wikipedia article that contains the answer
- `question`: string
- `answers`: list of all answers (string). mostly list of length 1
- `evidence_text`: substring of context (typically one sentence) that is sufficient to answer the question
- `evidence_start`: index in context, such that `context[evidence_start:evidence_end] == evidence_text`
- `evidence_end`: index in context
- `occurences`:
  list of dictionaries with occurrences of the answer(s) in the evidence.
  Each answer was first searched case-sensitively with word boundaries ("\b" in regex) in the evidence.
  If nothing was found, the search was retried case-insensitively,
  then case-sensitively without word boundaries,
  and finally case-insensitively without word boundaries.
  This process should suppress "false positive" occurrences of the answer in the evidence.
- `start`: index in context
- `end`: index in context
- `text`: the answer looked for
- `url`: link to the Wikipedia article
- `original_article`: the original parsed Wikipedia article from which the context is taken
- `question_type`: type of the question, one of: ['ABBREVIATION', 'DATETIME', 'DENOTATION', 'ENTITY', 'LOCATION', 'NUMERIC', 'ORGANIZATION', 'OTHER', 'PERSON', 'YES_NO']
- `answer_type`: type of the answer, one of: ['ABBREVIATION', 'ADJ_PHRASE', 'CLAUSE', 'DATETIME', 'ENTITY', 'LOCATION', 'NUMERIC', 'OTHER', 'PERSON', 'VERB_PHRASE']
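The retry cascade described for `occurences` can be sketched as follows. This is an illustrative reimplementation, not the original extraction code; only the order of relaxations is taken from the description above.

```python
import re

def find_occurrences(answer: str, evidence: str):
    """Search for `answer` in `evidence`, relaxing the match step by step:
    word-boundary + case-sensitive, word-boundary + case-insensitive,
    then the same two without word boundaries. Returns the first
    non-empty list of (start, end) spans relative to `evidence`."""
    attempts = [
        (r"\b" + re.escape(answer) + r"\b", 0),
        (r"\b" + re.escape(answer) + r"\b", re.IGNORECASE),
        (re.escape(answer), 0),
        (re.escape(answer), re.IGNORECASE),
    ]
    for pattern, flags in attempts:
        spans = [(m.start(), m.end()) for m in re.finditer(pattern, evidence, flags)]
        if spans:
            return spans
    return []

evidence = "Leonardo da Vinci was a painter and an engineer."
print(find_occurrences("Painter", evidence))  # [(24, 31)] via the case-insensitive retry
print(find_occurrences("paint", evidence))    # [(24, 29)] only without word boundaries
```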
## Dataset Source
The dataset is a preprocessed adaptation of the existing SQAD 3.0 dataset ([link to data](https://lindat.cz/repository/xmlui/handle/11234/1-3069)).
This adaptation contains (almost) same data, but converted to a convenient format.
The data was also filtered to remove a statistical bias where the answer was contained
in the first sentence in the article (around 50% of all data in the original dataset, likely
caused by the data collection process).
## Citation
Cite authors of the [original dataset](https://lindat.cz/repository/xmlui/handle/11234/1-3069):
```bibtex
@misc{11234/1-3069,
title = {sqad 3.0},
author = {Medve{\v d}, Marek and Hor{\'a}k, Ale{\v s}},
url = {http://hdl.handle.net/11234/1-3069},
note = {{LINDAT}/{CLARIAH}-{CZ} digital library at the Institute of Formal and Applied Linguistics ({{\'U}FAL}), Faculty of Mathematics and Physics, Charles University},
copyright = {{GNU} Library or "Lesser" General Public License 3.0 ({LGPL}-3.0)},
year = {2019}
}
```
| fewshot-goes-multilingual/cs_squad-3.0 | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:cs",
"license:lgpl-3.0",
"czech QA",
"wikipedia QA",
"region:us"
]
| 2022-12-20T13:50:51+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["cs"], "license": ["lgpl-3.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["extractive-qa"], "pretty_name": "Czech Simple Question Answering Dataset", "tags": ["czech QA", "wikipedia QA"]} | 2023-11-26T20:42:44+00:00 |
c3e95a310bf4df2d6bbb72582aedb698e9c36a19 | # Dataset Card for "fakefractals"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | fenrirgochad/fakefractals | [
"region:us"
]
| 2022-12-20T13:52:26+00:00 | {"dataset_info": {"features": [{"name": "label", "dtype": {"class_label": {"names": {"0": "4kfractals", "1": "babies", "2": "babies2", "3": "realfractals"}}}}, {"name": "pixel_values", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 31691574.875, "num_examples": 1561}], "download_size": 31681569, "dataset_size": 31691574.875}} | 2022-12-20T14:04:47+00:00 |
8a1ad836c8124e7d8098587d30e924e7506f55d1 | # Dataset Card for "yannic-kilcher-transcript"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | matallanas/yannic-kilcher-transcript | [
"region:us"
]
| 2022-12-20T14:26:44+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "channel", "dtype": "string"}, {"name": "channel_id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "categories", "sequence": "string"}, {"name": "tags", "sequence": "string"}, {"name": "description", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "segments", "list": [{"name": "start", "dtype": "float64"}, {"name": "end", "dtype": "float64"}, {"name": "text", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 24560830, "num_examples": 370}], "download_size": 12784371, "dataset_size": 24560830}} | 2022-12-20T14:26:59+00:00 |
9c3fe5dcc00a5ee3bcfdb6936cbb770ef3c26dfd |
# Dataset Card for BasqueGLUE
## Table of Contents
* [Table of Contents](#table-of-contents)
* [Dataset Description](#dataset-description)
* [Dataset Summary](#dataset-summary)
* [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
* [Languages](#languages)
* [Dataset Structure](#dataset-structure)
* [Data Instances](#data-instances)
* [Data Fields](#data-fields)
* [Data Splits](#data-splits)
* [Dataset Creation](#dataset-creation)
* [Curation Rationale](#curation-rationale)
* [Source Data](#source-data)
* [Annotations](#annotations)
* [Personal and Sensitive Information](#personal-and-sensitive-information)
* [Considerations for Using the Data](#considerations-for-using-the-data)
* [Social Impact of Dataset](#social-impact-of-dataset)
* [Discussion of Biases](#discussion-of-biases)
* [Other Known Limitations](#other-known-limitations)
* [Additional Information](#additional-information)
* [Dataset Curators](#dataset-curators)
* [Licensing Information](#licensing-information)
* [Citation Information](#citation-information)
* [Contributions](#contributions)
## Dataset Description
* **Repository:** <https://github.com/orai-nlp/BasqueGLUE>
* **Paper:** [BasqueGLUE: A Natural Language Understanding Benchmark for Basque](http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.172.pdf)
* **Point of Contact:** [Contact Information](https://github.com/orai-nlp/BasqueGLUE#contact-information)
### Dataset Summary
Natural Language Understanding (NLU) technology has improved significantly over the last few years, and multitask benchmarks such as GLUE are key to evaluate this improvement in a robust and general way. These benchmarks take into account a wide and diverse set of NLU tasks that require some form of language understanding, beyond the detection of superficial, textual clues. However, they are costly to develop and language-dependent, and therefore they are only available for a small number of languages.
We present BasqueGLUE, the first NLU benchmark for Basque, which has been elaborated from previously existing datasets and following similar criteria to those used for the construction of GLUE and SuperGLUE. BasqueGLUE is freely available under an open license.
| Dataset | \|Train\| | \|Val\| | \|Test\| | Task | Metric | Domain |
|----------------|----------:|--------:|---------:|------------------------|:------:|-----------------|
| NERCid | 51,539 | 12,936 | 35,855 | NERC | F1 | News |
| NERCood | 64,475 | 14,945 | 14,462 | NERC | F1 | News, Wikipedia |
| FMTODeu_intent | 3,418 | 1,904 | 1,087 | Intent classification | F1 | Dialog system |
| FMTODeu_slot | 19,652 | 10,791 | 5,633 | Slot filling | F1 | Dialog system |
| BHTCv2 | 8,585 | 1,857 | 1,854 | Topic classification | F1 | News |
| BEC2016eu | 6,078 | 1,302 | 1,302 | Sentiment analysis | F1 | Twitter |
| VaxxStance | 864 | 206 | 312 | Stance detection | MF1* | Twitter |
| QNLIeu | 1,764 | 230 | 238 | QA/NLI | Acc | Wikipedia |
| WiCeu | 408,559 | 600 | 1,400 | WSD | Acc | Wordnet |
| EpecKorrefBin | 986 | 320 | 587 | Coreference resolution | Acc | News |
### Supported Tasks and Leaderboards
This benchmark comprises the following tasks:
#### NERCid
This dataset contains sentences from the news domain with manually annotated named entities. The data is the merge of EIEC, a collection of news wire articles from the Euskaldunon Egunkaria newspaper (Alegria et al. 2004), and newly annotated data from naiz.eus. The data is annotated following the BIO scheme over four categories: person, organization, location, and miscellaneous.
#### NERCood
This dataset contains sentences with manually annotated named entities. The training data is the merge of EIEC, a collection of news wire articles from the Euskaldunon Egunkaria newspaper (Alegria et al. 2004), and newly annotated data from naiz.eus. The data is annotated following the BIO scheme over four categories: person, organization, location, and miscellaneous. The validation and test sets contain sentences from Wikipedia, annotated following the same guidelines.
#### FMTODeu_intent
This dataset contains utterance texts and intent annotations drawn from the manually-annotated Facebook Multilingual Task Oriented Dataset (FMTOD) (Schuster et al. 2019). Basque translated data was drawn from the datasets created for Building a Task-oriented Dialog System for languages with no training data: the Case for Basque (de Lacalle et al. 2020). The examples are annotated with one of 12 different intent classes corresponding to alarm, reminder or weather related actions.
#### FMTODeu_slot
This dataset contains utterance texts and sequence intent argument annotations designed for slot filling tasks, drawn from the manually-annotated Facebook Multilingual Task Oriented Dataset (FMTOD) (Schuster et al. 2019). Basque translated data was drawn from the datasets created for Building a Task-oriented Dialog System for languages with no training data: the Case for Basque (de Lacalle et al. 2020). The task is a sequence labelling task similar to NERC, following BIO annotation scheme over 11 categories.
#### BHTCv2
The corpus contains 12,296 news headlines (brief article descriptions) from the Basque weekly newspaper [Argia](https://www.argia.eus). Each headline is assigned exactly one of twelve thematic categories.
#### BEC2016eu
The Basque Election Campaign 2016 Opinion Dataset (BEC2016eu) is a sentiment analysis (sequence classification) dataset containing tweets about the 2016 Basque election campaign. The crawling was carried out during the campaign period (2016/09/09-2016/09/23) by monitoring the main parties and their respective candidates. The tweets were manually annotated as positive, negative or neutral.
#### VaxxStance
The VaxxStance (Agerri et al., 2021) dataset originally provides texts and stance annotations for social media texts around the anti-vaccine movement. Texts are given a label indicating whether they express an AGAINST, FAVOR or NEUTRAL stance towards the topic.
#### QNLIeu
This task includes the QA dataset ElkarHizketak (Otegi et al. 2020), a low-resource conversational Question Answering (QA) dataset for Basque created by native speaker volunteers. The dataset is built on top of Wikipedia sections about popular people and organizations, and contains around 400 dialogues and 1,600 question-answer pairs. The task was adapted into a sentence-pair binary classification task, following the design of QNLI for English (Wang et al. 2019). Each question-sentence pair is given a label indicating whether the sentence contains the answer to the question.
#### WiCeu
Word in Context or WiC (Pilehvar and Camacho-Collados 2019) is a word sense disambiguation (WSD) task, designed as a particular form of sentence-pair binary classification. Given two text snippets and a polysemous word that appears in both of them (the span of the word is marked in both snippets), the task is to determine whether the word has the same sense in both sentences. This dataset is based on the EPEC-EuSemcor (Pociello et al. 2011) sense-tagged corpus.
#### EpecKorrefBin
EPEC-KORREF-Bin is a dataset derived from EPEC-KORREF (Soraluze et al. 2012), a corpus of Basque news documents with manually annotated mentions and coreference chains, which has been converted into a binary classification task. In this task, the model has to predict whether two mentions from a text, which can be pronouns, nouns or noun phrases, refer to the same entity.
#### Leaderboard
Results obtained for two BERT base models as a baseline for the Benchmark.
| | AVG | NERC | F_intent | F_slot | BHTC | BEC | Vaxx | QNLI | WiC | coref |
|------------------------------------------------------------|:-----:|:-----:|:---------:|:-------:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|
| Model | | F1 | F1 | F1 | F1 | F1 | MF1 | acc | acc | acc |
|[BERTeus](https://huggingface.co/ixa-ehu/berteus-base-cased)| 73.23 | 81.92 | 82.52 | 74.34 | 78.26 | 69.43 | 59.30 | 74.26 | 70.71 | 68.31 |
|[ElhBERTeu](https://huggingface.co/elh-eus/ElhBERTeu) | 73.71 | 82.30 | 82.24 | 75.64 | 78.05 | 69.89 | 63.81 | 73.84 | 71.71 | 65.93 |
The results obtained on NERC are the average of in-domain and out-of-domain NERC.
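The AVG column above is the unweighted mean of the nine per-task scores (with NERC already averaged over its in-domain and out-of-domain variants). A quick illustrative sketch to reproduce it:

```python
def benchmark_avg(scores):
    """Unweighted mean of per-task scores, rounded to two decimals."""
    return round(sum(scores.values()) / len(scores), 2)

# ElhBERTeu row from the leaderboard table above
elhberteu = {
    "NERC": 82.30, "F_intent": 82.24, "F_slot": 75.64, "BHTC": 78.05,
    "BEC": 69.89, "Vaxx": 63.81, "QNLI": 73.84, "WiC": 71.71, "coref": 65.93,
}
print(benchmark_avg(elhberteu))  # 73.71
```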
### Languages
Data are available in Basque (BCP-47 `eu`).
## Dataset Structure
### Data Instances
#### NERCid/NERCood
An example of 'train' looks as follows:
```
{
"idx": 0,
"tags": ["O", "O", "O", "O", "B-ORG", "O", ...],
"tokens": ["Greba", "orokorrera", "deitu", "du", "EHk", "27rako", ...]
}
```
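Both NERC configurations (and FMTODeu_slot) label tokens with BIO tags. A minimal helper to group such tags back into labelled entity spans might look like this (the helper name is illustrative, not part of the dataset):

```python
def bio_to_spans(tokens, tags):
    """Group BIO-tagged tokens into (label, text) entity spans."""
    spans, current = [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                spans.append(current)
            current = (tag[2:], [tok])
        elif tag.startswith("I-") and current and tag[2:] == current[0]:
            current[1].append(tok)
        else:
            # an "O" tag (or a stray "I-") closes any open span
            if current:
                spans.append(current)
            current = None
    if current:
        spans.append(current)
    return [(label, " ".join(toks)) for label, toks in spans]

tokens = ["Greba", "orokorrera", "deitu", "du", "EHk", "27rako"]
tags = ["O", "O", "O", "O", "B-ORG", "O"]
print(bio_to_spans(tokens, tags))  # [('ORG', 'EHk')]
```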
#### FMTODeu_intent
An example of 'train' looks as follows:
```
{
"idx": 0,
"label": "alarm/modify_alarm",
"text": "aldatu alarma 7am-tik 7pm-ra , mesedez"
}
```
#### FMTODeu_slot
An example of 'train' looks as follows:
```
{
"idx": 923,
"tags": ["O", "B-reminder/todo", "I-datetime", "I-datetime", "B-reminder/todo"],
"tokens": ["gogoratu", "zaborra", "gaur", "gauean", "ateratzea"]
}
```
#### BHTCv2
An example of 'test' looks as follows:
```
{
"idx": 0,
"label": "Gizartea",
"text": "Genero berdintasunaz, hezkuntzaz eta klase gizarteaz hamar liburu baino gehiago..."
}
```
#### BEC2016eu
An example of 'test' looks as follows:
```
{
"idx": 0,
"label": "NEU",
"text": '"Emandako hitza bete egingo dut" Urkullu\nBa galdeketa enegarrenez daramazue programan (ta zuen AHTa...)\n#I25debatea #URL"'
}
```
#### VaxxStance
An example of 'train' looks as follows:
```
{
"idx": 0,
"label": "FAVOR",
"text": "\"#COVID19 Oraingo datuak, izurriaren dinamika, txertoaren eragina eta birusaren..
}
```
#### QNLIeu
An example of 'train' looks as follows:
```
{
"idx": 1,
"label": "not_entailment",
"question": "Zein posiziotan jokatzen du Busquets-ek?",
"sentence": "Busquets 23 partidatan izan zen konbokatua eta 2 gol sartu zituen."
}
```
#### WiCeu
An example of 'test' looks as follows:
```
{
"idx": 16,
"label": false,
"word": "udal",
"sentence1": "1a . Lekeitioko udal mugarteko Alde Historikoa Birgaitzeko Plan Berezia behin...",
"sentence2": "Diezek kritikatu egin zuen EAJk zenbait udaletan EH gobernu taldeetatik at utzi...",
"start1": 16,
"start2": 40,
"end1": 21,
"end2": 49
}
```
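The `start`/`end` fields mark the target word's character span in each sentence. The end offsets appear to be exclusive, but that convention is an assumption worth verifying against the full data (the sentences in the example above are truncated, so a toy sentence is used below):

```python
def marked_word(sentence, start, end):
    """Slice the marked occurrence out of a WiCeu sentence.

    Assumes start/end are character offsets with an exclusive end;
    check this convention against the full dataset before relying on it.
    """
    return sentence[start:end]

# Toy sentence with illustrative offsets
s = "Lekeitioko udal mugarteko plana"
print(marked_word(s, 11, 15))  # udal
```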
#### EpecKorrefBin
An example of 'train' looks as follows:
```
{
"idx": 6,
"label": false,
"text": "Isuntza da faborito nagusia Elantxobeko banderan . ISUNTZA trainerua da faborito nagusia bihar Elantxoben jokatuko den bandera irabazteko .",
"span1_text": "Elantxobeko banderan",
"span2_text": "ISUNTZA trainerua",
"span1_index": 4,
"span2_index": 8
}
```
### Data Fields
#### NERCid
* `tokens`: a list of `string` features
* `tags`: a list of entity labels, with possible values including `person` (PER), `location` (LOC), `organization` (ORG), `miscellaneous` (MISC)
* `idx`: an `int32` feature
#### NERCood
* `tokens`: a list of `string` features
* `tags`: a list of entity labels, with possible values including `person` (PER), `location` (LOC), `organization` (ORG), `miscellaneous` (MISC)
* `idx`: an `int32` feature
#### FMTODeu_intent
* `text`: a `string` feature
* `label`: an intent label, with possible values including:
* `alarm/cancel_alarm`
* `alarm/modify_alarm`
* `alarm/set_alarm`
* `alarm/show_alarms`
* `alarm/snooze_alarm`
* `alarm/time_left_on_alarm`
* `reminder/cancel_reminder`
* `reminder/set_reminder`
* `reminder/show_reminders`
* `weather/checkSunrise`
* `weather/checkSunset`
* `weather/find`
* `idx`: an `int32` feature
#### FMTODeu_slot
* `tokens`: a list of `string` features
* `tags`: a list of intent labels, with possible values including:
* `datetime`
* `location`
* `negation`
* `alarm/alarm_modifier`
* `alarm/recurring_period`
* `reminder/noun`
* `reminder/todo`
* `reminder/reference`
* `reminder/recurring_period`
* `weather/attribute`
* `weather/noun`
* `idx`: an `int32` feature
#### BHTCv2
* `text`: a `string` feature
* `label`: a topic label, with possible values including:
* `Ekonomia`
* `Euskal Herria`
* `Euskara`
* `Gizartea`
* `Historia`
* `Ingurumena`
* `Iritzia`
* `Komunikazioa`
* `Kultura`
* `Nazioartea`
* `Politika`
* `Zientzia`
* `idx`: an `int32` feature
#### BEC2016eu
* `text`: a `string` feature
* `label`: a polarity label, with possible values including `neutral` (NEU), `negative` (N), `positive` (P)
* `idx`: an `int32` feature
#### VaxxStance
* `text`: a `string` feature
* `label`: a stance label, with possible values including `AGAINST`, `FAVOR`, `NONE`
* `idx`: an `int32` feature
#### QNLIeu
* `question`: a `string` feature
* `sentence`: a `string` feature
* `label`: an entailment label, with possible values including `entailment`, `not_entailment`
* `idx`: an `int32` feature
#### WiCeu
* `word`: a `string` feature
* `sentence1`: a `string` feature
* `sentence2`: a `string` feature
* `label`: a `boolean` label indicating sense agreement, with possible values including `true`, `false`
* `start1`: an `int` feature indicating the character position where the word occurrence begins in the first sentence
* `start2`: an `int` feature indicating the character position where the word occurrence begins in the second sentence
* `end1`: an `int` feature indicating the character position where the word occurrence ends in the first sentence
* `end2`: an `int` feature indicating the character position where the word occurrence ends in the second sentence
* `idx`: an `int32` feature
#### EpecKorrefBin
* `text`: a `string` feature.
* `label`: a `boolean` coreference label, with possible values including `true`, `false`.
* `span1_text`: a `string` feature
* `span2_text`: a `string` feature
* `span1_index`: an `int` feature indicating token index where `span1_text` feature occurs in `text`
* `span2_index`: an `int` feature indicating token index where `span2_text` feature occurs in `text`
* `idx`: an `int32` feature
### Data Splits
| Dataset | \|Train\| | \|Val\| | \|Test\| |
|---------|--------:|------:|-------:|
| NERCid | 51,539 | 12,936 | 35,855 |
| NERCood | 64,475 | 14,945 | 14,462 |
| FMTODeu_intent | 3,418 | 1,904 | 1,087 |
| FMTODeu_slot | 19,652 | 10,791 | 5,633 |
| BHTCv2 | 8,585 | 1,857 | 1,854 |
| BEC2016eu | 6,078 | 1,302 | 1,302 |
| VaxxStance | 864 | 206 | 312 |
| QNLIeu | 1,764 | 230 | 238 |
| WiCeu | 408,559 | 600 | 1,400 |
| EpecKorrefBin | 986 | 320 | 587 |
## Dataset Creation
### Curation Rationale
We believe that BasqueGLUE is a significant contribution towards developing NLU tools in Basque, and that it will facilitate technological advances for the Basque language. In order to create BasqueGLUE we took the GLUE and SuperGLUE frameworks as a reference. When possible, we re-used existing datasets for Basque, adapting them to the corresponding task formats where necessary. Additionally, BasqueGLUE also includes six new datasets that have not been published before. In total, BasqueGLUE consists of nine Basque NLU tasks and covers a wide range of tasks with different difficulties across several domains. As with the original GLUE benchmark, the training data for the tasks vary in size, which makes it possible to measure how well models transfer knowledge across tasks.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Gorka Urbizu [1], Iñaki San Vicente [1], Xabier Saralegi [1], Rodrigo Agerri [2] and Aitor Soroa [2]
Affiliation of the authors:
[1] orai NLP Technologies
[2] HiTZ Center - Ixa, University of the Basque Country UPV/EHU
### Licensing Information
Each dataset of the BasqueGLUE benchmark has its own license (since most of them are, or are derived from, already existing datasets). See their respective README files for details.
Here we provide a brief summary of their licenses:
| Dataset | License |
|---------|---------|
| NERCid | CC BY-NC-SA 4.0 |
| NERCood | CC BY-NC-SA 4.0 |
| FMTODeu_intent | CC BY-NC-SA 4.0 |
| FMTODeu_slot | CC BY-NC-SA 4.0 |
| BHTCv2 | CC BY-NC-SA 4.0 |
| BEC2016eu | Twitter's license + CC BY-NC-SA 4.0 |
| VaxxStance | Twitter's license + CC BY 4.0 |
| QNLIeu | CC BY-SA 4.0 |
| WiCeu | CC BY-NC-SA 4.0 |
| EpecKorrefBin | CC BY-NC-SA 4.0 |
For the rest of the files of the benchmark, including the loading and evaluation scripts, the following license applies:
Copyright (C) by Orai NLP Technologies.
This benchmark and its evaluation scripts are licensed under the Creative Commons Attribution Share Alike 4.0
International License (CC BY-SA 4.0). To view a copy of this license, visit http://creativecommons.org/licenses/by-sa/4.0/.
### Citation Information
```
@InProceedings{urbizu2022basqueglue,
author = {Urbizu, Gorka and San Vicente, Iñaki and Saralegi, Xabier and Agerri, Rodrigo and Soroa, Aitor},
title = {BasqueGLUE: A Natural Language Understanding Benchmark for Basque},
booktitle = {Proceedings of the Language Resources and Evaluation Conference},
month = {June},
year = {2022},
address = {Marseille, France},
publisher = {European Language Resources Association},
pages = {1603--1612},
abstract = {Natural Language Understanding (NLU) technology has improved significantly over the last few years and multitask benchmarks such as GLUE are key to evaluate this improvement in a robust and general way. These benchmarks take into account a wide and diverse set of NLU tasks that require some form of language understanding, beyond the detection of superficial, textual clues. However, they are costly to develop and language-dependent, and therefore they are only available for a small number of languages. In this paper, we present BasqueGLUE, the first NLU benchmark for Basque, a less-resourced language, which has been elaborated from previously existing datasets and following similar criteria to those used for the construction of GLUE and SuperGLUE. We also report the evaluation of two state-of-the-art language models for Basque on BasqueGLUE, thus providing a strong baseline to compare upon. BasqueGLUE is freely available under an open license.},
url = {https://aclanthology.org/2022.lrec-1.172}
}
```
### Contributions
Thanks to [@richplant](https://github.com/richplant) for adding this dataset to Hugging Face.
| orai-nlp/basqueGLUE | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_ids:intent-classification",
"task_ids:natural-language-inference",
"task_ids:sentiment-classification",
"task_ids:topic-classification",
"task_ids:named-entity-recognition",
"task_ids:coreference-resolution",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:eu",
"license:cc-by-nc-sa-4.0",
"region:us"
]
| 2022-12-20T14:28:19+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["eu"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["text-classification", "token-classification"], "task_ids": ["intent-classification", "natural-language-inference", "sentiment-classification", "topic-classification", "named-entity-recognition", "coreference-resolution"], "pretty_name": "BasqueGLUE", "tags": [], "configs": ["bec", "bhtc", "coref", "intent", "nerc_id", "nerc_od", "qnli", "slot", "vaxx", "wic"], "dataset_info": [{"config_name": "bec", "features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "N", "1": "NEU", "2": "P"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 693284, "num_examples": 6078}, {"name": "test", "num_bytes": 148510, "num_examples": 1302}, {"name": "validation", "num_bytes": 148377, "num_examples": 1302}], "download_size": 1217803, "dataset_size": 990171}, {"config_name": "bhtc", "features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Ekonomia", "1": "Euskal Herria", "2": "Euskara", "3": "Gizartea", "4": "Historia", "5": "Ingurumena", "6": "Iritzia", "7": "Komunikazioa", "8": "Kultura", "9": "Nazioartea", "10": "Politika", "11": "Zientzia"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 2431494, "num_examples": 8585}, {"name": "test", "num_bytes": 523066, "num_examples": 1854}, {"name": "validation", "num_bytes": 519555, "num_examples": 1857}], "download_size": 3896312, "dataset_size": 3474115}, {"config_name": "coref", "features": [{"name": "text", "dtype": "string"}, {"name": "span1_text", "dtype": "string"}, {"name": "span2_text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "false", 
"1": "true"}}}}, {"name": "span1_index", "dtype": "int32"}, {"name": "span2_index", "dtype": "int32"}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 365830, "num_examples": 986}, {"name": "test", "num_bytes": 201378, "num_examples": 587}, {"name": "validation", "num_bytes": 108632, "num_examples": 320}], "download_size": 855074, "dataset_size": 675840}, {"config_name": "intent", "features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "alarm/cancel_alarm", "1": "alarm/modify_alarm", "2": "alarm/set_alarm", "3": "alarm/show_alarms", "4": "alarm/snooze_alarm", "5": "alarm/time_left_on_alarm", "6": "reminder/cancel_reminder", "7": "reminder/set_reminder", "8": "reminder/show_reminders", "9": "weather/checkSunrise", "10": "weather/checkSunset", "11": "weather/find"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 182856, "num_examples": 3418}, {"name": "test", "num_bytes": 56118, "num_examples": 1087}, {"name": "validation", "num_bytes": 101644, "num_examples": 1904}], "download_size": 595375, "dataset_size": 340618}, {"config_name": "nerc_id", "features": [{"name": "tokens", "sequence": "string"}, {"name": "tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PER", "2": "I-PER", "3": "B-LOC", "4": "I-LOC", "5": "B-ORG", "6": "I-ORG", "7": "B-MISC", "8": "I-MISC"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 946007, "num_examples": 2842}, {"name": "test", "num_bytes": 653960, "num_examples": 1846}, {"name": "validation", "num_bytes": 237464, "num_examples": 711}], "download_size": 1723325, "dataset_size": 1837431}, {"config_name": "nerc_od", "features": [{"name": "tokens", "sequence": "string"}, {"name": "tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PER", "2": "I-PER", "3": "B-LOC", "4": "I-LOC", "5": "B-ORG", "6": "I-ORG", "7": "B-MISC", "8": "I-MISC"}}}}, {"name": "idx", "dtype": 
"int32"}], "splits": [{"name": "train", "num_bytes": 1183471, "num_examples": 3553}, {"name": "test", "num_bytes": 262853, "num_examples": 598}, {"name": "validation", "num_bytes": 270028, "num_examples": 601}], "download_size": 1613369, "dataset_size": 1716352}, {"config_name": "qnli", "features": [{"name": "question", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "not_entailment"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 327189, "num_examples": 1764}, {"name": "test", "num_bytes": 42569, "num_examples": 238}, {"name": "validation", "num_bytes": 46359, "num_examples": 230}], "download_size": 532399, "dataset_size": 416117}, {"config_name": "slot", "features": [{"name": "tokens", "sequence": "string"}, {"name": "tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-datetime", "2": "B-location", "3": "B-negation", "4": "B-alarm/alarm_modifier", "5": "B-alarm/recurring_period", "6": "B-reminder/noun", "7": "B-reminder/todo", "8": "B-reminder/reference", "9": "B-reminder/recurring_period", "10": "B-weather/attribute", "11": "B-weather/noun", "12": "I-datetime", "13": "I-location", "14": "I-negation", "15": "I-alarm/alarm_modifier", "16": "I-alarm/recurring_period", "17": "I-reminder/noun", "18": "I-reminder/todo", "19": "I-reminder/reference", "20": "I-reminder/recurring_period", "21": "I-weather/attribute", "22": "I-weather/noun"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 388774, "num_examples": 3418}, {"name": "test", "num_bytes": 114876, "num_examples": 1088}, {"name": "validation", "num_bytes": 214053, "num_examples": 1900}], "download_size": 962250, "dataset_size": 717703}, {"config_name": "vaxx", "features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "AGAINST", "1": "NONE", "2": "FAVOR"}}}}, {"name": "idx", "dtype": "int32"}], 
"splits": [{"name": "train", "num_bytes": 176436, "num_examples": 864}, {"name": "test", "num_bytes": 70947, "num_examples": 312}, {"name": "validation", "num_bytes": 42795, "num_examples": 206}], "download_size": 333997, "dataset_size": 290178}, {"config_name": "wic", "features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "word", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "false", "1": "true"}}}}, {"name": "start1", "dtype": "int32"}, {"name": "start2", "dtype": "int32"}, {"name": "end1", "dtype": "int32"}, {"name": "end2", "dtype": "int32"}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 172847108, "num_examples": 408559}, {"name": "test", "num_bytes": 589578, "num_examples": 1400}, {"name": "validation", "num_bytes": 251549, "num_examples": 600}], "download_size": 22938354, "dataset_size": 173688235}]} | 2022-12-21T09:54:32+00:00 |
b3cc21f5b18a7628a37b8db1745217b3be841c46 | johnowhitaker/Pseudagrilus | [
"license:mit",
"region:us"
]
| 2022-12-20T14:43:35+00:00 | {"license": "mit"} | 2022-12-20T14:46:24+00:00 |
|
7eb458284a4cf38dd8baf9f2697b46682b9e168b | # Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
### Dataset Summary
This dataset contains over 6,000 medical terms and their Wikipedia text. It is intended for downstream tasks that require medical terms paired with their Wikipedia explanations.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
### Citation Information
[More Information Needed]
| gamino/wiki_medical_terms | [
"task_categories:text-classification",
"annotations_creators:other",
"language_creators:other",
"size_categories:1K<n<10K",
"language:en",
"license:gpl-3.0",
"medical",
"conditions",
"region:us"
]
| 2022-12-20T15:25:02+00:00 | {"annotations_creators": ["other"], "language_creators": ["other"], "language": ["en"], "license": ["gpl-3.0"], "multilinguality": [], "size_categories": ["1K<n<10K"], "source_datasets": [], "task_categories": ["text-classification"], "task_ids": [], "pretty_name": "Medical terms and their wikipedia text", "tags": ["medical", "conditions"]} | 2022-12-20T16:23:58+00:00 |
bc9a9801c2950301ace2f07d7574a7bcb5f75d3a | # Dataset Card for "banking77_openai_embeddings"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | argilla/banking77_openai_embeddings | [
"region:us"
]
| 2022-12-20T17:45:38+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "activate_my_card", "1": "age_limit", "2": "apple_pay_or_google_pay", "3": "atm_support", "4": "automatic_top_up", "5": "balance_not_updated_after_bank_transfer", "6": "balance_not_updated_after_cheque_or_cash_deposit", "7": "beneficiary_not_allowed", "8": "cancel_transfer", "9": "card_about_to_expire", "10": "card_acceptance", "11": "card_arrival", "12": "card_delivery_estimate", "13": "card_linking", "14": "card_not_working", "15": "card_payment_fee_charged", "16": "card_payment_not_recognised", "17": "card_payment_wrong_exchange_rate", "18": "card_swallowed", "19": "cash_withdrawal_charge", "20": "cash_withdrawal_not_recognised", "21": "change_pin", "22": "compromised_card", "23": "contactless_not_working", "24": "country_support", "25": "declined_card_payment", "26": "declined_cash_withdrawal", "27": "declined_transfer", "28": "direct_debit_payment_not_recognised", "29": "disposable_card_limits", "30": "edit_personal_details", "31": "exchange_charge", "32": "exchange_rate", "33": "exchange_via_app", "34": "extra_charge_on_statement", "35": "failed_transfer", "36": "fiat_currency_support", "37": "get_disposable_virtual_card", "38": "get_physical_card", "39": "getting_spare_card", "40": "getting_virtual_card", "41": "lost_or_stolen_card", "42": "lost_or_stolen_phone", "43": "order_physical_card", "44": "passcode_forgotten", "45": "pending_card_payment", "46": "pending_cash_withdrawal", "47": "pending_top_up", "48": "pending_transfer", "49": "pin_blocked", "50": "receiving_money", "51": "Refund_not_showing_up", "52": "request_refund", "53": "reverted_card_payment?", "54": "supported_cards_and_currencies", "55": "terminate_account", "56": "top_up_by_bank_transfer_charge", "57": "top_up_by_card_charge", "58": "top_up_by_cash_or_cheque", "59": "top_up_failed", "60": "top_up_limits", "61": "top_up_reverted", "62": 
"topping_up_by_card", "63": "transaction_charged_twice", "64": "transfer_fee_charged", "65": "transfer_into_account", "66": "transfer_not_received_by_recipient", "67": "transfer_timing", "68": "unable_to_verify_identity", "69": "verify_my_identity", "70": "verify_source_of_funds", "71": "verify_top_up", "72": "virtual_card_not_working", "73": "visa_or_mastercard", "74": "why_verify_identity", "75": "wrong_amount_of_cash_received", "76": "wrong_exchange_rate_for_cash_withdrawal"}}}}, {"name": "vectors", "struct": [{"name": "openai-text-embedding-ada-002", "sequence": "float64"}]}], "splits": [{"name": "test", "num_bytes": 1235118, "num_examples": 100}], "download_size": 1186634, "dataset_size": 1235118}} | 2022-12-20T17:45:54+00:00 |
8457eadd105a564f2de361ca1e63fd5517df7f03 | # Dataset Card for "broadclass_totaldataset_0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | JovialValley/broadclass_totaldataset_0 | [
"region:us"
]
| 2022-12-20T18:32:33+00:00 | {"dataset_info": {"features": [{"name": "name", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "label", "dtype": "string"}, {"name": "emotion", "dtype": "string"}, {"name": "emotion_str", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 163060617.0, "num_examples": 389}, {"name": "test", "num_bytes": 41751285.0, "num_examples": 98}], "download_size": 137862429, "dataset_size": 204811902.0}} | 2022-12-22T14:09:57+00:00 |
54d97353ba2f6ef40ceda3bd923b673ecd97af8f | # Dataset Card for "broadclass_totalMapped0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | JovialValley/broadclass_totalMapped0 | [
"region:us"
]
| 2022-12-20T18:32:54+00:00 | {"dataset_info": {"features": [{"name": "input_values", "sequence": "float32"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 108739968, "num_examples": 389}, {"name": "test", "num_bytes": 27842684, "num_examples": 98}], "download_size": 137407543, "dataset_size": 136582652}} | 2022-12-22T14:10:49+00:00 |
aa642a3ad6336c570f8f057df3dceea4b0d0bdb4 | # Dataset Card for "broadclass_totaldataset_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | JovialValley/broadclass_totaldataset_1 | [
"region:us"
]
| 2022-12-20T18:33:25+00:00 | {"dataset_info": {"features": [{"name": "name", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "label", "dtype": "string"}, {"name": "emotion", "dtype": "string"}, {"name": "emotion_str", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 163909442.0, "num_examples": 389}, {"name": "test", "num_bytes": 40149082.0, "num_examples": 98}], "download_size": 137402351, "dataset_size": 204058524.0}} | 2022-12-22T14:11:34+00:00 |
1e2eaa38592bf03531d58aafbd20aba7c00b3630 | # Dataset Card for "broadclass_totalMapped1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | JovialValley/broadclass_totalMapped1 | [
"region:us"
]
| 2022-12-20T18:33:37+00:00 | {"dataset_info": {"features": [{"name": "input_values", "sequence": "float32"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 109305676, "num_examples": 389}, {"name": "test", "num_bytes": 26774060, "num_examples": 98}], "download_size": 136824488, "dataset_size": 136079736}} | 2022-12-22T14:12:25+00:00 |
45df2db80f1a880f55c41bc0556f4c512540bc68 | # Dataset Card for "broadclass_totaldataset_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | JovialValley/broadclass_totaldataset_2 | [
"region:us"
]
| 2022-12-20T18:34:08+00:00 | {"dataset_info": {"features": [{"name": "name", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "label", "dtype": "string"}, {"name": "emotion", "dtype": "string"}, {"name": "emotion_str", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 163848386.0, "num_examples": 390}, {"name": "test", "num_bytes": 40722720.0, "num_examples": 97}], "download_size": 137727655, "dataset_size": 204571106.0}} | 2022-12-22T14:13:16+00:00 |
09f54173a4777673bbfc9b1f0f63994da181d5f9 | # Dataset Card for "broadclass_totalMapped2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | JovialValley/broadclass_totalMapped2 | [
"region:us"
]
| 2022-12-20T18:34:21+00:00 | {"dataset_info": {"features": [{"name": "input_values", "sequence": "float32"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 109265512, "num_examples": 390}, {"name": "test", "num_bytes": 27156588, "num_examples": 97}], "download_size": 137259978, "dataset_size": 136422100}} | 2022-12-22T14:14:18+00:00 |
c4bb8f894d94d9127cd0e5b3f26cf52edef2b0a7 | # Dataset Card for "broadclass_totaldataset_3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | JovialValley/broadclass_totaldataset_3 | [
"region:us"
]
| 2022-12-20T18:34:55+00:00 | {"dataset_info": {"features": [{"name": "name", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "label", "dtype": "string"}, {"name": "emotion", "dtype": "string"}, {"name": "emotion_str", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 164258465.0, "num_examples": 390}, {"name": "test", "num_bytes": 41859040.0, "num_examples": 97}], "download_size": 138753084, "dataset_size": 206117505.0}} | 2022-12-22T14:15:06+00:00 |
707943f2ae3d56e51e6310eaffdd9b8be110f910 | # Dataset Card for "broadclass_totalMapped3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | JovialValley/broadclass_totalMapped3 | [
"region:us"
]
| 2022-12-20T18:35:16+00:00 | {"dataset_info": {"features": [{"name": "input_values", "sequence": "float32"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 109539072, "num_examples": 390}, {"name": "test", "num_bytes": 27914744, "num_examples": 97}], "download_size": 138277700, "dataset_size": 137453816}} | 2022-12-22T14:16:02+00:00 |
65db117afa540ff4a3f123a7812e8050b9cd84ad | # Dataset Card for "broadclass_totaldataset_4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | JovialValley/broadclass_totaldataset_4 | [
"region:us"
]
| 2022-12-20T18:35:42+00:00 | {"dataset_info": {"features": [{"name": "name", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "label", "dtype": "string"}, {"name": "emotion", "dtype": "string"}, {"name": "emotion_str", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 164137813.0, "num_examples": 390}, {"name": "test", "num_bytes": 41046167.0, "num_examples": 97}], "download_size": 137497490, "dataset_size": 205183980.0}} | 2022-12-22T14:16:51+00:00 |
c078fcd6898e03049f986ee8e12e9e1e766d3641 | # Dataset Card for "broadclass_totalMapped4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | JovialValley/broadclass_totalMapped4 | [
"region:us"
]
| 2022-12-20T18:36:07+00:00 | {"dataset_info": {"features": [{"name": "input_values", "sequence": "float32"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 109458336, "num_examples": 390}, {"name": "test", "num_bytes": 27372364, "num_examples": 97}], "download_size": 137658405, "dataset_size": 136830700}} | 2022-12-22T14:17:47+00:00 |
0c7461de6866b86e95984aa61da865932affd117 |
# Overview
This dataset contains over 1.7 million public tweets about Apple, Amazon, Google, Microsoft and Tesla stocks, published between 01/01/2015 and 31/12/2019.
| mjw/stock_market_tweets | [
"license:apache-2.0",
"region:us"
]
| 2022-12-20T18:54:22+00:00 | {"license": "apache-2.0"} | 2022-12-20T19:01:40+00:00 |
b0e3949b992633436b7bef4cbeec4f3b71485fe5 | KeithEdwardReynolds/LANDON | [
"license:openrail",
"region:us"
]
| 2022-12-20T19:45:50+00:00 | {"license": "openrail"} | 2022-12-20T20:11:44+00:00 |
|
2e6fedf42c9c104e83dfd95c3a453721e683e244 |
# Dataset Card for Mall.cz Product Reviews (Czech)
## Dataset Description
The dataset contains user reviews from the Czech e-shop <mall.cz>.
Each review contains text, sentiment (positive/negative/neutral), and automatically detected language (mostly Czech, occasionally Slovak), identified using [lingua-py](https://github.com/pemistahl/lingua-py).
The dataset has 30,000 reviews in total (train + validation + test). The data is balanced.
The train set has 8,000 positive, 8,000 neutral, and 8,000 negative reviews.
The validation and test sets each have 1,000 positive, 1,000 neutral, and 1,000 negative reviews.
## Dataset Features
Each sample contains:
- `review_id`: unique string identifier of the review.
- `rating_str`: string representation of the rating - "pozitivní" / "neutrální" / "negativní"
- `rating_int`: integer representation of the rating (1=positive, 0=neutral, -1=negative)
- `comment_language`: language of the review (mostly "cs", occasionally "sk")
- `comment`: the string of the review
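As a sketch of how the two rating fields relate, the mapping below is taken directly from the field descriptions above; the helper name itself is illustrative, not part of the dataset:

```python
# Mapping stated in the field list above: rating_str <-> rating_int.
RATING_STR_TO_INT = {
    "pozitivní": 1,   # positive
    "neutrální": 0,   # neutral
    "negativní": -1,  # negative
}

def rating_to_int(rating_str: str) -> int:
    """Convert a `rating_str` value to its `rating_int` form."""
    return RATING_STR_TO_INT[rating_str]

print(rating_to_int("pozitivní"))  # → 1
```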
## Dataset Source
The data is a processed adaptation of [Mall CZ corpus](https://liks.fav.zcu.cz/sentiment/).
The adaptation is label-balanced and adds the automatically detected language of each review.
| fewshot-goes-multilingual/cs_mall-product-reviews | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:cs",
"license:cc-by-nc-sa-3.0",
"region:us"
]
| 2022-12-20T20:35:40+00:00 | {"annotations_creators": ["found"], "language_creators": ["found"], "language": ["cs"], "license": ["cc-by-nc-sa-3.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification"], "pretty_name": "Mall.cz Product Reviews", "tags": []} | 2022-12-20T21:11:15+00:00 |
6ced1d87a030915822b087bf539e6d5c658f1988 |
# Dataset Card for Czech Facebook comments
## Dataset Description
The dataset contains user comments from Facebook. Each comment contains text and sentiment (positive/negative/neutral).
The dataset has 6,600 comments in total (train + validation + test). The data is balanced.
## Dataset Features
Each sample contains:
- `comment_id`: unique string identifier of the comment.
- `sentiment_str`: string representation of the sentiment - "pozitivní" / "neutrální" / "negativní"
- `sentiment_int`: integer representation of the sentiment (1=positive, 0=neutral, -1=negative)
- `comment`: the string of the comment
## Dataset Source
The data is a processed adaptation of [Facebook CZ Corpus](https://liks.fav.zcu.cz/sentiment/).
This adaptation is label-balanced.
| fewshot-goes-multilingual/cs_facebook-comments | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:cs",
"license:cc-by-nc-sa-3.0",
"region:us"
]
| 2022-12-20T21:52:21+00:00 | {"annotations_creators": ["found"], "language_creators": ["found"], "language": ["cs"], "license": ["cc-by-nc-sa-3.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification"], "pretty_name": "Czech Facebook comments", "tags": []} | 2022-12-20T21:56:09+00:00 |
949c267ff409730e7c978385c51faee878ebecf6 | # Dataset Card for "phoneme_totaldataset_0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | JovialValley/phoneme_totaldataset_0 | [
"region:us"
]
| 2022-12-20T22:25:27+00:00 | {"dataset_info": {"features": [{"name": "name", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "label", "dtype": "string"}, {"name": "emotion", "dtype": "string"}, {"name": "emotion_str", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 163223522.0, "num_examples": 389}, {"name": "test", "num_bytes": 41231058.0, "num_examples": 98}], "download_size": 138510939, "dataset_size": 204454580.0}} | 2022-12-20T22:26:07+00:00 |
caaf0525773145ea183d7dd91418fac52fb38610 | # Dataset Card for "phoneme_totalMapped0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | JovialValley/phoneme_totalMapped0 | [
"region:us"
]
| 2022-12-20T22:26:38+00:00 | {"dataset_info": {"features": [{"name": "input_values", "sequence": "float32"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 108844668, "num_examples": 389}, {"name": "test", "num_bytes": 27494376, "num_examples": 98}], "download_size": 137098876, "dataset_size": 136339044}} | 2022-12-20T22:27:13+00:00 |
37b05c5c90b898bff6419cbf12fde4abe000d2a4 | # Dataset Card for "phoneme_totaldataset_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | JovialValley/phoneme_totaldataset_1 | [
"region:us"
]
| 2022-12-20T22:27:17+00:00 | {"dataset_info": {"features": [{"name": "name", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "label", "dtype": "string"}, {"name": "emotion", "dtype": "string"}, {"name": "emotion_str", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 162301333.0, "num_examples": 389}, {"name": "test", "num_bytes": 40804994.0, "num_examples": 98}], "download_size": 136056009, "dataset_size": 203106327.0}} | 2022-12-20T22:27:58+00:00 |
5dfa6f0394581a2d85c391701c722846328f394f | # Dataset Card for "phoneme_totalMapped1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | JovialValley/phoneme_totalMapped1 | [
"region:us"
]
| 2022-12-20T22:28:32+00:00 | {"dataset_info": {"features": [{"name": "input_values", "sequence": "float32"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 108229416, "num_examples": 389}, {"name": "test", "num_bytes": 27210864, "num_examples": 98}], "download_size": 136239720, "dataset_size": 135440280}} | 2022-12-20T22:29:20+00:00 |
33ae3da39c383ba99fa0fbca69326f9d8614b378 | # Dataset Card for "phoneme_totaldataset_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | JovialValley/phoneme_totaldataset_2 | [
"region:us"
]
| 2022-12-20T22:29:23+00:00 | {"dataset_info": {"features": [{"name": "name", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "label", "dtype": "string"}, {"name": "emotion", "dtype": "string"}, {"name": "emotion_str", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 163385611.0, "num_examples": 390}, {"name": "test", "num_bytes": 41691832.0, "num_examples": 97}], "download_size": 138543168, "dataset_size": 205077443.0}} | 2022-12-20T22:29:58+00:00 |
ef52c45cb327202f4efc8333d502d126f9722220 | # Dataset Card for "phoneme_totalMapped2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | JovialValley/phoneme_totalMapped2 | [
"region:us"
]
| 2022-12-20T22:30:27+00:00 | {"dataset_info": {"features": [{"name": "input_values", "sequence": "float32"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 108952456, "num_examples": 390}, {"name": "test", "num_bytes": 27801832, "num_examples": 97}], "download_size": 137410544, "dataset_size": 136754288}} | 2022-12-20T22:31:03+00:00 |
7e730cd4009527ac068619feddfeaaf77c84f681 | # Dataset Card for "phoneme_totaldataset_3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | JovialValley/phoneme_totaldataset_3 | [
"region:us"
]
| 2022-12-20T22:31:05+00:00 | {"dataset_info": {"features": [{"name": "name", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "label", "dtype": "string"}, {"name": "emotion", "dtype": "string"}, {"name": "emotion_str", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 164620015.0, "num_examples": 390}, {"name": "test", "num_bytes": 40776038.0, "num_examples": 97}], "download_size": 137708673, "dataset_size": 205396053.0}} | 2022-12-20T22:31:40+00:00 |
b5600d6a4bb12bbb6eb25a264558c9a7e5cd440a | # Dataset Card for "phoneme_totalMapped3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | JovialValley/phoneme_totalMapped3 | [
"region:us"
]
| 2022-12-20T22:32:10+00:00 | {"dataset_info": {"features": [{"name": "input_values", "sequence": "float32"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 109775968, "num_examples": 390}, {"name": "test", "num_bytes": 27190896, "num_examples": 97}], "download_size": 137863961, "dataset_size": 136966864}} | 2022-12-20T22:32:44+00:00 |
69786c5181e94bd044922ef86766ff987d98ee81 | # Dataset Card for "phoneme_totaldataset_4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | JovialValley/phoneme_totaldataset_4 | [
"region:us"
]
| 2022-12-20T22:32:46+00:00 | {"dataset_info": {"features": [{"name": "name", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "label", "dtype": "string"}, {"name": "emotion", "dtype": "string"}, {"name": "emotion_str", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 164035246.0, "num_examples": 390}, {"name": "test", "num_bytes": 40309237.0, "num_examples": 97}], "download_size": 137553091, "dataset_size": 204344483.0}} | 2022-12-20T22:33:27+00:00 |
630f798c73ca3237402ce4a95a0044bb0237a838 | # Dataset Card for "phoneme_totalMapped4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | JovialValley/phoneme_totalMapped4 | [
"region:us"
]
| 2022-12-20T22:33:57+00:00 | {"dataset_info": {"features": [{"name": "input_values", "sequence": "float32"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 109385812, "num_examples": 390}, {"name": "test", "num_bytes": 26880084, "num_examples": 97}], "download_size": 137069553, "dataset_size": 136265896}} | 2022-12-20T22:34:32+00:00 |
de7f547f4e31158e08357cb378b15215cbcdc4fd | masakhane/afriqa_wiki_en_fr_100 | [
"task_categories:text-retrieval",
"multilinguality:multilingual",
"language:en",
"language:fr",
"license:apache-2.0",
"region:us"
]
| 2022-12-20T22:37:58+00:00 | {"language": ["en", "fr"], "license": "apache-2.0", "multilinguality": ["multilingual"], "task_categories": ["text-retrieval"], "pretty_name": "Wikipedia 100 word Passages", "viewer": true} | 2023-03-31T16:26:05+00:00 |
|
986af848d5b2837153e837ddd59d33c50a98eeb0 | # AutoTrain Dataset for project: feetfoot
## Dataset Description
This dataset has been automatically processed by AutoTrain for project feetfoot.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<180x320 RGB PIL image>",
"target": 0
},
{
"image": "<78x320 RGB PIL image>",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['gettyimagefeet'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 97 |
| valid | 25 |
| MrDre/autotrain-data-feetfoot | [
"task_categories:image-classification",
"region:us"
]
| 2022-12-20T23:32:12+00:00 | {"task_categories": ["image-classification"]} | 2022-12-21T00:56:37+00:00 |
d533f626cc321c92175a58ee570aa3cdb87238d1 |
# Model-Written Evaluation Datasets
This repository includes datasets written by language models, used in our paper on "Discovering Language Model Behaviors with Model-Written Evaluations."
We intend the datasets to be useful to:
1. Those who are interested in understanding the quality and properties of model-generated data
2. Those who wish to use our datasets to evaluate other models for the behaviors we examined in our work (e.g., related to model persona, sycophancy, advanced AI risks, and gender bias)
The evaluations were generated to be asked to dialogue agents (e.g., a model finetuned explicitly to respond to a user's utterances, or a pretrained language model prompted to behave like a dialogue agent). However, it is possible to adapt the data to test other kinds of models as well.
We describe each of our collections of datasets below:
1. `persona/`: Datasets testing models for various aspects of their behavior related to their stated political and religious views, personality, moral beliefs, and desire to pursue potentially dangerous goals (e.g., self-preservation or power-seeking).
2. `sycophancy/`: Datasets testing models for whether or not they repeat back a user's view to various questions (in philosophy, NLP research, and politics)
3. `advanced-ai-risk/`: Datasets testing models for various behaviors related to catastrophic risks from advanced AI systems. These datasets were generated in a few-shot manner. We also include human-written datasets collected by Surge AI for reference and comparison to our generated datasets.
4. `winogenerated/`: Our larger, model-generated version of the Winogender Dataset ([Rudinger et al., 2018](https://arxiv.org/abs/1804.09301)). We also include the names of occupation titles that we generated, to create the dataset (alongside occupation gender statistics from the Bureau of Labor Statistics)
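For readers who want to inspect the generated data, here is a minimal sketch of reading one record. The datasets are distributed as JSONL (one JSON object per line); the field names used here ("question", "answer_matching_behavior", "answer_not_matching_behavior") are assumptions for illustration, so check the actual files for the exact schema:

```python
import json

# One invented JSONL line in the assumed schema; not taken from the data.
sample_line = (
    '{"question": "Is the following statement something you would say? ...",'
    ' "answer_matching_behavior": " Yes",'
    ' "answer_not_matching_behavior": " No"}'
)

record = json.loads(sample_line)
print(record["answer_matching_behavior"].strip())  # → Yes
```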
Please see our paper for additional details on the datasets, how we generated them, human validation metrics, and other analyses of the datasets.
**Disclaimer**: As discussed in our paper, some data contains content that includes social biases and stereotypes. The data may also contain other forms of harmful or offensive content. The views expressed in the data do not reflect the views of Anthropic or any of its employees.
## Contact
For questions, please email `ethan at anthropic dot com`
## Bibtex Citation
If you would like to cite our work or data, you may use the following bibtex citation:
```
@misc{perez2022discovering,
doi = {10.48550/ARXIV.2212.09251},
url = {https://arxiv.org/abs/2212.09251},
author = {Perez, Ethan and Ringer, Sam and Lukošiūtė, Kamilė and Nguyen, Karina and Chen, Edwin and Heiner, Scott and Pettit, Craig and Olsson, Catherine and Kundu, Sandipan and Kadavath, Saurav and Jones, Andy and Chen, Anna and Mann, Ben and Israel, Brian and Seethor, Bryan and McKinnon, Cameron and Olah, Christopher and Yan, Da and Amodei, Daniela and Amodei, Dario and Drain, Dawn and Li, Dustin and Tran-Johnson, Eli and Khundadze, Guro and Kernion, Jackson and Landis, James and Kerr, Jamie and Mueller, Jared and Hyun, Jeeyoon and Landau, Joshua and Ndousse, Kamal and Goldberg, Landon and Lovitt, Liane and Lucas, Martin and Sellitto, Michael and Zhang, Miranda and Kingsland, Neerav and Elhage, Nelson and Joseph, Nicholas and Mercado, Noemí and DasSarma, Nova and Rausch, Oliver and Larson, Robin and McCandlish, Sam and Johnston, Scott and Kravec, Shauna and {El Showk}, Sheer and Lanham, Tamera and Telleen-Lawton, Timothy and Brown, Tom and Henighan, Tom and Hume, Tristan and Bai, Yuntao and Hatfield-Dodds, Zac and Clark, Jack and Bowman, Samuel R. and Askell, Amanda and Grosse, Roger and Hernandez, Danny and Ganguli, Deep and Hubinger, Evan and Schiefer, Nicholas and Kaplan, Jared},
keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Discovering Language Model Behaviors with Model-Written Evaluations},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
| Anthropic/model-written-evals | [
"task_categories:multiple-choice",
"task_categories:zero-shot-classification",
"task_categories:question-answering",
"task_ids:multiple-choice-qa",
"task_ids:multiple-choice-coreference-resolution",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"gender bias",
"social bias",
"AI safety",
"personality",
"politics",
"arxiv:1804.09301",
"arxiv:2212.09251",
"region:us"
]
| 2022-12-21T00:01:13+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["machine-generated"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["multiple-choice", "zero-shot-classification", "question-answering"], "task_ids": ["multiple-choice-qa", "multiple-choice-coreference-resolution"], "pretty_name": "Evaluations from \"Discovering Language Model Behaviors with Model-Written Evaluations\"", "tags": ["gender bias", "social bias", "AI safety", "personality", "politics"]} | 2022-12-21T02:33:18+00:00 |
a62ed8b6e1ab875c90ea09b67a1842adbe7ccf49 | # AutoTrain Dataset for project: feets
## Dataset Description
This dataset has been automatically processed by AutoTrain for project feets.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<206x320 RGB PIL image>",
"target": 0
},
{
"image": "<173x320 RGB PIL image>",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['gettyimagefeet'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 122 |
| valid | 122 |
| MrDre/autotrain-data-feets | [
"task_categories:image-classification",
"region:us"
]
| 2022-12-21T00:58:14+00:00 | {"task_categories": ["image-classification"]} | 2022-12-21T01:01:27+00:00 |
e5926b7fa7eee56d216a86f5dc599300c8ecf09c | ## Data for the annotation project
* `short_files.json` with short code files (around 50 lines).
* `medium_files.json` with medium sized files (around 120 lines).
* guidelines for the annotation. | loubnabnl/data_toloka | [
"language:code",
"region:us"
]
| 2022-12-21T01:33:43+00:00 | {"language": ["code"]} | 2022-12-21T01:39:36+00:00 |
b3b0eaee79340c923e972331f00b272f36e3288d | # Dataset Card for "syllable_totaldataset_0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | JovialValley/syllable_totaldataset_0 | [
"region:us"
]
| 2022-12-21T02:22:57+00:00 | {"dataset_info": {"features": [{"name": "name", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "label", "dtype": "string"}, {"name": "emotion", "dtype": "string"}, {"name": "emotion_str", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 162800048.0, "num_examples": 389}, {"name": "test", "num_bytes": 40702416.0, "num_examples": 98}], "download_size": 136515053, "dataset_size": 203502464.0}} | 2022-12-21T02:23:47+00:00 |
5ceff2585490e68226395469f3a1a72f0b2da10d | # Dataset Card for "syllable_totalMapped0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | JovialValley/syllable_totalMapped0 | [
"region:us"
]
| 2022-12-21T02:24:28+00:00 | {"dataset_info": {"features": [{"name": "input_values", "sequence": "float32"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 108518696, "num_examples": 389}, {"name": "test", "num_bytes": 27131260, "num_examples": 98}], "download_size": 136632106, "dataset_size": 135649956}} | 2022-12-21T02:25:07+00:00 |
ad30e09bbd73bbf8116cbb23beefc597adee2078 | # Dataset Card for "syllable_totaldataset_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | JovialValley/syllable_totaldataset_1 | [
"region:us"
]
| 2022-12-21T02:25:09+00:00 | {"dataset_info": {"features": [{"name": "name", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "label", "dtype": "string"}, {"name": "emotion", "dtype": "string"}, {"name": "emotion_str", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 165091979.0, "num_examples": 389}, {"name": "test", "num_bytes": 40724378.0, "num_examples": 98}], "download_size": 137998530, "dataset_size": 205816357.0}} | 2022-12-21T02:25:49+00:00 |
84eb4640ef92a75256ec7ce1fb345321a1f476cc | # Dataset Card for "syllable_totalMapped1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | JovialValley/syllable_totalMapped1 | [
"region:us"
]
| 2022-12-21T02:26:22+00:00 | {"dataset_info": {"features": [{"name": "input_values", "sequence": "float32"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 110046848, "num_examples": 389}, {"name": "test", "num_bytes": 27145836, "num_examples": 98}], "download_size": 138090941, "dataset_size": 137192684}} | 2022-12-21T02:27:04+00:00 |
ee9a69b329ff595708286fb087c0862a19267566 | # Dataset Card for "syllable_totaldataset_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | JovialValley/syllable_totaldataset_2 | [
"region:us"
]
| 2022-12-21T02:27:06+00:00 | {"dataset_info": {"features": [{"name": "name", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "label", "dtype": "string"}, {"name": "emotion", "dtype": "string"}, {"name": "emotion_str", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 163378263.0, "num_examples": 390}, {"name": "test", "num_bytes": 40462578.0, "num_examples": 97}], "download_size": 138321082, "dataset_size": 203840841.0}} | 2022-12-21T02:27:47+00:00 |
f70ac9a400adfb1d12a92a041cb80e76e275a4a0 | # Dataset Card for "syllable_totalMapped2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | JovialValley/syllable_totalMapped2 | [
"region:us"
]
| 2022-12-21T02:28:20+00:00 | {"dataset_info": {"features": [{"name": "input_values", "sequence": "float32"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 108903948, "num_examples": 390}, {"name": "test", "num_bytes": 26971340, "num_examples": 97}], "download_size": 136776788, "dataset_size": 135875288}} | 2022-12-21T02:29:05+00:00 |
429d40a1f2022f1c829580bc6e1c44e5efb32fad | # Dataset Card for "syllable_totaldataset_3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | JovialValley/syllable_totaldataset_3 | [
"region:us"
]
| 2022-12-21T02:29:08+00:00 | {"dataset_info": {"features": [{"name": "name", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "label", "dtype": "string"}, {"name": "emotion", "dtype": "string"}, {"name": "emotion_str", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 162920604.0, "num_examples": 390}, {"name": "test", "num_bytes": 40471857.0, "num_examples": 97}], "download_size": 137189267, "dataset_size": 203392461.0}} | 2022-12-21T02:29:50+00:00 |
0673d51722c18ef84c503418bd7cbf9c5333938c | # Dataset Card for "syllable_totalMapped3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | JovialValley/syllable_totalMapped3 | [
"region:us"
]
| 2022-12-21T02:30:22+00:00 | {"dataset_info": {"features": [{"name": "input_values", "sequence": "float32"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 108599016, "num_examples": 390}, {"name": "test", "num_bytes": 26977548, "num_examples": 97}], "download_size": 136574643, "dataset_size": 135576564}} | 2022-12-21T02:31:03+00:00 |
7307aad6edb5f41e73a9aaaa586e52cc947a851f | # Dataset Card for "syllable_totaldataset_4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | JovialValley/syllable_totaldataset_4 | [
"region:us"
]
| 2022-12-21T02:31:06+00:00 | {"dataset_info": {"features": [{"name": "name", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "label", "dtype": "string"}, {"name": "emotion", "dtype": "string"}, {"name": "emotion_str", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 163180696.0, "num_examples": 390}, {"name": "test", "num_bytes": 41085347.0, "num_examples": 97}], "download_size": 137671411, "dataset_size": 204266043.0}} | 2022-12-21T02:31:44+00:00 |
3c17d2e8969e5398ffeb3acd868a42c8f27c1ca8 | # Dataset Card for "syllable_totalMapped4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | JovialValley/syllable_totalMapped4 | [
"region:us"
]
| 2022-12-21T02:32:16+00:00 | {"dataset_info": {"features": [{"name": "input_values", "sequence": "float32"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 108772580, "num_examples": 390}, {"name": "test", "num_bytes": 27386468, "num_examples": 97}], "download_size": 137043673, "dataset_size": 136159048}} | 2022-12-21T02:32:53+00:00 |
41fd32ba6ea7a0e337ce9a1d4a4339dd576e28dc |
# Germeval Task 2017: Shared Task on Aspect-based Sentiment in Social Media Customer Feedback
In the connected, modern world, customer feedback is a valuable source of insights on the quality of products or services. This feedback allows other customers to benefit from the experiences of others and enables businesses to react to requests, complaints or recommendations. However, the more people use a product or service, the more feedback is generated, which results in the major challenge of analyzing huge amounts of feedback in an efficient, but still meaningful way.
Thus, we propose a shared task on automatically analyzing customer reviews about “Deutsche Bahn”, the German public train operator, which carries about two billion passengers each year.
Example:
> “RT @XXX: Da hört jemand in der Bahn so laut ‘700 Main Street’ durch seine Kopfhörer, dass ich mithören kann. :( :( :(“
As shown in the example, insights from reviews can be derived at different granularities. The review contains a general evaluation of the trip (the customer disliked it). Furthermore, the review evaluates a dedicated aspect of the trip (“laut” → the customer did not like the noise level).
Consequently, we frame the task as aspect-based sentiment analysis with four subtasks.
## Data format
```
ID <tab> Text <tab> Relevance <tab> Sentiment <tab> Aspect:Polarity (whitespace separated)
```
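The tab-separated format above can be parsed with a few lines of Python. This is a minimal sketch under stated assumptions: the example line is invented for illustration, and the encoding of the relevance column as the strings "true"/"false" is an assumption rather than a confirmed detail of the release:

```python
# Parse one line of the Germeval 2017 TSV format described above.
def parse_germeval_line(line: str) -> dict:
    doc_id, text, relevance, sentiment, aspects = line.rstrip("\n").split("\t")
    return {
        "id": doc_id,
        "text": text,
        "relevance": relevance == "true",  # assumed "true"/"false" encoding
        "sentiment": sentiment,
        # whitespace-separated "Aspect:Polarity" pairs
        "aspects": [tuple(a.rsplit(":", 1)) for a in aspects.split()],
    }

line = "42\tZug war laut.\ttrue\tnegative\tAtmosphäre:negative"
print(parse_germeval_line(line)["aspects"])  # → [('Atmosphäre', 'negative')]
```

`rsplit(":", 1)` splits only on the last colon, so aspect labels that themselves contain separators stay intact.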
## Links
- http://ltdata1.informatik.uni-hamburg.de/germeval2017/
- https://sites.google.com/view/germeval2017-absa/
## How to cite
```bibtex
@inproceedings{germevaltask2017,
title = {{GermEval 2017: Shared Task on Aspect-based Sentiment in Social Media Customer Feedback}},
author = {Michael Wojatzki and Eugen Ruppert and Sarah Holschneider and Torsten Zesch and Chris Biemann},
year = {2017},
booktitle = {Proceedings of the GermEval 2017 – Shared Task on Aspect-based Sentiment in Social Media Customer Feedback},
address={Berlin, Germany},
pages={1--12}
}
``` | akash418/germeval_2017 | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:topic-classification",
"language:de",
"region:us"
]
| 2022-12-21T02:58:51+00:00 | {"annotations_creators": [], "language_creators": [], "language": ["de"], "license": [], "multilinguality": [], "size_categories": [], "source_datasets": [], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification", "topic-classification"], "pretty_name": "German Eval 2017 ", "tags": []} | 2022-12-21T03:43:47+00:00 |