sha | text | id | tags | created_at | metadata | last_modified |
---|---|---|---|---|---|---|
828caf059ef103f1545b9e384c910a3e128bea04 |
For the sake of full disclosure, I am publishing the dataset I use to train [Crosstyan/BPModel](https://huggingface.co/Crosstyan/BPModel).
It contains NSFW content; viewer discretion is advised. | Crosstyan/BPDataset | [
"size_categories:1K<n<10K",
"license:openrail",
"not-for-all-audiences",
"region:us"
]
| 2022-12-21T03:11:52+00:00 | {"license": "openrail", "size_categories": ["1K<n<10K"], "tags": ["not-for-all-audiences"]} | 2023-12-04T18:06:36+00:00 |
77a26a5640739f8b9ca5039b599cf14c69fc3267 | Dataset homepage: https://sites.google.com/site/redwebcvpr18/ | sayakpaul/ReDWeb | [
"region:us"
]
| 2022-12-21T03:21:29+00:00 | {} | 2022-12-21T03:26:51+00:00 |
ea5f1b98cab4e1891fa483871e0bdd2ea42ce870 | # Dataset Card for "IAM"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | gagan3012/IAM | [
"region:us"
]
| 2022-12-21T05:12:11+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Noto_Sans_Arabic", "1": "Readex_Pro", "2": "Amiri", "3": "Noto_Kufi_Arabic", "4": "Reem_Kufi_Fun", "5": "Lateef", "6": "Changa", "7": "Kufam", "8": "ElMessiri", "9": "Reem_Kufi", "10": "Noto_Naskh_Arabic", "11": "Reem_Kufi_Ink", "12": "Tajawal", "13": "Aref_Ruqaa_Ink", "14": "Markazi_Text", "15": "IBM_Plex_Sans_Arabic", "16": "Vazirmatn", "17": "Harmattan", "18": "Gulzar", "19": "Scheherazade_New", "20": "Cairo", "21": "Amiri_Quran", "22": "Noto_Nastaliq_Urdu", "23": "Mada", "24": "Aref_Ruqaa", "25": "Almarai", "26": "Alkalami", "27": "Qahiri"}}}}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 563851079.0, "num_examples": 11344}], "download_size": 563727207, "dataset_size": 563851079.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-10-13T17:13:25+00:00 |
d2faabb4fd91224318c8a84a1e747976a27f365d |
# NijiJourney Prompt Pairs
#### A dataset containing txt2img prompt pairs for training diffusion models
The final goal of this dataset is to create an OpenJourney-like model, but with NijiJourney images | Korakoe/NijiJourney-Prompt-Pairs | [
"license:creativeml-openrail-m",
"region:us"
]
| 2022-12-21T06:13:45+00:00 | {"license": "creativeml-openrail-m"} | 2023-03-12T05:56:02+00:00 |
4726c9cb565ab5fda7ba40fffc5f6bcd65bc0d81 | # Dataset Card for "enwiki-20221101-sections"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | justram/enwiki-20221101-sections | [
"region:us"
]
| 2022-12-21T08:02:08+00:00 | {"dataset_info": {"features": [{"name": "text_id", "dtype": "string"}, {"name": "page_url", "dtype": "string"}, {"name": "page_title", "dtype": "string"}, {"name": "section_title", "dtype": "string"}, {"name": "context_page_description", "dtype": "string"}, {"name": "context_section_description", "dtype": "string"}, {"name": "media", "sequence": "string"}, {"name": "hierachy", "sequence": "string"}, {"name": "category", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 34190161255, "num_examples": 24220847}], "download_size": 12664592565, "dataset_size": 34190161255}} | 2022-12-21T08:32:05+00:00 |
d62d01b67775498eeff5dcd7ff8dc85f63e9ef5c | # Dataset of Du et al. (2022)
## Abstract
>Understanding causality has vital importance for various Natural Language Processing (NLP) applications. Beyond the labeled instances, conceptual explanations of the causality can provide deep understanding of the causal fact to facilitate the causal reasoning process. However, such explanation information still remains absent in existing causal reasoning resources. In this paper, we fill this gap by presenting a human-annotated explainable CAusal REasoning dataset (e-CARE), which contains over 20K causal reasoning questions, together with natural language formed explanations of the causal questions. Experimental results show that generating valid explanations for causal facts still remains especially challenging for the state-of-the-art models, and the explanation information can be helpful for promoting the accuracy and stability of causal reasoning models.
## Notes
Please note that the original dataset has been modified so that the variable names match with those in the COPA dataset (Roemmele et al., 2011). In addition, only the training and the development sets are [publicly available](https://github.com/waste-wood/e-care).
## References
Du, L., Ding, X., Xiong, K., Liu, T., & Qin, B. (2022). e-CARE: a New Dataset for Exploring Explainable Causal Reasoning. arXiv preprint arXiv:2205.05849.
Roemmele, M., Bejan, C., and Gordon, A. (2011) Choice of Plausible Alternatives: An Evaluation of Commonsense Causal Reasoning. AAAI Spring Symposium on Logical Formalizations of Commonsense Reasoning, Stanford University, March 21-23, 2011. | 12ml/e-CARE | [
"task_categories:multiple-choice",
"region:us"
]
| 2022-12-21T11:38:01+00:00 | {"task_categories": ["multiple-choice"]} | 2023-01-06T18:50:03+00:00 |
96bcc6d1de05a9cd86148af56f774923e4a4280f | # Kinyarwanda-English Common Voice dataset
A compilation of Kinyarwanda and English data to be used to train multilingual ASR.
**Note:** The audio data will be added in the future | mbazaNLP/common-voice-kinyarwanda-english-dataset | [
"size_categories:~ 3000 hours",
"size_categories:721398 clips",
"language:rw",
"language:en",
"license:cc-by-4.0",
"region:us"
]
| 2022-12-21T12:23:30+00:00 | {"language": ["rw", "en"], "license": ["cc-by-4.0"], "size_categories": ["~ 3000 hours", "721398 clips"]} | 2022-12-21T12:40:09+00:00 |
4a3e6a934b638906125229c5b62e8ced40907e0a |
# Dataset Card for Sloleks 3
**Important**: this is a mostly complete script for processing Sloleks 3. Certain data properties may not be exposed through the script.
Please see the [CLARIN repository](https://www.clarin.si/repository/xmlui/handle/11356/1745) for full details on what the dataset contains, and open an issue or a pull request if you require some other information from the raw data.
### Dataset Summary
Sloleks is a reference morphological lexicon of Slovene that was developed to be used in various NLP applications and language manuals.
It contains Slovene lemmas, their inflected or derivative word forms and the corresponding grammatical description.
In addition to the approx. 100,000 entries already available in [Sloleks 2.0](http://hdl.handle.net/11356/1230), Sloleks 3.0 contains an additional
approx. 265,000 newly generated entries from the most frequent lemmas in [Gigafida 2.0](http://hdl.handle.net/11356/1320) not yet included in previous versions of Sloleks.
For verbs, adjectives, adverbs, and common nouns, the lemmas were checked manually by three annotators and included in Sloleks only if confirmed as legitimate by at
least one annotator. No manual checking was performed on proper nouns. Lemmatization rules, part-of-speech categorization and the set of feature-value pairs follow the
[MULTEXT-East morphosyntactic specifications for Slovenian](https://nl.ijs.si/ME/V6/msd/html/msd-sl.html).
### Supported Tasks and Leaderboards
Other (the data is a knowledge base - lexicon).
### Languages
Slovenian.
## Dataset Structure
### Data Instances
Entry for the verb `absorbirati` (English: *to absorb*):
```
{
'headword_lemma': 'absorbirati',
'pos': 'verb',
'lex_unit': {'id': 'LE_a293f9ab871299f116dff2cc1421367a', 'form': 'absorbirati', 'key': 'G_absorbirati', 'type': 'single'},
'word_forms':
[
{'forms': ['absorbirati'], 'msd': 'Ggvn'},
{'forms': ['absorbirat'], 'msd': 'Ggvm'},
{'forms': ['absorbiral'], 'msd': 'Ggvd-em'},
{'forms': ['absorbirala'], 'msd': 'Ggvd-dm'},
{'forms': ['absorbirali'], 'msd': 'Ggvd-mm'},
{'forms': ['absorbirala'], 'msd': 'Ggvd-ez'},
{'forms': ['absorbirali'], 'msd': 'Ggvd-dz'},
{'forms': ['absorbirale'], 'msd': 'Ggvd-mz'},
{'forms': ['absorbiralo'], 'msd': 'Ggvd-es'},
{'forms': ['absorbirali'], 'msd': 'Ggvd-ds'},
{'forms': ['absorbirala'], 'msd': 'Ggvd-ms'},
{'forms': ['absorbiram'], 'msd': 'Ggvspe'},
{'forms': ['absorbiraš'], 'msd': 'Ggvsde'},
{'forms': ['absorbira'], 'msd': 'Ggvste'},
{'forms': ['absorbirava'], 'msd': 'Ggvspd'},
{'forms': ['absorbirata'], 'msd': 'Ggvsdd'},
{'forms': ['absorbirata'], 'msd': 'Ggvstd'},
{'forms': ['absorbiramo'], 'msd': 'Ggvspm'},
{'forms': ['absorbirate'], 'msd': 'Ggvsdm'},
{'forms': ['absorbirajo'], 'msd': 'Ggvstm'},
{'forms': ['absorbirajva'], 'msd': 'Ggvvpd'},
{'forms': ['absorbirajmo'], 'msd': 'Ggvvpm'},
{'forms': ['absorbiraj'], 'msd': 'Ggvvde'},
{'forms': ['absorbirajta'], 'msd': 'Ggvvdd'},
{'forms': ['absorbirajte'], 'msd': 'Ggvvdm'}
],
'is_manually_checked': True
}
```
### Data Fields
- `headword_lemma`: lemma of the headword;
- `pos`: coarse-grained part-of-speech tag (one of `{"noun", "verb", "adjective", "adverb", "pronoun", "numeral", "preposition", "conjunction", "particle", "interjection", "abbreviation", "residual"}`);
- `lex_unit`: properties of the lexical unit corresponding to the headword (`id`, `form`, `key` and `type`);
- `word_forms`: forms of the headword, each with its own list of possible forms and the morphosyntactic description of the form;
- `is_manually_checked`: whether the headword was manually validated or not.
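A minimal usage sketch (an assumption on my part: the dataset loads through the standard `datasets` API with its default configuration):
```python
from datasets import load_dataset

# Load the lexicon (the split name is an assumption; adjust if the loader differs).
sloleks = load_dataset("cjvt/sloleks", split="train")

entry = sloleks[0]
print(entry["headword_lemma"], entry["pos"], entry["is_manually_checked"])

# Each word form lists its surface forms together with a morphosyntactic description (MSD).
for word_form in entry["word_forms"][:5]:
    print(word_form["forms"], word_form["msd"])
```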
## Additional Information
### Dataset Curators
Jaka Čibej; et al. (please see http://hdl.handle.net/11356/1745 for the full list).
### Licensing Information
CC BY-SA 4.0.
### Citation Information
```
@misc{sloleks3,
title = {Morphological lexicon Sloleks 3.0},
author = {{\v C}ibej, Jaka and Gantar, Kaja and Dobrovoljc, Kaja and Krek, Simon and Holozan, Peter and Erjavec, Toma{\v z} and Romih, Miro and Arhar Holdt, {\v S}pela and Krsnik, Luka and Robnik-{\v S}ikonja, Marko},
url = {http://hdl.handle.net/11356/1745},
note = {Slovenian language resource repository {CLARIN}.{SI}},
copyright = {Creative Commons - Attribution-{ShareAlike} 4.0 International ({CC} {BY}-{SA} 4.0)},
year = {2022}
}
```
### Contributions
Thanks to [@matejklemen](https://github.com/matejklemen) for adding this dataset.
| cjvt/sloleks | [
"license:cc-by-sa-4.0",
"region:us"
]
| 2022-12-21T13:33:13+00:00 | {"license": "cc-by-sa-4.0"} | 2024-02-11T15:11:17+00:00 |
6186f97886d1767b77ad2783b4575ad898ceeb7d | ---
annotations_creators:
- machine-generated
language:
- ru
language_creators:
- machine-generated
license:
- afl-3.0
multilinguality: []
pretty_name: Dmitriy007/restor_punct_Lenta2
size_categories:
- 100K<n<1M
source_datasets:
- original
tags: []
task_categories:
- token-classification
task_ids: []
---
# Dataset Card for Dmitriy007/restor_punct_Lenta2
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The restor_punct_Lenta2 dataset (version 2.0) is a set of 800,975 blocks of Russian-language sentences split into words, with each word labeled with a marker for subsequent token classification.
The dataset has been cleaned of the characters: '...', ',', '«', '»', '\\', '-', '"'
Marker types: L L. L! L? B B. B! N N. No
Examples of marker meanings:
L -- the word starts with a lowercase letter + space
L. -- the word starts with a lowercase letter + period
B -- the word starts with a capital letter
B. -- the word starts with a capital letter + period
N -- a number + space
N. -- a number + period
No -- the symbol is undetermined
### Supported Tasks and Leaderboards
token-classification: the dataset can be used to train a model for restoring punctuation and capitalization.
### Languages
Text in Russian.
## Dataset Structure
### Data Instances
An example from the restor_punct_Lenta2 train split looks as follows:
{'words': ['фотограф-корреспондент', 'daily', 'mirror', 'рассказывает', 'случай', 'который', 'порадует', 'всех', 'друзей', 'животных'], 'labels': ['B', 'B', 'B', 'L', 'L', 'L', 'L', 'L', 'L', 'L.'], 'labels_id': [4, 4, 4, 0, 0, 0, 0, 0, 0, 1]}
### Data Fields
- `words`: a list of words containing the text split into individual words.
- `labels`: a list of marker strings.
- `labels_id`: integers from 0 to 9 denoting each marker's ordinal index.
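To make the marker semantics above concrete, here is a minimal reconstruction sketch (my own illustration, not part of the dataset's tooling):
```python
def restore(words, labels):
    """Rebuild capitalized, punctuated text from a labeled example (sketch)."""
    out = []
    for word, label in zip(words, labels):
        # 'B*' markers capitalize the word; 'L*' and 'N*' leave it as-is.
        if label.startswith("B"):
            word = word[:1].upper() + word[1:]
        # A trailing '.', '!' or '?' on the marker appends that character.
        if label[-1] in ".!?":
            word += label[-1]
        out.append(word)
    return " ".join(out)

words = ["фотограф-корреспондент", "daily", "mirror", "рассказывает", "случай",
         "который", "порадует", "всех", "друзей", "животных"]
labels = ["B", "B", "B", "L", "L", "L", "L", "L", "L", "L."]
print(restore(words, labels))
# Фотограф-корреспондент Daily Mirror рассказывает случай который порадует всех друзей животных.
```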
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
The restor_punct_Lenta2 dataset was developed for training a model that restores punctuation and
capitalization in sentence text. The assumption was that a model trained this way would be used in transcription tasks.
### Source Data
#### Initial Data Collection and Normalization
restor_punct_Lenta2 is based on the Lenta2 dataset of the CORUS project.
#### Who are the source language producers?
[More Information Needed]
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
Usernames and personal information of reviewers were not collected along with the reviews, but could potentially be recovered.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
| Dmitriy007/restor_punct_Lenta2 | [
"region:us"
]
| 2022-12-21T14:07:45+00:00 | {} | 2023-01-19T13:02:56+00:00 |
d63755480740dbf590917e4a90d81bf2596eb676 |
An RL environment called BallChase for the Godot Game Engine.
This environment was created with: https://github.com/edbeeching/godot_rl_agents
## Downloading the environment
After installing Godot RL Agents, download the environment with:
```
gdrl.env_from_hub -r edbeeching/godot_rl_BallChase
```
| edbeeching/godot_rl_BallChase | [
"deep-reinforcement-learning",
"reinforcement-learning",
"godot-rl",
"environments",
"video-games",
"region:us"
]
| 2022-12-21T14:29:19+00:00 | {"library_name": "godot-rl", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "godot-rl", "environments", "video-games"]} | 2024-01-07T09:46:50+00:00 |
7631ff2d592aa0d63d8c817846f321cc79a77f29 |
An RL environment called FlyBy for the Godot Game Engine.
This environment was created with: https://github.com/edbeeching/godot_rl_agents
## Downloading the environment
After installing Godot RL Agents, download the environment with:
```
gdrl.env_from_hub -r edbeeching/godot_rl_FlyBy
```
| edbeeching/godot_rl_FlyBy | [
"deep-reinforcement-learning",
"reinforcement-learning",
"godot-rl",
"environments",
"video-games",
"region:us"
]
| 2022-12-21T14:29:50+00:00 | {"library_name": "godot-rl", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "godot-rl", "environments", "video-games"]} | 2024-01-07T09:47:07+00:00 |
69f13fe15b356a53128ff4628e5d31821cef1225 |
An RL environment called FPS for the Godot Game Engine.
This environment was created with: https://github.com/edbeeching/godot_rl_agents
## Downloading the environment
After installing Godot RL Agents, download the environment with:
```
gdrl.env_from_hub -r edbeeching/godot_rl_FPS
```
| edbeeching/godot_rl_FPS | [
"deep-reinforcement-learning",
"reinforcement-learning",
"godot-rl",
"environments",
"video-games",
"region:us"
]
| 2022-12-21T14:30:31+00:00 | {"library_name": "godot-rl", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "godot-rl", "environments", "video-games"]} | 2024-01-07T09:47:23+00:00 |
689f721fd87c5c1152e028cbc2e053c52f162a1e |
An RL environment called JumperHard for the Godot Game Engine.
This environment was created with: https://github.com/edbeeching/godot_rl_agents
## Downloading the environment
After installing Godot RL Agents, download the environment with:
```
gdrl.env_from_hub -r edbeeching/godot_rl_JumperHard
```
| edbeeching/godot_rl_JumperHard | [
"deep-reinforcement-learning",
"reinforcement-learning",
"godot-rl",
"environments",
"video-games",
"region:us"
]
| 2022-12-21T14:31:50+00:00 | {"library_name": "godot-rl", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "godot-rl", "environments", "video-games"]} | 2024-01-07T09:48:07+00:00 |
2b780bfdd101a53e52109c1d27a0e38cb137fbc6 |
An RL environment called Racer for the Godot Game Engine.
This environment was created with: https://github.com/edbeeching/godot_rl_agents
## Downloading the environment
After installing Godot RL Agents, download the environment with:
```
gdrl.env_from_hub -r edbeeching/godot_rl_Racer
```
| edbeeching/godot_rl_Racer | [
"deep-reinforcement-learning",
"reinforcement-learning",
"godot-rl",
"environments",
"video-games",
"region:us"
]
| 2022-12-21T14:32:20+00:00 | {"library_name": "godot-rl", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "godot-rl", "environments", "video-games"]} | 2024-01-07T09:48:27+00:00 |
0a35afa9268df9dae11b8fe5850817cb7cb5db66 |
An RL environment called VirtualCamera for the Godot Game Engine.
This environment was created with: https://github.com/edbeeching/godot_rl_agents
## Downloading the environment
After installing Godot RL Agents, download the environment with:
```
gdrl.env_from_hub -r edbeeching/godot_rl_VirtualCamera
```
| edbeeching/godot_rl_VirtualCamera | [
"deep-reinforcement-learning",
"reinforcement-learning",
"godot-rl",
"environments",
"video-games",
"region:us"
]
| 2022-12-21T14:33:30+00:00 | {"library_name": "godot-rl", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "godot-rl", "environments", "video-games"]} | 2024-01-07T09:48:56+00:00 |
f35c91ff04c01d598e930dd7a6bd0b857ca3a53f | # Dataset Card for "lego-blip-captions-512"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Norod78/lego-blip-captions-512 | [
"region:us"
]
| 2022-12-21T14:42:12+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 627030265.0, "num_examples": 2511}], "download_size": 625119749, "dataset_size": 627030265.0}} | 2022-12-21T14:43:40+00:00 |
6360e9d7bf1b5673955ef4296e338e6ce68fbe2a | # Dataset Card for "sinograms"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | AshrafAlAodat/sinograms | [
"region:us"
]
| 2022-12-21T14:55:54+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 49082809.0, "num_examples": 1400}], "download_size": 48978515, "dataset_size": 49082809.0}} | 2022-12-25T16:06:34+00:00 |
265e9b8e307d3a10a00409320e9afbcfb10aa2c6 |
# Dataset Card for NPSC Bokmål (< 15 sec. segments)
## Dataset Description
- **Homepage:**
- **Repository:** <https://github.com/scribe-project/nodalida_2023_combined_training>
- **Paper:**
```
@inproceedings{
solberg2023improving,
title={Improving Generalization of Norwegian {ASR} with Limited Linguistic Resources},
author={Per Erik Solberg and Pablo Ortiz and Phoebe Parsons and Torbj{\o}rn Svendsen and Giampiero Salvi},
booktitle={The 24th Nordic Conference on Computational Linguistics},
year={2023}
}
```
- **Point of Contact:** [Per Erik Solberg](mailto:[email protected])
### Dataset Summary
This is the version of the Bokmål part of the Norwegian Parliamentary Speech Corpus (NPSC) used for training and testing the STORTINGET model
in the paper *Improving Generalization of Norwegian ASR with Limited Linguistic Resources* presented at NoDaLiDa 2023.
It only contains segments of a length < 15 sec. For a full version of the NPSC, see [this repository](https://huggingface.co/datasets/NbAiLab/NPSC).
### Languages
Norwegian Bokmål
## Dataset Creation
### Source Data
The full version of this dataset is found in [the repository of the Norwegian Language Bank](https://www.nb.no/sprakbanken/en/resource-catalogue/oai-nb-no-sbr-58/)
#### Initial Data Collection and Normalization
The data was retrieved using the [Spraakbanken downloader](https://pypi.org/project/spraakbanken-downloader/) and standardized
using the [combined dataset standardization scripts](https://github.com/scribe-project/asr-standardized-combined). Bokmål segments with a duration < 15 seconds were
extracted using [this code](https://github.com/scribe-project/nodalida_2023_combined_training/blob/main/make_datasets/make_npsc_csvs.ipynb).
## Licensing Information
[CC0](https://creativecommons.org/share-your-work/public-domain/cc0/)
### Citation Information
```
@inproceedings{
solberg2023improving,
title={Improving Generalization of Norwegian {ASR} with Limited Linguistic Resources},
author={Per Erik Solberg and Pablo Ortiz and Phoebe Parsons and Torbj{\o}rn Svendsen and Giampiero Salvi},
booktitle={The 24th Nordic Conference on Computational Linguistics},
year={2023}
}
``` | scribe-project/npsc_nb | [
"region:us"
]
| 2022-12-21T15:21:00+00:00 | {"dataset_info": {"features": [{"name": "speaker_id", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "utterance_id", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "raw_text", "dtype": "string"}, {"name": "full_audio_file", "dtype": "string"}, {"name": "original_data_split", "dtype": "string"}, {"name": "region", "dtype": "string"}, {"name": "duration", "dtype": "float64"}, {"name": "start", "dtype": "float64"}, {"name": "end", "dtype": "float64"}, {"name": "utterance_audio_file", "dtype": "audio"}, {"name": "standardized_text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8190809957.84, "num_examples": 40008}, {"name": "test", "num_bytes": 1026553338.856, "num_examples": 5044}, {"name": "validation", "num_bytes": 1097030649.769, "num_examples": 5461}], "download_size": 10261847599, "dataset_size": 10314393946.465}} | 2023-04-25T09:23:19+00:00 |
f2161bd1af8282517d3a856af893a1c6da2ead31 | Nlpeva/Calico_fluffy | [
"license:cc0-1.0",
"region:us"
]
| 2022-12-21T16:51:05+00:00 | {"license": "cc0-1.0"} | 2022-12-21T16:52:27+00:00 |
|
e9f7bab3230e9ae4ba558cff99a2da2cca7b0455 | KeithEdwardReynolds/Landon_McCarter | [
"license:openrail",
"region:us"
]
| 2022-12-21T17:28:49+00:00 | {"license": "openrail"} | 2022-12-21T17:28:49+00:00 |
|
7aa435278248eabec1a5e13229f84e13a149c681 | solidol567/XsnTa | [
"license:openrail",
"region:us"
]
| 2022-12-21T17:45:11+00:00 | {"license": "openrail"} | 2022-12-21T18:23:29+00:00 |
|
8751f30ff7a107a39477e0b6a074eebcc20d0c08 | # Dataset Card for "test_tags"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Test
Test123
data | mariosasko/test_tags | [
"region:us"
]
| 2022-12-21T17:47:30+00:00 | {"dataset_info": {"features": [{"name": "a", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 32, "num_examples": 4}], "download_size": 584, "dataset_size": 32}} | 2023-05-03T15:05:57+00:00 |
75ce94759ca51ad5abcde57facd0deea88ae4f4d | CSAle/galaxy_images | [
"license:cc-by-3.0",
"region:us"
]
| 2022-12-21T19:35:53+00:00 | {"license": "cc-by-3.0"} | 2022-12-21T19:35:54+00:00 |
|
719c91226a79f5f9a8984145f15f29626eabc29a |
# CORAA-v1.1
CORAA is a publicly available dataset for Automatic Speech Recognition (ASR) in Brazilian Portuguese, containing 290.77 hours of audio and the corresponding transcriptions (400k+ segmented audio files). The dataset is composed of audio from 5 original projects:
- ALIP (Gonçalves, 2019)
- C-ORAL Brazil (Raso and Mello, 2012)
- NURC-Recife (Oliveira Jr., 2016)
- SP-2010 (Mendes and Oushiro, 2012)
- TEDx talks (talks in Portuguese)
The audio segments were either validated by annotators or transcribed for the first time, targeting the ASR task.
## Metadata
- file_path: the path to an audio file
- task: transcription (annotators revised original transcriptions); annotation (annotators classified the audio-transcription pair according to votes_for_* metrics); annotation_and_transcription (both tasks were performed)
- variety: European Portuguese (PT_PT) or Brazilian Portuguese (PT_BR)
- dataset: one of five datasets (ALIP, C-oral Brasil, NURC-RE, SP2010, TEDx Portuguese)
- accent: one of four accents (Minas Gerais, Recife, Sao Paulo cities, Sao Paulo capital) or the value "miscellaneous"
- speech_genre: Interviews, Dialogues, Monologues, Conversations, Conference, Class Talks, Stage Talks or Reading
- speech_style: Spontaneous Speech or Prepared Speech or Read Speech
- up_votes: for annotation, the number of votes to validate the audio (most audio segments were reviewed by one annotator, but some were analyzed by more than one).
- down_votes: for annotation, the number of votes to invalidate the audio (always smaller than up_votes)
- votes_for_hesitation: for annotation, votes categorizing the audio as having the hesitation phenomenon
- votes_for_filled_pause: for annotation, votes categorizing the audio as having the filled pause phenomenon
- votes_for_noise_or_low_voice: for annotation, votes categorizing the audio as having noise or a low voice, without impairing comprehension of the audio.
- votes_for_second_voice: for annotation, votes categorizing the audio as having a second voice, without impairing comprehension of the audio
- votes_for_no_identified_problem: for annotation, votes categorizing the audio as having no identified phenomenon (of the four described above)
- text: the transcription for the audio
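As an illustration only (assuming the metadata CSVs linked below expose exactly the column names listed above), one might filter the training metadata for validated, problem-free segments like this:
```python
import pandas as pd

# Hypothetical filtering sketch over the training metadata CSV.
meta = pd.read_csv("metadata_train_final.csv")
clean = meta[(meta["up_votes"] > meta["down_votes"])
             & (meta["votes_for_no_identified_problem"] > 0)]
print(len(clean), "validated segments with no identified problem")
```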
## Downloads
Dataset:
| Gdrive | Internal | Hugging Face |
|-----------|--------------------|-----------|
| [Train audios](https://drive.google.com/file/d/1deCciFD35EA_OEUl0MrEDa7u5O2KgVJM/view?usp=sharing) | [Train audios](http://143.107.183.175:14888/static/coraa/train.zip)| [Train audios](https://huggingface.co/datasets/gabrielrstan/CORAA-v1.1/tree/main/train_dividido) |
| [Train transcriptions and metadata](https://drive.google.com/file/d/1HbwahfMWoArYj0z2PfI4dHiambWfaNWg/view?usp=sharing) | [Train transcriptions and metadata](http://143.107.183.175:14880/metadata_train_final.csv)| [Train transcriptions and metadata](https://huggingface.co/datasets/gabrielrstan/CORAA-v1.1/blob/main/metadata_train_final.csv)|
|[Dev audios](https://drive.google.com/file/d/1D1ft4F37zLjmGxQyhfkdjSs9cJzOL3nI/view?usp=sharing) |[Dev audios](http://143.107.183.175:14880/dev.zip) |[Dev audios](https://huggingface.co/datasets/gabrielrstan/CORAA-v1.1/blob/main/dev.zip) |
| [Dev transcriptions and metadata](https://drive.google.com/file/d/185erjax7lS_YNuolZvcMt_EdprafyMU0/view?usp=sharing) | [Dev transcriptions and metadata](http://143.107.183.175:14880/metadata_dev_final.csv) | [Dev transcriptions and metadata](https://huggingface.co/datasets/gabrielrstan/CORAA-v1.1/blob/main/metadata_dev_final.csv) |
| [Test audios](https://drive.google.com/file/d/1vHH5oVo4zeJKchIyHHHjzvKD3QXuJxHZ/view?usp=sharing) | [Test audios](http://143.107.183.175:14880/test.zip) | [Test audios](https://huggingface.co/datasets/gabrielrstan/CORAA-v1.1/blob/main/test.zip) |
| [Test transcriptions and metadata](https://drive.google.com/file/d/1hcNoA7-xOEn5s0iYjX6BebaEsx_7LfCd/view?usp=sharing) | [Test transcriptions and metadata](http://143.107.183.175:14880/metadata_test_final.csv) | [Test transcriptions and metadata](https://huggingface.co/datasets/gabrielrstan/CORAA-v1.1/blob/main/metadata_test_final.csv) |
Experiments:
- [Checkpoints ](https://drive.google.com/drive/folders/10JkbCzYypZtCz1nHY5rBoBM1r66P3p3j?usp=sharing)
- [Code](https://github.com/Edresson/Wav2Vec-Wrapper)
Model trained in this corpus: Wav2Vec 2.0 XLSR-53 (multilingual pretraining)
## Citation
- [Preprint](https://arxiv.org/abs/2110.15731):
```
@misc{c2021coraa,
title={CORAA: a large corpus of spontaneous and prepared speech manually validated for speech recognition in Brazilian Portuguese},
author={Arnaldo Candido Junior and Edresson Casanova and Anderson Soares and Frederico Santos de Oliveira and Lucas Oliveira and Ricardo Corso Fernandes Junior and Daniel Peixoto Pinto da Silva and Fernando Gorgulho Fayet and Bruno Baldissera Carlotto and Lucas Rafael Stefanel Gris and Sandra Maria Aluísio},
year={2021},
eprint={2110.15731},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
- Full Paper: coming soon
- Official site: [Tarsila Project](https://sites.google.com/view/tarsila-c4ai/)
## Partners / Sponsors / Funding
- [C4AI](https://c4ai.inova.usp.br/pt/home-2/)
- [CEIA](https://centrodeia.org/)
- [UFG](https://www.ufg.br/)
- [USP](https://www5.usp.br/)
- [UTFPR](http://www.utfpr.edu.br/)
## References
- Gonçalves SCL (2019) Projeto ALIP (amostra linguística do interior paulista) e banco de dados iboruna: 10 anos de contribuição com a descrição do Português Brasileiro. Revista Estudos Linguísticos 48(1):276–297.
- Raso T, Mello H, Mittmann MM (2012) The C-ORAL-BRASIL I: Reference corpus for spoken Brazilian Portuguese. In: Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC’12), European Language Resources Association (ELRA), Istanbul, Turkey, pp 106–113, URL http://www.lrec-conf.org/proceedings/lrec2012/pdf/624_Paper.pdf
- Oliveira Jr M (2016) Nurc digital um protocolo para a digitalização, anotação, arquivamento e disseminação do material do projeto da norma urbana linguística culta (NURC). CHIMERA: Revista de Corpus de Lenguas Romances y Estudios Linguísticos 3(2):149–174, URL https://revistas.uam.es/chimera/article/view/6519
- Mendes RB, Oushiro L (2012) Mapping Paulistano Portuguese: the SP2010 Project. In: Proceedings of the VIIth GSCP International Conference: Speech and Corpora, Fizenze University Press, Firenze, Italy, pp 459–463.
| gabrielrstan/CORAA-v1.1 | [
"license:unknown",
"arxiv:2110.15731",
"region:us"
]
| 2022-12-21T19:37:05+00:00 | {"license": "unknown"} | 2022-12-28T23:15:17+00:00 |
71b6d04a64f75e4df0a03a58157426ce6d77a817 | # Dataset Card for "unnatural-instructions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | mrm8488/unnatural-instructions | [
"region:us"
]
| 2022-12-21T20:56:20+00:00 | {"dataset_info": [{"config_name": "default", "features": [{"name": "instruction", "dtype": "string"}, {"name": "instances", "list": [{"name": "instruction_with_input", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "constraints", "dtype": "string"}, {"name": "output", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 54668900, "num_examples": 66010}], "download_size": 28584196, "dataset_size": 54668900}, {"config_name": "core", "features": [{"name": "instruction", "dtype": "string"}, {"name": "instances", "sequence": [{"name": "instruction_with_input", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "constraints", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 55461020, "num_examples": 66010}], "download_size": 29679516, "dataset_size": 55461020}, {"config_name": "full", "features": [{"name": "instruction", "dtype": "string"}, {"name": "instances", "sequence": [{"name": "instruction_with_input", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "constraints", "dtype": "string"}]}, {"name": "reformulations", "sequence": [{"name": "instruction", "dtype": "string"}, {"name": "instruction_with_input", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 145864853, "num_examples": 66010}], "download_size": 29679516, "dataset_size": 145864853}]} | 2022-12-23T18:09:15+00:00 |
7791769bbc6675c142b5f81cc8e2de0ce44de7ab | # Dataset Card for Unnatural Instructions (Core data)
This info comes from the **Unnatural Instructions GitHub [repo](https://github.com/orhonovich/unnatural-instructions/)**.
Unnatural Instructions is a dataset of instructions automatically generated by a Large Language model.
See full details in the paper: "[Unnatural Instructions: Tuning Language Models with (Almost) No Human Labor](https://arxiv.org/abs/2212.09689)"
## 🗃️ Content
The Unnatural Instructions core dataset contains 68,478 instruction-input-output triplets.
## 📄 Format
### Core data
Each example contains:
- `input`: An input for the task described by the `instruction`
- `instruction_with_input`: The instruction concatenated with the `input`
- `constraints`: The task's output space constraints
- `output`: The output of executing `instruction` with the given `input`
## 📘 Citation
If you make use of Unnatural Instructions, please cite the following paper:
```
@misc{honovich2022unnatural,
title = {Unnatural Instructions: Tuning Language Models with (Almost) No Human Labor},
author = {Honovich, Or and Scialom, Thomas and Levy, Omer and Schick, Timo},
url = {https://arxiv.org/abs/2212.09689},
publisher = {arXiv},
year={2022}
}
```
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | mrm8488/unnatural-instructions-core | [
"arxiv:2212.09689",
"region:us"
]
| 2022-12-21T20:57:50+00:00 | {"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "instances", "list": [{"name": "instruction_with_input", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "constraints", "dtype": "string"}, {"name": "output", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 54668900, "num_examples": 66010}], "download_size": 28584196, "dataset_size": 54668900}} | 2022-12-21T21:42:06+00:00 |
e55025405febbc31033ac51b86eb1c36667c979f | # Dataset Card for Unnatural Instructions (Full data)
This info comes from the **Unnatural Instructions GitHub [repo](https://github.com/orhonovich/unnatural-instructions/)**.
Unnatural Instructions is a dataset of instructions automatically generated by a Large Language model.
See full details in the paper: "[Unnatural Instructions: Tuning Language Models with (Almost) No Human Labor](https://arxiv.org/abs/2212.09689)"
## 🗃️ Content
It contains the full set of 240,670 Unnatural Instructions examples (instruction-input-output triplets). It was constructed by expanding the core data with automatically generated instruction paraphrases.
## 📄 Format
### Full data
It has the same structure as [Core Data](https://huggingface.co/datasets/mrm8488/unnatural-instructions-core), but with one additional field - `reformulations`. `reformulations` is an array of JSON objects, each corresponds to an automatically generated paraphrase for the given instruction. Each reformulation contains the fields:
- `instruction`: A paraphrase of the original instruction
- `input`: An input for the task described by the `instruction`
- `instruction_with_input`: The paraphrased instruction concatenated with the `input`
- `output`: The output of executing `instruction` with the given `input`
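A minimal loading sketch (assuming the standard `datasets` API; field names as in the schema above):
```python
from datasets import load_dataset

ds = load_dataset("mrm8488/unnatural-instructions-full", split="train")
example = ds[0]
print(example["instruction"])

# Each instance pairs the (input-augmented) instruction with its output.
for instance in example["instances"]:
    print(instance["instruction_with_input"], "->", instance["output"])

# Reformulations are the automatically generated instruction paraphrases.
for paraphrase in example["reformulations"]:
    print(paraphrase["instruction"])
```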
## 📘 Citation
If you make use of Unnatural Instructions, please cite the following paper:
```
@misc{honovich2022unnatural,
title = {Unnatural Instructions: Tuning Language Models with (Almost) No Human Labor},
author = {Honovich, Or and Scialom, Thomas and Levy, Omer and Schick, Timo},
url = {https://arxiv.org/abs/2212.09689},
publisher = {arXiv},
year={2022}
}
```
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | mrm8488/unnatural-instructions-full | [
"arxiv:2212.09689",
"region:us"
]
| 2022-12-21T20:59:04+00:00 | {"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "instances", "list": [{"name": "instruction_with_input", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "constraints", "dtype": "string"}, {"name": "output", "dtype": "string"}]}, {"name": "reformulations", "list": [{"name": "instruction", "dtype": "string"}, {"name": "instruction_with_input", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 144282712, "num_examples": 66010}], "download_size": 57715606, "dataset_size": 144282712}} | 2022-12-21T21:41:31+00:00 |
5379ba66d6b869d8ba74c8341f3100d645246d5d | # Dataset Card for "ljspeech_phonemes"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | bookbot/ljspeech_phonemes | [
"region:us"
]
| 2022-12-21T23:18:09+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 22050}}}, {"name": "file", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "normalized_text", "dtype": "string"}, {"name": "phonemes", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3863152206.0, "num_examples": 13100}], "download_size": 3787337731, "dataset_size": 3863152206.0}} | 2022-12-21T23:24:29+00:00 |
071c5c771d4c540a53ca4197921315ec5b400ae5 | # Dataset Card for "preprocessed_jsut_jsss_css10_fleurs_common_voice_11"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | vumichien/preprocessed_jsut_jsss_css10_fleurs_common_voice_11 | [
"region:us"
]
| 2022-12-22T00:00:05+00:00 | {"dataset_info": {"features": [{"name": "audio", "struct": [{"name": "array", "sequence": "float32"}, {"name": "path", "dtype": "string"}, {"name": "sampling_rate", "dtype": "int64"}]}, {"name": "sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 12359542831, "num_examples": 31708}, {"name": "test", "num_bytes": 1562198132, "num_examples": 4604}], "download_size": 13916026126, "dataset_size": 13921740963}} | 2022-12-22T00:07:28+00:00 |
39882674ccfc800c95e149aa00ce5e69f13fedff | # Dataset Card for "wikipedia-ptbr-20221220"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | dominguesm/wikipedia-ptbr-20221220 | [
"region:us"
]
| 2022-12-22T00:07:45+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2367117753.3, "num_examples": 987399}, {"name": "test", "num_bytes": 131507740.51323204, "num_examples": 54856}, {"name": "valid", "num_bytes": 131505343.18676797, "num_examples": 54855}], "download_size": 1592202665, "dataset_size": 2630130837.0000005}} | 2022-12-22T10:49:09+00:00 |
098765c79ea10a2cb19c828324e33281b8336ec0 | # Dataset Card for PopQA
## Dataset Summary
PopQA is a large-scale open-domain question answering (QA) dataset, consisting of 14k entity-centric QA pairs. Each question is created by converting a knowledge tuple retrieved from Wikidata using a template. Each question comes with the original `subject_entity`, `object_entity` and `relationship_type` annotations, as well as Wikipedia monthly page views.
## Languages
The dataset contains samples in English only.
## Dataset Structure
### Data Instances
- Size of downloaded dataset file: 5.2 MB
## Data Fields
- `id`: question id
- `subj`: subject entity name
- `prop`: relationship type
- `obj`: object entity name
- `subj_id`: Wikidata ID of the subject entity
- `prop_id`: Wikidata relationship type ID
- `obj_id`: Wikidata ID of the object entity
- `s_aliases`: aliases of the subject entity
- `o_aliases`: aliases of the object entity
- `s_uri`: Wikidata URI of the subject entity
- `o_uri`: Wikidata URI of the object entity
- `s_wiki_title`: Wikipedia page title of the subject entity
- `o_wiki_title`: Wikipedia page title of the object entity
- `s_pop`: Wikipedia monthly pageview of the subject entity
- `o_pop`: Wikipedia monthly pageview of the object entity
- `question`: PopQA question
- `possible_answers`: a list of the gold answers.
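A minimal loading sketch (the split name is an assumption on my part; adjust it to whatever split this repository actually exposes):
```python
from datasets import load_dataset

popqa = load_dataset("akariasai/PopQA", split="test")
row = popqa[0]
print(row["question"])
print(row["subj"], row["prop"], row["obj"], row["s_pop"])
print(row["possible_answers"])
```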
## Citation Information
```
@article{ mallen2023llm_memorization ,
title={When Not to Trust Language Models: Investigating Effectiveness and Limitations of Parametric and Non-Parametric Memories },
author={ Mallen, Alex and Asai,Akari and Zhong, Victor and Das, Rajarshi and Hajishirzi, Hannaneh and Khashabi, Daniel},
journal={ arXiv preprint },
year={ 2022 }
}
```
| akariasai/PopQA | [
"region:us"
]
| 2022-12-22T00:37:19+00:00 | {} | 2022-12-22T01:01:20+00:00 |
24ebc073e65279f0d2a8cd785ec5f9283a1ab7fa | # Dataset Card for "dataset-v-1.4_CLIP_identities_random_seeds"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | SDbiaseval/dataset-v-1.4_CLIP_identities_random_seeds | [
"region:us"
]
| 2022-12-22T00:44:47+00:00 | {"dataset_info": {"features": [{"name": "adjective", "dtype": "string"}, {"name": "profession", "dtype": "string"}, {"name": "no", "dtype": "int32"}, {"name": "image_path", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "gender", "dtype": "string"}, {"name": "identity", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1172792739.5, "num_examples": 31500}], "download_size": 1167658244, "dataset_size": 1172792739.5}} | 2022-12-22T00:46:00+00:00 |
519a5e8ba1c5657282a90bb81a7bf4a10f200742 | # Dataset Card for "rick-and-morty-s06e01-blip-captions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | juliaturc/rick-and-morty-s06e01-blip-captions | [
"region:us"
]
| 2022-12-22T01:05:00+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 78803729.742, "num_examples": 1341}], "download_size": 78105717, "dataset_size": 78803729.742}} | 2022-12-22T01:05:09+00:00 |
48df7abf0f64f9279b4ee04386272eb9dc89ef89 |
## Dataset Description
- **Repository:** https://github.com/shuyanzhou/docprompting
- **Paper:** [DocPrompting: Generating Code by Retrieving the Docs](https://arxiv.org/pdf/2207.05987.pdf)
### Dataset Summary
This is the re-split of [CoNaLa](https://conala-corpus.github.io/) dataset.
For each code snippet in the dev and test set, at least one function is held out from the training set.
This split aims at testing a code generation model's capacity to generate *unseen* functions.
We further make sure that examples from the same StackOverflow post (same `question_id` before `-`) are in the same split.
### Supported Tasks and Leaderboards
This dataset is used to evaluate code generations.
### Languages
English - Python code.
## Dataset Structure
```python
dataset = load_dataset("neulab/docprompting-conala")
DatasetDict({
train: Dataset({
features: ['nl', 'cmd', 'question_id', 'cmd_name', 'oracle_man', 'canonical_cmd'],
num_rows: 2135
})
test: Dataset({
features: ['nl', 'cmd', 'question_id', 'cmd_name', 'oracle_man', 'canonical_cmd'],
num_rows: 543
})
validation: Dataset({
features: ['nl', 'cmd', 'question_id', 'cmd_name', 'oracle_man', 'canonical_cmd'],
num_rows: 201
})
})
code_docs = load_dataset("neulab/docprompting-conala", "docs")
DatasetDict({
train: Dataset({
features: ['doc_id', 'doc_content'],
num_rows: 34003
})
})
```
### Data Fields
train/dev/test:
- nl: The natural language intent
- cmd: The reference code snippet
- question_id: `x-y` where `x` is the StackOverflow post ID
- oracle_man: The `doc_id` of the functions used in the reference code snippet. The corresponding contents are in the `docs` split
- canonical_cmd: The canonical version of the reference code snippet
docs:
- doc_id: the id of a doc
- doc_content: the content of the doc
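A minimal sketch of pairing an example with its oracle documentation (following the loading calls shown above):
```python
from datasets import load_dataset

data = load_dataset("neulab/docprompting-conala", split="test")
docs = load_dataset("neulab/docprompting-conala", "docs", split="train")

# Build a doc_id -> content lookup so each example's oracle docs can be fetched.
doc_index = {d["doc_id"]: d["doc_content"] for d in docs}

example = data[0]
print(example["nl"])    # natural language intent
print(example["cmd"])   # reference code snippet
for doc_id in example["oracle_man"]:
    print(doc_index[doc_id][:200])  # start of each matched documentation entry
```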
## Dataset Creation
The dataset was crawled from Stack Overflow, automatically filtered, then curated by annotators. For more details, please refer to the original [paper](https://arxiv.org/pdf/1805.08949.pdf)
### Citation Information
```
@article{zhou2022doccoder,
title={DocCoder: Generating Code by Retrieving and Reading Docs},
author={Zhou, Shuyan and Alon, Uri and Xu, Frank F and JIang, Zhengbao and Neubig, Graham},
journal={arXiv preprint arXiv:2207.05987},
year={2022}
}
``` | neulab/docprompting-conala | [
"task_categories:text2text-generation",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:code",
"license:mit",
"code-generation",
"doc retrieval",
"retrieval augmented generation",
"arxiv:2207.05987",
"arxiv:1805.08949",
"region:us"
]
| 2022-12-22T02:40:47+00:00 | {"annotations_creators": [], "language_creators": ["crowdsourced", "expert-generated"], "language": ["code"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["text2text-generation"], "task_ids": [], "pretty_name": "DocPrompting-CoNaLa", "tags": ["code-generation", "doc retrieval", "retrieval augmented generation"]} | 2023-03-14T17:59:47+00:00 |
1c75706ec89ed05fd07382a50cfed9b40847a21f | Sampled Data from AIforBharat corpora | aashay96/indic-gpt | [
"region:us"
]
| 2022-12-22T06:55:12+00:00 | {} | 2023-04-21T19:45:09+00:00 |
2a397393d74975c43e0b64ff466fa839d1347eb8 | # Cleaned Russian traffic sign images dataset
The dataset is generated from the [Russian traffic sign images dataset](https://www.kaggle.com/datasets/watchman/rtsd-dataset) and the [detected signs in the dataset](https://graphics.cs.msu.ru/projects/traffic-sign-recognition.html). | eleldar/rtsd_cleaned | [
"region:us"
]
| 2022-12-22T07:09:31+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "sign_class", "dtype": "string"}, {"name": "sign_id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": -515611439.904, "num_examples": 104358}], "download_size": 58343345, "dataset_size": -515611439.904}} | 2022-12-22T13:09:31+00:00 |
0a13a2df35f7e5c05ae683561b3a867e12c80b2d | innocent-charles/innocentconcept | [
"region:us"
]
| 2022-12-22T08:27:55+00:00 | {} | 2022-12-22T11:39:23+00:00 |
|
6535098146bcd8975a3ecf12d2a24368fc393521 | kaledarshan/news | [
"license:openrail",
"region:us"
]
| 2022-12-22T08:38:56+00:00 | {"license": "openrail"} | 2022-12-22T08:41:45+00:00 |
|
4c5bc8203ba80efd4b1edb45193d2c148286d8c1 | musicakamusic/piano | [
"license:gpl-3.0",
"region:us"
]
| 2022-12-22T09:00:18+00:00 | {"license": "gpl-3.0"} | 2022-12-23T08:15:37+00:00 |
|
3028680b3026b981a4bf7bae6e6ba222077c1b90 | # Dataset Card for "diachronia-ocr-dev"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Zombely/diachronia-ocr-dev | [
"region:us"
]
| 2022-12-22T09:47:47+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 22012156.0, "num_examples": 22}], "download_size": 22013969, "dataset_size": 22012156.0}} | 2022-12-22T09:48:12+00:00 |
9ad0b9fd007e001a4d4eba61eaa75f965224aa20 | GoldenTanuki/shinmegaten | [
"license:other",
"region:us"
]
| 2022-12-22T10:55:15+00:00 | {"license": "other"} | 2022-12-22T10:55:16+00:00 |
|
f3c4858e6c681c2f83232b82f5e0e6559f248bb0 | Glac1er/Glataset | [
"license:unknown",
"region:us"
]
| 2022-12-22T11:16:24+00:00 | {"license": "unknown"} | 2022-12-25T08:58:07+00:00 |
|
36650200e1356bb23e74699ec19a845821f1a6c4 | nekofura/Pottsness_NekoFi | [
"license:openrail",
"region:us"
]
| 2022-12-22T11:29:17+00:00 | {"license": "openrail"} | 2022-12-22T11:29:19+00:00 |
|
46d69af917fa77d952c487f8e7f88c27d4cb848f | # Dataset Card for "fiszki-ocr-dev"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Zombely/fiszki-ocr-dev | [
"region:us"
]
| 2022-12-22T12:40:51+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 430755239.0, "num_examples": 102}], "download_size": 430685891, "dataset_size": 430755239.0}} | 2022-12-22T12:43:44+00:00 |
a68ec0be8993c24c10c71fc75c02e38526b80aa0 | # Dataset Card for "fiszki-ocr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Zombely/fiszki-ocr-test-A | [
"region:us"
]
| 2022-12-22T12:43:49+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 379639540.0, "num_examples": 91}], "download_size": 379576204, "dataset_size": 379639540.0}} | 2022-12-22T12:46:07+00:00 |
277fc99fa4cbc40563286a89a0a9057b89f31a19 | Glac1er/Glac1a | [
"license:unknown",
"region:us"
]
| 2022-12-22T12:44:35+00:00 | {"license": "unknown"} | 2022-12-22T13:08:59+00:00 |
|
55142e51668207b21f68e40a99e9aed0655e3a3f |
# Dataset Card for Horse-30
## Dataset Description
- **Homepage:** horse10.deeplabcut.org
- **Repository:** https://github.com/DeepLabCut/DeepLabCut
- **Paper:** Mathis, Alexander; Biasi, Thomas; Schneider, Steffen; Yuksekgonul, Mert; Rogers, Byron; Bethge, Matthias; Mathis, Mackenzie W. Pretraining Boosts Out-of-Domain Robustness for Pose Estimation. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), January 2021, pp. 1859-1868
- **Leaderboard:** https://paperswithcode.com/sota/animal-pose-estimation-on-horse-10?p=pretraining-boosts-out-of-domain-robustness
- **Point of Contact:** Mackenzie Mathis
### Dataset Summary
Pose estimation is an important tool for measuring behavior, and thus widely used in technology, medicine and biology. Due to innovations in both deep learning algorithms and large-scale datasets, pose estimation on humans has become very powerful. However, typical human pose estimation benchmarks, such as MPII pose and COCO, contain many different individuals (>10K) in different contexts, but only very few example postures per individual. In real-world applications of pose estimation, users want to estimate the location of user-defined bodyparts by only labeling a few hundred frames on a small subset of individuals, yet want this to generalize to new individuals. Thus, one naturally asks the following question: Assume you have trained an algorithm that performs with high accuracy on a given (individual) animal for the whole repertoire of movement - how well will it generalize to different individuals that have a slightly or dramatically different appearance? Unlike in common human pose estimation benchmarks, here the setting is that datasets have many (annotated) poses per individual (>200) but only a few individuals (1-25).
To allow the field to tackle this challenge, we developed a novel benchmark, called Horse-10, comprising 30 diverse Thoroughbred horses, for which 22 body parts were labeled by an expert in 8,114 frames. Horses have various coat colors and the “in-the-wild” aspect of the collected data at various Thoroughbred yearling sales and farms added additional complexity.
### Supported Tasks and Leaderboards
Horse-10 task: Train on a subset of individuals (10) and evaluate on held-out “out-of-domain” horses (20).
### Languages
Python, deeplabcut, tensorflow, pytorch
## Dataset Structure
### Data Instances
Over 8,000 expertly labeled frames across 30 individual thoroughbred horses
### Data Splits
The ground truth training data is provided as 3 splits of 10 Horses each. The download provides you a project compatible with loading into the deeplabcut framework, but ground truth labels/training data can be easily loaded in pandas to accommodate your framework (example loader here).
Please do NOT train on all three splits simultaneously. You must train independently (as some horses can be considered out-of-domain in other splits for evaluation!). Integrity matters!
The download also includes all of Horse-30 images and annotations (thus is ~850MB).
| mwmathis/Horse-30 | [
"license:cc-by-nc-sa-4.0",
"region:us"
]
| 2022-12-22T13:50:49+00:00 | {"license": "cc-by-nc-sa-4.0"} | 2023-04-19T08:59:42+00:00 |
45f8e5d34bb4489aac82fb6944ef7f73aaaba45e | # Dataset Card for "aveyron_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | yacine-djm/aveyron_test | [
"region:us"
]
| 2022-12-22T16:22:27+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "label", "sequence": "string"}, {"name": "date", "dtype": "string"}, {"name": "sheet_id", "dtype": "string"}, {"name": "group_id", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "est", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 132770347, "num_examples": 530910}], "download_size": 49780765, "dataset_size": 132770347}} | 2022-12-22T16:22:53+00:00 |
64fd53cc91f7cb73b283a6e4f661205e277d23c9 | # Dataset Card for "rm-static"
Split of [hh-static](https://huggingface.co/datasets/Dahoas/static-hh) used for training reward models after supervised fine-tuning. | Dahoas/rm-static | [
"region:us"
]
| 2022-12-22T16:50:14+00:00 | {"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "chosen", "dtype": "string"}, {"name": "rejected", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 113850006, "num_examples": 76256}, {"name": "test", "num_bytes": 7649255, "num_examples": 5103}], "download_size": 73006535, "dataset_size": 121499261}} | 2023-03-06T00:13:07+00:00 |
981e3f95845b9cdd54de5847725da3f12dd9da84 |
# Dataset Card for OLM December 2022 Wikipedia
Pretraining dataset, created with the OLM repo [here](https://github.com/huggingface/olm-datasets) from a December 2022 Wikipedia snapshot. | olm/olm-wikipedia-20221220 | [
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"language:en",
"pretraining",
"language modelling",
"wikipedia",
"web",
"region:us"
]
| 2022-12-22T17:38:13+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": [], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": [], "task_categories": [], "task_ids": [], "pretty_name": "OLM December 2022 Wikipedia", "tags": ["pretraining", "language modelling", "wikipedia", "web"]} | 2022-12-29T03:12:35+00:00 |
54c3e55fb8098ea30e1dd1db7bfb9fcedfcefaed | CSAle/coolbowl | [
"license:openrail",
"region:us"
]
| 2022-12-22T17:53:22+00:00 | {"license": "openrail"} | 2022-12-22T17:56:05+00:00 |
|
c39321e04c470e0b24cde3b04da7e11790aff59c | massi/capitals | [
"license:cc-by-nc-sa-4.0",
"region:us"
]
| 2022-12-22T17:56:04+00:00 | {"license": "cc-by-nc-sa-4.0"} | 2022-12-22T17:57:03+00:00 |
|
62e85e84102b5ebb6a412cf0f4245227ea897a08 |
## Dataset Description
- **Repository:** https://github.com/shuyanzhou/docprompting
- **Paper:** [DocPrompting: Generating Code by Retrieving the Docs](https://arxiv.org/pdf/2207.05987.pdf)
### Dataset Summary
This is the natural language to bash generation dataset we harvested from the English subset of [`tldr`](https://github.com/tldr-pages/tldr)
We split the dataset by bash commands. Every command in the dev and test set is held out from the training set.
### Supported Tasks and Leaderboards
This dataset is used to evaluate code generation.
### Languages
English - Bash
## Dataset Structure
```python
dataset = load_dataset("neulab/tldr")
DatasetDict({
train: Dataset({
features: ['question_id', 'nl', 'cmd', 'oracle_man', 'cmd_name', 'tldr_cmd_name', 'manual_exist', 'matching_info'],
num_rows: 6414
})
test: Dataset({
features: ['question_id', 'nl', 'cmd', 'oracle_man', 'cmd_name', 'tldr_cmd_name', 'manual_exist', 'matching_info'],
num_rows: 928
})
validation: Dataset({
features: ['question_id', 'nl', 'cmd', 'oracle_man', 'cmd_name', 'tldr_cmd_name', 'manual_exist', 'matching_info'],
num_rows: 1845
})
})
code_docs = load_dataset("neulab/docprompting-conala", "docs")
DatasetDict({
train: Dataset({
features: ['doc_id', 'doc_content'],
num_rows: 439064
})
})
```
### Data Fields
train/dev/test:
- nl: The natural language intent
- cmd: The reference code snippet
- question_id: the unique id of a question
- oracle_man: The `doc_id`s of the functions used in the reference code snippet. The corresponding contents are in the `docs` split
- cmd_name: the bash command of this code snippet
- tldr_cmd_name: the bash command name used in the tldr GitHub repo. `cmd_name` and `tldr_cmd_name` can differ due to naming differences
- manual_exist: whether the manual exists on https://manned.org
- matching_info: each code snippet has multiple tokens; this field gives the detailed reference-doc match for each token
docs:
- doc_id: the id of a doc
- doc_content: the content of the doc
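As a quick usage sketch (not part of the original card), the `oracle_man` ids of an example can be resolved against the docs split with a plain dictionary index; the in-memory dict is just one illustrative approach:

```python
from datasets import load_dataset

data = load_dataset("neulab/tldr", split="train")
docs = load_dataset("neulab/docprompting-conala", "docs", split="train")

# Build a doc_id -> doc_content lookup so oracle_man ids can be resolved.
doc_index = {d["doc_id"]: d["doc_content"] for d in docs}

example = data[0]
oracle_docs = [doc_index[i] for i in example["oracle_man"] if i in doc_index]
print(example["nl"], example["cmd"], len(oracle_docs))
```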
## Dataset Creation
The dataset was curated from [`tldr`](https://github.com/tldr-pages/tldr).
The project aims to provide frequent usages of bash commands together with natural language intents.
For more details, please check the repo.
### Citation Information
```
@article{zhou2022doccoder,
title={DocCoder: Generating Code by Retrieving and Reading Docs},
author={Zhou, Shuyan and Alon, Uri and Xu, Frank F and Jiang, Zhengbao and Neubig, Graham},
journal={arXiv preprint arXiv:2207.05987},
year={2022}
}
``` | neulab/tldr | [
"task_categories:text2text-generation",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:code",
"license:mit",
"code-generation",
"doc retrieval",
"retrieval augmented generation",
"arxiv:2207.05987",
"region:us"
]
| 2022-12-22T17:58:43+00:00 | {"annotations_creators": [], "language_creators": ["crowdsourced", "expert-generated"], "language": ["code"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["text2text-generation"], "task_ids": [], "pretty_name": "DocPrompting-CoNaLa", "tags": ["code-generation", "doc retrieval", "retrieval augmented generation"]} | 2022-12-22T19:47:11+00:00 |
d4db0a6779de3121a5d07db1e233b52b8d6c395b | ponzimaxi/transcriptions | [
"region:us"
]
| 2022-12-22T18:12:12+00:00 | {} | 2022-12-22T18:37:18+00:00 |
|
5da017b06ca90943494f28c8333a71077e3736b3 | https://mrcheeze.github.io/musenet-midi/ is used to get the MuseNet encoding.
The MIDI files I converted into "codes" all come from https://bitmidi.com.
| breadlicker45/midi-music-codes | [
"region:us"
]
| 2022-12-22T18:18:41+00:00 | {} | 2023-01-10T12:37:38+00:00 |
2b44b4ec48ad387eba13c1b0380171534d3d21df | breadlicker45/midi-gpt-music-small | [
"license:other",
"region:us"
]
| 2022-12-22T18:39:48+00:00 | {"license": "other"} | 2022-12-22T18:40:31+00:00 |
|
e3ed6e30d766fe344677300723a6064eb7d16554 | ponzimaxi/uponlytranscriptions | [
"region:us"
]
| 2022-12-22T18:40:35+00:00 | {} | 2022-12-23T01:17:16+00:00 |
|
547fa98342d2c80734a1c841dbf3de2cfcaaab95 | # Dataset Card for "financial_news_sentiment"
Manually validated sentiment for ~2000 Canadian news articles.
The dataset also includes a column `topic`, which contains one of the following values:
* acquisition
* other
* quaterly financial release
* appointment to new position
* dividend
* corporate update
* drillings results
* conference
* share repurchase program
* grant of stocks
This was generated automatically using a zero-shot classification model and **was not** reviewed manually. | Jean-Baptiste/financial_news_sentiment | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"region:us"
]
| 2022-12-22T18:49:05+00:00 | {"annotations_creators": ["expert-generated"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification", "sentiment-classification"], "pretty_name": "financial_news_sentiment", "dataset_info": {"splits": [{"name": "test", "num_examples": 267}, {"name": "train", "num_examples": 1512}]}, "tags": []} | 2022-12-29T03:14:44+00:00 |
dab57d94d2f44f286477933ae39f7764eca37c2c | # Dataset Card for "dataset-v-1.4_CLIP_us_identities_random_seeds"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | SDbiaseval/dataset-v-1.4_CLIP_us_identities_random_seeds | [
"region:us"
]
| 2022-12-22T19:10:30+00:00 | {"dataset_info": {"features": [{"name": "adjective", "dtype": "string"}, {"name": "profession", "dtype": "string"}, {"name": "no", "dtype": "int32"}, {"name": "image_path", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "gender", "dtype": "string"}, {"name": "identity", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1172794597.5, "num_examples": 31500}], "download_size": 1167645236, "dataset_size": 1172794597.5}} | 2022-12-22T19:11:43+00:00 |
54fb216b93e85cbeef482ce1fd13194a519c6382 | # AutoTrain Dataset for project: auto-arabic-summarization
## Dataset Description
This dataset has been automatically processed by AutoTrain for project auto-arabic-summarization.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "\u0627\u0643\u062f \u0648\u0632\u064a\u0631 \u0627\u0644\u0635\u0646\u0627\u0639\u0647 \u0648\u0627\u0644\u0637\u0627\u0642\u0647 \u0648\u0627\u0644\u0645\u0646\u0627\u062c\u0645 \u0632\u0643\u0631\u064a\u0627 \u062d\u0645\u062f \u0627\u0646\u0647 \u062a\u0645 \u0627\u0644\u064a\u0648\u0645 \u0627\u0644\u062e\u0645\u064a\u0633 \u062e\u0644\u0627\u0644 \u062c\u0644\u0633\u0647 \u0627\u0644\u062a\u0627\u0645\u062a \u0628\u0627\u0644\u0639\u0627\u0635\u0645\u0647 \u0648\u0632\u064a\u0631 \u0627\u0644\u0637\u0627\u0642\u0647 \u0627\u0644\u062c\u0632\u0627\u0626\u064a \u0635\u0627\u0644\u062d \u062e\u0628\u0631\u064a \u0628\u062e\u0635\u0648\u0635 \u0627\u0634\u063a\u0627\u0644 \u0627\u0644\u0644\u062c\u0646\u0647 \u0627\u0644\u062a\u0648\u0646\u0633\u064a\u0647 \u0627\u0644\u062c\u0632\u0627\u0626\u0631\u064a\u0647 \u0645\u062c\u0627\u0644 \u0627\u0644\u0637\u0627\u0642\u0647 \u0644\u062a\u0642\u064a\u064a\u0645 \u0645\u062f\u0649 \u062a\u0637\u0628\u064a\u0642 \u0627\u0644\u0628\u0631\u0627\u0645\u062c \u0627\u0644\u0645\u062a\u0641\u0642 \u0639\u0644\u064a\u0647\u0627 \u062e\u0628\u0631\u0627\u0621 \u0627\u0644\u0628\u0644\u062f\u064a\u0646 \u0627\u0644\u0627\u062a\u0641\u0627\u0642 \u062a\u0632\u0648\u064a\u062f \u0627\u0644\u0645\u0646\u0627\u0637\u0642 \u0627\u0644\u062d\u062f\u0648\u062f\u064a\u0647 \u0627\u0644\u062a\u0648\u0646\u0633\u064a\u0647 \u0628\u0627\u0644\u0643\u0645\u064a\u0627\u062a \u0627\u0644\u0643\u0627\u0641\u064a\u0647 \u0642\u0648\u0627\u0631\u064a\u0631 \u0627\u0644\u063a\u0627\u0632 \u0627\u0644\u0645\u0646\u0632\u0644\u064a \u062a\u0642\u062f\u0631 \u0628\u062d\u0648\u0627\u0644\u064a \u0637\u0646 \u0627\u0644\u0642\u0648\u0627\u0631\u064a\u0631 \u0648\u0627\u0636\u0627\u0641 \u062d\u0645\u062f \u0627\u0646\u0647 \u0627\u0644\u0646\u0642\u0627\u0637 \u062a\u0645 \u0627\u0644\u0627\u062a\u0641\u0627\u0642 \u0628\u0634\u0627\u0646\u0647\u0627 \u062c\u0644\u0633\u0647 \u0627\u0644\u064a\u0648\u0645 \u062a\u0632\u0648\u064a\u062f \u0627\u0644\u0633\u0648\u0642 \u0627\u0644\u062a\u0648\u0646\u0633\u064a\u0647 \u0628\u0627\u0644\u063a\u0627\u0632 \u0627\u0644\u0637\u0628\u064a\u0639\u064a \u0639\u0628\u0631 \u0627\u0644\u0627\u0646\u0627\u0628\u064a\u0628 \u0648\u062a\u0632\u0648\u064a\u062f \u0627\u0644\u0645\u0646\u0627\u0637\u0642 \u0628\u0627\u0644\u0628\u062a\u0631\u0648\u0644 \u0627\u0644\u0645\u0633\u0627\u0644 \u0627\u0636\u0627\u0641\u0647 \u0627\u0644\u0649 \u062f\u0639\u0645 \u0627\u0644\u062a\u0639\u0627\u0648\u0646 \u0627\u0644\u0645\u062c\u0627\u0644 \u0627\u0644\u062a\u062c\u0627\u0631\u064a \u062a\u0645 \u0627\u0645\u0636\u0627\u0621 \u0645\u0630\u0643\u0631\u0647 \u062a\u0641\u0627\u0647\u0645 \u0639\u0642\u062f \u0644\u062a\u0643\u0648\u064a\u0646 \u062a\u0642\u0646\u0646\u064a\u0646 \u062a\u0648\u0646\u0633\u064a\u064a\u0646 \u0627\u0644\u062c\u0632\u0627\u0626\u0631",
"target": "\u0643\u0645\u0627 \u062a\u0645 \u0627\u0645\u0636\u0627\u0621 \u0645\u0630\u0643\u0631\u0629 \u062a\u0641\u0627\u0647\u0645 \u0639\u0642\u062f \u0644\u062a\u0643\u0648\u064a\u0646 \u062a\u0642\u0646\u0646\u064a\u0646 \u062a\u0648\u0646\u0633\u064a\u064a\u0646 \u0641\u064a \u0627\u0644\u062c\u0632\u0627\u0626\u0631 ."
},
{
"text": "\u0642\u0627\u0644 \u0627\u0644\u0648\u0632\u064a\u0631 \u0627\u0644\u0627\u0648\u0644 \u0627\u0644\u062c\u0632\u0627\u0626\u0631\u064a \u0639\u0628\u062f \u0627\u0644\u0645\u0627\u0644\u0643 \u0633\u0644\u0627\u0644 \u0627\u062b\u0631 \u0644\u0642\u0627\u0621 \u062c\u0645\u0639\u0647 \u0628\u0631\u0626\u064a\u0633 \u0645\u062c\u0644\u0633 \u0646\u0648\u0627\u0628 \u0627\u0644\u0634\u0639\u0628 \u0645\u062d\u0645\u062f \u0627\u0644\u0646\u0627\u0635\u0631 \u0627\u0644\u0639\u0644\u0627\u0642\u0627\u062a \u0627\u0644\u062b\u0646\u0627\u0626\u064a\u0647 \u0627\u0644\u0628\u0644\u062f\u064a\u0646 \u0645\u0645\u064a\u0632\u0647 \u0648\u0633\u062a\u0643\u0648\u0646 \u0627\u062d\u0633\u0646 \u062e\u0644\u0627\u0644 \u0627\u0644\u0641\u062a\u0631\u0647 \u0627\u0644\u0642\u0627\u062f\u0645\u0647 \u0648\u0627\u0636\u0627\u0641 \u062a\u0635\u0631\u064a\u062d \u0644\u0645\u0631\u0627\u0633\u0644 \u0627\u0644\u062c\u0648\u0647\u0631\u0647 \u0627\u0641 \u0627\u0645 \u0627\u0646\u0647 \u0639\u0627\u0647\u062f \u0631\u0626\u064a\u0633 \u0627\u0644\u0645\u062c\u0644\u0633 \u0628\u0627\u0644\u0645\u062d\u0627\u0641\u0638\u0647 \u0645\u062a\u0627\u0646\u0647 \u0627\u0644\u0639\u0644\u0627\u0642\u0647 \u0627\u0644\u0628\u0644\u062f\u064a\u0646 \u0648\u0645\u0648\u0627\u0635\u0644\u0647 \u0627\u0644\u062a\u0642\u062f\u0645 \u0648\u0627\u0644\u0639\u0645\u0644 \u0645\u0639\u0627 \u0648\u0627\u0648\u0636\u062d \u0639\u0628\u062f \u0627\u0644\u0645\u0627\u0644\u0643 \u0633\u0644\u0627\u0644 \u0645\u062d\u0645\u062f \u0627\u0644\u0646\u0627\u0635\u0631 \u0627\u0628\u062f\u0649 \u062f\u0639\u0645\u0647 \u0644\u0644\u0645\u0646\u0647\u062c \u062a\u0646\u062a\u0647\u062c\u0647 \u0627\u0644\u062c\u0632\u0627\u0626\u0631 \u0648\u0639\u0645\u0644\u0647\u0627 \u0648\u064a\u0627\u062a\u064a \u0627\u062c\u062a\u0645\u0627\u0639 \u0627\u0644\u0648\u0632\u064a\u0631 \u0627\u0644\u0627\u0648\u0644 \u0627\u0644\u062c\u0632\u0627\u0626\u0631\u064a \u0628\u0631\u0626\u064a\u0633 \u0627\u0644\u0645\u062c\u0644\u0633 \u0647\u0627\u0645\u0634 \u0632\u064a\u0627\u0631\u0647 \u0639\u0645\u0644 \u0627\u062f\u0627\u0647\u0627 \u0627\u0644\u064a\u0648\u0645 \u0627\u0644\u062e\u0645\u064a\u0633 \u062a\u0648\u0646\u0633 \u062a\u0631\u0627\u0633 \u062e\u0644\u0627\u0644\u0647\u0627 \u0627\u0634\u063a\u0627\u0644 \u0627\u0644\u062f\u0648\u0631\u0647 \u0627\u0644 \u0644\u0644\u062c\u0646\u0647 \u0627\u0644\u0645\u062e\u062a\u0644\u0637\u0647 \u0627\u0644\u0639\u0644\u064a\u0627 \u0627\u0644\u062a\u0648\u0646\u0633\u064a\u0647 \u0627\u0644\u062c\u0632\u0627\u0626\u0631\u064a\u0647 \u0631\u0641\u0642\u0647 \u0631\u0626\u064a\u0633 \u0627\u0644\u062d\u0643\u0648\u0645\u0647 \u064a\u0648\u0633\u0641 \u0627\u0644\u0634\u0627\u0647\u062f \u0648\u0627\u0644\u062a\u064a \u0627\u0646\u062a\u0647\u062a \u0628\u0627\u0644\u0645\u0635\u0627\u062f\u0642\u0647 \u0639\u062f\u064a\u062f \u0627\u0644\u0627\u062a\u0641\u0627\u0642\u064a\u0627\u062a \u062a\u0648\u0646\u0633 \u0648\u0627\u0644\u062c\u0632\u0627\u0626\u0631",
"target": "\n\u0642\u0627\u0644 \u0627\u0644\u0648\u0632\u064a\u0631 \u0627\u0644\u0623\u0648\u0644 \u0627\u0644\u062c\u0632\u0627\u0626\u0631\u064a \u0639\u0628\u062f \u0627\u0644\u0645\u0627\u0644\u0643 \u0633\u0644\u0627\u0644 \u0627\u062b\u0631 \u0644\u0642\u0627\u0621 \u062c\u0645\u0639\u0647 \u0628\u0631\u0626\u064a\u0633 \u0645\u062c\u0644\u0633 \u0646\u0648\u0627\u0628 \u0627\u0644\u0634\u0639\u0628 \u0645\u062d\u0645\u062f \u0627\u0644\u0646\u0627\u0635\u0631\u060c \u0625\u0646 \u0627\u0644\u0639\u0644\u0627\u0642\u0627\u062a \u0627\u0644\u062b\u0646\u0627\u0626\u064a\u0629 \u0628\u064a\u0646 \u0627\u0644\u0628\u0644\u062f\u064a\u0646 \u0645\u0645\u064a\u0632\u0629 \u0648\u0633\u062a\u0643\u0648\u0646 \u0623\u062d\u0633\u0646 \u062e\u0644\u0627\u0644 \u0627\u0644\u0641\u062a\u0631\u0629 \u0627\u0644\u0642\u0627\u062f\u0645\u0629."
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 5102 |
| valid | 1276 |
| abdalrahmanshahrour/autotrain-data-auto-arabic-summarization | [
"task_categories:summarization",
"region:us"
]
| 2022-12-22T19:15:40+00:00 | {"task_categories": ["summarization"]} | 2022-12-22T19:20:04+00:00 |
1581ff88ec3f1a1203602653038d1bd80f860845 | TomTBT/pmc_open_access_figure_noncomm | [
"license:cc-by-nc-sa-4.0",
"region:us"
]
| 2022-12-22T20:29:34+00:00 | {"license": "cc-by-nc-sa-4.0"} | 2022-12-24T10:54:06+00:00 |
|
64bb548ebe3f6db721e96520b8b0b18d7633aa47 |
# DreamBooth model for pugsly trained by lewtun on the Shirleyphd/Pug dataset.
This is a Stable Diffusion model fine-tuned on the pugsly concept taught to Stable Diffusion with DreamBooth.
It can be used by modifying the `instance_prompt`: **a photo of pugsly dog**
This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
## Description
This is a Stable Diffusion model fine-tuned on `dog` images for the animal theme.
## Usage
```python
from diffusers import StableDiffusionPipeline

# Load the fine-tuned DreamBooth weights from the Hub.
pipeline = StableDiffusionPipeline.from_pretrained('Shirleyphd/Pug-dog')

# Generate an image with the instance prompt to invoke the learned concept.
image = pipeline("a photo of pugsly dog").images[0]
image
``` | Shirleyphd/Pug | [
"license:creativeml-openrail-m",
"pytorch",
"diffusers",
"stable-diffusion",
"text-to-image",
"diffusion-models-class",
"dreambooth-hackathon",
"animal",
"region:us"
]
| 2022-12-22T21:05:20+00:00 | {"license": "creativeml-openrail-m", "tags": ["pytorch", "diffusers", "stable-diffusion", "text-to-image", "diffusion-models-class", "dreambooth-hackathon", "animal"], "widget": [{"text": "a photo of pug dog in a cup"}]} | 2022-12-22T21:39:10+00:00 |
ab589dfd37cb6b1c3cd420c8c922389e36546b9a | # Dataset Card for "ev-skins-blip-lg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jubba/ev-skins-blip-lg | [
"region:us"
]
| 2022-12-22T21:10:48+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 13554378.0, "num_examples": 215}], "download_size": 13363408, "dataset_size": 13554378.0}} | 2022-12-22T21:10:54+00:00 |
5c157f65faed02d46d37c30068d727a4a8dad7bf | # Dataset Card for "booksum-fullbooks"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | abertsch/booksum-fullbooks | [
"region:us"
]
| 2022-12-22T21:43:49+00:00 | {"dataset_info": {"features": [{"name": "bid", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "book", "dtype": "string"}], "splits": [{"name": "validation", "num_bytes": 23586559, "num_examples": 45}, {"name": "train", "num_bytes": 165182724, "num_examples": 314}, {"name": "test", "num_bytes": 31094987, "num_examples": 46}], "download_size": 60336046, "dataset_size": 219864270}} | 2022-12-22T21:44:19+00:00 |
53636a063ebd11185358ca12b6a180b3333e0559 | sirfragles/mzpl | [
"license:unknown",
"region:us"
]
| 2022-12-22T22:26:23+00:00 | {"license": "unknown"} | 2022-12-22T22:27:58+00:00 |
|
ebe18349a55891244f20dbcd8513f1c349c0c4b4 | # Dataset Card for "stable-diffusion-prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | yizhangliu/stable-diffusion-prompts | [
"region:us"
]
| 2022-12-23T00:17:08+00:00 | {"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 284636284, "num_examples": 1819808}], "download_size": 101931398, "dataset_size": 284636284}} | 2022-12-23T00:17:18+00:00 |
9e2ff4a5421f94f5aa54719a26706ba798ebb546 | ```
@inproceedings{larson-etal-2019-evaluation,
title = "An Evaluation Dataset for Intent Classification and Out-of-Scope Prediction",
author = "Larson, Stefan and
Mahendran, Anish and
Peper, Joseph J. and
Clarke, Christopher and
Lee, Andrew and
Hill, Parker and
Kummerfeld, Jonathan K. and
Leach, Kevin and
Laurenzano, Michael A. and
Tang, Lingjia and
Mars, Jason",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
year = "2019",
url = "https://www.aclweb.org/anthology/D19-1131"
}
``` | fathyshalab/clinic-travel | [
"region:us"
]
| 2022-12-23T01:25:22+00:00 | {"dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "label_text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 89996.9, "num_examples": 1050}, {"name": "test", "num_bytes": 38570.1, "num_examples": 450}], "download_size": 0, "dataset_size": 128567.0}} | 2023-05-15T07:52:38+00:00 |
aa0d4736529949e270758000e7236891d84aab0e | guangguang/azukijpg | [
"license:apache-2.0",
"region:us"
]
| 2022-12-23T03:02:18+00:00 | {"license": "apache-2.0"} | 2022-12-23T05:58:45+00:00 |
|
088804c86ffc82367bf84c51500f09701919cada | TBA | research-backup/qa_squadshifts_synthetic_random | [
"region:us"
]
| 2022-12-23T03:26:02+00:00 | {} | 2023-01-15T18:58:41+00:00 |
0b08b472ef2ae69b414b340f8219a2d237fbc350 | ```
@inproceedings{larson-etal-2019-evaluation,
title = "An Evaluation Dataset for Intent Classification and Out-of-Scope Prediction",
author = "Larson, Stefan and
Mahendran, Anish and
Peper, Joseph J. and
Clarke, Christopher and
Lee, Andrew and
Hill, Parker and
Kummerfeld, Jonathan K. and
Leach, Kevin and
Laurenzano, Michael A. and
Tang, Lingjia and
Mars, Jason",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
year = "2019",
url = "https://www.aclweb.org/anthology/D19-1131"
}
``` | fathyshalab/clinic-auto_and_commute | [
"region:us"
]
| 2022-12-23T03:50:19+00:00 | {"dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "label_text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 80879.4, "num_examples": 1050}, {"name": "test", "num_bytes": 34662.6, "num_examples": 450}], "download_size": 0, "dataset_size": 115542.0}} | 2023-05-15T07:50:42+00:00 |
1c618a2187fa18e3558c672cbb40dd1f95c284d0 | silentmobius28/consolidated_appropriations_act_2023 | [
"license:gpl-3.0",
"region:us"
]
| 2022-12-23T04:17:38+00:00 | {"license": "gpl-3.0"} | 2022-12-23T04:28:35+00:00 |
|
eccfbafa9060b01cb8e3d0d4e4c0b6d4b5605150 |
Dataset Summary
---
Collection of romance novels featuring `title`, `description`, and `genres`. Created with the intention of building a "Romance Novel Generator."
Data Fields
---
- `id` : unique integer ID for the book in the dataset
- `pub_month` : string indicating the month the book was published, in the form `YEAR_MONTH`
- `title` : title of the book
- `author` : comma-separated (`last-name, first-name`) name of the book's author
- `isbn13` : 13-digit ISBN of the book (note that not all books have an ISBN)
- `description` : text description of the book. May contain quoted lines, a brief teaser of the plot, etc...
- `genres` : dictionary of all genres, each with 1 or 0 indicating if the genre is present (see the loading sketch after the genre list below)
- `womens-fiction` : 1 or 0 indicating if genre is present
- `abuse` : 1 or 0 indicating if genre is present
- `accidental-pregnancy` : 1 or 0 indicating if genre is present
- `action-adventure` : 1 or 0 indicating if genre is present
- `actor-actress-dancer-model` : 1 or 0 indicating if genre is present
- `adoption` : 1 or 0 indicating if genre is present
- `adultery` : 1 or 0 indicating if genre is present
- `african-american` : 1 or 0 indicating if genre is present
- `alcoholism` : 1 or 0 indicating if genre is present
- `aliens` : 1 or 0 indicating if genre is present
- `alpha-hero` : 1 or 0 indicating if genre is present
- `alternative-history` : 1 or 0 indicating if genre is present
- `amateur-sleuth` : 1 or 0 indicating if genre is present
- `americana` : 1 or 0 indicating if genre is present
- `amish` : 1 or 0 indicating if genre is present
- `amnesia` : 1 or 0 indicating if genre is present
- `angels` : 1 or 0 indicating if genre is present
- `animals` : 1 or 0 indicating if genre is present
- `anthropologists-archeologists` : 1 or 0 indicating if genre is present
- `apocalypse` : 1 or 0 indicating if genre is present
- `arranged-marriage` : 1 or 0 indicating if genre is present
- `arthurian-legend` : 1 or 0 indicating if genre is present
- `asian-american` : 1 or 0 indicating if genre is present
- `astrology` : 1 or 0 indicating if genre is present
- `bbw-heroines` : 1 or 0 indicating if genre is present
- `bad-boy` : 1 or 0 indicating if genre is present
- `best-friends` : 1 or 0 indicating if genre is present
- `beta-hero` : 1 or 0 indicating if genre is present
- `biographical` : 1 or 0 indicating if genre is present
- `blackmail` : 1 or 0 indicating if genre is present
- `boarding-school` : 1 or 0 indicating if genre is present
- `captor-captive` : 1 or 0 indicating if genre is present
- `category-romance` : 1 or 0 indicating if genre is present
- `celebrities` : 1 or 0 indicating if genre is present
- `celts` : 1 or 0 indicating if genre is present
- `chefs-foodies` : 1 or 0 indicating if genre is present
- `chick-lit` : 1 or 0 indicating if genre is present
- `christian` : 1 or 0 indicating if genre is present
- `clean-&-wholesome` : 1 or 0 indicating if genre is present
- `clones` : 1 or 0 indicating if genre is present
- `comedy-humor` : 1 or 0 indicating if genre is present
- `coming-of-age` : 1 or 0 indicating if genre is present
- `contemporary-romance` : 1 or 0 indicating if genre is present
- `cowboys` : 1 or 0 indicating if genre is present
- `cozy-mystery` : 1 or 0 indicating if genre is present
- `crime` : 1 or 0 indicating if genre is present
- `dark-fantasy` : 1 or 0 indicating if genre is present
- `death-dying` : 1 or 0 indicating if genre is present
- `debutante-heiress` : 1 or 0 indicating if genre is present
- `demons` : 1 or 0 indicating if genre is present
- `disabilities` : 1 or 0 indicating if genre is present
- `divorce` : 1 or 0 indicating if genre is present
- `doctor-nurse` : 1 or 0 indicating if genre is present
- `dragons` : 1 or 0 indicating if genre is present
- `dystopian` : 1 or 0 indicating if genre is present
- `elves` : 1 or 0 indicating if genre is present
- `enemies-to-lovers` : 1 or 0 indicating if genre is present
- `epic-fantasy` : 1 or 0 indicating if genre is present
- `erotica` : 1 or 0 indicating if genre is present
- `espionage-spies-cia` : 1 or 0 indicating if genre is present
- `fairies-fae` : 1 or 0 indicating if genre is present
- `fairy-tales-folklore` : 1 or 0 indicating if genre is present
- `fake-relationship` : 1 or 0 indicating if genre is present
- `falsely-accused` : 1 or 0 indicating if genre is present
- `family-siblings` : 1 or 0 indicating if genre is present
- `famous-characters` : 1 or 0 indicating if genre is present
- `fantasy` : 1 or 0 indicating if genre is present
- `fantasy-romance` : 1 or 0 indicating if genre is present
- `feminism` : 1 or 0 indicating if genre is present
- `firefighters` : 1 or 0 indicating if genre is present
- `forced-proximity` : 1 or 0 indicating if genre is present
- `forensics` : 1 or 0 indicating if genre is present
- `friends-to-lovers` : 1 or 0 indicating if genre is present
- `general-fiction` : 1 or 0 indicating if genre is present
- `ghosts` : 1 or 0 indicating if genre is present
- `gothic` : 1 or 0 indicating if genre is present
- `graphic-novel` : 1 or 0 indicating if genre is present
- `guardian-ward` : 1 or 0 indicating if genre is present
- `hard-boiled` : 1 or 0 indicating if genre is present
- `heroic-fantasy-sword-&-sorcery` : 1 or 0 indicating if genre is present
- `hidden-identity` : 1 or 0 indicating if genre is present
- `hispanic-&-latino` : 1 or 0 indicating if genre is present
- `historical` : 1 or 0 indicating if genre is present
- `historical-mystery` : 1 or 0 indicating if genre is present
- `historical-romance` : 1 or 0 indicating if genre is present
- `holidays` : 1 or 0 indicating if genre is present
- `horror` : 1 or 0 indicating if genre is present
- `infidelity` : 1 or 0 indicating if genre is present
- `jane-austen` : 1 or 0 indicating if genre is present
- `jewish` : 1 or 0 indicating if genre is present
- `kidnapping` : 1 or 0 indicating if genre is present
- `kids-(12-&-under)` : 1 or 0 indicating if genre is present
- `kids:-middle-grade` : 1 or 0 indicating if genre is present
- `lgbtq` : 1 or 0 indicating if genre is present
- `law-enforcement` : 1 or 0 indicating if genre is present
- `lawyers` : 1 or 0 indicating if genre is present
- `legal-thriller` : 1 or 0 indicating if genre is present
- `literary` : 1 or 0 indicating if genre is present
- `magic` : 1 or 0 indicating if genre is present
- `magical-realism` : 1 or 0 indicating if genre is present
- `mail-order-brides` : 1 or 0 indicating if genre is present
- `manga` : 1 or 0 indicating if genre is present
- `marriage-of-convenience` : 1 or 0 indicating if genre is present
- `mashup` : 1 or 0 indicating if genre is present
- `mature-(18-&-over)` : 1 or 0 indicating if genre is present
- `may-december` : 1 or 0 indicating if genre is present
- `medical` : 1 or 0 indicating if genre is present
- `medical-thriller` : 1 or 0 indicating if genre is present
- `mermaids` : 1 or 0 indicating if genre is present
- `military` : 1 or 0 indicating if genre is present
- `mistaken-identity` : 1 or 0 indicating if genre is present
- `monsters` : 1 or 0 indicating if genre is present
- `motorcycle-club-bikers` : 1 or 0 indicating if genre is present
- `moviestv` : 1 or 0 indicating if genre is present
- `multicultural-&-interracial-romance` : 1 or 0 indicating if genre is present
- `music` : 1 or 0 indicating if genre is present
- `mystery` : 1 or 0 indicating if genre is present
- `mythology` : 1 or 0 indicating if genre is present
- `native-americans` : 1 or 0 indicating if genre is present
- `nautical` : 1 or 0 indicating if genre is present
- `navy-seals` : 1 or 0 indicating if genre is present
- `new-adult-(18-25)` : 1 or 0 indicating if genre is present
- `noir` : 1 or 0 indicating if genre is present
- `occult-&-supernatural` : 1 or 0 indicating if genre is present
- `office-romance` : 1 or 0 indicating if genre is present
- `opposites-attract` : 1 or 0 indicating if genre is present
- `orphans` : 1 or 0 indicating if genre is present
- `paranormal` : 1 or 0 indicating if genre is present
- `paranormal-romance` : 1 or 0 indicating if genre is present
- `pirates` : 1 or 0 indicating if genre is present
- `police-lawmen-fbi-agents` : 1 or 0 indicating if genre is present
- `police-procedural` : 1 or 0 indicating if genre is present
- `political` : 1 or 0 indicating if genre is present
- `political-thriller` : 1 or 0 indicating if genre is present
- `post-apocalyptic` : 1 or 0 indicating if genre is present
- `pregnancy` : 1 or 0 indicating if genre is present
- `private-investigator` : 1 or 0 indicating if genre is present
- `psychological-suspense` : 1 or 0 indicating if genre is present
- `rags-to-riches` : 1 or 0 indicating if genre is present
- `rakes` : 1 or 0 indicating if genre is present
- `reincarnation` : 1 or 0 indicating if genre is present
- `revenge` : 1 or 0 indicating if genre is present
- `robin-hood` : 1 or 0 indicating if genre is present
- `rock-stars` : 1 or 0 indicating if genre is present
- `romance` : 1 or 0 indicating if genre is present
- `romantic-elements` : 1 or 0 indicating if genre is present
- `romantic-suspense` : 1 or 0 indicating if genre is present
- `royalty` : 1 or 0 indicating if genre is present
- `saga` : 1 or 0 indicating if genre is present
- `schools` : 1 or 0 indicating if genre is present
- `science-fiction` : 1 or 0 indicating if genre is present
- `science-fiction-fantasy` : 1 or 0 indicating if genre is present
- `scottish-highlands` : 1 or 0 indicating if genre is present
- `second-chance-romance` : 1 or 0 indicating if genre is present
- `secret-baby` : 1 or 0 indicating if genre is present
- `serial-killers` : 1 or 0 indicating if genre is present
- `servants-slaves` : 1 or 0 indicating if genre is present
- `shakespeare` : 1 or 0 indicating if genre is present
- `sheikhs` : 1 or 0 indicating if genre is present
- `sherlock-holmes` : 1 or 0 indicating if genre is present
- `single-parent` : 1 or 0 indicating if genre is present
- `small-town` : 1 or 0 indicating if genre is present
- `space-opera` : 1 or 0 indicating if genre is present
- `speculative-fiction` : 1 or 0 indicating if genre is present
- `sports` : 1 or 0 indicating if genre is present
- `steampunk` : 1 or 0 indicating if genre is present
- `superheroes` : 1 or 0 indicating if genre is present
- `suspense` : 1 or 0 indicating if genre is present
- `tear-jerker` : 1 or 0 indicating if genre is present
- `technology` : 1 or 0 indicating if genre is present
- `terrorists` : 1 or 0 indicating if genre is present
- `thriller` : 1 or 0 indicating if genre is present
- `time-travel` : 1 or 0 indicating if genre is present
- `tortured-hero` : 1 or 0 indicating if genre is present
- `tortured-heroine` : 1 or 0 indicating if genre is present
- `traditional-british` : 1 or 0 indicating if genre is present
- `traditional-regency` : 1 or 0 indicating if genre is present
- `twins` : 1 or 0 indicating if genre is present
- `tycoons` : 1 or 0 indicating if genre is present
- `ugly-duckling` : 1 or 0 indicating if genre is present
- `unicorns` : 1 or 0 indicating if genre is present
- `urban-fantasy` : 1 or 0 indicating if genre is present
- `vampires` : 1 or 0 indicating if genre is present
- `vikings` : 1 or 0 indicating if genre is present
- `virgin-hero` : 1 or 0 indicating if genre is present
- `virgins` : 1 or 0 indicating if genre is present
- `visionary-&-metaphysical` : 1 or 0 indicating if genre is present
- `wagon-train` : 1 or 0 indicating if genre is present
- `werewolves-shapeshifters` : 1 or 0 indicating if genre is present
- `western` : 1 or 0 indicating if genre is present
- `widow-widower` : 1 or 0 indicating if genre is present
- `witch-warlock-mage-wizard` : 1 or 0 indicating if genre is present
- `women-sleuths` : 1 or 0 indicating if genre is present
- `young-adult-teens` : 1 or 0 indicating if genre is present
- `zombies` : 1 or 0 indicating if genre is present
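A minimal loading/filtering sketch (the split name and dict-style genre access are assumptions based on the field descriptions above):

```python
from datasets import load_dataset

# Assumed split name; field names follow the card.
ds = load_dataset("diltdicker/romance_novel_data-2022", split="train")

# Keep only vampire romances with a description, e.g. as training
# examples for a "Romance Novel Generator".
vampires = ds.filter(
    lambda ex: ex["genres"]["vampires"] == 1 and ex["description"]
)
print(len(vampires))
print(vampires[0]["title"])
```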
Languages
---
- en | diltdicker/romance_novel_data-2022 | [
"license:openrail",
"region:us"
]
| 2022-12-23T04:36:09+00:00 | {"license": "openrail"} | 2023-01-07T21:40:31+00:00 |
491ab81e69438980266dbc5eaec0cd6d06d225c2 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/flan-t5-large-stacked-samsum-1024-FP32-fin
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-eval-samsum-samsum-e93f2c-2586578704 | [
"autotrain",
"evaluation",
"region:us"
]
| 2022-12-23T05:37:45+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "pszemraj/flan-t5-large-stacked-samsum-1024-FP32-fin", "metrics": ["bertscore"], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}} | 2022-12-23T05:40:55+00:00 |
a4c226f0892069e6605de5adf77831ab2d7701bd | akanametov/kingkong-dataset | [
"license:mit",
"region:us"
]
| 2022-12-23T05:54:09+00:00 | {"license": "mit"} | 2022-12-23T05:56:18+00:00 |
|
6b80de530820a7f62f25515d39af18c479f35103 | # Dataset Card for "LLM_Description_Vocab_opt_facebook_opt_30b_downstream_tasks"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Multimodal-Fatima/LLM_Description_Vocab_opt_facebook_opt_30b_downstream_tasks | [
"region:us"
]
| 2022-12-23T06:10:26+00:00 | {"dataset_info": {"features": [{"name": "vocab", "dtype": "string"}, {"name": "descriptions", "sequence": "string"}], "splits": [{"name": "test", "num_bytes": 528559, "num_examples": 3426}], "download_size": 157247, "dataset_size": 528559}} | 2022-12-23T06:10:32+00:00 |
2325dbed42592f0e385dd5a084d81dfb8029724e | # Dataset Card for "clinic-home"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | fathyshalab/clinic-home | [
"region:us"
]
| 2022-12-23T06:15:24+00:00 | {"dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "label_text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 79109.8, "num_examples": 1050}, {"name": "test", "num_bytes": 33904.2, "num_examples": 450}], "download_size": 0, "dataset_size": 113014.0}} | 2022-12-24T14:09:41+00:00 |
d04eac99131731d8b61ee3754ea328f1d75017a6 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: facebook/bart-large-cnn
* Dataset: gigaword
* Config: default
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Xiaoci](https://huggingface.co/Xiaoci) for evaluating this model. | autoevaluate/autoeval-eval-gigaword-default-50c095-2587478720 | [
"autotrain",
"evaluation",
"region:us"
]
| 2022-12-23T08:25:48+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["gigaword"], "eval_info": {"task": "summarization", "model": "facebook/bart-large-cnn", "metrics": [], "dataset_name": "gigaword", "dataset_config": "default", "dataset_split": "validation", "col_mapping": {"text": "document", "target": "summary"}}} | 2022-12-23T14:28:33+00:00 |
c76dc41ea0c3fedd6023114f9fd7e962fd5c0015 | This data is a subset of the BioQA task B dataset. It includes only factoid samples for extractive QA and is split into train and test with 80% and 20% respectively. | aaaksenova/BioQA_taskB_SQuAD | [
"region:us"
]
| 2022-12-23T08:27:27+00:00 | {} | 2022-12-23T13:26:48+00:00 |
b66d3067995ce6c98c75ee8107f561ff662f60fc | # Dataset Card for "clinic-kitchen_and_dining"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | fathyshalab/clinic-kitchen_and_dining | [
"region:us"
]
| 2022-12-23T08:40:22+00:00 | {"dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "label_text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 66661.34844444445, "num_examples": 787}, {"name": "test", "num_bytes": 28629.651555555556, "num_examples": 338}], "download_size": 0, "dataset_size": 95291.0}} | 2022-12-24T15:35:03+00:00 |
d8c23f1a209f4826b24c16ef74a6ede512ce5cd0 | huggingface-projects/bot-fight-data | [
"license:mit",
"region:us"
]
| 2022-12-23T10:14:50+00:00 | {"license": "mit"} | 2023-08-14T07:18:31+00:00 |
|
5d66cb5ece2af653afa51845ce7569d27e42a612 | mystgg/ru-wikipedia | [
"license:mit",
"region:us"
]
| 2022-12-23T10:19:40+00:00 | {"license": "mit"} | 2022-12-23T10:20:31+00:00 |
|
eabe9d73f896ae0a06c1117d8f03d51733216f19 |
Dataset homepage: https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions
The purpose of hosting the archive is to play with the original files. The archive was generated using [this Colab Notebook](https://colab.research.google.com/gist/sayakpaul/98f9ff3bd258a5c1107898422447b581/scratchpad.ipynb). | sayakpaul/pokemon-blip-original-version | [
"license:cc-by-nc-sa-4.0",
"region:us"
]
| 2022-12-23T12:43:19+00:00 | {"license": "cc-by-nc-sa-4.0"} | 2022-12-24T06:09:24+00:00 |
f422dacfd91adb5a4614eb3b6495c560158519eb | # Dataset Card for "fiszki-ocr-train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Zombely/fiszki-ocr-train | [
"region:us"
]
| 2022-12-23T13:03:25+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "ground_truth", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 354017910.0, "num_examples": 85}, {"name": "validation", "num_bytes": 56459717.0, "num_examples": 14}], "download_size": 410390428, "dataset_size": 410477627.0}} | 2022-12-23T13:06:29+00:00 |
37d1e1db8ba7d98725658cb5931d75aa01a4e346 | # Dataset Card for "bug-16718038814382"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | severo/bug-16718038814382 | [
"region:us"
]
| 2022-12-23T13:58:02+00:00 | {"dataset_info": {"features": [{"name": "a", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 24, "num_examples": 3}], "download_size": 579, "dataset_size": 24}} | 2022-12-23T13:58:05+00:00 |
38d9ea2378b9f2be6ee85b96aecacf9ba1a03b51 | # Dataset Card for "bug-16718056078062"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | severo/bug-16718056078062 | [
"region:us"
]
| 2022-12-23T14:26:48+00:00 | {"dataset_info": {"features": [{"name": "a", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 24, "num_examples": 3}], "download_size": 579, "dataset_size": 24}} | 2022-12-23T14:26:52+00:00 |
d22379e10086f2762bf1e700d5b1d3a1134f6b88 | # Dataset Card for "tortas"
Note that when using PyTorch's transforms, these images are 4-channel. The last channel is all 1's and can be ignored.
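A minimal sketch of dropping that constant alpha channel in a torchvision pipeline (one of several equivalent ways to do it):

```python
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.ToTensor(),                      # HWC uint8 -> CHW float in [0, 1]
    transforms.Lambda(lambda t: t[:3, :, :]),   # keep RGB, drop the all-ones alpha
])
```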
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | morgan/tortas | [
"region:us"
]
| 2022-12-23T14:35:00+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 79653203.0, "num_examples": 37}], "download_size": 79658169, "dataset_size": 79653203.0}} | 2022-12-23T16:23:05+00:00 |
bc637f792a8013c7ec950f9727327b2d1a5bb68e | Den4ikAI/squad_interpreted_with_negative | [
"license:mit",
"region:us"
]
| 2022-12-23T15:53:43+00:00 | {"license": "mit"} | 2023-01-25T16:54:54+00:00 |
|
02c75dddedc641c5e1c14e333986a9f56e498e79 | # Mirror of billsum train split
Mirror with Parquet files on the Hub, as downloading the billsum data files from Google Drive causes errors in distributed training. | DebateLabKIT/billsum_train | [
"region:us"
]
| 2022-12-23T19:44:12+00:00 | {} | 2022-12-24T12:41:29+00:00 |
76413fa635a613d6ddd4c6d3b9b0b3aa86ca20d9 | # Dataset Card for "dreambooth-hackathon-images"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jonathang/dreambooth-hackathon-images | [
"region:us"
]
| 2022-12-23T21:41:14+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 1488165.0, "num_examples": 4}], "download_size": 1489345, "dataset_size": 1488165.0}} | 2022-12-27T19:34:42+00:00 |
ae9634ce61a784076139c0de8fd84f255ef23313 | # Dataset Card for "dreambooth-hackathon-images"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Verne/dreambooth-hackathon-images | [
"region:us"
]
| 2022-12-23T22:36:01+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 828898.0, "num_examples": 20}], "download_size": 827203, "dataset_size": 828898.0}} | 2022-12-23T22:36:14+00:00 |
bb735fbf00009266d05de19461c39bf0d785f6ba | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: florenceGundy/bert-finetuned-squad
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@florenceGundy](https://huggingface.co/florenceGundy) for evaluating this model. | autoevaluate/autoeval-eval-squad-plain_text-a52a81-2596378857 | [
"autotrain",
"evaluation",
"region:us"
]
| 2022-12-23T23:38:21+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "florenceGundy/bert-finetuned-squad", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-12-23T23:40:45+00:00 |
0ffe46328f958c3b090294792567ad6fa0781af3 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: florenceGundy/bert-finetuned-squad
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@florenceGundy](https://huggingface.co/florenceGundy) for evaluating this model. | autoevaluate/autoeval-eval-squad-plain_text-56a1bc-2596578858 | [
"autotrain",
"evaluation",
"region:us"
]
| 2022-12-23T23:38:31+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "florenceGundy/bert-finetuned-squad", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-12-23T23:40:54+00:00 |
76943f92923ba6a201677e2ace477ed270f3cbe5 |
Tweets from accounts labeled as bots and non-bots | kearney/tweetbotornot2 | [
"license:mit",
"region:us"
]
| 2022-12-24T02:29:38+00:00 | {"license": "mit"} | 2022-12-24T02:35:26+00:00 |
28fa0f231b13a65e08b568af28c6f4637d18d971 | Unfaithful/Generationtr | [
"license:creativeml-openrail-m",
"region:us"
]
| 2022-12-24T02:55:03+00:00 | {"license": "creativeml-openrail-m"} | 2022-12-24T02:55:04+00:00 |
|
882d060cc29e95c9abaf0aaaf18a1d681e7634d1 | jaybarca/fengzikai | [
"region:us"
]
| 2022-12-24T03:28:19+00:00 | {} | 2022-12-24T03:41:22+00:00 |
|
dac8d12982efda4b410a5492aab33affbd780596 | # Dataset Card for "financial_news_sentiment_mixte_with_phrasebank_75"
This is a customized version of the phrasebank dataset in which I kept only sentences validated by at least 75% of annotators.
In addition, I added ~2000 Canadian news articles whose sentiment was validated manually.
The dataset also includes a column `topic`, which contains one of the following values:
* acquisition
* other
* quaterly financial release
* appointment to new position
* dividend
* corporate update
* drillings results
* conference
* share repurchase program
* grant of stocks
This was generated automatically using a zero-shot classification model and **was not** reviewed manually.
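A minimal loading sketch (the split names follow the `dataset_info` above; the printed fields are assumptions based on this card's description):

```python
from datasets import load_dataset

ds = load_dataset("Jean-Baptiste/financial_news_sentiment_mixte_with_phrasebank_75")

# Expect a text column plus the manually validated sentiment and the
# automatically generated `topic` described above.
print(ds["train"][0])
```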
## References
Original dataset is available here:
[https://huggingface.co/datasets/financial_phrasebank]
| Jean-Baptiste/financial_news_sentiment_mixte_with_phrasebank_75 | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-nc-sa-3.0",
"region:us"
]
| 2022-12-24T03:49:34+00:00 | {"annotations_creators": ["expert-generated"], "language": ["en"], "license": ["cc-by-nc-sa-3.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification", "sentiment-classification"], "pretty_name": "financial_news_sentiment_mixte_with_phrasebank_75", "dataset_info": {"splits": [{"name": "test", "num_examples": 785}, {"name": "train", "num_examples": 4446}]}, "tags": []} | 2022-12-29T03:19:16+00:00 |
6b2b09672129e280c0c9da97ab58154e9d535e6b | Please check out [https://github.com/intfloat/SimKGC/blob/main/scripts/download_wikidata5m.sh](https://github.com/intfloat/SimKGC/blob/main/scripts/download_wikidata5m.sh) for instructions on how to download this dataset.
| intfloat/wikidata5m | [
"region:us"
]
| 2022-12-24T06:30:03+00:00 | {} | 2022-12-24T07:00:03+00:00 |
8eb178419c5701c9a3ea9c697988f5968ddc5a21 | # Dataset Card for "LLM_Description_Vocab_opt_Multimodal_Fatima_opt_175b_downstream_tasks"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Multimodal-Fatima/LLM_Description_Vocab_opt_Multimodal_Fatima_opt_175b_downstream_tasks | [
"region:us"
]
| 2022-12-24T07:44:00+00:00 | {"dataset_info": {"features": [{"name": "vocab", "dtype": "string"}, {"name": "descriptions", "sequence": "string"}], "splits": [{"name": "test", "num_bytes": 696475, "num_examples": 3426}], "download_size": 381428, "dataset_size": 696475}} | 2022-12-24T07:44:05+00:00 |
c6e736fd78c65d3cc96b5bfaf68736cf478be95d | BuyKlonopin/BuyKlonopinOnline | [
"license:bigscience-openrail-m",
"region:us"
]
| 2022-12-24T09:04:04+00:00 | {"license": "bigscience-openrail-m"} | 2022-12-24T09:04:04+00:00 |
|
7b0565845cbf29b098bc68fd30bac93c87af1b8e | # Dataset Card for "ade20k-panoptic-demo-imagefolder"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | nielsr/ade20k-panoptic-demo-imagefolder | [
"region:us"
]
| 2022-12-24T09:05:57+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "image_id", "dtype": "string"}, {"name": "segments_info", "list": [{"name": "id", "dtype": "int64"}, {"name": "category_id", "dtype": "int64"}, {"name": "area", "dtype": "int64"}, {"name": "bbox", "sequence": "int64"}, {"name": "iscrowd", "dtype": "int64"}]}], "splits": [{"name": "train", "num_bytes": 88157.0, "num_examples": 10}, {"name": "validation", "num_bytes": 67914.0, "num_examples": 10}], "download_size": 151843, "dataset_size": 156071.0}} | 2022-12-24T09:06:07+00:00 |