sha (string, length 40) | text (string, 0 to 13.4M chars) | id (string, 2 to 117 chars) | tags (list) | created_at (string, length 25) | metadata (string, 2 to 31.7M chars) | last_modified (string, length 25) |
---|---|---|---|---|---|---|
77bcbae3e9a4460c19a2aeb5203e7c9286063c5a | poopat/gad | [
"license:unknown",
"region:us"
] | 2022-10-02T13:55:23+00:00 | {"license": "unknown"} | 2022-10-02T14:30:17+00:00 |
|
ce9bbc3b105b6344f3ce3f8e626893190d7211a0 | Lin0106/0 | [
"region:us"
] | 2022-10-02T16:36:02+00:00 | {} | 2022-10-02T16:46:14+00:00 |
|
d13f750950ca7a5cf0f2931a6e315b0ea3fc30e3 | # To download:
```python
from datasets import load_dataset

uz_dev = load_dataset("Sanatbek/uzbek-kazakh-parallel-corpora", split="train[:13373]")        # ~10% of the corpus
uz_test = load_dataset("Sanatbek/uzbek-kazakh-parallel-corpora", split="train[13374:40120]")  # ~20%
uz_train = load_dataset("Sanatbek/uzbek-kazakh-parallel-corpora", split="train[40121:]")      # ~70%
```
| Sanatbek/uzbek-kazakh-parallel-corpora | [
"doi:10.57967/hf/1748",
"region:us"
] | 2022-10-02T17:43:18+00:00 | {} | 2023-08-02T21:27:43+00:00 |
1d5ff60f05d41aecea1ef85b472802dbfcc912e0 | Aeromesi/Aeromesi | [
"license:gpl-2.0",
"region:us"
] | 2022-10-02T18:02:39+00:00 | {"license": "gpl-2.0"} | 2022-10-02T18:02:39+00:00 |
|
116f94359b7479e58f21e746b3ab6a301c756275 |
---
annotations_creators:
- expert-generated
language:
- lse
language_creators:
- expert-generated
license:
- mit
multilinguality:
- monolingual
pretty_name: LSE_eSaude_UVIGO_OSLWL
size_categories:
- n<1K
source_datasets:
- original
tags:
- sign spotting
- sign language recognition
- lse
task_categories:
- other
task_ids: []
---
# Dataset Card for LSE_eSaude_UVIGO_OSLWL
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
| mvazquez/LSE_eSaude_UVIGO_OSLWL | [
"region:us"
] | 2022-10-02T18:30:38+00:00 | {} | 2022-10-02T18:35:04+00:00 |
d58dec86dc1e680d142ec8e108ed48d06da35188 |
---
annotations_creators:
- expert-generated
language:
- lse
language_creators:
- expert-generated
license:
- mit
multilinguality:
- monolingual
pretty_name: LSE_eSaude_UVIGO_MSSL
size_categories:
- n<1K
source_datasets:
- original
tags:
- sign spotting
- sign language recognition
- lse
task_categories:
- other
task_ids: []
---
# Dataset Card for LSE_eSaude_UVIGO_MSSL
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
| mvazquez/LSE_eSaude_UVIGO_MSSL | [
"region:us"
] | 2022-10-02T18:48:09+00:00 | {} | 2022-10-02T21:17:37+00:00 |
6de8dd2b91461cce9dced4559e570b72c042bb13 |
Please use the following code to load the data:
```python
# start data loading
!git lfs install
!git clone https://huggingface.co/datasets/nlp-guild/non-linear-classification

import numpy as np

def load_dataset(path='dataset.npy'):
"""
:return:
f_and_xs: numpy array of size [sample_number, channels, sample_length]
label_0, label_1, label_2: one-hot encodes of size [sample_number, number_bins]
"""
r = np.load(path, allow_pickle=True).item()
f_and_xs = r['f_and_xs']
label_0 = r['l_0']
label_1 = r['l_1']
label_2 = r['l_2']
return f_and_xs, label_0, label_1, label_2
f_and_xs, label_0, label_1, label_2 = load_dataset('non-linear-classification/dataset.npy')  # path inside the cloned repo
# end data loading
```
| nlp-guild/non-linear-classification | [
"license:mit",
"region:us"
] | 2022-10-02T19:13:16+00:00 | {"license": "mit"} | 2023-04-14T11:49:37+00:00 |
e93ef8a7d61d58ce27df4f12bfa62f4f804b3029 |
Approx 144K tweets about iPhone 14 | Kkoustubh/iPhone14Tweets | [
"license:cc",
"region:us"
] | 2022-10-02T19:31:17+00:00 | {"license": "cc"} | 2022-10-02T19:33:12+00:00 |
127bfedcd5047750df5ccf3a12979a47bfa0bafa |
The first 10K elements of [The Pile](https://pile.eleuther.ai/), useful for debugging models trained on it. See the [HuggingFace page for the full Pile](https://huggingface.co/datasets/the_pile) for more info. Inspired by [stas' great resource](https://huggingface.co/datasets/stas/openwebtext-10k) doing the same for OpenWebText | NeelNanda/pile-10k | [
"license:bigscience-bloom-rail-1.0",
"region:us"
] | 2022-10-02T19:59:26+00:00 | {"license": "bigscience-bloom-rail-1.0"} | 2022-10-14T20:27:22+00:00 |
4f3d39bcb6e59ebe0d744d4d4a42f947c84a6d04 | illorg/illodata | [
"license:gpl",
"region:us"
] | 2022-10-02T20:16:02+00:00 | {"license": "gpl"} | 2022-10-02T20:34:52+00:00 |
|
b0f26da4cf74e72ac9e6e1d8532a6b9abbe13b81 | dxs | doorfromenchumto/Zuzulinda | [
"region:us"
] | 2022-10-02T21:29:55+00:00 | {} | 2022-10-08T22:12:36+00:00 |
d03328df89b03b1f314feaaea42d8879621cfc3a | Fedeya/me | [
"license:unknown",
"region:us"
] | 2022-10-02T22:15:22+00:00 | {"license": "unknown"} | 2022-10-02T22:16:51+00:00 |
|
813bd03cd6e07d9bd8d7333896ad5d40abb95ea9 |
# Dataset Card for "Balanced COPA"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://balanced-copa.github.io/](https://balanced-copa.github.io/)
- **Repository:** [Balanced COPA](https://github.com/Balanced-COPA/Balanced-COPA)
- **Paper:** [When Choosing Plausible Alternatives, Clever Hans can be Clever](https://aclanthology.org/D19-6004/)
- **Point of Contact:** [@pkavumba](https://github.com/pkavumba)
### Dataset Summary
Bala-COPA: An English language Dataset for Training Robust Commonsense Causal Reasoning Models
The Balanced Choice of Plausible Alternatives dataset is a benchmark for training machine learning models that are robust to superficial cues/spurious correlations. The dataset extends the COPA dataset (Roemmele et al., 2011) with mirrored instances that mitigate token-level superficial cues in the original COPA answers. The superficial cues in the original COPA dataset result from an unbalanced token distribution between the correct and the incorrect answer choices, i.e., some tokens appear more often in the correct choices than in the incorrect ones. Balanced COPA equalizes the token distribution by adding mirrored instances with identical answer choices but different labels.
The details about the creation of Balanced COPA and the implementation of the baselines are available in the paper.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
- English
## Dataset Structure
### Data Instances
An example of 'validation' looks as follows.
```
{
"id": 1,
"premise": "My body cast a shadow over the grass.",
"choice1": "The sun was rising.",
"choice2": "The grass was cut.",
"question": "cause",
"label": 1,
"mirrored": false,
}
{
"id": 1001,
"premise": "The garden looked well-groomed.",
"choice1": "The sun was rising.",
"choice2": "The grass was cut.",
"question": "cause",
"label": 1,
"mirrored": true,
}
```
### Data Fields
The data fields are the same among all splits.
#### en
- `premise`: a `string` feature.
- `choice1`: a `string` feature.
- `choice2`: a `string` feature.
- `question`: a `string` feature.
- `label`: a `int32` feature.
- `id`: a `int32` feature.
- `mirrored`: a `bool` feature.
### Data Splits
| validation | test |
| ---------: | ---: |
| 1,000 | 500 |
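A minimal loading sketch with the Hugging Face `datasets` library, assuming the dataset loads directly from the `pkavumba/balanced-copa` repository with the split names shown above:
```python
from datasets import load_dataset

copa = load_dataset("pkavumba/balanced-copa")
val, test = copa["validation"], copa["test"]

# Separate original COPA instances from their mirrored counterparts
# using the `mirrored` field described above.
original = val.filter(lambda ex: not ex["mirrored"])
mirrored = val.filter(lambda ex: ex["mirrored"])
print(len(original), len(mirrored))
```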
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/).
### Citation Information
```
@inproceedings{kavumba-etal-2019-choosing,
title = "When Choosing Plausible Alternatives, Clever Hans can be Clever",
author = "Kavumba, Pride and
Inoue, Naoya and
Heinzerling, Benjamin and
Singh, Keshav and
Reisert, Paul and
Inui, Kentaro",
booktitle = "Proceedings of the First Workshop on Commonsense Inference in Natural Language Processing",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/D19-6004",
doi = "10.18653/v1/D19-6004",
pages = "33--42",
abstract = "Pretrained language models, such as BERT and RoBERTa, have shown large improvements in the commonsense reasoning benchmark COPA. However, recent work found that many improvements in benchmarks of natural language understanding are not due to models learning the task, but due to their increasing ability to exploit superficial cues, such as tokens that occur more often in the correct answer than the wrong one. Are BERT{'}s and RoBERTa{'}s good performance on COPA also caused by this? We find superficial cues in COPA, as well as evidence that BERT exploits these cues.To remedy this problem, we introduce Balanced COPA, an extension of COPA that does not suffer from easy-to-exploit single token cues. We analyze BERT{'}s and RoBERTa{'}s performance on original and Balanced COPA, finding that BERT relies on superficial cues when they are present, but still achieves comparable performance once they are made ineffective, suggesting that BERT learns the task to a certain degree when forced to. In contrast, RoBERTa does not appear to rely on superficial cues.",
}
@inproceedings{roemmele2011choice,
title={Choice of plausible alternatives: An evaluation of commonsense causal reasoning},
author={Roemmele, Melissa and Bejan, Cosmin Adrian and Gordon, Andrew S},
booktitle={2011 AAAI Spring Symposium Series},
year={2011},
url={https://people.ict.usc.edu/~gordon/publications/AAAI-SPRING11A.PDF},
}
```
### Contributions
Thanks to [@pkavumba](https://github.com/pkavumba) for adding this dataset.
| pkavumba/balanced-copa | [
"task_categories:question-answering",
"task_ids:multiple-choice-qa",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:extended|copa",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-10-02T23:33:09+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["extended|copa"], "task_categories": ["question-answering"], "task_ids": ["multiple-choice-qa"], "pretty_name": "BCOPA"} | 2022-10-02T23:39:01+00:00 |
22070db560e13c40e8035108e3f965dc86243273 | Nithiwat/claimbuster | [
"license:cc-by-sa-4.0",
"region:us"
] | 2022-10-03T01:01:28+00:00 | {"license": "cc-by-sa-4.0"} | 2022-10-03T01:19:55+00:00 |
|
349a71353fd5868fb90b593ef09e311379da498a |
# Dataset Card for The Stack

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Changelog](#changelog)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use it](#how-to-use-it)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
- [Terms of Use for The Stack](#terms-of-use-for-the-stack)
## Dataset Description
- **Homepage:** https://www.bigcode-project.org/
- **Repository:** https://github.com/bigcode-project
- **Paper:** https://arxiv.org/abs/2211.15533
- **Leaderboard:** N/A
- **Point of Contact:** [email protected]
### Changelog
|Release|Description|
|-|-|
|v1.0| Initial release of the Stack. Included 30 programming languages and 18 permissive licenses. **Note:** Three included licenses (MPL/EPL/LGPL) are considered weak copyleft licenses. The resulting near-deduplicated dataset is 3TB in size. |
|v1.1| The three weak copyleft licenses (MPL/EPL/LGPL) were excluded and the list of permissive licenses was extended to 193 licenses in total. The list of programming languages was increased from 30 to 358 languages. Also, opt-out requests submitted by 15.11.2022 were excluded from this version of the dataset. The resulting near-deduplicated dataset is 6TB in size.|
|v1.2| Opt-out requests submitted by 09.02.2023 were excluded from this version of the dataset, as well as initially flagged malicious files (not exhaustive).|
### Dataset Summary
The Stack contains over 6TB of permissively-licensed source code files covering 358 programming languages. The dataset was created as part of the [BigCode Project](https://www.bigcode-project.org/), an open scientific collaboration working on the responsible development of Large Language Models for Code (Code LLMs). The Stack serves as a pre-training dataset for Code LLMs, i.e., code-generating AI systems which enable the synthesis of programs from natural language descriptions as well as from other code snippets.
### Supported Tasks and Leaderboards
The Stack is a pre-training dataset for creating code LLMs. Code LLMs can be used for a wide variety of downstream tasks such as code completion from natural language descriptions ([HumanEval](https://huggingface.co/datasets/openai_humaneval), [MBPP](https://huggingface.co/datasets/mbpp)), documentation generation for individual functions ([CodeSearchNet](https://huggingface.co/datasets/code_search_net)), and auto-completion of code snippets ([HumanEval-Infilling](https://github.com/openai/human-eval-infilling)). However, these downstream evaluation benchmarks are outside the scope of The Stack.
### Languages
The following natural languages appear in the comments and docstrings from files in the dataset: EN, ZH, FR, PT, ES, RU, DE, KO, JA, UZ, IT, ID, RO, AR, FA, CA, HU, ML, NL, TR, TE, EL, EO, BN, LV, GL, PL, GU, CEB, IA, KN, SH, MK, UR, SV, LA, JKA, MY, SU, CS, MN. This kind of data is essential for applications such as documentation generation and natural-language-to-code translation.
The dataset contains **358 programming languages**. The full list can be found [here](https://huggingface.co/datasets/bigcode/the-stack/blob/main/programming-languages.json); the 30 languages of the initial v1.0 release are shown below as an example:
```
"assembly", "batchfile", "c++", "c", "c-sharp", "cmake", "css", "dockerfile", "fortran", "go", "haskell", "html", "java",
"javascript", "julia", "lua", "makefile", "markdown", "perl", "php", "powershell", "python", "ruby", "rust",
"scala", "shell", "sql", "tex", "typescript", "visual-basic"
```
### How to use it
```python
from datasets import load_dataset
# full dataset (over 6TB of data)
ds = load_dataset("bigcode/the-stack", split="train")
# specific language (e.g. Dockerfiles)
ds = load_dataset("bigcode/the-stack", data_dir="data/dockerfile", split="train")
# dataset streaming (will only download the data as needed)
ds = load_dataset("bigcode/the-stack", streaming=True, split="train")
for sample in iter(ds): print(sample["content"])
```
## Dataset Structure
### Data Instances
Each data instance corresponds to one file. The content of the file is in the `content` feature, and other features (`repository_name`, `licenses`, etc.) provide some metadata. Note that a given file can appear in several different repositories that satisfy our safe-license criterion. If that is the case, only the first of these repositories (in alphabetical order) is shown for simplicity.
### Data Fields
- `content` (string): the content of the file.
- `size` (integer): size of the uncompressed file.
- `lang` (string): the programming language.
- `ext` (string): file extension
- `avg_line_length` (float): the average line-length of the file.
- `max_line_length` (integer): the maximum line-length of the file.
- `alphanum_fraction` (float): the fraction of characters in the file that are alphabetical or numerical characters.
- `hexsha` (string): unique git hash of file
- `max_{stars|forks|issues}_repo_path` (string): path to file in repo containing this file with maximum number of `{stars|forks|issues}`
- `max_{stars|forks|issues}_repo_name` (string): name of repo containing this file with maximum number of `{stars|forks|issues}`
- `max_{stars|forks|issues}_repo_head_hexsha` (string): hexsha of repository head
- `max_{stars|forks|issues}_repo_licenses` (string): licenses in repository
- `max_{stars|forks|issues}_count` (integer): number of `{stars|forks|issues}` in repository
- `max_{stars|forks|issues}_repo_{stars|forks|issues}_min_datetime` (string): first timestamp of a `{stars|forks|issues}` event
- `max_{stars|forks|issues}_repo_{stars|forks|issues}_max_datetime` (string): last timestamp of a `{stars|forks|issues}` event
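As an illustration of how these fields can be used, here is a small sketch that streams one language subset and keeps only small files. The `data_dir` value follows the `data/<language>` pattern shown in "How to use it" above, the 10 kB threshold is arbitrary, and access to the gated repository is assumed:
```python
from datasets import load_dataset

# Stream the Python subset so nothing is downloaded up front
# (data_dir follows the "data/<language>" pattern used above).
ds = load_dataset("bigcode/the-stack", data_dir="data/python",
                  streaming=True, split="train")

# Keep only files smaller than 10 kB, using the `size` field.
for sample in ds:
    if sample["size"] < 10_000:
        print(sample["max_stars_repo_name"], sample["ext"], sample["size"])
        break  # stop after the first small file
```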
### Data Splits
The dataset has no splits and all data is loaded as the train split by default. If you want to set up a custom train-test split, beware that the dataset contains a lot of near-duplicates, which can cause leakage into the test split.
## Dataset Creation
### Curation Rationale
One of the challenges faced by researchers working on code LLMs is the lack of openness and transparency around the development of these systems. Most prior works described the high-level data collection process but did not release the training data. It is therefore difficult for other researchers to fully reproduce these models and understand what kind of pre-training data leads to high-performing code LLMs. By releasing an open large-scale code dataset we hope to make training of code LLMs more reproducible.
### Source Data
#### Initial Data Collection and Normalization
220.92M active GitHub repository names were collected from the event archives published between January 1st, 2015 and March 31st, 2022 on [GHArchive](https://gharchive.org/). Only 137.36M of these repositories were public and accessible on GitHub – others were not accessible as they had been deleted by their owners. 51.76B files were downloaded from the public repositories on GitHub between November 2021 and June 2022. 5.28B files were unique. The uncompressed size of all stored files is 92.36TB.
The list of programming language extensions is taken from this [list](https://gist.github.com/ppisarczyk/43962d06686722d26d176fad46879d41) (also provided in Appendix C of the paper).
Near-deduplication was implemented in the pre-processing pipeline on top of exact deduplication. To find near-duplicates, MinHash signatures with 256 permutations were computed for all documents in linear time. Locality Sensitive Hashing was then used to find clusters of duplicates, and Jaccard similarities were computed inside these clusters, with a similarity threshold of 0.85, to remove any false positives. Roughly 40% of permissively licensed files were (near-)duplicates. See section 3 of the paper for further details.
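For readers unfamiliar with this kind of pipeline, below is a minimal near-duplicate detection sketch using the `datasketch` library. It mirrors the description above (MinHash with 256 permutations, LSH, a 0.85 Jaccard threshold) but is only an illustrative approximation, not the exact implementation used for The Stack.
```python
from datasketch import MinHash, MinHashLSH

def minhash(text: str, num_perm: int = 256) -> MinHash:
    """MinHash signature over the set of whitespace tokens of a file."""
    m = MinHash(num_perm=num_perm)
    for token in set(text.split()):
        m.update(token.encode("utf-8"))
    return m

files = {
    "a.py": "import os\nprint(os.getcwd())",
    "b.py": "import os\nprint(os.getcwd())",      # duplicate of a.py
    "c.py": "def add(x, y):\n    return x + y",
}
signatures = {name: minhash(content) for name, content in files.items()}

# LSH finds candidate clusters in (roughly) linear time; the Jaccard check
# inside each cluster removes false positives at the 0.85 threshold.
lsh = MinHashLSH(threshold=0.85, num_perm=256)
near_duplicates = set()
for name, sig in signatures.items():
    for candidate in lsh.query(sig):
        if signatures[candidate].jaccard(sig) >= 0.85:
            near_duplicates.add(name)
    lsh.insert(name, sig)

print(near_duplicates)  # {'b.py'}: flagged as a near-duplicate of a.py
```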
The following are not stored:
- Files that cannot contribute to training code: binary files, empty files, and files that could not be decoded
- Files larger than 1MB
- The excluded file extensions are listed in Appendix B of the paper.
##### License detection
Permissive licenses have minimal restrictions on how the software can be copied, modified, and redistributed. The full list of licenses can be found [here](https://huggingface.co/datasets/bigcode/the-stack-dedup/blob/main/licenses.json).
GHArchive contained the license information for approximately 12% of the collected repositories. For the remaining repositories, [go-license-detector](https://github.com/src-d/go-license-detector) was run to detect the most likely SPDX license identifier. The detector did not detect a license for ~81% of the repositories, in which case the repository was excluded from the dataset.
A file was included in the safe license dataset if at least one of the repositories containing the file had a permissive license.
#### Who are the source language producers?
The source (code) language producers are users of GitHub that created unique repository names between January 1st, 2015, and March 31st, 2022.
### Personal and Sensitive Information
The released dataset may contain sensitive information such as emails, IP addresses, and API/ssh keys that have previously been published to public repositories on GitHub. Deduplication has helped to reduce the amount of sensitive data that may exist. In the event that the dataset contains personal information, researchers should only use public, non-personal information in support of conducting and publishing their [open-access](https://en.wikipedia.org/wiki/Open_access) research. Personal information should not be used for spamming purposes, including sending unsolicited emails or selling of personal information. Complaints, removal requests, and "do not contact" requests can be sent to [email protected].
The PII pipeline for this dataset is still a work in progress (see this [issue](https://github.com/bigcode-project/admin/issues/9) for updates). Researchers that wish to contribute to the anonymization pipeline of the project can apply to join [here](https://www.bigcode-project.org/docs/about/join/). Developers with source code in the dataset can request to have it removed [here](https://www.bigcode-project.org/docs/about/ip/) (proof of code contribution is required).
### Opting out of The Stack
We are giving developers the ability to have their code removed from the dataset upon request. The process for submitting and enacting removal requests will keep evolving throughout the project as we receive feedback and build up more data governance tools.
You can check if your code is in The Stack with the following ["Am I In The Stack?" Space](https://huggingface.co/spaces/bigcode/in-the-stack). If you'd like to have your data removed from the dataset follow the [instructions on GitHub](https://github.com/bigcode-project/opt-out-v2).
## Considerations for Using the Data
### Social Impact of Dataset
The Stack is an output of the BigCode Project. BigCode aims to be responsible by design and by default. The project is conducted in the spirit of Open Science, focused on the responsible development of LLMs for code.
With the release of The Stack, we aim to increase access, reproducibility, and transparency of code LLMs in the research community. Work to de-risk and improve on the implementation of ethical best practices of code LLMs is conducted in various BigCode working groups. The Legal, Ethics, and Governance working group has explored topics such as licensing (including copyleft and the intended use of permissively licensed code), attribution of generated code to original code, rights to restrict processing, the inclusion of Personally Identifiable Information (PII), and risks of malicious code, among other topics. This work is ongoing as of October 25th, 2022.
We expect code LLMs to enable people from diverse backgrounds to write higher quality code and develop low-code applications. Mission-critical software could become easier to maintain as professional developers are guided by code-generating systems on how to write more robust and efficient code. While the social impact is intended to be positive, the increased accessibility of code LLMs comes with certain risks such as over-reliance on the generated code and long-term effects on the software development job market.
A broader impact analysis relating to Code LLMs can be found in section 7 of this [paper](https://arxiv.org/abs/2107.03374). An in-depth risk assessment for Code LLMs can be found in section 4 of this [paper](https://arxiv.org/abs/2207.14157).
### Discussion of Biases
The code collected from GitHub does not contain demographic information or proxy information about the demographics. However, it is not without risks,
as the comments within the code may contain harmful or offensive language, which could be learned by the models.
Widely adopted programming languages like C and Javascript are overrepresented compared to niche programming languages like Julia and Scala. Some programming languages such as SQL, Batchfile, TypeScript are less likely to be permissively licensed (4% vs the average 10%). This may result in a biased representation of those languages. Permissively licensed files also tend to be longer.
Roughly 40 natural languages are present in docstrings and comments with English being the most prevalent. In python files, it makes up ~96% of the dataset.
For further information on data analysis of the Stack, see this [repo](https://github.com/bigcode-project/bigcode-analysis).
### Other Known Limitations
One of the current limitations of The Stack is that scraped HTML for websites may not be compliant with Web Content Accessibility Guidelines ([WCAG](https://www.w3.org/WAI/standards-guidelines/wcag/)). This could have an impact on HTML-generated code that may introduce web accessibility issues.
The training dataset could contain malicious code and/or the model could be used to generate malware or ransomware.
To the best of our knowledge, all files contained in the dataset are licensed with one of the permissive licenses (see list in [Licensing information](#licensing-information)). The accuracy of license attribution is limited by the accuracy of GHArchive and go-license-detector. Any mistakes should be reported to BigCode Project for review and follow-up as needed.
## Additional Information
### Dataset Curators
1. Harm de Vries, ServiceNow Research, [email protected]
2. Leandro von Werra, Hugging Face, [email protected]
### Licensing Information
The Stack is a collection of source code from repositories with various licenses. Any use of all or part of the code gathered in The Stack must abide by the terms of the original licenses, including attribution clauses when relevant. We facilitate this by providing provenance information for each data point.
The list of [SPDX license identifiers](https://spdx.org/licenses/) included in the dataset can be found [here](https://huggingface.co/datasets/bigcode/the-stack/blob/main/licenses.json).
### Citation Information
```
@article{Kocetkov2022TheStack,
title={The Stack: 3 TB of permissively licensed source code},
author={Kocetkov, Denis and Li, Raymond and Ben Allal, Loubna and Li, Jia and Mou,Chenghao and Muñoz Ferrandis, Carlos and Jernite, Yacine and Mitchell, Margaret and Hughes, Sean and Wolf, Thomas and Bahdanau, Dzmitry and von Werra, Leandro and de Vries, Harm},
journal={Preprint},
year={2022}
}
```
### Contributions
[More Information Needed]
## Terms of Use for The Stack
The Stack dataset is a collection of source code in over 300 programming languages. We ask that you read and acknowledge the following points before using the dataset:
1. The Stack is a collection of source code from repositories with various licenses. Any use of all or part of the code gathered in The Stack must abide by the terms of the original licenses, including attribution clauses when relevant. We facilitate this by providing provenance information for each data point.
2. The Stack is regularly updated to enact validated data removal requests. By clicking on "Access repository", you agree to update your own version of The Stack to the most recent usable version specified by the maintainers in [the following thread](https://huggingface.co/datasets/bigcode/the-stack/discussions/7). If you have questions about dataset versions and allowed uses, please also ask them in the dataset’s [community discussions](https://huggingface.co/datasets/bigcode/the-stack/discussions/new). We will also notify users via email when the latest usable version changes.
3. To host, share, or otherwise provide access to The Stack dataset, you must include these Terms of Use and require users to agree to it.
| bigcode/the-stack | [
"task_categories:text-generation",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:unknown",
"language:code",
"license:other",
"arxiv:2211.15533",
"arxiv:2107.03374",
"arxiv:2207.14157",
"region:us"
] | 2022-10-03T02:34:54+00:00 | {"annotations_creators": [], "language_creators": ["crowdsourced", "expert-generated"], "language": ["code"], "license": ["other"], "multilinguality": ["multilingual"], "size_categories": ["unknown"], "source_datasets": [], "task_categories": ["text-generation"], "task_ids": [], "pretty_name": "The-Stack", "extra_gated_prompt": "## Terms of Use for The Stack\n\nThe Stack dataset is a collection of source code in over 300 programming languages. We ask that you read and acknowledge the following points before using the dataset:\n1. The Stack is a collection of source code from repositories with various licenses. Any use of all or part of the code gathered in The Stack must abide by the terms of the original licenses, including attribution clauses when relevant. We facilitate this by providing provenance information for each data point.\n2. The Stack is regularly updated to enact validated data removal requests. By clicking on \"Access repository\", you agree to update your own version of The Stack to the most recent usable version specified by the maintainers in [the following thread](https://huggingface.co/datasets/bigcode/the-stack/discussions/7). If you have questions about dataset versions and allowed uses, please also ask them in the dataset\u2019s [community discussions](https://huggingface.co/datasets/bigcode/the-stack/discussions/new). We will also notify users via email when the latest usable version changes.\n3. To host, share, or otherwise provide access to The Stack dataset, you must include [these Terms of Use](https://huggingface.co/datasets/bigcode/the-stack#terms-of-use-for-the-stack) and require users to agree to it.\n\nBy clicking on \"Access repository\" below, you accept that your contact information (email address and username) can be shared with the dataset maintainers as well.\n ", "extra_gated_fields": {"Email": "text", "I have read the License and agree with its terms": "checkbox"}} | 2023-04-13T11:15:50+00:00 |
8773546d3ab6da40447285488e8383c70b3e4a08 | lzkhit/images | [
"license:apache-2.0",
"region:us"
] | 2022-10-03T03:24:51+00:00 | {"license": "apache-2.0"} | 2022-10-03T03:26:50+00:00 |
|
109751ea3525d065be4b3684519b9636c95baaf9 | Kint/oph | [
"region:us"
] | 2022-10-03T04:56:37+00:00 | {} | 2022-10-03T05:08:18+00:00 |
|
f6e0fcd3a4171e2a9a2656f58cb50b9aba5fbba5 |
# Dataset Card for BLURB
## Dataset Description
- **Homepage:** https://microsoft.github.io/BLURB/tasks.html
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER
BLURB is a collection of resources for biomedical natural language processing.
In general domains, such as newswire and the Web, comprehensive benchmarks and
leaderboards such as GLUE have greatly accelerated progress in open-domain NLP.
In biomedicine, however, such resources are ostensibly scarce. In the past,
there have been a plethora of shared tasks in biomedical NLP, such as
BioCreative, BioNLP Shared Tasks, SemEval, and BioASQ, to name just a few. These
efforts have played a significant role in fueling interest and progress by the
research community, but they typically focus on individual tasks. The advent of
neural language models, such as BERT provides a unifying foundation to leverage
transfer learning from unlabeled text to support a wide range of NLP
applications. To accelerate progress in biomedical pretraining strategies and
task-specific methods, it is thus imperative to create a broad-coverage
benchmark encompassing diverse biomedical tasks.
Inspired by prior efforts toward this direction (e.g., BLUE), we have created
BLURB (short for Biomedical Language Understanding and Reasoning Benchmark).
BLURB comprises of a comprehensive benchmark for PubMed-based biomedical NLP
applications, as well as a leaderboard for tracking progress by the community.
BLURB includes thirteen publicly available datasets in six diverse tasks. To
avoid placing undue emphasis on tasks with many available datasets, such as
named entity recognition (NER), BLURB reports the macro average across all tasks
as the main score. The BLURB leaderboard is model-agnostic. Any system capable
of producing the test predictions using the same training and development data
can participate. The main goal of BLURB is to lower the entry barrier in
biomedical NLP and help accelerate progress in this vitally important field for
positive societal and human impact.
This implementation contains a subset of 5 tasks as of 2022.10.06, with their original train, dev, and test splits.
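As a small illustration of the scoring scheme described above, the snippet below computes the macro (unweighted) average over per-task scores; the task names and numbers are purely illustrative, not official results.
```python
# Hypothetical per-task test scores (e.g., entity-level F1 for NER tasks).
task_scores = {
    "task_1": 0.93,
    "task_2": 0.85,
    "task_3": 0.88,
    "task_4": 0.84,
    "task_5": 0.79,
}

# BLURB reports the unweighted (macro) average across tasks as the main score.
blurb_score = sum(task_scores.values()) / len(task_scores)
print(f"Macro average: {blurb_score:.3f}")
```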
## Citation Information
```
@article{gu2021domain,
title = {
Domain-specific language model pretraining for biomedical natural
language processing
},
author = {
Gu, Yu and Tinn, Robert and Cheng, Hao and Lucas, Michael and
Usuyama, Naoto and Liu, Xiaodong and Naumann, Tristan and Gao,
Jianfeng and Poon, Hoifung
},
year = 2021,
journal = {ACM Transactions on Computing for Healthcare (HEALTH)},
publisher = {ACM New York, NY},
volume = 3,
number = 1,
pages = {1--23}
}
```
| bigbio/blurb | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | 2022-10-03T05:19:58+00:00 | {"language": ["en"], "license": "other", "multilinguality": "monolingual", "pretty_name": "BLURB", "bigbio_language": ["English"], "bigbio_license_shortname": "MIXED", "homepage": "https://microsoft.github.io/BLURB/tasks.html", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION"]} | 2022-12-22T15:27:48+00:00 |
3e395fa9420dd2c3389e541b073228ed2a8e3f9e | RAMILISON/rajo | [
"license:apache-2.0",
"region:us"
] | 2022-10-03T06:04:27+00:00 | {"license": "apache-2.0"} | 2022-10-03T12:15:44+00:00 |
|
203a83696a54ec5a17ec6698884c32164f7293ee | kenobi/SDO | [
"task_categories:image-classification",
"task_categories:object-detection",
"task_categories:image-segmentation",
"task_categories:image-to-text",
"task_categories:image-to-image",
"task_categories:visual-question-answering",
"task_categories:zero-shot-image-classification",
"task_ids:multi-class-image-classification",
"task_ids:semantic-segmentation",
"task_ids:image-captioning",
"annotations_creators:expert-generated",
"language_creators:other",
"size_categories:n<1K",
"license:other",
"space research",
"solar research",
"heliophysics",
"region:us"
] | 2022-10-03T08:36:47+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["other"], "language": [], "license": ["other"], "multilinguality": [], "size_categories": ["n<1K"], "source_datasets": [], "task_categories": ["image-classification", "object-detection", "image-segmentation", "image-to-text", "image-to-image", "visual-question-answering", "zero-shot-image-classification"], "task_ids": ["multi-class-image-classification", "semantic-segmentation", "image-captioning"], "pretty_name": "SDO", "tags": ["space research", "solar research", "heliophysics"]} | 2022-10-03T08:46:38+00:00 |
|
51730024741118e61660cd16fae1c046c053a769 | SaintSpirit/bermejo | [
"license:cc-by-nc-4.0",
"region:us"
] | 2022-10-03T09:34:29+00:00 | {"license": "cc-by-nc-4.0"} | 2022-10-03T10:15:13+00:00 |
|
7737764fa91d8c2ad4da96327338df189a5e2806 | SaintSpirit/images | [
"license:cc-by-nd-4.0",
"region:us"
] | 2022-10-03T10:18:39+00:00 | {"license": "cc-by-nd-4.0"} | 2022-10-03T11:11:41+00:00 |
|
ad333614b54a9181abc95fdfd09688a0ab2cf4c7 | Konst12/1 | [
"region:us"
] | 2022-10-03T13:23:06+00:00 | {} | 2022-12-05T21:23:18+00:00 |
|
01b1e39f74eed0e5af70b76140f91f0311aa7ade | VENF/me | [
"license:openrail",
"region:us"
] | 2022-10-03T16:44:01+00:00 | {"license": "openrail"} | 2022-10-03T16:47:18+00:00 |
|
1831a38f305741dc7790ace1f2142838a18c8a56 | Santta/SantasDB | [
"license:afl-3.0",
"region:us"
] | 2022-10-03T16:58:31+00:00 | {"license": "afl-3.0"} | 2022-10-04T13:46:20+00:00 |
|
a5599d85efeeffeab2c512a02ced7c7a5bae05f2 |
# Dataset Card for Lex Fridman Podcasts Dataset
This dataset is sourced from Andrej Karpathy's [Lexicap website](https://karpathy.ai/lexicap/), which contains English transcripts of Lex Fridman's wonderful podcast episodes. The transcripts were generated using OpenAI's large-sized [Whisper model](https://github.com/openai/whisper). | RamAnanth1/lex-fridman-podcasts | [
"task_categories:text-classification",
"task_categories:text-generation",
"task_categories:summarization",
"task_ids:sentiment-analysis",
"task_ids:dialogue-modeling",
"task_ids:language-modeling",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:n<1K",
"language:en",
"region:us"
] | 2022-10-03T17:24:26+00:00 | {"language_creators": ["found"], "language": ["en"], "license": [], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "task_categories": ["text-classification", "text-generation", "summarization"], "task_ids": ["sentiment-analysis", "dialogue-modeling", "language-modeling"], "pretty_name": "Lex Fridman Podcasts ", "lexicap": ["found"]} | 2022-12-17T21:39:56+00:00 |
e4c709f0aabb87a51a775f9319d3ee919cbe40d6 | LiveEvil/deepfacev1 | [
"license:mit",
"region:us"
] | 2022-10-03T18:05:59+00:00 | {"license": "mit"} | 2022-10-03T18:05:59+00:00 |
|
3f828259fe9e47479be8a275f40368d37c42b1e7 | Pre-trained models and other files associated with the RNNG BrainScore repo. Check out the GitHub at https://github.com/benlipkin/rnng | benlipkin/rnng-brainscore | [
"license:mit",
"region:us"
] | 2022-10-03T18:36:07+00:00 | {"license": "mit"} | 2022-11-09T15:02:11+00:00 |
1c4a8df556d922bfc7f65cbfdf2b3d9804a69052 | Hamiltonhog/Dalap | [
"license:other",
"region:us"
] | 2022-10-03T18:48:27+00:00 | {"license": "other"} | 2022-10-03T19:10:33+00:00 |
|
bcb26e69554574d87cc8286ed42b028183d0fc55 |
# Dataset Card for PP4AV
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Dataset folder](#folder)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Baseline Model](#baseline-model)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/khaclinh/pp4av
- **Repository:** https://github.com/khaclinh/pp4av
- **Baseline model:** https://huggingface.co/spaces/khaclinh/self-driving-anonymization
- **Paper:** [PP4AV: A benchmarking Dataset for Privacy-preserving Autonomous Driving]
- **Point of Contact:** [email protected]
### Dataset Summary
PP4AV is the first public dataset with faces and license plates annotated in driving scenarios. PP4AV provides 3,447 annotated driving images for both faces and license plates. For normal camera data, images were sampled from existing videos in which cameras were mounted in moving vehicles driving around European cities. The images in PP4AV were sampled from 6 European cities at various times of day, including nighttime. For fisheye camera data, 244 images were selected from the front, rear, left, and right cameras of the WoodScape dataset. PP4AV can be used as a benchmark suite (an evaluation dataset) for data anonymization models in autonomous driving.
### Languages
English
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
The objective of PP4AV is to build a benchmark dataset that can be used to evaluate face and license plate detection models for autonomous driving. For normal camera data, we sampled images from existing videos in which cameras were mounted in moving vehicles driving around European cities. We focus on sampling data in urban areas rather than highways in order to provide sufficient samples of license plates and pedestrians. The images in PP4AV were sampled from **6** European cities at various times of day, including nighttime. The source data from the 6 European cities is described as follows:
- `Paris`: This subset contains **1450** images of the car driving down a Parisian street during the day. The video frame rate is 30 frames per second. The video is longer than one hour. We cut a shorter video for sampling and annotation. The original video can be found at the following URL:
URL: [paris_youtube_video](https://www.youtube.com/watch?v=nqWtGWymV6c)
- `Netherland day time`: This subset consists of **388** images of The Hague and Amsterdam in the daytime. The images in this subset are sampled from the original video below:
URL: [netherland_youtube_video](https://www.youtube.com/watch?v=Xuo4uCZxNrE)
The frame rate of the video is 30 frames per second. We cut a shorter video for sampling and annotation. The original video was longer than a half hour.
- `Netherland night time`: This subset consists of **824** images of The Hague and Amsterdam at night, sampled from the following original video:
URL: [netherland_youtube_video](https://www.youtube.com/watch?v=eAy9eHsynhM)
The frame rate of the video is 30 frames per second. We cut a shorter video for sampling and annotation. The original video was longer than a half hour.
- `Switzerland`: This subset consists of **372** images of Switzerland, sampled from the following video:
URL: [switzerland_youtube_video](https://www.youtube.com/watch?v=0iw5IP94m0Q)
The frame rate of the video is 30 frames per second. We cut a shorter video for sampling and annotation. The original video was longer than one hour.
- `Zurich`: This subset consists of **50** images of Zurich city provided by the Cityscapes training set in package [leftImg8bit_trainvaltest.zip](https://www.cityscapes-dataset.com/file-handling/?packageID=3)
- `Stuttgart`: This subset consists of **69** images of Stuttgart city provided by the Cityscapes training set in package [leftImg8bit_trainvaltest.zip](https://www.cityscapes-dataset.com/file-handling/?packageID=3)
- `Strasbourg`: This subset consists of **50** images of Strasbourg city provided by the Cityscapes training set in package [leftImg8bit_trainvaltest.zip](https://www.cityscapes-dataset.com/file-handling/?packageID=3)
We use the fisheye images from the WoodScape dataset to select **244** images from the front, rear, left, and right cameras for fisheye camera data.
The source of fisheye data for sampling is located at WoodScape's [Fisheye images](https://woodscape.valeo.com/download).
In total, **3,447** images were selected and annotated in PP4AV.
### Annotations
#### Annotation process
Annotators annotated facial and license plate objects in images. For facial objects, bounding boxes are defined by all detectable human faces from the forehead to the chin and to the ears. Faces were labelled in diverse sizes and skin tones, including faces partially obscured by a transparent material such as a car windshield. For license plate objects, bounding boxes consist of all recognizable license plates with high variability, such as different sizes, countries, vehicle types (motorcycle, automobile, bus, truck), and occlusions by other vehicles. License plates were annotated for vehicles involved in moving traffic. To ensure annotation quality, a two-step process was used. In the first phase, two teams of annotators independently annotate identical image sets. After their annotation output is complete, a merging method based on the IoU scores between the two bounding boxes of the two annotations is applied. Pairs of annotations with IoU scores above a threshold are merged and saved as a single annotation; annotated pairs with IoU scores below the threshold are considered conflicting. In the second phase, two teams of reviewers inspect the conflicting pairs of annotations for revision before a second merging method, similar to the first, is applied. The results of these two phases are combined to form the final annotation. All work is conducted with the CVAT tool (https://github.com/openvinotoolkit/cvat).
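A minimal sketch of the IoU-based merging step described above. Boxes are in the normalized YOLO format used by this dataset; the 0.7 threshold and the choice to average agreeing boxes are illustrative assumptions, since the card does not specify them.
```python
def iou(box_a, box_b):
    """IoU of two boxes given as (x_center, y_center, width, height), normalized to [0, 1]."""
    def corners(b):
        xc, yc, w, h = b
        return xc - w / 2, yc - h / 2, xc + w / 2, yc + h / 2
    ax1, ay1, ax2, ay2 = corners(box_a)
    bx1, by1, bx2, by2 = corners(box_b)
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return inter / union if union > 0 else 0.0

def merge_pair(box_a, box_b, threshold=0.7):  # threshold value is an assumption
    """Merge two annotators' boxes if they agree; otherwise flag the pair as conflicting."""
    if iou(box_a, box_b) >= threshold:
        # Averaging the two annotations is one simple way to produce a single box.
        return tuple((a + b) / 2 for a, b in zip(box_a, box_b)), False
    return None, True  # conflicting pair, to be revisited in the review phase

merged, conflict = merge_pair((0.50, 0.40, 0.10, 0.08), (0.51, 0.40, 0.10, 0.09))
print(merged, conflict)
```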
#### Who are the annotators?
Vantix Data Science team
### Dataset Folder
The `data` folder contains the following files:
- `images.zip`: contains all preprocessed images of the PP4AV dataset. This `zip` file includes the folders below:
`fisheye`: folder containing 244 fisheye images in `.png` format
`zurich`: folder containing 50 images in `.png` format
`strasbourg`: folder containing 50 images in `.png` format
`stuttgart`: folder containing 69 images in `.png` format
`switzerland`: folder containing 372 images in `.png` format
`netherlands_day`: folder containing 388 images in `.png` format
`netherlands_night`: folder containing 824 images in `.png` format
`paris`: folder containing 1450 images in `.png` format
- `annotations.zip`: contains annotation data corresponding to the images in `images.zip`. This file includes the folders below:
`fisheye`: folder containing 244 `.txt` annotation files in `yolo v1.1` format, corresponding to the 244 fisheye images.
`zurich`: folder containing 50 `.txt` annotation files in `yolo v1.1` format, corresponding to the 50 images of the `zurich` subset.
`strasbourg`: folder containing 50 `.txt` annotation files in `yolo v1.1` format, corresponding to the 50 images of the `strasbourg` subset.
`stuttgart`: folder containing 69 `.txt` annotation files in `yolo v1.1` format, corresponding to the 69 images of the `stuttgart` subset.
`switzerland`: folder containing 372 `.txt` annotation files in `yolo v1.1` format, corresponding to the 372 images of the `switzerland` subset.
`netherlands_day`: folder containing 388 `.txt` annotation files in `yolo v1.1` format, corresponding to the 388 images of the `netherlands_day` subset.
`netherlands_night`: folder containing 824 `.txt` annotation files in `yolo v1.1` format, corresponding to the 824 images of the `netherlands_night` subset.
`paris`: folder containing 1450 `.txt` annotation files in `yolo v1.1` format, corresponding to the 1450 images of the `paris` subset.
- `soiling_annotations.zip`: contains raw annotation data without filtering. The folder structure in this file is similar to that of `annotations.zip`.
### Personal and Sensitive Information
[More Information Needed]
## Dataset Structure
### Data Instances
A data point comprises an image and its face and license plate annotations.
```
{
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=1920x1080 at 0x19FA12186D8>, 'objects': {
'bbox': [
[0 0.230078 0.317081 0.239062 0.331367],
[1 0.5017185 0.0306425 0.5185935 0.0410975],
[1 0.695078 0.0710145 0.7109375 0.0863355],
[1 0.4089065 0.31646 0.414375 0.32764],
[0 0.1843745 0.403416 0.201093 0.414182],
[0 0.7132 0.3393474 0.717922 0.3514285]
]
}
}
```
### Data Fields
- `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `objects`: a dictionary of face and license plate bounding boxes present on the image
- `bbox`: the bounding box of each face and license plate (in the [yolo](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#yolo) format). Basically, each row in annotation `.txt` file for each image `.png` file consists of data in format: `<object-class> <x_center> <y_center> <width> <height>`:
- `object-class`: integer number of object from 0 to 1, where 0 indicate face object, and 1 indicate licese plate object
- `x_center`: normalized x-axis coordinate of the center of the bounding box.
`x_center = <absolute_x_center> / <image_width>`
- `y_center`: normalized y-axis coordinate of the center of the bounding box.
`y_center = <absolute_y_center> / <image_height>`
- `width`: normalized width of the bounding box.
`width = <absolute_width> / <image_width>`
- `height`: normalized height of the bounding box.
`height = <absolute_height> / <image_height>`
- Example lines in a YOLO v1.1 format `.txt` annotation file (a parsing sketch follows below):
```
1 0.716797 0.395833 0.216406 0.147222
0 0.687109 0.379167 0.255469 0.158333
1 0.420312 0.395833 0.140625 0.166667
```
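A small parsing sketch that converts one such annotation line back to absolute pixel coordinates; the 1920x1080 image size matches the example in the Data Instances section.
```python
def yolo_to_pixels(line: str, img_w: int = 1920, img_h: int = 1080):
    """Convert '<object-class> <x_center> <y_center> <width> <height>' to pixel corners."""
    cls, xc, yc, w, h = line.split()
    cls = int(cls)  # 0 = face, 1 = license plate
    xc, yc, w, h = (float(v) for v in (xc, yc, w, h))
    x1, y1 = (xc - w / 2) * img_w, (yc - h / 2) * img_h
    x2, y2 = (xc + w / 2) * img_w, (yc + h / 2) * img_h
    return cls, (round(x1), round(y1), round(x2), round(y2))

print(yolo_to_pixels("1 0.716797 0.395833 0.216406 0.147222"))
```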
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Baseline Model
Pretrained weights and a demo of the baseline model are available in the [self-driving-anonymization Hugging Face Space](https://huggingface.co/spaces/khaclinh/self-driving-anonymization)
### Dataset Curators
Linh Trinh
### Licensing Information
[Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)](https://creativecommons.org/licenses/by-nc-nd/4.0/).
### Citation Information
```
@inproceedings{PP4AV2022,
  title = {PP4AV: A benchmarking Dataset for Privacy-preserving Autonomous Driving},
  author = {Linh Trinh and Phuong Pham and Hoang Trinh and Nguyen Bach and Dung Nguyen and Giang Nguyen and Huy Nguyen},
  booktitle = {IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  year = {2023}
}
```
### Contributions
Thanks to [@khaclinh](https://github.com/khaclinh) for adding this dataset.
| khaclinh/pp4av | [
"task_categories:object-detection",
"task_ids:face-detection",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended",
"language:en",
"license:cc-by-nc-nd-4.0",
"license-plate-detection",
"region:us"
] | 2022-10-03T19:28:17+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-nc-nd-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["extended"], "task_categories": ["object-detection"], "task_ids": ["face-detection"], "pretty_name": "PP4AV", "tags": ["license-plate-detection"]} | 2022-10-26T03:19:10+00:00 |
89493c7c73b6191186ce3f5ea92a3b9d2398cc91 | amynechiban/chibano | [
"license:openrail",
"region:us"
] | 2022-10-03T19:36:05+00:00 | {"license": "openrail"} | 2022-10-03T19:46:48+00:00 |
|
c9f2154be6ce8a9b9c3b6dd00b05ca4117a5e400 | # AutoTrain Dataset for project: fake-news
## Dataset Description
This dataset has been automatically processed by AutoTrain for project fake-news.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"feat_author": "Brett Macdonald",
"feat_published": "2016-10-28T00:58:00.000+03:00",
"feat_title": "breaking hillary just lost the black vote trump is going all the way to the white house",
"text": "dean james americas freedom fighters \nlast week the pentagon issued a defense department directive that allows department of defense dd personnel to carry firearms and employ deadly force while performing official duties \nthe defense department has been working on changing the gunfree zones on domestic military basis for several years in light of the deadly shootings at military sites in recent years \nmilitarycom reports that the directive also provides detailed guidance to the services for permitting soldiers sailors airmen marines and coast guard personnel to carry privately owned firearms on dod property it authorizes commanders and aboveto grant permission to dod personnel requesting to carry a privately owned firearm concealed or open carry on dod property for a personal protection purpose not related to performance of an official duty or status \nthe directive also makes clear that dod will consider further changes to grant standard authorizations for other dod personnel who are trained in the scaled use of force or who have been previously qualified to use a governmentissued firearm to carry a firearm in the performance of official duties on dod property this would allow dod with certain combat training to carry firearms without going through the additional step of making application with a commander \nkim smith at conservative tribune notes that the policy was a response to an nrabacked provision in the national defense authorization act that required the defense department to allow more service members to carry firearms on base \nit is a good first step in that it recognizes personal protection is a valid issue for service members but there are many roadblocks in the way of making that option available nra spokeswoman jennifer baker told the washington free beacon \nthose wishing to apply for permission to carry a firearm must be at least years old and meet all federal state and local laws the directive said \nit would appear that the pentagon saw no problems with implementing a policy for which presidentelect donald trump has expressed support \npresidentelect donald trump ran on removing gunfree zones from military bases on july breitbart news reported that trump pledged to end the gunfree scenarios for us troops by mandating that soldiers remain armed and on alert at our military bases \nthe immediate institution of this directive probably left president barack obama incensed but he undoubtedly realized that there was nothing he could do to prevent its implementation in a couple of months anyway and thats good news because it works to ensure the safety of our troops which should always be a priority \nlet us know what you think about this in the comments below \ngod bless",
"feat_language": "english",
"feat_site_url": "americasfreedomfighters.com",
"feat_main_img_url": "http://www.americasfreedomfighters.com/wp-content/uploads/2016/10/22-1.jpg",
"feat_type": "bs",
"target": 0,
"feat_title_without_stopwords": "breaking hillary lost black vote trump going way white house",
"feat_text_without_stopwords": "dean james americas freedom fighters last week pentagon issued defense department directive allows department defense dd personnel carry firearms employ deadly force performing official duties defense department working changing gunfree zones domestic military basis several years light deadly shootings military sites recent years militarycom reports directive also provides detailed guidance services permitting soldiers sailors airmen marines coast guard personnel carry privately owned firearms dod property authorizes commanders aboveto grant permission dod personnel requesting carry privately owned firearm concealed open carry dod property personal protection purpose related performance official duty status directive also makes clear dod consider changes grant standard authorizations dod personnel trained scaled use force previously qualified use governmentissued firearm carry firearm performance official duties dod property would allow dod certain combat training carry firearms without going additional step making application commander kim smith conservative tribune notes policy response nrabacked provision national defense authorization act required defense department allow service members carry firearms base good first step recognizes personal protection valid issue service members many roadblocks way making option available nra spokeswoman jennifer baker told washington free beacon wishing apply permission carry firearm must least years old meet federal state local laws directive said would appear pentagon saw problems implementing policy presidentelect donald trump expressed support presidentelect donald trump ran removing gunfree zones military bases july breitbart news reported trump pledged end gunfree scenarios us troops mandating soldiers remain armed alert military bases immediate institution directive probably left president barack obama incensed undoubtedly realized nothing could prevent implementation couple months anyway thats good news works ensure safety troops always priority let us know think comments god bless",
"feat_hasImage": 1.0
},
{
"feat_author": "Joel Ross Taylor",
"feat_published": "2016-10-26T22:46:37.443+03:00",
"feat_title": "no title",
"text": "announcement \nthe wrh server continues to be under intense attack by hillarys tantrum squad \nbut the site keeps bouncing back so if during the day you cannot connect wait a minute or two and try again thank you for your patience it is obvious the bad guys are in a state of total panic to act like this thought for the day we seek peace knowing that peace is the climate of freedom dwight d eisenhower your random dhs monitored phrase of the day dera \npaid advertising at what really happened may not represent the views and opinions of this website and its contributors no endorsement of products and services advertised is either expressed or implied \nhillary the spy updated info \nlet us start with an historical fact treason and betrayal by the highest levels is a common feature of history whether it is judas vs jesus brutus vs julius caesar benedict arnold the rosenbergs jonathan pollard aldrich ames robert hanssen it is just a fact of life it does happen \nback in when bill clinton was running for reelection he authorized the transfer of highly sensitive technology to china this technology had military applications and allowed china to close the gap in missile performance with the united states the transfers were opposed and severely criticized by the defense department \nat the same time bill clinton was transferring this technology to china huge donations began to pour into his reelection campaign from the us companies allowed to sell the technology to china and from american citizens of chinese descent the fact that they were us citizens allowed them to donate to political campaigns but it later emerged that they were acting as conduits for cash coming in from asian sources including chinese intelligence agencies the scandal eventually became known as chinagate \njohn huang \na close associate of indonesian industrialist james riady huang initially was appointed deputy secretary of commerce in by however he moved to the democratic national committee where he generated hundreds of thousands of dollars in illegal contributions from foreign sources huang later pleaded guilty to one felony count of campaign finance violations \ncharlie trie \nlike john huang trie raised hundreds of thousands of dollars in illegal contributions from foreign sources to democratic campaign entities he was a regular white house visitor and arranged meetings of foreign operators with clinton including one who was a chinese arms dealer his contribution to clintons legal defense fund was returned after it was found to have been largely funded by asian interests trie was convicted of violating campaign finance laws in \none of tries main sources of cash was chinese billionaire ng lap seng according to a senate report ng lap seng had connections to the chinese government seng was arrested in over an unrelated bribery case but this gave investigators the opportunity to question seng about the chinagate scandal former united nations general assembly president john ashe was also caught in the bribery case and was about to testify to the links between the clintons and seng when he was found dead that very morning initially reported as having died from a heart attack johns throat had obviously been crushed at that point the official story changed to him accidentally dropping a barbell on his own throat \nng lap seng with the clintons \njohnny chung \ngave more than to the democratic national committee prior to the campaign but it was returned after officials learned it came from illegal foreign sources chung 
later told a special senate committee investigating clinton campaign fundraising that of his contributions came from individuals in chinese intelligence chung pleaded guilty to bank fraud tax evasion and campaign finance violations \nchinagate documented by judicial watch was uncovered by judicial watch founder larry klayman technology companies allegedly made donations of millions of dollars to various democratic party entities including president bill clintons reelection campaign in return for permission to sell hightech secrets to china bernard schwartz and his loral space communication ltd later allegedly helped china to identify the cause of a rocket failure thereby advancing chinas missile program and threatening us national security according to records \nthis establishes a history of the clintons treating us secrets as their own personal property and selling them to raise money for campaigns \nis history repeating itself it appears so \nlet us consider a private email server with weak security at least one known totally open access point no encryption at all and outside the control and monitoring systems of the us government on which are parked many of the nations most closely guarded secrets as well as those of the united nations and other foreign governments it is already established that hillarys email was hacked one hacker named guccifer provided copies of emails to russia today which published them",
"feat_language": "english",
"feat_site_url": "westernjournalism.com",
"feat_main_img_url": "http://static.westernjournalism.com/wp-content/uploads/2016/10/earnest-obama.jpg",
"feat_type": "bias",
"target": 1,
"feat_title_without_stopwords": "title",
"feat_text_without_stopwords": "maggie hassan left kelly ayotte hassan declares victory us senate race ayotte paul feelynew hampshire union leader update gov maggie hassan declared shes new hampshires us senate race unseating republican sen kelly ayotteduring hastilycalled press conference outside state house hassan said shes ahead enough votes survive returns outstanding towns lefti proud stand next united states senator new hampshire hassan said cheers large group supporters led congresswoman annie kuster hassans husband tomthe twoterm governor said hadnt spoken ayotteits clear maintained lead race hassan saidsen ayotte issued brief statement hassans event concede deferred secretary state bill gardners final resultsthis closely contested race beginning look forward results announced secretary state ensuring every vote counted race received historic level interest ayotte saidhassan said called congratulate govelect chris sununu newfields republican vowed work together smooth transition power states corner officewith percent vote counted hassan led ayotte nashua republican votes much less percent two voting precincts left reporta recount statewide race seems like real possibility margin small enough ayotte pay earlier story follows concord republican incumbent sen kelly ayotte told supporters early wednesday feeling really upbeat chances one closely watched expensive us senate races country wasnt ready claim victory democratic challenger gov maggie hassan earn return washington representing granite stateat ayotte took podium grappone conference center concord address supporters victory party dead heat hassan percent percent votes votes percent precincts state reportingjoe excited see tonight said ayotte feel really upbeat tonightayotte went thank supporters next gov sununuwe know hard worked grateful humbled fact would believe us right upbeat race believe strongly fact want every vote come talk every vote matters every person matters stategov hassan said race close call campaign maintained vote lead according numbers compiled staffwe still small sustainable lead saidhassan told crowd number smaller towns yet report numbers confident lead would hold campaign said numbers show hassan vote ayottes percent vote campaign said numbers include results big communities associated press yet count like salem derry lebanon portsmouth cities manchester nashua concord included hassan numbersthe governor headed home night urged supporters go home get sleepelection day marked end long campaign cycle granite state kicked nine months ago presidential primaries nine months ago didnt let final ballots cast around pm tuesdaythe ayottehassan contest expensive political race ever new hampshire million spent took center stage cycle alongside presidential race republican nominee donald trump democratic nominee hillary clinton cementing new hampshires status battleground state four electoral votes grabs race one half dozen around us closely watched tuesday outcome likely playing part deciding republicans retain control senate democrats regain majority lost two years agoit great night republicans new hampshire across country said nh gop chair jennifer horn new hampshire know republicans stand together republicans fight together win",
"feat_hasImage": 1.0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"feat_author": "Value(dtype='string', id=None)",
"feat_published": "Value(dtype='string', id=None)",
"feat_title": "Value(dtype='string', id=None)",
"text": "Value(dtype='string', id=None)",
"feat_language": "Value(dtype='string', id=None)",
"feat_site_url": "Value(dtype='string', id=None)",
"feat_main_img_url": "Value(dtype='string', id=None)",
"feat_type": "Value(dtype='string', id=None)",
"target": "ClassLabel(num_classes=2, names=['Fake', 'Real'], id=None)",
"feat_title_without_stopwords": "Value(dtype='string', id=None)",
"feat_text_without_stopwords": "Value(dtype='string', id=None)",
"feat_hasImage": "Value(dtype='float64', id=None)"
}
```
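As a minimal sketch (assuming the standard `datasets` library and that this repository is accessible to you), the integer `target` values can be mapped back to their `Fake`/`Real` names like this:

```python
from datasets import load_dataset

# Load the train split of this AutoTrain dataset
ds = load_dataset("suresh-subramanian/autotrain-data-fake-news", split="train")

# ClassLabel features keep the mapping from integer ids to string names
label_names = ds.features["target"].names  # ['Fake', 'Real']

first = ds[0]
print(first["feat_title"], "->", label_names[first["target"]])
```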
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 1639 |
| valid | 411 |
| suresh-subramanian/autotrain-data-fake-news | [
"task_categories:text-classification",
"language:en",
"region:us"
] | 2022-10-03T21:01:24+00:00 | {"language": ["en"], "task_categories": ["text-classification"]} | 2022-10-03T21:04:02+00:00 |
1dfeec0b7c8bf55da1c38d1ea6cf3c0aadb09dc8 | UnknownBot/Tobys-Lively-Tunes | [
"license:gpl-3.0",
"region:us"
] | 2022-10-04T01:07:30+00:00 | {"license": "gpl-3.0"} | 2022-10-04T01:24:01+00:00 |
|
67fbe7fc1598373f0f81d1b5192ac8d424f0e94a | zcw607/dj_piggy | [
"license:mit",
"region:us"
] | 2022-10-04T02:42:10+00:00 | {"license": "mit"} | 2022-10-04T03:45:39+00:00 |
|
63ecef72baeb35040818d19a131488f55a63ea48 | alexoamber/testing | [
"license:afl-3.0",
"region:us"
] | 2022-10-04T05:14:07+00:00 | {"license": "afl-3.0"} | 2022-10-04T05:15:07+00:00 |
|
5157534de40e425f8c719ee7bac7c51cdebfcef9 | ImageIN/IA_loaded | [
"region:us"
] | 2022-10-04T05:38:09+00:00 | {} | 2022-10-13T08:07:00+00:00 |
|
b656e8f759a97d1f6fd94b89936954b5e8e537ac |
This dataset collects 14,363 Chinese books in total, intended for academic research and industrial use. Books are being collected continuously; to contribute, please visit the [code repository](https://github.com/shjwudp/shu).
| shjwudp/shu | [
"language:zh",
"license:cc-by-4.0",
"region:us"
] | 2022-10-04T05:49:05+00:00 | {"language": "zh", "license": "cc-by-4.0"} | 2023-06-18T09:58:32+00:00 |
90cf503c83a03984f6f2a6750639c7f58a0833d5 | 5381607451 oya clne eke lidar rp gahala dennam kiyala gaththa echchrama thama oyata mathaka athi uwa | chamuditha/szasw | [
"region:us"
] | 2022-10-04T06:41:03+00:00 | {} | 2022-10-04T06:41:39+00:00 |
37d2038fba67e97e448c9e984ade602ae317c533 | azuu/testing | [
"license:apache-2.0",
"region:us"
] | 2022-10-04T06:51:52+00:00 | {"license": "apache-2.0"} | 2022-10-04T06:52:12+00:00 |
|
568933cffcef96b919c9f8ddc566fb85f74e5a86 | youngdicey/rico-raw | [
"license:openrail",
"region:us"
] | 2022-10-04T07:03:28+00:00 | {"license": "openrail"} | 2022-10-05T07:58:04+00:00 |
|
cf0b22332314a937e9dc8a1957b21725430bb41d | detection-datasets/coco | [
"task_categories:object-detection",
"language:en",
"region:us"
] | 2022-10-04T07:13:16+00:00 | {"language": ["en"], "task_categories": ["object-detection"]} | 2023-03-15T15:11:53+00:00 |
|
3924c3784902b37fa27585e6f58905369e79a451 | youngdicey/sample | [
"license:openrail",
"region:us"
] | 2022-10-04T07:38:25+00:00 | {"license": "openrail"} | 2022-10-05T04:26:39+00:00 |
|
95dd4ccbc4bc09e0c99e374f99a1e15f444acaf5 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: ChuVN/longformer-base-4096-finetuned-squad2-length-1024-128window
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-eval-squad_v2-squad_v2-8571ec-1652758608 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-04T08:25:28+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "ChuVN/longformer-base-4096-finetuned-squad2-length-1024-128window", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-10-04T08:42:23+00:00 |
fdbfb7c35482e11fbaeab6d4905b2679327a19b3 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: Palak/xlm-roberta-base_squad
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-eval-squad_v2-squad_v2-8571ec-1652758610 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-04T08:25:41+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "Palak/xlm-roberta-base_squad", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-10-04T08:28:54+00:00 |
0be5cbce4748125b4f1860a3dc90f2c89a852321 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: SupriyaArun/bert-base-uncased-finetuned-squad
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-eval-squad_v2-squad_v2-8571ec-1652758611 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-04T08:25:47+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "SupriyaArun/bert-base-uncased-finetuned-squad", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-10-04T08:28:50+00:00 |
e0fb058fe85c2d3d6f9135ff6400df42f646fdda | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: SiraH/bert-finetuned-squad
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-eval-squad_v2-squad_v2-8571ec-1652758612 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-04T08:25:54+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "SiraH/bert-finetuned-squad", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-10-04T08:28:54+00:00 |
83e32e07ee901f7b6153c3e0d607086b71f0c5cc | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: Paul-Vinh/bert-base-multilingual-cased-finetuned-squad
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-eval-squad_v2-squad_v2-8571ec-1652758613 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-04T08:26:01+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "Paul-Vinh/bert-base-multilingual-cased-finetuned-squad", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-10-04T08:29:08+00:00 |
9f16d72a6db9ff9c6d67d92f2cea347459a05362 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: Palak/microsoft_deberta-base_squad
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-eval-squad_v2-squad_v2-8571ec-1652758614 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-04T08:26:09+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "Palak/microsoft_deberta-base_squad", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-10-04T08:30:00+00:00 |
02c96fa323a571539245c92428dc06a7e0da1cd1 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: Sangita/distilbert-base-uncased-finetuned-squad
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-eval-squad_v2-squad_v2-8571ec-1652758615 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-04T08:26:14+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "Sangita/distilbert-base-uncased-finetuned-squad", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-10-04T08:28:41+00:00 |
5e638585d3d005f0fbbcc40471618f1d39c25c1a | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: Plimpton/distilbert-base-uncased-finetuned-squad
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-eval-squad_v2-squad_v2-8571ec-1652758616 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-04T08:26:20+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "Plimpton/distilbert-base-uncased-finetuned-squad", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-10-04T08:28:48+00:00 |
fcc9866d0841a9d1eac276f2a53d0d9c5c584ad3 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: Neulvo/bert-finetuned-squad
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-eval-squad_v2-squad_v2-8571ec-1652758617 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-04T08:26:28+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "Neulvo/bert-finetuned-squad", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-10-04T08:29:27+00:00 |
b0e4884ec8ea6ef65e22f7409f3962060c4ae169 | annotations_creators:
- no-annotation
language:
- en
language_creators:
- other
license:
- artistic-2.0
multilinguality:
- monolingual
pretty_name: "Libert\xE8 d'action by Heiner Goebbels"
size_categories:
- n<1K
source_datasets:
- original
tags: []
task_categories:
- text-to-image
task_ids: []
| Gr3en/Goebbels_Liberte_daction | [
"region:us"
] | 2022-10-04T08:31:47+00:00 | {} | 2022-10-04T08:44:35+00:00 |
6dfd409e61158ef29abfcc842f77136121575c8c |
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
| Besedo/artificial_weapon | [
"task_categories:image-classification",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"size_categories:1K<n<10K",
"weapon",
"image",
"region:us"
] | 2022-10-04T09:02:28+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["machine-generated"], "language": [], "license": [], "multilinguality": [], "size_categories": ["1K<n<10K"], "source_datasets": [], "task_categories": ["image-classification"], "task_ids": [], "pretty_name": "artificial_weapon", "tags": ["weapon", "image"]} | 2022-10-04T11:24:34+00:00 |
012100364d6f85657f203a149120dfd46943e366 |
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Top news headlines in finance from bbc-news.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
Sentiment label: values between -2% and 2% are labelled neutral (2), values below -2% negative (1), and values above 2% positive (3).
[More Information Needed]
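A minimal sketch of how such a threshold rule could be applied to a percentage change (the function name and the handling of the exact ±2% boundaries are assumptions, not taken from this dataset):

```python
def sentiment_label(pct_change: float) -> int:
    """Map a percentage change to the labels described above: 1=negative, 2=neutral, 3=positive."""
    if pct_change < -2.0:
        return 1  # negative
    if pct_change > 2.0:
        return 3  # positive
    return 2      # neutral (between -2% and 2%; boundary handling assumed)

print(sentiment_label(0.5))   # 2
print(sentiment_label(-3.1))  # 1
```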
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | Tidrael/test2 | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"region:us"
] | 2022-10-04T10:19:10+00:00 | {"annotations_creators": [], "language_creators": ["machine-generated"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification"], "pretty_name": "bussiness-news", "tags": []} | 2022-10-06T07:14:54+00:00 |
2417b2b6d421eb45345432b59fcee4f0ba35f076 |
# Historic book pages illustration weak annotations | ImageIN/unlabelled_IA_with_snorkel_labels | [
"task_categories:image-classification",
"task_ids:multi-class-image-classification",
"annotations_creators:machine-generated",
"license:cc0-1.0",
"lam",
"historic",
"glam",
"books",
"region:us"
] | 2022-10-04T11:17:59+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": [], "language": [], "license": ["cc0-1.0"], "multilinguality": [], "size_categories": [], "source_datasets": [], "task_categories": ["image-classification"], "task_ids": ["multi-class-image-classification"], "pretty_name": "Historic book pages illustration weak annotations", "tags": ["lam", "historic", "glam", "books"]} | 2022-10-13T08:06:42+00:00 |
692431acca4c0d0083707c61252653fa457f227a | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-3b
* Dataset: mathemakitten/winobias_antistereotype_test
* Config: mathemakitten--winobias_antistereotype_test
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@[email protected]](https://huggingface.co/[email protected]) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test-mathemakitt-c793f9-1654758678 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-04T11:33:51+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-3b", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test", "dataset_config": "mathemakitten--winobias_antistereotype_test", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-10-04T11:40:31+00:00 |
2bdc305542cdfaf196143c28f00cf9f9c870765a | EronSamez/teste | [
"region:us"
] | 2022-10-04T11:36:28+00:00 | {} | 2023-08-25T22:25:45+00:00 |
|
9e8bc5b80994625bda48f6d10257b2d79469e6be | # AutoTrain Dataset for project: person-name-validity1
## Dataset Description
This dataset has been automatically processed by AutoTrain for project person-name-validity1.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"tokens": [
"divided"
],
"tags": [
0
]
},
{
"tokens": [
"nusrat"
],
"tags": [
1
]
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"tokens": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)",
"tags": "Sequence(feature=ClassLabel(num_classes=2, names=['0', '2'], id=None), length=-1, id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 2499 |
| valid | 499 |
| Akshata/autotrain-data-person-name-validity1 | [
"language:en",
"region:us"
] | 2022-10-04T12:12:39+00:00 | {"language": ["en"]} | 2022-10-04T12:13:38+00:00 |
e03f152bc4c6ad1a74ecd728fb9c01cf38efa9ff | aoflaherty/Pics | [
"license:unknown",
"region:us"
] | 2022-10-04T12:17:49+00:00 | {"license": "unknown"} | 2022-10-04T12:17:49+00:00 |
|
b098d049037728423a0928c2eca5669064524e8e | Kamag/e | [
"license:unknown",
"region:us"
] | 2022-10-04T12:24:03+00:00 | {"license": "unknown"} | 2022-10-04T12:25:39+00:00 |
|
5bb5797edde8cbc1aedbe527c52694e883055a3c | etiennefd/codex_borgia | [
"license:wtfpl",
"region:us"
] | 2022-10-04T13:16:41+00:00 | {"license": "wtfpl"} | 2022-10-04T15:10:47+00:00 |
|
409da38303be68e455e9e15082be7313dcbbcfa6 | OlegKit/RND2 | [
"license:artistic-2.0",
"region:us"
] | 2022-10-04T13:22:12+00:00 | {"license": "artistic-2.0"} | 2022-10-04T13:22:12+00:00 |
|
bb04f34922b6bdd2a6fce9eb6872610cfb65a25b | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Image Classification
* Model: NimaBoscarino/dog_food
* Dataset: lewtun/dog_food
* Config: lewtun--dog_food
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@NimaBoscarino](https://huggingface.co/NimaBoscarino) for evaluating this model. | autoevaluate/autoeval-eval-lewtun__dog_food-lewtun__dog_food-7ca01a-1656458705 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-04T13:45:25+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["lewtun/dog_food"], "eval_info": {"task": "image_multi_class_classification", "model": "NimaBoscarino/dog_food", "metrics": [], "dataset_name": "lewtun/dog_food", "dataset_config": "lewtun--dog_food", "dataset_split": "test", "col_mapping": {"image": "image", "target": "label"}}} | 2022-10-04T13:46:03+00:00 |
8d258c6b7fb4cb8d29e9b2aa6de7f586c943cb9e |
# Dataset Card for Europeana Newspapers
This dataset contains historic newspapers from [Europeana](https://pro.europeana.eu/page/iiif#download). In total the collection has roughly 32 billion tokens. Documentation for this dataset is a work in progress.
## Dataset Details
### Dataset Description
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
To download the full dataset using the `Datasets` library you can do the following
```python
from datasets import load_dataset
dataset = load_dataset("biglam/europeana_newspapers")
```
You can also access a subset based on language or decade ranges using the following function.
```python
from typing import List, Optional, Literal, Union
from huggingface_hub import hf_hub_url, list_repo_files
LanguageOption = Literal[
"et",
"pl",
"sr",
"ru",
"sv",
"no_language_found",
"ji",
"hr",
"el",
"uk",
"fr",
"fi",
"de",
"multi_language",
]
def get_files_for_lang_and_years(
    languages: Union[None, List[LanguageOption]] = None,
    min_year: Optional[int] = None,
    max_year: Optional[int] = None,
):
    # List all files in the dataset repository and keep only the parquet files
    files = list_repo_files("biglam/europeana_newspapers", repo_type="dataset")
    parquet_files = [f for f in files if f.endswith(".parquet")]
    # Keep only files whose name contains one of the requested language codes
    parquet_files_filtered_for_lang = [
        f
        for f in parquet_files
        if languages is None or any(lang in f for lang in languages)
    ]
    # Keep only files whose decade falls inside the requested range
    filtered_files = [
        f
        for f in parquet_files_filtered_for_lang
        if (min_year is None or min_year <= int(f.split("-")[1].split(".")[0]))
        and (max_year is None or int(f.split("-")[1].split(".")[0]) <= max_year)
    ]
    return [
        hf_hub_url("biglam/europeana_newspapers", f, repo_type="dataset")
        for f in filtered_files
    ]
```
This function takes a list of language codes, plus optional minimum and maximum values for the decades you want to include. You can use it to get the URLs of the files you want to download from the Hub:
```python
ds = load_dataset("parquet", data_files=get_files_for_lang_and_years(['fr']), num_proc=4)
```
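For example, to restrict the download to French and German files from roughly 1900 to 1950 (assuming, as described above, that the decade is encoded in each file name), one could write:

```python
urls = get_files_for_lang_and_years(languages=["fr", "de"], min_year=1900, max_year=1950)
ds = load_dataset("parquet", data_files=urls, num_proc=4)
```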
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | biglam/europeana_newspapers | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:machine-generated",
"multilinguality:multilingual",
"size_categories:1M<n<10M",
"language:de",
"language:fr",
"language:el",
"language:et",
"language:fi",
"language:hr",
"language:ji",
"language:pl",
"language:ru",
"language:sr",
"language:sv",
"language:uk",
"newspapers",
"lam",
"OCR",
"region:us"
] | 2022-10-04T15:31:37+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["machine-generated"], "language": ["de", "fr", "el", "et", "fi", "hr", "ji", "pl", "ru", "sr", "sv", "uk"], "multilinguality": ["multilingual"], "size_categories": ["1M<n<10M"], "source_datasets": [], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "Europeana Newspapers ", "tags": ["newspapers", "lam", "OCR"]} | 2024-01-31T10:20:48+00:00 |
50445adde34cf95f7b91bd76d5c271b924d5403a | dgrnd4/animals-10 | [
"license:other",
"region:us"
] | 2022-10-04T15:39:10+00:00 | {"license": "other"} | 2022-10-04T15:45:42+00:00 |
|
d522cda043a8d3dce0fbb6b0a0fe7b1f38e2dccb |
# Dataset Card for OLM September 2022 Wikipedia
Pretraining dataset, created with the OLM repo [here](https://github.com/huggingface/olm-datasets) from a September 2022 Wikipedia snapshot. | olm/olm-wikipedia-20220920 | [
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"language:en",
"pretraining",
"language modelling",
"wikipedia",
"web",
"region:us"
] | 2022-10-04T16:05:41+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": [], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": [], "task_categories": [], "task_ids": [], "pretty_name": "OLM September 2022 Wikipedia", "tags": ["pretraining", "language modelling", "wikipedia", "web"]} | 2022-10-18T18:18:25+00:00 |
b92cb55de6dbc580e22f0500daa842d45cd61c16 | prueba | irving777/prueba2022 | [
"region:us"
] | 2022-10-04T16:36:15+00:00 | {} | 2022-10-04T22:52:17+00:00 |
f2f4dc390dd81b0f0189c57b014bf9e9b2d6d276 | smallpinktinyturtle/taemo | [
"license:unknown",
"region:us"
] | 2022-10-04T16:39:27+00:00 | {"license": "unknown"} | 2022-10-04T16:44:22+00:00 |
|
70ed48c9bd02fc5a602b3239fde83b40e35d31cf | Boryak/Images | [
"license:openrail",
"region:us"
] | 2022-10-04T17:00:20+00:00 | {"license": "openrail"} | 2022-10-04T17:01:04+00:00 |
|
57e9c34c85ad91712cfbf21ad43d20e4c9f0190c |
# Dataset Card for Wikipedia
This repo is a fork of the original Hugging Face Wikipedia repo [here](https://huggingface.co/datasets/wikipedia).
The difference is that this fork does away with the need for `apache-beam`, and it is very fast if you have a lot of CPUs on your machine.
It will use all CPUs available to create a clean Wikipedia pretraining dataset. It takes less than an hour to process all of English wikipedia on a GCP n1-standard-96.
This fork is also used in the [OLM Project](https://github.com/huggingface/olm-datasets) to pull and process up-to-date wikipedia snapshots.
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://dumps.wikimedia.org](https://dumps.wikimedia.org)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
Wikipedia dataset containing cleaned articles of all languages.
The datasets are built from the Wikipedia dump
(https://dumps.wikimedia.org/) with one split per language. Each example
contains the content of one full Wikipedia article with cleaning to strip
markdown and unwanted sections (references, etc.).
The articles are parsed using the ``mwparserfromhell`` tool, and we use ``multiprocess`` for parallelization.
To load this dataset you need to install these first:
```
pip install mwparserfromhell==0.6.4 multiprocess==0.70.13
```
Then, you can load any subset of Wikipedia per language and per date this way:
```python
from datasets import load_dataset
load_dataset("olm/wikipedia", language="en", date="20220920")
```
You can find the full list of languages and dates [here](https://dumps.wikimedia.org/backup-index.html).
### Supported Tasks and Leaderboards
The dataset is generally used for Language Modeling.
### Languages
You can find the list of languages [here](https://meta.wikimedia.org/wiki/List_of_Wikipedias).
## Dataset Structure
### Data Instances
An example looks as follows:
```
{'id': '1',
'url': 'https://simple.wikipedia.org/wiki/April',
'title': 'April',
'text': 'April is the fourth month...'
}
```
### Data Fields
The data fields are the same among all configurations:
- `id` (`str`): ID of the article.
- `url` (`str`): URL of the article.
- `title` (`str`): Title of the article.
- `text` (`str`): Text content of the article.
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
Most of Wikipedia's text and many of its images are co-licensed under the
[Creative Commons Attribution-ShareAlike 3.0 Unported License](https://en.wikipedia.org/wiki/Wikipedia:Text_of_Creative_Commons_Attribution-ShareAlike_3.0_Unported_License)
(CC BY-SA) and the [GNU Free Documentation License](https://en.wikipedia.org/wiki/Wikipedia:Text_of_the_GNU_Free_Documentation_License)
(GFDL) (unversioned, with no invariant sections, front-cover texts, or back-cover texts).
Some text has been imported only under CC BY-SA and CC BY-SA-compatible license and cannot be reused under GFDL; such
text will be identified on the page footer, in the page history, or on the discussion page of the article that utilizes
the text.
### Citation Information
```
@ONLINE{wikidump,
author = "Wikimedia Foundation",
title = "Wikimedia Downloads",
url = "https://dumps.wikimedia.org"
}
```
| olm/wikipedia | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:n<1K",
"size_categories:1K<n<10K",
"size_categories:10K<n<100K",
"size_categories:100K<n<1M",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:aa",
"language:ab",
"language:ace",
"language:af",
"language:ak",
"language:als",
"language:am",
"language:an",
"language:ang",
"language:ar",
"language:arc",
"language:arz",
"language:as",
"language:ast",
"language:atj",
"language:av",
"language:ay",
"language:az",
"language:azb",
"language:ba",
"language:bar",
"language:bcl",
"language:be",
"language:bg",
"language:bh",
"language:bi",
"language:bjn",
"language:bm",
"language:bn",
"language:bo",
"language:bpy",
"language:br",
"language:bs",
"language:bug",
"language:bxr",
"language:ca",
"language:cbk",
"language:cdo",
"language:ce",
"language:ceb",
"language:ch",
"language:cho",
"language:chr",
"language:chy",
"language:ckb",
"language:co",
"language:cr",
"language:crh",
"language:cs",
"language:csb",
"language:cu",
"language:cv",
"language:cy",
"language:da",
"language:de",
"language:din",
"language:diq",
"language:dsb",
"language:dty",
"language:dv",
"language:dz",
"language:ee",
"language:el",
"language:eml",
"language:en",
"language:eo",
"language:es",
"language:et",
"language:eu",
"language:ext",
"language:fa",
"language:ff",
"language:fi",
"language:fj",
"language:fo",
"language:fr",
"language:frp",
"language:frr",
"language:fur",
"language:fy",
"language:ga",
"language:gag",
"language:gan",
"language:gd",
"language:gl",
"language:glk",
"language:gn",
"language:gom",
"language:gor",
"language:got",
"language:gu",
"language:gv",
"language:ha",
"language:hak",
"language:haw",
"language:he",
"language:hi",
"language:hif",
"language:ho",
"language:hr",
"language:hsb",
"language:ht",
"language:hu",
"language:hy",
"language:ia",
"language:id",
"language:ie",
"language:ig",
"language:ii",
"language:ik",
"language:ilo",
"language:inh",
"language:io",
"language:is",
"language:it",
"language:iu",
"language:ja",
"language:jam",
"language:jbo",
"language:jv",
"language:ka",
"language:kaa",
"language:kab",
"language:kbd",
"language:kbp",
"language:kg",
"language:ki",
"language:kj",
"language:kk",
"language:kl",
"language:km",
"language:kn",
"language:ko",
"language:koi",
"language:krc",
"language:ks",
"language:ksh",
"language:ku",
"language:kv",
"language:kw",
"language:ky",
"language:la",
"language:lad",
"language:lb",
"language:lbe",
"language:lez",
"language:lfn",
"language:lg",
"language:li",
"language:lij",
"language:lmo",
"language:ln",
"language:lo",
"language:lrc",
"language:lt",
"language:ltg",
"language:lv",
"language:lzh",
"language:mai",
"language:mdf",
"language:mg",
"language:mh",
"language:mhr",
"language:mi",
"language:min",
"language:mk",
"language:ml",
"language:mn",
"language:mr",
"language:mrj",
"language:ms",
"language:mt",
"language:mus",
"language:mwl",
"language:my",
"language:myv",
"language:mzn",
"language:na",
"language:nah",
"language:nan",
"language:nap",
"language:nds",
"language:ne",
"language:new",
"language:ng",
"language:nl",
"language:nn",
"language:no",
"language:nov",
"language:nrf",
"language:nso",
"language:nv",
"language:ny",
"language:oc",
"language:olo",
"language:om",
"language:or",
"language:os",
"language:pa",
"language:pag",
"language:pam",
"language:pap",
"language:pcd",
"language:pdc",
"language:pfl",
"language:pi",
"language:pih",
"language:pl",
"language:pms",
"language:pnb",
"language:pnt",
"language:ps",
"language:pt",
"language:qu",
"language:rm",
"language:rmy",
"language:rn",
"language:ro",
"language:ru",
"language:rue",
"language:rup",
"language:rw",
"language:sa",
"language:sah",
"language:sat",
"language:sc",
"language:scn",
"language:sco",
"language:sd",
"language:se",
"language:sg",
"language:sgs",
"language:sh",
"language:si",
"language:sk",
"language:sl",
"language:sm",
"language:sn",
"language:so",
"language:sq",
"language:sr",
"language:srn",
"language:ss",
"language:st",
"language:stq",
"language:su",
"language:sv",
"language:sw",
"language:szl",
"language:ta",
"language:tcy",
"language:tdt",
"language:te",
"language:tg",
"language:th",
"language:ti",
"language:tk",
"language:tl",
"language:tn",
"language:to",
"language:tpi",
"language:tr",
"language:ts",
"language:tt",
"language:tum",
"language:tw",
"language:ty",
"language:tyv",
"language:udm",
"language:ug",
"language:uk",
"language:ur",
"language:uz",
"language:ve",
"language:vec",
"language:vep",
"language:vi",
"language:vls",
"language:vo",
"language:vro",
"language:wa",
"language:war",
"language:wo",
"language:wuu",
"language:xal",
"language:xh",
"language:xmf",
"language:yi",
"language:yo",
"language:yue",
"language:za",
"language:zea",
"language:zh",
"language:zu",
"license:cc-by-sa-3.0",
"license:gfdl",
"region:us"
] | 2022-10-04T17:07:56+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["crowdsourced"], "language": ["aa", "ab", "ace", "af", "ak", "als", "am", "an", "ang", "ar", "arc", "arz", "as", "ast", "atj", "av", "ay", "az", "azb", "ba", "bar", "bcl", "be", "bg", "bh", "bi", "bjn", "bm", "bn", "bo", "bpy", "br", "bs", "bug", "bxr", "ca", "cbk", "cdo", "ce", "ceb", "ch", "cho", "chr", "chy", "ckb", "co", "cr", "crh", "cs", "csb", "cu", "cv", "cy", "da", "de", "din", "diq", "dsb", "dty", "dv", "dz", "ee", "el", "eml", "en", "eo", "es", "et", "eu", "ext", "fa", "ff", "fi", "fj", "fo", "fr", "frp", "frr", "fur", "fy", "ga", "gag", "gan", "gd", "gl", "glk", "gn", "gom", "gor", "got", "gu", "gv", "ha", "hak", "haw", "he", "hi", "hif", "ho", "hr", "hsb", "ht", "hu", "hy", "ia", "id", "ie", "ig", "ii", "ik", "ilo", "inh", "io", "is", "it", "iu", "ja", "jam", "jbo", "jv", "ka", "kaa", "kab", "kbd", "kbp", "kg", "ki", "kj", "kk", "kl", "km", "kn", "ko", "koi", "krc", "ks", "ksh", "ku", "kv", "kw", "ky", "la", "lad", "lb", "lbe", "lez", "lfn", "lg", "li", "lij", "lmo", "ln", "lo", "lrc", "lt", "ltg", "lv", "lzh", "mai", "mdf", "mg", "mh", "mhr", "mi", "min", "mk", "ml", "mn", "mr", "mrj", "ms", "mt", "mus", "mwl", "my", "myv", "mzn", "na", "nah", "nan", "nap", "nds", "ne", "new", "ng", "nl", "nn", "no", "nov", "nrf", "nso", "nv", "ny", "oc", "olo", "om", "or", "os", "pa", "pag", "pam", "pap", "pcd", "pdc", "pfl", "pi", "pih", "pl", "pms", "pnb", "pnt", "ps", "pt", "qu", "rm", "rmy", "rn", "ro", "ru", "rue", "rup", "rw", "sa", "sah", "sat", "sc", "scn", "sco", "sd", "se", "sg", "sgs", "sh", "si", "sk", "sl", "sm", "sn", "so", "sq", "sr", "srn", "ss", "st", "stq", "su", "sv", "sw", "szl", "ta", "tcy", "tdt", "te", "tg", "th", "ti", "tk", "tl", "tn", "to", "tpi", "tr", "ts", "tt", "tum", "tw", "ty", "tyv", "udm", "ug", "uk", "ur", "uz", "ve", "vec", "vep", "vi", "vls", "vo", "vro", "wa", "war", "wo", "wuu", "xal", "xh", "xmf", "yi", "yo", "yue", "za", "zea", "zh", "zu"], "license": ["cc-by-sa-3.0", "gfdl"], "multilinguality": ["multilingual"], "size_categories": ["n<1K", "1K<n<10K", "10K<n<100K", "100K<n<1M", "1M<n<10M"], "source_datasets": ["original"], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["language-modeling", "masked-language-modeling"], "pretty_name": "Wikipedia", "config_names": ["20220301.aa", "20220301.ab", "20220301.ace", "20220301.ady", "20220301.af", "20220301.ak", "20220301.als", "20220301.am", "20220301.an", "20220301.ang", "20220301.ar", "20220301.arc", "20220301.arz", "20220301.as", "20220301.ast", "20220301.atj", "20220301.av", "20220301.ay", "20220301.az", "20220301.azb", "20220301.ba", "20220301.bar", "20220301.bat-smg", "20220301.bcl", "20220301.be", "20220301.be-x-old", "20220301.bg", "20220301.bh", "20220301.bi", "20220301.bjn", "20220301.bm", "20220301.bn", "20220301.bo", "20220301.bpy", "20220301.br", "20220301.bs", "20220301.bug", "20220301.bxr", "20220301.ca", "20220301.cbk-zam", "20220301.cdo", "20220301.ce", "20220301.ceb", "20220301.ch", "20220301.cho", "20220301.chr", "20220301.chy", "20220301.ckb", "20220301.co", "20220301.cr", "20220301.crh", "20220301.cs", "20220301.csb", "20220301.cu", "20220301.cv", "20220301.cy", "20220301.da", "20220301.de", "20220301.din", "20220301.diq", "20220301.dsb", "20220301.dty", "20220301.dv", "20220301.dz", "20220301.ee", "20220301.el", "20220301.eml", "20220301.en", "20220301.eo", "20220301.es", "20220301.et", "20220301.eu", "20220301.ext", "20220301.fa", "20220301.ff", "20220301.fi", 
"20220301.fiu-vro", "20220301.fj", "20220301.fo", "20220301.fr", "20220301.frp", "20220301.frr", "20220301.fur", "20220301.fy", "20220301.ga", "20220301.gag", "20220301.gan", "20220301.gd", "20220301.gl", "20220301.glk", "20220301.gn", "20220301.gom", "20220301.gor", "20220301.got", "20220301.gu", "20220301.gv", "20220301.ha", "20220301.hak", "20220301.haw", "20220301.he", "20220301.hi", "20220301.hif", "20220301.ho", "20220301.hr", "20220301.hsb", "20220301.ht", "20220301.hu", "20220301.hy", "20220301.ia", "20220301.id", "20220301.ie", "20220301.ig", "20220301.ii", "20220301.ik", "20220301.ilo", "20220301.inh", "20220301.io", "20220301.is", "20220301.it", "20220301.iu", "20220301.ja", "20220301.jam", "20220301.jbo", "20220301.jv", "20220301.ka", "20220301.kaa", "20220301.kab", "20220301.kbd", "20220301.kbp", "20220301.kg", "20220301.ki", "20220301.kj", "20220301.kk", "20220301.kl", "20220301.km", "20220301.kn", "20220301.ko", "20220301.koi", "20220301.krc", "20220301.ks", "20220301.ksh", "20220301.ku", "20220301.kv", "20220301.kw", "20220301.ky", "20220301.la", "20220301.lad", "20220301.lb", "20220301.lbe", "20220301.lez", "20220301.lfn", "20220301.lg", "20220301.li", "20220301.lij", "20220301.lmo", "20220301.ln", "20220301.lo", "20220301.lrc", "20220301.lt", "20220301.ltg", "20220301.lv", "20220301.mai", "20220301.map-bms", "20220301.mdf", "20220301.mg", "20220301.mh", "20220301.mhr", "20220301.mi", "20220301.min", "20220301.mk", "20220301.ml", "20220301.mn", "20220301.mr", "20220301.mrj", "20220301.ms", "20220301.mt", "20220301.mus", "20220301.mwl", "20220301.my", "20220301.myv", "20220301.mzn", "20220301.na", "20220301.nah", "20220301.nap", "20220301.nds", "20220301.nds-nl", "20220301.ne", "20220301.new", "20220301.ng", "20220301.nl", "20220301.nn", "20220301.no", "20220301.nov", "20220301.nrm", "20220301.nso", "20220301.nv", "20220301.ny", "20220301.oc", "20220301.olo", "20220301.om", "20220301.or", "20220301.os", "20220301.pa", "20220301.pag", "20220301.pam", "20220301.pap", "20220301.pcd", "20220301.pdc", "20220301.pfl", "20220301.pi", "20220301.pih", "20220301.pl", "20220301.pms", "20220301.pnb", "20220301.pnt", "20220301.ps", "20220301.pt", "20220301.qu", "20220301.rm", "20220301.rmy", "20220301.rn", "20220301.ro", "20220301.roa-rup", "20220301.roa-tara", "20220301.ru", "20220301.rue", "20220301.rw", "20220301.sa", "20220301.sah", "20220301.sat", "20220301.sc", "20220301.scn", "20220301.sco", "20220301.sd", "20220301.se", "20220301.sg", "20220301.sh", "20220301.si", "20220301.simple", "20220301.sk", "20220301.sl", "20220301.sm", "20220301.sn", "20220301.so", "20220301.sq", "20220301.sr", "20220301.srn", "20220301.ss", "20220301.st", "20220301.stq", "20220301.su", "20220301.sv", "20220301.sw", "20220301.szl", "20220301.ta", "20220301.tcy", "20220301.te", "20220301.tet", "20220301.tg", "20220301.th", "20220301.ti", "20220301.tk", "20220301.tl", "20220301.tn", "20220301.to", "20220301.tpi", "20220301.tr", "20220301.ts", "20220301.tt", "20220301.tum", "20220301.tw", "20220301.ty", "20220301.tyv", "20220301.udm", "20220301.ug", "20220301.uk", "20220301.ur", "20220301.uz", "20220301.ve", "20220301.vec", "20220301.vep", "20220301.vi", "20220301.vls", "20220301.vo", "20220301.wa", "20220301.war", "20220301.wo", "20220301.wuu", "20220301.xal", "20220301.xh", "20220301.xmf", "20220301.yi", "20220301.yo", "20220301.za", "20220301.zea", "20220301.zh", "20220301.zh-classical", "20220301.zh-min-nan", "20220301.zh-yue", "20220301.zu"], "language_bcp47": ["nds-nl"]} | 2024-01-23T21:20:31+00:00 |
bffb1825f8f9590d22b375ff9423d3ce8250ced8 | Zonas/Guweiz | [
"license:afl-3.0",
"region:us"
] | 2022-10-04T19:34:16+00:00 | {"license": "afl-3.0"} | 2022-10-04T23:02:06+00:00 |
|
838f64679878d1f3dfcf46a05a56effab00022be | Anthrall/rauco | [
"license:afl-3.0",
"region:us"
] | 2022-10-04T21:04:52+00:00 | {"license": "afl-3.0"} | 2022-10-04T21:05:10+00:00 |
|
db8849fa4383e9660abb112d5d65b2b8f09fb66d | nuphantom/lionto | [
"region:us"
] | 2022-10-04T21:12:38+00:00 | {} | 2022-10-04T21:12:51+00:00 |
|
1e35d6626281b3e54bf7e16d459cee5509727f96 | nuphantom/l1 | [
"license:other",
"region:us"
] | 2022-10-04T21:13:51+00:00 | {"license": "other"} | 2022-10-04T21:14:18+00:00 |
|
9676376cf6c259964bd0864a489e951c365d6734 | Kuanchy/Kuanchy | [
"license:unknown",
"region:us"
] | 2022-10-04T21:27:44+00:00 | {"license": "unknown"} | 2022-10-04T21:30:44+00:00 |
|
3a8a362e225b794d016b6c005b13b235a796bc38 | ksang/Summoner-Statistics | [
"region:us"
] | 2022-10-04T22:16:00+00:00 | {} | 2022-10-04T22:18:10+00:00 |
|
9620d910c2e2abeba72133327991ac09921c6a50 | bongsoo/moco_eval | [
"license:apache-2.0",
"region:us"
] | 2022-10-04T22:41:56+00:00 | {"license": "apache-2.0"} | 2022-10-04T22:42:20+00:00 |
|
d0d09309628a2098055cb5c5ca1f8872fa6e0fcc | jdelcidr/garabato | [
"license:afl-3.0",
"region:us"
] | 2022-10-05T00:23:00+00:00 | {"license": "afl-3.0"} | 2022-10-05T00:23:00+00:00 |
|
514567974f1e75822a473c339415d9df2fb72753 | smallpinktinyturtle/testaud | [
"license:unknown",
"region:us"
] | 2022-10-05T03:11:27+00:00 | {"license": "unknown"} | 2022-10-05T03:55:34+00:00 |
|
a71c13073357a8fdb018f9abd0e4d6ef92d62564 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Text Classification
* Model: fabriceyhc/bert-base-uncased-amazon_polarity
* Dataset: amazon_polarity
* Config: amazon_polarity
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@tts](https://huggingface.co/tts) for evaluating this model. | autoevaluate/autoeval-eval-amazon_polarity-amazon_polarity-b95081-1665358869 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-05T03:48:39+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["amazon_polarity"], "eval_info": {"task": "binary_classification", "model": "fabriceyhc/bert-base-uncased-amazon_polarity", "metrics": [], "dataset_name": "amazon_polarity", "dataset_config": "amazon_polarity", "dataset_split": "test", "col_mapping": {"text": "content", "target": "label"}}} | 2022-10-05T04:15:47+00:00 |
44f145b3b28189b11935960a93aa3e76b1e9e726 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-2.7b
* Dataset: jeffdshen/inverse_superglue_mixedp1
* Config: jeffdshen--inverse_superglue_mixedp1
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__inverse_superglue_mixedp1-jeffdshen__inverse-63643c-1665558893 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-05T04:27:21+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/inverse_superglue_mixedp1"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-2.7b", "metrics": [], "dataset_name": "jeffdshen/inverse_superglue_mixedp1", "dataset_config": "jeffdshen--inverse_superglue_mixedp1", "dataset_split": "train", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-10-05T04:55:33+00:00 |
1b52ca9bd1605f656a9bfe87dd52acd79f2ffe6d | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-350m
* Dataset: jeffdshen/inverse_superglue_mixedp1
* Config: jeffdshen--inverse_superglue_mixedp1
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__inverse_superglue_mixedp1-jeffdshen__inverse-63643c-1665558891 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-05T04:27:24+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/inverse_superglue_mixedp1"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-350m", "metrics": [], "dataset_name": "jeffdshen/inverse_superglue_mixedp1", "dataset_config": "jeffdshen--inverse_superglue_mixedp1", "dataset_split": "train", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-10-05T04:34:22+00:00 |
ba0d49ac8757d6430e8154b7cce13c9fa42393ea | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-125m
* Dataset: jeffdshen/inverse_superglue_mixedp1
* Config: jeffdshen--inverse_superglue_mixedp1
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__inverse_superglue_mixedp1-jeffdshen__inverse-63643c-1665558890 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-05T04:27:27+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/inverse_superglue_mixedp1"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-125m", "metrics": [], "dataset_name": "jeffdshen/inverse_superglue_mixedp1", "dataset_config": "jeffdshen--inverse_superglue_mixedp1", "dataset_split": "train", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-10-05T04:31:30+00:00 |
2bedee7c768cc95bc5e9b0113e04ecaa05b21806 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-6.7b
* Dataset: jeffdshen/inverse_superglue_mixedp1
* Config: jeffdshen--inverse_superglue_mixedp1
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__inverse_superglue_mixedp1-jeffdshen__inverse-63643c-1665558894 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-05T04:27:27+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/inverse_superglue_mixedp1"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-6.7b", "metrics": [], "dataset_name": "jeffdshen/inverse_superglue_mixedp1", "dataset_config": "jeffdshen--inverse_superglue_mixedp1", "dataset_split": "train", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-10-05T05:31:15+00:00 |
b64672a495f18d07ff8fe4469ef5a97a5e1f9a53 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-1.3b
* Dataset: jeffdshen/inverse_superglue_mixedp1
* Config: jeffdshen--inverse_superglue_mixedp1
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__inverse_superglue_mixedp1-jeffdshen__inverse-63643c-1665558892 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-05T04:27:30+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/inverse_superglue_mixedp1"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-1.3b", "metrics": [], "dataset_name": "jeffdshen/inverse_superglue_mixedp1", "dataset_config": "jeffdshen--inverse_superglue_mixedp1", "dataset_split": "train", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-10-05T04:44:28+00:00 |
45f1ef8e327d1409ac286e62bcebe91e67b542f7 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-30b
* Dataset: jeffdshen/inverse_superglue_mixedp1
* Config: jeffdshen--inverse_superglue_mixedp1
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__inverse_superglue_mixedp1-jeffdshen__inverse-63643c-1665558895 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-05T04:27:32+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/inverse_superglue_mixedp1"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-30b", "metrics": [], "dataset_name": "jeffdshen/inverse_superglue_mixedp1", "dataset_config": "jeffdshen--inverse_superglue_mixedp1", "dataset_split": "train", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-10-05T09:19:31+00:00 |
9df8fb24352b9e29d515ffafe9db10482bd7d886 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-1.3b
* Dataset: jeffdshen/redefine_math_test0
* Config: jeffdshen--redefine_math_test0
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__redefine_math_test0-jeffdshen__redefine_math-58f952-1666158899 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-05T04:27:35+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/redefine_math_test0"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-1.3b", "metrics": [], "dataset_name": "jeffdshen/redefine_math_test0", "dataset_config": "jeffdshen--redefine_math_test0", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-05T04:30:05+00:00 |
0276859dac546847bbf4db06353635e291ab05bc | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-125m
* Dataset: jeffdshen/redefine_math_test0
* Config: jeffdshen--redefine_math_test0
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__redefine_math_test0-jeffdshen__redefine_math-58f952-1666158897 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-05T04:27:45+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/redefine_math_test0"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-125m", "metrics": [], "dataset_name": "jeffdshen/redefine_math_test0", "dataset_config": "jeffdshen--redefine_math_test0", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-05T04:28:35+00:00 |
2d8ed940042912adee1646150a0cbc1219a23467 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-66b
* Dataset: jeffdshen/inverse_superglue_mixedp1
* Config: jeffdshen--inverse_superglue_mixedp1
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__inverse_superglue_mixedp1-jeffdshen__inverse-63643c-1665558896 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-05T04:27:48+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/inverse_superglue_mixedp1"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-66b", "metrics": [], "dataset_name": "jeffdshen/inverse_superglue_mixedp1", "dataset_config": "jeffdshen--inverse_superglue_mixedp1", "dataset_split": "train", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-10-05T15:06:53+00:00 |
77ecf3665d397078ba0a7f2d2729b6973dfbb349 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-350m
* Dataset: jeffdshen/redefine_math_test0
* Config: jeffdshen--redefine_math_test0
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__redefine_math_test0-jeffdshen__redefine_math-58f952-1666158898 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-05T04:27:55+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/redefine_math_test0"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-350m", "metrics": [], "dataset_name": "jeffdshen/redefine_math_test0", "dataset_config": "jeffdshen--redefine_math_test0", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-05T04:29:11+00:00 |
e4d735afe1007f82b3f04157ceb4e8b7c70a73bd | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-2.7b
* Dataset: jeffdshen/redefine_math_test0
* Config: jeffdshen--redefine_math_test0
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__redefine_math_test0-jeffdshen__redefine_math-58f952-1666158900 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-05T04:28:06+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/redefine_math_test0"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-2.7b", "metrics": [], "dataset_name": "jeffdshen/redefine_math_test0", "dataset_config": "jeffdshen--redefine_math_test0", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-05T04:32:05+00:00 |
f834eafe0c7f0de1ca6654d58b8af176574593ce | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-30b
* Dataset: jeffdshen/redefine_math_test0
* Config: jeffdshen--redefine_math_test0
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__redefine_math_test0-jeffdshen__redefine_math-58f952-1666158903 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-05T04:28:12+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/redefine_math_test0"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-30b", "metrics": [], "dataset_name": "jeffdshen/redefine_math_test0", "dataset_config": "jeffdshen--redefine_math_test0", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-05T05:10:17+00:00 |
ea7bcd45b9ebcb63ac9006de8382d96f35fa059b | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-6.7b
* Dataset: jeffdshen/redefine_math_test0
* Config: jeffdshen--redefine_math_test0
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__redefine_math_test0-jeffdshen__redefine_math-58f952-1666158901 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-05T04:28:16+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/redefine_math_test0"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-6.7b", "metrics": [], "dataset_name": "jeffdshen/redefine_math_test0", "dataset_config": "jeffdshen--redefine_math_test0", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-05T04:37:20+00:00 |
8b5898f4eafe3795b6bedcbc7b099e1873bfca94 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-13b
* Dataset: jeffdshen/redefine_math_test0
* Config: jeffdshen--redefine_math_test0
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__redefine_math_test0-jeffdshen__redefine_math-58f952-1666158902 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-05T04:28:17+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/redefine_math_test0"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-13b", "metrics": [], "dataset_name": "jeffdshen/redefine_math_test0", "dataset_config": "jeffdshen--redefine_math_test0", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-05T04:45:21+00:00 |
8b20e9d35d175d5221b82ffcb4cacc91d0a5305b | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-66b
* Dataset: jeffdshen/redefine_math_test0
* Config: jeffdshen--redefine_math_test0
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__redefine_math_test0-jeffdshen__redefine_math-58f952-1666158904 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-05T04:28:25+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/redefine_math_test0"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-66b", "metrics": [], "dataset_name": "jeffdshen/redefine_math_test0", "dataset_config": "jeffdshen--redefine_math_test0", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-05T06:01:04+00:00 |
14dc923ebb568aef15623f2d2601711bc5390e6e | matthh/gutenberg-poetry-corpus | [
"license:cc0-1.0",
"region:us"
] | 2022-10-05T05:49:37+00:00 | {"license": "cc0-1.0"} | 2022-10-05T19:44:55+00:00 |
|
8822ec4dc8a45a33b3032124dc042d1952f3630e | ardiansyah0389/microfossil | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-10-05T05:51:20+00:00 | {"license": "cc-by-nc-sa-4.0"} | 2022-10-05T05:51:20+00:00 |
|
a1da9141a47c45e26fe8171b315baf9806fc1f79 | agak/agak | [
"license:openrail",
"region:us"
] | 2022-10-05T06:19:45+00:00 | {"license": "openrail"} | 2022-10-05T06:19:45+00:00 |
|
5e1d0468842305c4fffb06e306477f89413ee0ce |
# Disclaimer
This was inspired by https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions
# Dataset Card for One Piece BLIP captions
_Dataset used to train [One Piece text to image model](https://github.com/LambdaLabsML/examples/tree/main/stable-diffusion-finetuning)_
BLIP-generated captions for One Piece images collected from the web. Original images were obtained from [Anime Characters Database](https://www.animecharactersdatabase.com) and captioned with the [pre-trained BLIP model](https://github.com/salesforce/BLIP).
Each row of the dataset contains `image` and `text` keys. `image` is a PIL JPEG of varying size, and `text` is the accompanying text caption. Only a train split is provided.
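A minimal usage sketch with the `datasets` library (the repository id and field names are taken from this card; everything else is illustrative):
```python
from datasets import load_dataset

# Only a "train" split is provided.
ds = load_dataset("YaYaB/onepiece-blip-captions", split="train")

row = ds[0]
row["image"].save("sample.jpg")  # `image` is a PIL JPEG of varying size
print(row["text"])               # BLIP caption, e.g. "a man in a straw hat"
```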
## Examples

> a man in a straw hat

> a man in a green coat holding two swords

> a man with red hair and a black coat
## Citation
If you use this dataset, please cite it as:
```
@misc{yayab2022onepiece,
author = {YaYaB},
title = {One Piece BLIP captions},
year={2022},
howpublished= {\url{https://huggingface.co/datasets/YaYaB/onepiece-blip-captions/}}
}
``` | YaYaB/onepiece-blip-captions | [
"task_categories:text-to-image",
"annotations_creators:machine-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:YaYaB/onepiece-blip-captions",
"language:en",
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-10-05T07:53:42+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["other"], "language": ["en"], "license": "cc-by-nc-sa-4.0", "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "source_datasets": ["YaYaB/onepiece-blip-captions"], "task_categories": ["text-to-image"], "task_ids": [], "pretty_name": "One Piece BLIP captions", "tags": []} | 2022-10-05T09:08:34+00:00 |
a3e1687ffa83962089d122261e70b63d36ea0744 | loldunno/milk | [
"license:afl-3.0",
"region:us"
] | 2022-10-05T08:23:01+00:00 | {"license": "afl-3.0"} | 2022-10-05T08:23:01+00:00 |