sha | text | id | tags | created_at | metadata | last_modified
---|---|---|---|---|---|---
ee0fefac8bae648f9a85e33f52fc39fd2fd2ddce | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-7b1
* Dataset: phpthinh/examplei
* Config: mismatch
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model. | autoevaluate/autoeval-eval-phpthinh__examplei-mismatch-1389aa-1748961037 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-13T14:49:05+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["phpthinh/examplei"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-7b1", "metrics": ["f1"], "dataset_name": "phpthinh/examplei", "dataset_config": "mismatch", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-10-13T15:08:31+00:00 |
55a7cf0a0b66ce56ba9c35e5a56bf52c88adfd30 |
# Dataset Card for "BanglaParaphrase"
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [https://github.com/csebuetnlp/banglaparaphrase](https://github.com/csebuetnlp/banglaparaphrase)
- **Paper:** [BanglaParaphrase: A High-Quality Bangla Paraphrase Dataset](https://arxiv.org/abs/2210.05109)
- **Point of Contact:** [Najrin Sultana](mailto:[email protected])
### Dataset Summary
We present BanglaParaphrase, a high-quality synthetic Bangla paraphrase dataset containing about 466k paraphrase pairs.
The paraphrases are of high quality: they are semantically coherent and syntactically diverse.
### Supported Tasks and Leaderboards
[More information needed](https://github.com/csebuetnlp/banglaparaphrase)
### Languages
- `bengali`
## Loading the dataset
```python
from datasets import load_dataset
ds = load_dataset("csebuetnlp/BanglaParaphrase")
```
## Dataset Structure
### Data Instances
One example from the `train` part of the dataset is given below in JSON format.
```json
{
"source": "বেশিরভাগ সময় প্রকৃতির দয়ার ওপরেই বেঁচে থাকতেন উপজাতিরা।",
"target": "বেশিরভাগ সময়ই উপজাতিরা প্রকৃতির দয়ার উপর নির্ভরশীল ছিল।"
}
```
### Data Fields
- `source`: A string representing the source sentence.
- `target`: A string representing the target sentence.
### Data Splits
Train, validation, and test example counts are given below:
Language | ISO 639-1 Code | Train | Validation | Test |
-------------- | ---------------- | ------- | ----- | ------ |
Bengali | bn | 419,967 | 23,331 | 23,332 |
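As a quick check, the split sizes above can be verified after loading (a minimal sketch using the `datasets` API; the standard `train`/`validation`/`test` split names are assumed):
```python
from datasets import load_dataset

# Minimal sketch: verify the split sizes listed in the table above.
ds = load_dataset("csebuetnlp/BanglaParaphrase")
for split in ("train", "validation", "test"):
    print(split, len(ds[split]))  # expected: 419,967 / 23,331 / 23,332
```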
## Dataset Creation
### Curation Rationale
[More information needed](https://github.com/csebuetnlp/banglaparaphrase)
### Source Data
[Roar Bangla](https://roar.media/bangla)
#### Initial Data Collection and Normalization
[Detailed in the paper](https://arxiv.org/abs/2210.05109)
#### Who are the source language producers?
[Detailed in the paper](https://arxiv.org/abs/2210.05109)
### Annotations
[Detailed in the paper](https://arxiv.org/abs/2210.05109)
#### Annotation process
[Detailed in the paper](https://arxiv.org/abs/2210.05109)
#### Who are the annotators?
[Detailed in the paper](https://arxiv.org/abs/2210.05109)
### Personal and Sensitive Information
[More information needed](https://github.com/csebuetnlp/banglaparaphrase)
## Considerations for Using the Data
### Social Impact of Dataset
[More information needed](https://github.com/csebuetnlp/banglaparaphrase)
### Discussion of Biases
[More information needed](https://github.com/csebuetnlp/banglaparaphrase)
### Other Known Limitations
[More information needed](https://github.com/csebuetnlp/banglaparaphrase)
## Additional Information
### Dataset Curators
[More information needed](https://github.com/csebuetnlp/banglaparaphrase)
### Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/). Copyright of the dataset contents belongs to the original copyright holders.
### Citation Information
```
@article{akil2022banglaparaphrase,
title={BanglaParaphrase: A High-Quality Bangla Paraphrase Dataset},
author={Akil, Ajwad and Sultana, Najrin and Bhattacharjee, Abhik and Shahriyar, Rifat},
journal={arXiv preprint arXiv:2210.05109},
year={2022}
}
```
### Contributions
| csebuetnlp/BanglaParaphrase | [
"task_categories:text2text-generation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100k<n<1M",
"source_datasets:original",
"language:bn",
"license:cc-by-nc-sa-4.0",
"conditional-text-generation",
"paraphrase-generation",
"arxiv:2210.05109",
"region:us"
] | 2022-10-13T15:06:21+00:00 | {"annotations_creators": ["found"], "language_creators": ["found"], "language": ["bn"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100k<n<1M"], "source_datasets": ["original"], "task_categories": ["text2text-generation"], "task_ids": [], "pretty_name": "BanglaParaphrase", "tags": ["conditional-text-generation", "paraphrase-generation"]} | 2022-11-14T15:39:43+00:00 |
816f7881391c6ee586eb9fbdb784619871fc04e2 | williambr/snowmed_signsymptom | [
"license:mit",
"region:us"
] | 2022-10-13T16:34:31+00:00 | {"license": "mit"} | 2022-10-13T16:34:49+00:00 |
|
fd35c6358fd302556f3c8d52acdd19ed8e61381e | annotations_creators:
- machine-generated
language:
- en
language_creators:
- crowdsourced
license: []
multilinguality:
- monolingual
paperswithcode_id: wikitext-2
pretty_name: Whisper-Transcripts
size_categories:
- 1M<n<10M
source_datasets:
- original
tags: []
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling | Whispering-GPT/whisper-transcripts-the-verge | [
"region:us"
] | 2022-10-13T16:58:45+00:00 | {} | 2022-10-23T09:54:59+00:00 |
5a28efd1123b3a08a64878f48dd171a8a859389d | ChiangLz/zapotecojuchitan | [
"license:cc-by-nc-nd-4.0",
"region:us"
] | 2022-10-13T17:40:33+00:00 | {"license": "cc-by-nc-nd-4.0"} | 2022-10-23T17:48:42+00:00 |
|
ddb7af253443c37bc559afd65936fe21a5177d15 | DimDymov/Vilmarina | [
"license:cc-by-nd-4.0",
"region:us"
] | 2022-10-13T18:03:47+00:00 | {"license": "cc-by-nd-4.0"} | 2022-10-13T18:12:38+00:00 |
|
f41838f3135528d90d7727487737421a01b7866d | # Dataset Card for "sidewalk-imagery"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | dpasch01/sidewalk-imagery | [
"region:us"
] | 2022-10-13T18:11:58+00:00 | {"dataset_info": {"features": [{"name": "pixel_values", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 3202716.0, "num_examples": 10}], "download_size": 3192547, "dataset_size": 3202716.0}} | 2022-10-13T18:12:05+00:00 |
9a8e1119eccce3f5559d8d26538230d3a4f90f3f | # Dataset Card for "celeb-identities"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Kavindu99/celeb-identities | [
"region:us"
] | 2022-10-13T19:27:31+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Emilia_Clarke", "1": "Henry_Cavil", "2": "Jason_Mamoa", "3": "Sadie_Sink", "4": "Sangakkara", "5": "Zendaya"}}}}], "splits": [{"name": "train", "num_bytes": 160371.0, "num_examples": 18}], "download_size": 160832, "dataset_size": 160371.0}} | 2022-10-13T19:27:44+00:00 |
174b3afde4a8dec38e49d843fc9fc0857c4a8bd9 |
The YouTube transcriptions dataset contains technical tutorials (currently from [James Briggs](https://www.youtube.com/c/jamesbriggs), [Daniel Bourke](https://www.youtube.com/channel/UCr8O8l5cCX85Oem1d18EezQ), and [AI Coffee Break](https://www.youtube.com/c/aicoffeebreak)) transcribed using [OpenAI's Whisper](https://huggingface.co/openai/whisper-large) (large). Each row represents roughly a sentence-length chunk of text alongside the video URL and timestamp.
Note that each item in the dataset contains just a short chunk of text. For most use cases you will likely need to merge multiple rows to create more substantial chunks of text; if you need to do that, this code snippet will help:
```python
from datasets import load_dataset
# first download the dataset
data = load_dataset(
'jamescalam/youtube-transcriptions',
split='train'
)
new_data = [] # this will store adjusted data
window = 6 # number of sentences to combine
stride = 3 # number of sentences to 'stride' over, used to create overlap
for i in range(0, len(data), stride):
i_end = min(len(data)-1, i+window)
if data[i]['title'] != data[i_end]['title']:
# in this case we skip this entry as we have start/end of two videos
continue
# create larger text chunk
text = ' '.join(data[i:i_end]['text'])
# add to adjusted data list
new_data.append({
'start': data[i]['start'],
'end': data[i_end]['end'],
'title': data[i]['title'],
'text': text,
'id': data[i]['id'],
'url': data[i]['url'],
'published': data[i]['published']
})
``` | jamescalam/youtube-transcriptions | [
"task_categories:conversational",
"task_categories:question-answering",
"task_categories:text-retrieval",
"task_categories:visual-question-answering",
"task_ids:open-domain-qa",
"task_ids:extractive-qa",
"task_ids:document-retrieval",
"task_ids:visual-question-answering",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:afl-3.0",
"youtube",
"technical",
"speech to text",
"speech",
"video",
"video search",
"audio",
"audio search",
"region:us"
] | 2022-10-13T19:31:27+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["afl-3.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["conversational", "question-answering", "text-retrieval", "visual-question-answering"], "task_ids": ["open-domain-qa", "extractive-qa", "document-retrieval", "visual-question-answering"], "pretty_name": "Youtube Transcriptions", "tags": ["youtube", "technical", "speech to text", "speech", "video", "video search", "audio", "audio search"]} | 2022-10-22T00:20:07+00:00 |
bb4424259da93902b3ec2ece55a744f23d0793d0 |
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
- Natural Language Inference
- Text Classification
### Languages
- `en`
## Dataset Structure
### Data Instances
### Data Fields
- `premise`: the premise sentence.
- `hypothesis`: the hypothesis sentence.
- `label`: the entailment label for the pair.
### Data Splits
Evaluation: 258 samples
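A minimal sketch for loading and inspecting the evaluation data (the `train` split name is an assumption; adjust it to whatever split the repository actually exposes):
```python
from datasets import load_dataset

# Assumption: the 258 evaluation samples are exposed as a single "train" split.
ds = load_dataset("joey234/nan-nli", split="train")
print(len(ds))  # expected: 258
print(ds[0])    # keys: premise, hypothesis, label
```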
## Dataset Creation
### Curation Rationale
Extracting samples corresponding to different linguistic constructions of negation.
### Source Data
Geoffrey K. Pullum and Rodney Huddleston. 2002. Negation, chapter 9. Cambridge University Press.
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
The annotators are the authors of the paper, one of whom holds a graduate degree in linguistics.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@joey234](https://github.com/joey234) for adding this dataset. | joey234/nan-nli | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"license:cc-by-sa-4.0",
"negation",
"region:us"
] | 2022-10-13T22:16:18+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["natural-language-inference"], "pretty_name": "nan-nli", "tags": ["negation"]} | 2022-10-13T22:18:18+00:00 |
cacce71315e1bbff74962098ea588386b63ee60c | annaludicode/ladiesInColoredWaterStyle | [
"license:artistic-2.0",
"region:us"
] | 2022-10-13T22:43:33+00:00 | {"license": "artistic-2.0"} | 2022-10-13T22:43:33+00:00 |
|
c8468b5b341979f7e59f79c048a2ab61870f6c98 |
## test | zhenzi/test | [
"region:us"
] | 2022-10-14T00:38:17+00:00 | {} | 2022-10-18T01:03:54+00:00 |
1eeb1fb9c1d9e3c8c6c9e5becd15a560e2ab29c5 |
# Dataset Card for Dicionário Português
It is a list of 53,138 Portuguese words with their inflections.
How to use it:
```python
from datasets import load_dataset
remote_dataset = load_dataset("VanessaSchenkel/pt-inflections", field="data")
remote_dataset
```
Output:
```
DatasetDict({
train: Dataset({
features: ['word', 'pos', 'forms'],
num_rows: 53138
})
})
```
Example:
```
remote_dataset["train"][42]
```
Output:
```
{'word': 'numeral',
'pos': 'noun',
'forms': [{'form': 'numerais', 'tags': ['plural']}]}
```
| VanessaSchenkel/pt-inflections | [
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|wikipedia",
"language:pt",
"region:us"
] | 2022-10-14T00:41:22+00:00 | {"annotations_creators": ["found"], "language_creators": ["found"], "language": ["pt"], "license": [], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["extended|wikipedia"], "task_categories": [], "task_ids": [], "pretty_name": "dicion\u00e1rio de portugu\u00eas", "tags": []} | 2022-11-07T03:44:23+00:00 |
2af016d62b5b4de22045d3385ff117b9c2d11ce5 |
# About Dataset
The dataset consists of data from a range of YouTube videos, from fastai and FSDL lessons to miscellaneous instructional videos.
In total, the dataset contains 600 YouTube chapter markers and 25,000 lesson transcripts.
This dataset can be used for NLP tasks such as summarization and topic segmentation. You can refer to some of the models we have trained with this dataset
in the [GitHub repo](https://github.com/ohmeow/fsdl_2022_course_project) for Full Stack Deep Learning 2022 projects.
| recapper/Course_summaries_dataset | [
"task_categories:summarization",
"task_categories:text2text-generation",
"size_categories:1M<n<10M",
"language:en",
"license:apache-2.0",
"conditional-text-generation",
"region:us"
] | 2022-10-14T03:10:12+00:00 | {"language": ["en"], "license": "apache-2.0", "size_categories": ["1M<n<10M"], "task_categories": ["summarization", "text2text-generation"], "task_ids": [], "tags": ["conditional-text-generation"]} | 2022-10-25T15:03:24+00:00 |
aaaa35d10817ea9ca2550c3970aa413f9fb30bd4 | # Dataset Card for "celeb-identities"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | bburns/celeb-identities | [
"region:us"
] | 2022-10-14T03:21:48+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Geohot", "1": "Grimes", "2": "Kanye", "3": "PG", "4": "Riva", "5": "Trump"}}}}], "splits": [{"name": "train", "num_bytes": 4350264.0, "num_examples": 18}], "download_size": 4342420, "dataset_size": 4350264.0}} | 2022-10-14T14:20:20+00:00 |
bfbba48d89b4213fa5cd9df07b675ba461d51d4f |
Dataset containing video metadata from a few tech channels, namely:
* [James Briggs](https://youtube.com/c/JamesBriggs)
* [Yannic Kilcher](https://www.youtube.com/c/YannicKilcher)
* [sentdex](https://www.youtube.com/c/sentdex)
* [Daniel Bourke](https://www.youtube.com/channel/UCr8O8l5cCX85Oem1d18EezQ)
* [AI Coffee Break with Letitia](https://www.youtube.com/c/AICoffeeBreak)
* [Alex Ziskind](https://youtube.com/channel/UCajiMK_CY9icRhLepS8_3ug) | jamescalam/channel-metadata | [
"task_categories:other",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:afl-3.0",
"youtube",
"video",
"video metadata",
"tech",
"science and tech",
"region:us"
] | 2022-10-14T04:29:45+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["afl-3.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["other"], "task_ids": [], "pretty_name": "Tech Channels Metadata", "tags": ["youtube", "video", "video metadata", "tech", "science and tech"]} | 2022-10-26T00:05:55+00:00 |
8f7568a6bea2403221f304edd9212a7d00a980a2 | ratishsp/newshead | [
"license:mit",
"region:us"
] | 2022-10-14T05:05:56+00:00 | {"license": "mit"} | 2022-10-14T06:42:08+00:00 |
|
2d78d4a8000795b3520df6d58966673ae099e912 | # Dataset Card for "leaflet_offers-clone"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | dpasch01/leaflet_offers-clone | [
"region:us"
] | 2022-10-14T05:11:21+00:00 | {"dataset_info": {"features": [{"name": "pixel_values", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 5623867.0, "num_examples": 4}], "download_size": 5356712, "dataset_size": 5623867.0}} | 2022-10-14T05:11:34+00:00 |
f3e50ecc00155232eda7815b4a26796130c91bc6 | # Dataset Card for "audio-diffusion-256-isolated-drums"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ndxbxrme/audio-diffusion-256-isolated-drums | [
"region:us"
] | 2022-10-14T06:06:24+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "audio_file", "dtype": "string"}, {"name": "slice", "dtype": "int16"}], "splits": [{"name": "train", "num_bytes": 367170599.374, "num_examples": 8589}], "download_size": 366838959, "dataset_size": 367170599.374}} | 2022-10-14T06:06:35+00:00 |
623e04e36c086b61aa56e426471a64b952c32024 | pandaman2020/SDTraining | [
"license:cc-by-4.0",
"region:us"
] | 2022-10-14T08:14:41+00:00 | {"license": "cc-by-4.0"} | 2023-06-14T05:49:31+00:00 |
|
72659de0f473e99331c92038be331d7c864a7439 | zhenzi/data_process | [
"region:us"
] | 2022-10-14T09:00:16+00:00 | {} | 2022-10-18T01:13:05+00:00 |
|
da31fa7be019faa58aeff0ee22bb93307298a41a | This dataset will be to create my dogs stable-diffusion model | mikelalda/txoko | [
"doi:10.57967/hf/0047",
"region:us"
] | 2022-10-14T10:13:22+00:00 | {} | 2022-10-19T12:30:00+00:00 |
bc167f78800fbaa9da3c7d66e28c3d24f6fd00ee | # AutoTrain Dataset for project: trackerlora_less_data
## Dataset Description
This dataset has been automatically processed by AutoTrain for project trackerlora_less_data.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"id": 444,
"feat_rssi": -113.0,
"feat_snr": -9.25,
"feat_spreading_factor": 7,
"feat_potencia": 14,
"target": 308.0
},
{
"id": 144,
"feat_rssi": -77.0,
"feat_snr": 8.800000190734863,
"feat_spreading_factor": 7,
"feat_potencia": 14,
"target": 126.0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"id": "Value(dtype='int64', id=None)",
"feat_rssi": "Value(dtype='float64', id=None)",
"feat_snr": "Value(dtype='float64', id=None)",
"feat_spreading_factor": "Value(dtype='int64', id=None)",
"feat_potencia": "Value(dtype='int64', id=None)",
"target": "Value(dtype='float32', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 139 |
| valid | 40 |
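The splits can be loaded and checked against the table above (a minimal sketch; split names taken from the table):
```python
from datasets import load_dataset

# Minimal sketch: load both splits and confirm the sizes in the table above.
ds = load_dataset("pcoloc/autotrain-data-trackerlora_less_data")
print(ds["train"].features)                # fields listed under "Dataset Fields"
print(len(ds["train"]), len(ds["valid"]))  # expected: 139, 40
```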
| pcoloc/autotrain-data-trackerlora_less_data | [
"region:us"
] | 2022-10-14T10:34:20+00:00 | {} | 2022-10-14T11:06:37+00:00 |
678e10f1ea8f5995950f72f9abac070c00759051 | gregkowal/crime-time-game-style | [
"license:other",
"region:us"
] | 2022-10-14T10:48:03+00:00 | {"license": "other"} | 2022-10-14T11:14:15+00:00 |
|
205ca64c78a48e01e0ba211163c89e77c027a4ff |
# cloth
**CLOTH** is a collection of nearly 100,000 cloze questions from middle school and high school English exams. Details of the CLOTH dataset are shown below.
| Number of questions | Train | Valid | Test |
| ------------------- | ----- | ----- | ----- |
| **Middle school** | 22056 | 3273 | 3198 |
| **High school** | 54794 | 7794 | 8318 |
| **Total** | 76850 | 11067 | 11516 |
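The corpus can be loaded directly from the Hub (a minimal sketch; the standard `train`/`validation`/`test` split names are assumed):
```python
from datasets import load_dataset

# Minimal sketch: load CLOTH and print the split sizes.
cloth = load_dataset("AndyChiang/cloth")
print(cloth)
```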
Source: https://www.cs.cmu.edu/~glai1/data/cloth/ | AndyChiang/cloth | [
"task_categories:fill-mask",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:en",
"license:mit",
"cloze",
"mid-school",
"high-school",
"exams",
"region:us"
] | 2022-10-14T11:28:41+00:00 | {"language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "task_categories": ["fill-mask"], "pretty_name": "cloth", "tags": ["cloze", "mid-school", "high-school", "exams"]} | 2022-10-14T13:10:37+00:00 |
830447e72563191bcd52dce78495d7153f02c757 | # wine-ratings
Processing, EDA, and ML on wine ratings | alfredodeza/wine-ratings | [
"region:us"
] | 2022-10-14T11:28:47+00:00 | {"dataset_info": {"features": [{"name": "name", "dtype": "string"}, {"name": "region", "dtype": "string"}, {"name": "variety", "dtype": "string"}, {"name": "rating", "dtype": "float32"}, {"name": "notes", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 82422, "num_examples": 200}, {"name": "train", "num_bytes": 13538613, "num_examples": 32780}, {"name": "validation", "num_bytes": 83047, "num_examples": 200}], "download_size": 0, "dataset_size": 13704082}} | 2022-10-15T12:09:06+00:00 |
60582e99b1ebd35b4ba41cf11b19a6aaa87db726 | # Dataset Card for "dummy_swin_pipe_5k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | FSDL-Fashion/dummy_swin_pipe_5k | [
"region:us"
] | 2022-10-14T11:45:57+00:00 | {"dataset_info": {"features": [{"name": "path", "dtype": "string"}, {"name": "embedding", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 20800000, "num_examples": 5000}], "download_size": 21312459, "dataset_size": 20800000}} | 2022-10-14T11:46:02+00:00 |
104c7e6a9c489be3b34bfdb905cf124063473ea7 |
# dgen
**DGen** is a cloze question dataset covering multiple domains, including science, vocabulary, common sense, and trivia. It is compiled from a wide variety of datasets, including SciQ, MCQL, and AI2 Science Questions. Details of the DGen dataset are shown below.
| DGen dataset | Train | Valid | Test | Total |
| ----------------------- | ----- | ----- | ---- | ----- |
| **Number of questions** | 2321 | 300 | 259 | 2880 |
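The dataset can be loaded directly from the Hub (a minimal sketch; a `train` split is assumed):
```python
from datasets import load_dataset

# Minimal sketch: load DGen and inspect one cloze question.
dgen = load_dataset("AndyChiang/dgen")
print(dgen["train"][0])
```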
Source: https://github.com/DRSY/DGen | AndyChiang/dgen | [
"task_categories:fill-mask",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"cloze",
"sciq",
"mcql",
"ai2 science questions",
"region:us"
] | 2022-10-14T11:56:15+00:00 | {"language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "task_categories": ["fill-mask"], "pretty_name": "dgen", "tags": ["cloze", "sciq", "mcql", "ai2 science questions"]} | 2022-10-14T13:19:16+00:00 |
72eb2ea815e2924593d458534c6d68d5471e5019 | # Dataset Card for "figaro_hair_segmentation_1000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Allison/figaro_hair_segmentation_1000 | [
"region:us"
] | 2022-10-14T12:27:05+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 68214218.0, "num_examples": 1000}, {"name": "validation", "num_bytes": 3542245.0, "num_examples": 50}], "download_size": 0, "dataset_size": 71756463.0}} | 2022-10-15T15:28:24+00:00 |
86d7547dd834ab89cc6715b07eb8bef15a8ee9f3 | randomwalksky/cup | [
"license:openrail",
"region:us"
] | 2022-10-14T12:48:10+00:00 | {"license": "openrail"} | 2022-10-14T12:49:09+00:00 |
|
41b0cc22d1bf22ab270d99a902d0e349eb766d8e | # Dataset Card for "dummy_swin_pipe"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | FSDL-Fashion/dummy_swin_pipe | [
"region:us"
] | 2022-10-14T13:29:08+00:00 | {"dataset_info": {"features": [{"name": "path", "dtype": "string"}, {"name": "embedding", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 416000000, "num_examples": 100000}], "download_size": 420001566, "dataset_size": 416000000}} | 2022-10-14T13:33:52+00:00 |
4b964f60f7265990c1b72454e48305e460135281 | A few images of Echo | batchku/echo | [
"region:us"
] | 2022-10-14T15:14:13+00:00 | {} | 2022-10-14T16:27:07+00:00 |
80e34a787a6c757d2e9cad051ac26c3353b70225 |
## Message Content Rephrasing Dataset
Introduced by Einolghozati et al. in Sound Natural: Content Rephrasing in Dialog Systems https://aclanthology.org/2020.emnlp-main.414/
We introduce a new task of rephrasing for a more natural virtual assistant. Currently, virtual assistants work in the paradigm of intent-slot tagging and the slot values are directly passed as-is to the execution engine. However, this setup fails in some scenarios such as messaging, when the query given by the user needs to be changed before repeating it or sending it to another user. For example, for queries like ‘ask my wife if she can pick up the kids’ or ‘remind me to take my pills’, we need to rephrase the content to ‘can you pick up the kids’ and ‘take your pills’. In this paper, we study the problem of rephrasing with messaging as a use case and release a dataset of 3000 pairs of original query and rephrased query. We show that BART, a pre-trained transformers-based masked language model with auto-regressive decoding, is a strong baseline for the task, and show improvements by adding a copy-pointer and copy loss to it. We analyze different trade-offs of BART-based and LSTM-based seq2seq models, and propose a distilled LSTM-based seq2seq as the best practical model.
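A minimal sketch for loading the 3,000 query pairs (the field names are not documented above, so the sketch just prints whatever the first record contains):
```python
from datasets import load_dataset

# Minimal sketch: load the rephrasing pairs and inspect the first record.
ds = load_dataset("facebook/content_rephrasing")
print(ds)  # available splits
first_split = next(iter(ds.values()))
print(first_split[0])  # inspect the undocumented field names
```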
| facebook/content_rephrasing | [
"license:cc-by-sa-4.0",
"region:us"
] | 2022-10-14T16:25:22+00:00 | {"license": "cc-by-sa-4.0"} | 2022-10-14T16:41:05+00:00 |
d114b6fff871e11d1bb5835432f461cd3148e452 | # Dataset Card for "Quran_Hadith"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arbml/Quran_Hadith | [
"region:us"
] | 2022-10-14T16:45:31+00:00 | {"dataset_info": {"features": [{"name": "SS", "dtype": "string"}, {"name": "SV", "dtype": "string"}, {"name": "Verse1", "dtype": "string"}, {"name": "TS", "dtype": "string"}, {"name": "TV", "dtype": "string"}, {"name": "Verse2", "dtype": "string"}, {"name": "Label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7351452, "num_examples": 8144}], "download_size": 2850963, "dataset_size": 7351452}} | 2022-10-14T16:45:37+00:00 |
6d008011ac5b47dcd75029f46901da81382b6d89 | Paper: https://arxiv.org/abs/2210.12478
---
license: apache-2.0
---
| prajjwal1/discosense | [
"arxiv:2210.12478",
"region:us"
] | 2022-10-14T18:09:30+00:00 | {} | 2023-07-21T10:21:26+00:00 |
c3f6bd8acd77dc0d3f4e8df3961f2f82aedbb7d2 | # Dataset Card for "AlRiyadh_Newspaper_Covid"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arbml/AlRiyadh_Newspaper_Covid | [
"region:us"
] | 2022-10-14T18:20:23+00:00 | {"dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "string"}, {"name": "ID", "dtype": "string"}, {"name": "Category", "dtype": "string"}, {"name": "Source", "dtype": "string"}, {"name": "Title", "dtype": "string"}, {"name": "Subtitle", "dtype": "string"}, {"name": "Image", "dtype": "string"}, {"name": "Caption", "dtype": "string"}, {"name": "Text", "dtype": "string"}, {"name": "URL", "dtype": "string"}, {"name": "FullText", "dtype": "string"}, {"name": "FullTextCleaned", "dtype": "string"}, {"name": "FullTextWords", "dtype": "string"}, {"name": "WordsCounts", "dtype": "string"}, {"name": "Date", "dtype": "string"}, {"name": "Time", "dtype": "string"}, {"name": "Images", "dtype": "string"}, {"name": "Captions", "dtype": "string"}, {"name": "Terms", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 376546224, "num_examples": 24084}], "download_size": 164286254, "dataset_size": 376546224}} | 2022-10-14T18:20:34+00:00 |
3ded52588975a96bbce202da4cdf605278e88274 |
This dataset was created by translating part of the Stanford Question Answering Dataset (SQuAD).
It contains 5k QA pairs from the original SQuAD dataset, translated to Hindi using the googletrans API. | aneesh-b/SQuAD_Hindi | [
"license:unknown",
"region:us"
] | 2022-10-14T18:20:33+00:00 | {"license": "unknown"} | 2022-10-16T05:18:33+00:00 |
c2c253732cadc497dd41ab0029779f7735060e52 | # Dataset Card for "celeb-identities"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | rick012/celeb-identities | [
"region:us"
] | 2022-10-14T18:32:12+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Cristiano_Ronaldo", "1": "Jay_Z", "2": "Nicki_Minaj", "3": "Peter_Obi", "4": "Roger_Federer", "5": "Serena_Williams"}}}}], "splits": [{"name": "train", "num_bytes": 195536.0, "num_examples": 18}], "download_size": 193243, "dataset_size": 195536.0}} | 2022-10-14T18:48:57+00:00 |
e56902acc46a67a5f18623dd73a38d6685672a3f | # Dataset Card for "BRAD"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arbml/BRAD | [
"region:us"
] | 2022-10-14T18:38:23+00:00 | {"dataset_info": {"features": [{"name": "review_id", "dtype": "string"}, {"name": "book_id", "dtype": "string"}, {"name": "user_id", "dtype": "string"}, {"name": "review", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": 1, "1": 2, "2": 3, "3": 4, "4": 5}}}}], "splits": [{"name": "train", "num_bytes": 407433642, "num_examples": 510598}], "download_size": 211213150, "dataset_size": 407433642}} | 2022-10-14T18:38:36+00:00 |
4b2ea7773f47fa46fef6408a38620fd08d19e055 |
# Dataset Card for OpenSLR Nepali Large ASR Cleaned
## Table of Contents
- [Dataset Card for OpenSLR Nepali Large ASR Cleaned](#dataset-card-for-openslr-nepali-large-asr-cleaned)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [How to use?](#how-to-use)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
## Dataset Description
- **Homepage:** [Original OpenSLR Large Nepali ASR Dataset link](https://www.openslr.org/54/)
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Sagar Sapkota](mailto:[email protected])
### Dataset Summary
This dataset contains transcribed audio data for Nepali. It consists of FLAC files and a TSV file. The file `utt_spk_text.tsv` contains a FileID, an anonymized UserID, and the transcription of the audio in the file.
The dataset has been manually quality-checked, but there might still be errors.
The audio files are sampled at a rate of 16 kHz, and leading and trailing silences are trimmed using torchaudio's voice activity detection.
For your reference, the following function was applied to each of the original OpenSLR utterances.
```python
import torchaudio
SAMPLING_RATE = 16000
def process_audio_file(orig_path, new_path):
"""Read and process file in `orig_path` and save it to `new_path`"""
waveform, sampling_rate = torchaudio.load(orig_path)
if sampling_rate != SAMPLING_RATE:
waveform = torchaudio.functional.resample(waveform, sampling_rate, SAMPLING_RATE)
# trim end silences with Voice Activity Detection
waveform = torchaudio.functional.vad(waveform, sample_rate=SAMPLING_RATE)
torchaudio.save(new_path, waveform, sample_rate=SAMPLING_RATE)
```
### How to use?
There are two configurations for the data: one to download the original data and the other to download the preprocessed data as described above.
1. First, to download the original dataset with HuggingFace's [Dataset](https://huggingface.co/docs/datasets/) API:
```python
from datasets import load_dataset
dataset = load_dataset("spktsagar/openslr-nepali-asr-cleaned", name="original", split='train')
```
2. To download the preprocessed dataset:
```python
from datasets import load_dataset
dataset = load_dataset("spktsagar/openslr-nepali-asr-cleaned", name="cleaned", split='train')
```
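Since the corpus ships as a single unsplit set (see [Data Splits](#data-splits)), you may want to carve out your own held-out portion; a minimal sketch:
```python
from datasets import load_dataset

ds = load_dataset("spktsagar/openslr-nepali-asr-cleaned", name="cleaned", split="train")
# The corpus is unsplit; hold out 10% for evaluation yourself.
splits = ds.train_test_split(test_size=0.1, seed=42)
train_ds, eval_ds = splits["train"], splits["test"]
```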
### Supported Tasks and Leaderboards
- `automatic-speech-recognition`: The dataset can be used to train a model for Automatic Speech Recognition.
### Languages
Nepali
## Dataset Structure
### Data Instances
```js
{
'utterance_id': 'e1c4d414df',
'speaker_id': '09da0',
'utterance': {
'path': '/root/.cache/huggingface/datasets/downloads/extracted/e3cf9a618900289ecfd4a65356633d7438317f71c500cbed122960ab908e1e8a/cleaned/asr_nepali/data/e1/e1c4d414df.flac',
'array': array([-0.00192261, -0.00204468, -0.00158691, ..., 0.00323486, 0.00256348, 0.00262451], dtype=float32),
'sampling_rate': 16000
},
'transcription': '२००५ मा बिते',
'num_frames': 42300
}
```
### Data Fields
- utterance_id: a string identifying the utterance
- speaker_id: obfuscated unique id of the speaker whose utterance is in the current instance
- utterance:
- path: path to the utterance .flac file
- array: numpy array of the utterance
- sampling_rate: sample rate of the utterance
- transcription: the Nepali text spoken in the utterance
- num_frames: length of waveform array
### Data Splits
The dataset is not split. The consumer should split it as per their requirements. | spktsagar/openslr-nepali-asr-cleaned | [
"license:cc-by-sa-4.0",
"region:us"
] | 2022-10-14T18:44:31+00:00 | {"license": "cc-by-sa-4.0", "dataset_info": [{"config_name": "original", "features": [{"name": "utterance_id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "utterance", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "transcription", "dtype": "string"}, {"name": "num_frames", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 40925646, "num_examples": 157905}], "download_size": 9340083067, "dataset_size": 40925646}, {"config_name": "cleaned", "features": [{"name": "utterance_id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "utterance", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "transcription", "dtype": "string"}, {"name": "num_frames", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 40925646, "num_examples": 157905}], "download_size": 5978669282, "dataset_size": 40925646}]} | 2022-10-23T17:15:15+00:00 |
da93d7ca5f81aaae854ade8bcaf8147a6d0a0cb5 |
```python
from datasets import load_dataset

dataset = load_dataset("Ariela/muneca-papel")
```
| Ariela/muneca-papel | [
"license:unknown",
"region:us"
] | 2022-10-14T18:44:36+00:00 | {"license": "unknown"} | 2022-10-15T18:56:12+00:00 |
c4a17a7a5dbacb594c23e8ff0aafca7250121013 | # Dataset Card for "OSACT4_hatespeech"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arbml/OSACT4_hatespeech | [
"region:us"
] | 2022-10-14T18:48:30+00:00 | {"dataset_info": {"features": [{"name": "tweet", "dtype": "string"}, {"name": "offensive", "dtype": "string"}, {"name": "hate", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1417732, "num_examples": 6838}, {"name": "validation", "num_bytes": 204725, "num_examples": 999}], "download_size": 802812, "dataset_size": 1622457}} | 2022-10-14T18:48:40+00:00 |
37c7175b2b6f07d4c749f7390ce9784e999aa1d5 | # Dataset Card for "Sentiment_Lexicons"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arbml/Sentiment_Lexicons | [
"region:us"
] | 2022-10-14T18:56:58+00:00 | {"dataset_info": {"features": [{"name": "Term", "dtype": "string"}, {"name": "bulkwalter", "dtype": "string"}, {"name": "sentiment_score", "dtype": "string"}, {"name": "positive_occurrence_count", "dtype": "string"}, {"name": "negative_occurrence_count", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2039703, "num_examples": 43308}], "download_size": 1068103, "dataset_size": 2039703}} | 2022-10-14T18:57:04+00:00 |
e43dbe88d29779bc0440e214fc4de451d22392bc | ## Textual Complexity Corpus for School Levels of the Brazilian Educational System
The corpus includes excerpts from: textbooks (the full list is given below); news items from the "Para Seu Filho Ler" (PSFL) section of the Zero Hora newspaper, which presents some of the news from the Zero Hora corpus rewritten for children aged 8 to 11; SAEB exams; digital books in Portuguese from Wikilivros; and ENEM exams from 2015, 2016, and 2017. All of the Portuguese material was made available to evaluate the textual complexity (readability) task.
Full list of the textbooks and their original sources
This corpus is part of the resources of my PhD in Natural Language Processing, carried out at the Núcleo Interinstitucional de Linguística Computacional (NILC) at USP São Carlos, under the supervision of Prof. Sandra Maria Aluísio.
http://nilc.icmc.usp.br
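The corpus can be loaded with the `datasets` library (a minimal sketch; field and split names follow this repository's metadata):
```python
from datasets import load_dataset

# Minimal sketch: load the corpus and inspect one annotated excerpt.
books = load_dataset("tiagoblima/nilc-school-books")
print(books)              # train / validation / test splits
print(books["train"][0])  # fields: text_id, text, level
```
If you use the corpus, please cite: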
```
@inproceedings{mgazzola19,
title={Predição da Complexidade Textual de Recursos Educacionais Abertos em Português},
author={Murilo Gazzola, Sidney Evaldo Leal, Sandra Maria Aluisio},
booktitle={Proceedings of the Brazilian Symposium in Information and Human Language Technology},
year={2019}
}
```
| tiagoblima/nilc-school-books | [
"license:mit",
"region:us"
] | 2022-10-14T20:09:32+00:00 | {"license": "mit", "dataset_info": {"features": [{"name": "text_id", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "level", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 1276559.048483246, "num_examples": 8321}, {"name": "train", "num_bytes": 4595060.28364021, "num_examples": 29952}, {"name": "validation", "num_bytes": 510715.6678765444, "num_examples": 3329}], "download_size": 3645953, "dataset_size": 6382335.0}} | 2022-11-13T01:03:20+00:00 |
c2f48f68766a519e06a81cbc405d36dd4762d785 | # Dataset Card for "Commonsense_Validation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arbml/Commonsense_Validation | [
"region:us"
] | 2022-10-14T20:52:13+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "first_sentence", "dtype": "string"}, {"name": "second_sentence", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": 0, "1": 1}}}}], "splits": [{"name": "train", "num_bytes": 1420233, "num_examples": 10000}, {"name": "validation", "num_bytes": 133986, "num_examples": 1000}], "download_size": 837486, "dataset_size": 1554219}} | 2022-10-14T20:52:21+00:00 |
fed92167f9ae45fac1207017212a0c5bc6da02cd | # Dataset Card for "arastance"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arbml/arastance | [
"region:us"
] | 2022-10-14T21:14:14+00:00 | {"dataset_info": {"features": [{"name": "filename", "dtype": "string"}, {"name": "claim", "dtype": "string"}, {"name": "claim_url", "dtype": "string"}, {"name": "article", "dtype": "string"}, {"name": "stance", "dtype": {"class_label": {"names": {"0": "Discuss", "1": "Disagree", "2": "Unrelated", "3": "Agree"}}}}, {"name": "article_title", "dtype": "string"}, {"name": "article_url", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 5611165, "num_examples": 646}, {"name": "train", "num_bytes": 29682402, "num_examples": 2848}, {"name": "validation", "num_bytes": 7080226, "num_examples": 569}], "download_size": 18033579, "dataset_size": 42373793}} | 2022-10-14T21:14:25+00:00 |
f89f0029a9dd992ff5e43eadde0ac821406d9cbe | # Dataset Card for "TUNIZI"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arbml/TUNIZI | [
"region:us"
] | 2022-10-14T21:28:41+00:00 | {"dataset_info": {"features": [{"name": "label", "dtype": {"class_label": {"names": {"0": "negative", "1": "positive"}}}}, {"name": "sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 188084, "num_examples": 2997}], "download_size": 127565, "dataset_size": 188084}} | 2022-10-14T21:28:45+00:00 |
d25e904472d19ac8cb639bff14cd59f31a90991b | # Dataset Card for "AQAD"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arbml/AQAD | [
"region:us"
] | 2022-10-14T21:35:33+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}], "splits": [{"name": "train", "num_bytes": 23343014, "num_examples": 17911}], "download_size": 3581662, "dataset_size": 23343014}} | 2022-10-14T21:35:38+00:00 |
e9674e9345c66631d1cd1f89ca1f00d8ae119c4f | # Dataset Card for "MArSum"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arbml/MArSum | [
"region:us"
] | 2022-10-14T21:42:30+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "summary", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 3332778, "num_examples": 1981}], "download_size": 1743254, "dataset_size": 3332778}} | 2022-10-14T21:42:35+00:00 |
d337fbd0337b6eda3282433826f037770ee94f69 | # Dataset Card for "arabicReviews-ds-mini"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | omerist/arabicReviews-ds-mini | [
"region:us"
] | 2022-10-14T22:25:48+00:00 | {"dataset_info": {"features": [{"name": "source", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "content_length", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 11505614.4, "num_examples": 3600}, {"name": "validation", "num_bytes": 1278401.6, "num_examples": 400}], "download_size": 6325726, "dataset_size": 12784016.0}} | 2022-10-14T22:53:38+00:00 |
8068419f931b965fce6f7ee08a2ad07d7397d039 |
# Dataset Card for Dicionário Português
It is a list of Portuguese words with their inflections.
How to use it:
```python
from datasets import load_dataset
remote_dataset = load_dataset("VanessaSchenkel/pt-all-words")
remote_dataset
```
| VanessaSchenkel/pt-all-words | [
"task_categories:other",
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:pt",
"region:us"
] | 2022-10-14T23:52:20+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["expert-generated"], "language": ["pt"], "license": [], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["other", "text-generation", "fill-mask"], "task_ids": ["language-modeling", "masked-language-modeling"], "paperswithcode_id": "sbwce", "pretty_name": "Dicion\u00e1rio em Portugu\u00eas", "tags": []} | 2022-10-15T00:59:29+00:00 |
d5c7c07268056a1b294d5815bdf012f92c327c1d | # Dataset Card for "arab-ds-mini"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | omerist/arab-ds-mini | [
"region:us"
] | 2022-10-15T00:12:24+00:00 | {"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "review", "dtype": "string"}, {"name": "review_length", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 87011869.13722204, "num_examples": 27116}, {"name": "validation", "num_bytes": 9668342.001417983, "num_examples": 3013}], "download_size": 49392988, "dataset_size": 96680211.13864002}} | 2022-10-15T00:12:49+00:00 |
042361486f09031154629eff1e6059a609456f5a | randomwalksky/toy | [
"license:openrail",
"region:us"
] | 2022-10-15T02:30:33+00:00 | {"license": "openrail"} | 2022-10-15T02:30:33+00:00 |
|
5dd31b4c66365c698c3e2e92d86b0d11ec6598cc | zhenzi/imagenette | [
"region:us"
] | 2022-10-15T02:39:41+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "config_name": "tests", "splits": [{"name": "train", "num_bytes": 459616258, "num_examples": 10500}], "download_size": 467583804, "dataset_size": 459616258}} | 2022-10-19T02:37:03+00:00 |
|
da9a982d6ee573ec8c72df9e6e78a0d92fa56eb2 | mrajbrahma/bodo-words | [
"license:cc-by-sa-4.0",
"region:us"
] | 2022-10-15T03:55:31+00:00 | {"license": "cc-by-sa-4.0"} | 2022-10-15T03:56:25+00:00 |
|
1974c2c4a875f5da8848ce9adf4821f825352382 | CrispyShark/emoji_hairpin | [
"region:us"
] | 2022-10-15T05:22:22+00:00 | {} | 2022-10-15T13:27:59+00:00 |
|
2eefce06256e84521bdff3e3a0df0248bd28cb27 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: husnu/bert-base-turkish-128k-cased-finetuned_lr-2e-05_epochs-3TQUAD2-finetuned_lr-2e-05_epochs-1
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Jets](https://huggingface.co/Jets) for evaluating this model. | autoevaluate/autoeval-eval-squad_v2-squad_v2-ea058a-1765461442 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-15T05:28:22+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "husnu/bert-base-turkish-128k-cased-finetuned_lr-2e-05_epochs-3TQUAD2-finetuned_lr-2e-05_epochs-1", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-10-15T05:31:40+00:00 |
54b7e788d34f58904c6a02941ca9270f5179db65 | Shushant/PubmedQuestionAnsweringDataset | [
"license:other",
"region:us"
] | 2022-10-15T06:21:53+00:00 | {"license": "other"} | 2022-10-15T06:22:44+00:00 |
|
2ccad53104e75b5ec10f8abc1ac16f4c5f7ea384 |
# Dataset Card for uneune_image1
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This is a dataset of 100 illustrations I have drawn so far.
The images are cropped to 512×512.
I made it because I wanted a dataset that could easily be used for training with Stable Diffusion.
"license:cc-by-4.0",
"region:us"
] | 2022-10-15T07:41:22+00:00 | {"license": "cc-by-4.0"} | 2022-10-15T08:07:58+00:00 |
7c729d53bec09f9400a0b4ea7fe19d286178d273 | Harsit/xnli2.0_train_french | [
"language:fr",
"region:us"
] | 2022-10-15T08:17:22+00:00 | {"language": ["fr"]} | 2023-10-03T06:37:59+00:00 |
|
3c01cebd3e2d75dbf0987f1bc4c2b424923d733d | language: ["Urdu"] | Harsit/xnli2.0_train_urdu | [
"region:us"
] | 2022-10-15T08:26:47+00:00 | {} | 2022-10-15T08:30:11+00:00 |
d11e6d5bb369ca02a87fd48611f640afa98c7962 | CG80499/Inverse-scaling-test | [
"task_categories:multiple-choice",
"task_categories:question-answering",
"task_categories:zero-shot-classification",
"license:bigscience-openrail-m",
"region:us"
] | 2022-10-15T10:47:40+00:00 | {"license": "bigscience-openrail-m", "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification"], "train-eval-index": [{"config": "inverse-scaling-test", "task": "text-generation", "task_id": "text_zero_shot_classification", "splits": {"eval_split": "train"}, "col_mapping": {"prompt": "text", "classes": "classes", "answer_index": "target"}}]} | 2022-10-16T10:33:06+00:00 |
|
3a1b88eba215ea26ae74e6884e793bda02d2442f | siberspace/elisabeth-borne | [
"region:us"
] | 2022-10-15T10:57:43+00:00 | {} | 2022-10-15T10:58:16+00:00 |
|
70cdab03f29a290ff14d21f9f8080286cd72dd86 | siberspace/ricardo | [
"region:us"
] | 2022-10-15T11:25:49+00:00 | {} | 2022-10-15T15:19:40+00:00 |
|
d563042b2a16501be4c7eeb7b71998db3a24adec | # Dataset Card for "turknews-mini"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | omerist/turknews-mini | [
"region:us"
] | 2022-10-15T11:38:03+00:00 | {"dataset_info": {"features": [{"name": "review", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "review_length", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 9064933.18105424, "num_examples": 3534}, {"name": "validation", "num_bytes": 1008069.8189457601, "num_examples": 393}], "download_size": 5732599, "dataset_size": 10073003.0}} | 2022-10-15T11:38:10+00:00 |
c15baed0307c4fcc7b375258a182ea49ef2d4e8b | # Dataset Card for "balloon"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | nielsr/balloon | [
"region:us"
] | 2022-10-15T11:59:06+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 30808803.0, "num_examples": 61}, {"name": "validation", "num_bytes": 8076058.0, "num_examples": 13}], "download_size": 38814125, "dataset_size": 38884861.0}} | 2022-10-15T12:02:05+00:00 |
f6b502b946c723ef3dd51efcbe15f1753cbad6a1 | Fantomas78/Tamburro | [
"region:us"
] | 2022-10-15T12:15:05+00:00 | {} | 2022-10-15T12:15:32+00:00 |
|
b2d765c28484069c071934ac7858b682c4e798e8 | Michaelber123/mike | [
"license:artistic-2.0",
"region:us"
] | 2022-10-15T12:25:52+00:00 | {"license": "artistic-2.0"} | 2022-10-15T12:26:50+00:00 |
|
519c6f85f8dc6cbbf4878ebdb71dd39054c5357d | topia
Sport
topia
Documentaire
topia
Song Of Topia
topia | Sethyyann3572/glue-topia | [
"license:openrail",
"region:us"
] | 2022-10-15T12:31:25+00:00 | {"license": "openrail"} | 2022-10-15T12:32:42+00:00 |
a0bd554a17af724da30bd7b22b77022d9cb67991 | # Dataset Card for "celebrity_in_movie_demo"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | deman539/celebrity_in_movie_demo | [
"region:us"
] | 2022-10-15T12:33:39+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "output"}}}}], "splits": [{"name": "train", "num_bytes": 2237547.0, "num_examples": 5}], "download_size": 1373409, "dataset_size": 2237547.0}} | 2022-10-15T13:50:25+00:00 |
fcd42e249fed48dbd1d3b9b969528ef9298d3464 |
# Allison Parrish's Gutenberg Poetry Corpus
This corpus was originally published under the CC0 license by [Allison Parrish](https://www.decontextualize.com/). Please visit Allison's fantastic [accompanying GitHub repository](https://github.com/aparrish/gutenberg-poetry-corpus) for usage inspiration as well as more information on how the data was mined, how to create your own version of the corpus, and examples of projects using it.
This dataset contains 3,085,117 lines of poetry from hundreds of Project Gutenberg books. Each line has a corresponding `gutenberg_id` (1,191 unique values) from Project Gutenberg.
```python
Dataset({
features: ['line', 'gutenberg_id'],
num_rows: 3085117
})
```
A row of data looks like this:
```python
{'line': 'And retreated, baffled, beaten,', 'gutenberg_id': 19}
```
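A minimal sketch for loading the corpus and sampling a few lines (the `train` split name is an assumption):
```python
from datasets import load_dataset

# Minimal sketch: load the corpus and print three random lines of verse.
poetry = load_dataset("biglam/gutenberg-poetry-corpus", split="train")
for row in poetry.shuffle(seed=0).select(range(3)):
    print(row["gutenberg_id"], row["line"])
```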
| biglam/gutenberg-poetry-corpus | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"language:en",
"license:cc0-1.0",
"poetry",
"stylistics",
"poems",
"gutenberg",
"region:us"
] | 2022-10-15T12:42:22+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["cc0-1.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": [], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "Gutenberg Poetry Corpus", "tags": ["poetry", "stylistics", "poems", "gutenberg"]} | 2022-10-18T09:53:52+00:00 |
e078a9a8bb873844031a65f6a0cc198ddcc1c6a5 | ## Dataset Summary
The Depth-of-Field (DoF) dataset comprises 1,200 annotated images, binary-annotated as with (0) or without (1) a bokeh effect, i.e. shallow or deep depth of field. It is forked from the [Unsplash 25K](https://github.com/unsplash/datasets) dataset.
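A minimal sketch for loading the dataset and checking the class balance (label names follow this card's metadata: 0 = bokeh, 1 = no-bokeh):
```python
from collections import Counter

from datasets import load_dataset

# Minimal sketch: load the 1,200 images and count the two classes.
dof = load_dataset("svnfs/depth-of-field", split="train")
print(Counter(dof["label"]))  # 0 = bokeh, 1 = no-bokeh
```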
## Dataset Description
- **Repository:** [https://github.com/sniafas/photography-style-analysis](https://github.com/sniafas/photography-style-analysis)
- **Paper:** [More Information Needed](https://www.researchgate.net/publication/355917312_Photography_Style_Analysis_using_Machine_Learning)
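A minimal loading sketch with the `datasets` library, assuming a single `train` split; the label mapping follows the summary above (0 = bokeh, 1 = no bokeh):

```python
from datasets import load_dataset

# Load the dataset (a single "train" split is assumed)
dof = load_dataset("svnfs/depth-of-field", split="train")

example = dof[0]
image = example["image"]   # a PIL image
label = example["label"]   # 0 = bokeh (shallow DoF), 1 = no bokeh (deep DoF)
print(image.size, label)
```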
### Citation Information
```
@article{sniafas2021,
title={DoF: An image dataset for depth of field classification},
author={Niafas, Stavros},
doi={10.13140/RG.2.2.29880.62722},
url={https://www.researchgate.net/publication/364356051_DoF_depth_of_field_datase},
year={2021}
}
```
Note that each version of the DoF dataset has its own citation. Please see the source to
get the correct citation for the version used. | svnfs/depth-of-field | [
"task_categories:image-classification",
"task_categories:image-segmentation",
"annotations_creators:Stavros Niafas",
"license:apache-2.0",
"region:us"
] | 2022-10-15T12:57:29+00:00 | {"annotations_creators": ["Stavros Niafas"], "license": "apache-2.0", "task_categories": ["image-classification", "image-segmentation"], "sample_number": [1200], "class_number": [2], "image_size": ["(200,300,3)"], "source_dataset": ["unsplash"], "dataset_info": [{"config_name": "depth-of-field", "features": [{"name": "image", "dtype": "string"}, {"name": "class", "dtype": {"class_label": {"names": {"0": "bokeh", "1": "no-bokeh"}}}}]}, {"config_name": "default", "features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}], "splits": [{"name": "train", "num_bytes": 192150, "num_examples": 1200}], "download_size": 38792692, "dataset_size": 192150}]} | 2022-11-13T23:33:39+00:00 |
0eea994c2f3958629e34934373d4b48ccd53c20e | SamHernandez/my-style | [
"license:afl-3.0",
"region:us"
] | 2022-10-15T13:15:28+00:00 | {"license": "afl-3.0"} | 2022-10-15T13:17:13+00:00 |
|
6c9e42b0a14c5b017947313f7098d871fb498b91 | Mbermudez/mike | [
"license:openrail",
"region:us"
] | 2022-10-15T14:43:32+00:00 | {"license": "openrail"} | 2022-10-15T14:43:53+00:00 |
|
75321e3f022839c10b67ba9c08bb6efac8e17aca | # Dataset Card for "clothes_sample"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ghoumrassi/clothes_sample | [
"region:us"
] | 2022-10-15T14:50:15+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 20078406.0, "num_examples": 990}], "download_size": 0, "dataset_size": 20078406.0}} | 2022-10-15T17:07:22+00:00 |
b302b4605dd1a192ee9999e260009eadd110fd7d | jaxmetaverse/wukong | [
"license:openrail",
"region:us"
] | 2022-10-15T14:51:53+00:00 | {"license": "openrail"} | 2022-10-16T01:07:16+00:00 |
|
540de892a1be8640934c938b4177e1de14ca3559 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: gpt2-xl
* Dataset: inverse-scaling/NeQA
* Config: inverse-scaling--NeQA
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@rololbot](https://huggingface.co/rololbot) for evaluating this model. | autoevaluate/autoeval-eval-inverse-scaling__NeQA-inverse-scaling__NeQA-4df82b-1769161494 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-15T15:00:08+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["inverse-scaling/NeQA"], "eval_info": {"task": "text_zero_shot_classification", "model": "gpt2-xl", "metrics": [], "dataset_name": "inverse-scaling/NeQA", "dataset_config": "inverse-scaling--NeQA", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-15T15:03:51+00:00 |
4516f87f63964a28cae1eda838ebc267388703ea | blancoloureiro/fotos | [
"license:openrail",
"region:us"
] | 2022-10-15T16:13:41+00:00 | {"license": "openrail"} | 2022-10-15T16:14:17+00:00 |
|
c4650f60157ba9efe405db5e3ee243e1bc7d0713 | alexinigoc/AlejandroTraining | [
"license:afl-3.0",
"region:us"
] | 2022-10-15T17:04:27+00:00 | {"license": "afl-3.0"} | 2022-10-15T20:02:59+00:00 |
|
65bb3029428ccce24e597b76531e6af13b389f19 | alexinigoc/DatasetTraining | [
"license:afl-3.0",
"region:us"
] | 2022-10-15T20:03:44+00:00 | {"license": "afl-3.0"} | 2022-10-15T20:04:07+00:00 |
|
efce2cf816cf1abad0c590e9e737e5289e1f9394 | # Dataset Card for "Iraqi_Dialect"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arbml/Iraqi_Dialect | [
"region:us"
] | 2022-10-15T20:16:56+00:00 | {"dataset_info": {"features": [{"name": "No.", "dtype": "string"}, {"name": " Tex", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "False", "1": "IDK", "2": "N", "3": "True"}}}}], "splits": [{"name": "train", "num_bytes": 365478, "num_examples": 1672}], "download_size": 134999, "dataset_size": 365478}} | 2022-10-15T20:17:07+00:00 |
991d85ba7b296eb212731f44c61e7cc3e1543700 | oscarmutante/oscar | [
"license:unlicense",
"region:us"
] | 2022-10-15T20:27:15+00:00 | {"license": "unlicense"} | 2022-10-15T20:28:32+00:00 |
|
ee7fc57264b8056f8341f8215e5307a680a78f0a | # Dataset Card for "Sudanese_Dialect_Tweet"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arbml/Sudanese_Dialect_Tweet | [
"region:us"
] | 2022-10-15T20:39:50+00:00 | {"dataset_info": {"features": [{"name": "Tweet", "dtype": "string"}, {"name": "Annotator 1", "dtype": "string"}, {"name": "Annotator 2", "dtype": "string"}, {"name": "Annotator 3", "dtype": "string"}, {"name": "Mode", "dtype": "string"}, {"name": "Date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 345088, "num_examples": 2123}], "download_size": 141675, "dataset_size": 345088}} | 2022-10-15T20:40:01+00:00 |
8e2e32d0832c597e4ba2b1f252e59cec765a8c37 | # Dataset Card for "Sudanese_Dialect_Tweet_Tele"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arbml/Sudanese_Dialect_Tweet_Tele | [
"region:us"
] | 2022-10-15T20:47:08+00:00 | {"dataset_info": {"features": [{"name": "Tweet ID", "dtype": "string"}, {"name": "Tweet Text", "dtype": "string"}, {"name": "Date", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "NEGATIVE", "1": "POSITIVE", "2": "OBJECTIVE"}}}}], "splits": [{"name": "train", "num_bytes": 872272, "num_examples": 5346}], "download_size": 353611, "dataset_size": 872272}} | 2022-10-15T20:47:19+00:00 |
62ccc10bb5eb840553d8a5bfb7635a8e2597172f | Romecr/testImages | [
"license:other",
"region:us"
] | 2022-10-15T20:48:10+00:00 | {"license": "other"} | 2022-12-29T21:55:23+00:00 |
|
1bf5e6c1c2761f004eb867b20ad5d8a173ace8da | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: Luciano/xlm-roberta-base-finetuned-lener-br
* Dataset: lener_br
* Config: lener_br
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model. | autoevaluate/autoeval-eval-lener_br-lener_br-c4cf3f-1771961515 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-15T20:52:11+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["lener_br"], "eval_info": {"task": "entity_extraction", "model": "Luciano/xlm-roberta-base-finetuned-lener-br", "metrics": [], "dataset_name": "lener_br", "dataset_config": "lener_br", "dataset_split": "test", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}} | 2022-10-15T20:53:08+00:00 |
8b2593845c16fa3deed61cb75900f4d472fc90f5 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: Luciano/xlm-roberta-large-finetuned-lener-br
* Dataset: lener_br
* Config: lener_br
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model. | autoevaluate/autoeval-eval-lener_br-lener_br-c4cf3f-1771961516 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-15T20:52:15+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["lener_br"], "eval_info": {"task": "entity_extraction", "model": "Luciano/xlm-roberta-large-finetuned-lener-br", "metrics": [], "dataset_name": "lener_br", "dataset_config": "lener_br", "dataset_split": "test", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}} | 2022-10-15T20:53:37+00:00 |
cd13b81d7a5f2a2097052eee7be3652d71c7e698 | # Dataset Card for "cheques_sample_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | shivi/cheques_sample_data | [
"region:us"
] | 2022-10-15T21:25:47+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "ground_truth", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 7518544.0, "num_examples": 400}, {"name": "train", "num_bytes": 56481039.4, "num_examples": 2800}, {"name": "validation", "num_bytes": 15034990.0, "num_examples": 800}], "download_size": 58863727, "dataset_size": 79034573.4}} | 2022-11-05T21:31:01+00:00 |
c14be6279b7e817d010409aaad46df114f0af3f5 | # Dataset Card for "Satirical_Fake_News"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arbml/Satirical_Fake_News | [
"region:us"
] | 2022-10-15T21:37:45+00:00 | {"dataset_info": {"features": [{"name": "Text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6131349, "num_examples": 3221}], "download_size": 3223892, "dataset_size": 6131349}} | 2022-10-15T21:37:57+00:00 |
4be22018d039ee657dbeb7ff2e62fc9ae8eefdb6 | # Dataset Card for "NArabizi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arbml/NArabizi | [
"region:us"
] | 2022-10-15T21:47:54+00:00 | {"dataset_info": {"features": [{"name": "ID", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "NEU", "1": "NEG", "2": "MIX", "3": "POS"}}}}], "splits": [{"name": "test", "num_bytes": 4034, "num_examples": 144}, {"name": "train", "num_bytes": 27839, "num_examples": 998}, {"name": "validation", "num_bytes": 3823, "num_examples": 137}], "download_size": 12217, "dataset_size": 35696}} | 2022-10-15T21:48:18+00:00 |
619c18ba46019c28099c82a430e773e98471b5db | # Dataset Card for "ArSAS"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arbml/ArSAS | [
"region:us"
] | 2022-10-15T21:51:23+00:00 | {"dataset_info": {"features": [{"name": "#Tweet_ID", "dtype": "string"}, {"name": "Tweet_text", "dtype": "string"}, {"name": "Topic", "dtype": "string"}, {"name": "Sentiment_label_confidence", "dtype": "string"}, {"name": "Speech_act_label", "dtype": "string"}, {"name": "Speech_act_label_confidence", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Negative", "1": "Neutral", "2": "Positive", "3": "Mixed"}}}}], "splits": [{"name": "train", "num_bytes": 6147723, "num_examples": 19897}], "download_size": 2998319, "dataset_size": 6147723}} | 2022-10-15T21:51:35+00:00 |
4da955d842c7487009e4db5c48f02da09a7d2057 | Alfitauwu/Ejemplo | [
"region:us"
] | 2022-10-15T22:35:39+00:00 | {} | 2022-10-15T22:39:15+00:00 |
|
2d787d3f9d73323bcafa04c7fd3edb791aff5589 | Alfitauwu/Pruebitaaaxd | [
"license:openrail",
"region:us"
] | 2022-10-15T22:48:14+00:00 | {"license": "openrail"} | 2022-10-15T22:48:35+00:00 |
|
30f442e1ec9c22dd717f6eaa4ca9f3c146e7eea8 | PonBonPepega/Aia | [
"license:other",
"region:us"
] | 2022-10-15T23:41:49+00:00 | {"license": "other"} | 2022-10-15T23:41:49+00:00 |
|
0281194d215c73170d30add87e5f16f9dec1d641 |
# Dataset Card for OLM September/October 2022 Common Crawl
Cleaned and deduplicated pretraining dataset, created with the OLM repo [here](https://github.com/huggingface/olm-datasets) from 16% of the September/October 2022 Common Crawl snapshot.
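A minimal loading-and-cleaning sketch; the note below explains why outliers in `last_modified_timestamp` should be dropped before computing statistics. The column name comes from that note, and treating the value as a Unix timestamp is an assumption:

```python
from datetime import datetime, timezone
from datasets import load_dataset

ds = load_dataset("olm/olm-CC-MAIN-2022-40-sampling-ratio-0.15894621295", split="train")

# Keep only rows whose Last-Modified date falls in a plausible window;
# a handful of servers return bogus dates, so trim those outliers first
lower = datetime(1990, 1, 1, tzinfo=timezone.utc).timestamp()
upper = datetime(2022, 11, 1, tzinfo=timezone.utc).timestamp()
cleaned = ds.filter(
    lambda ts: ts is not None and lower <= ts <= upper,
    input_columns="last_modified_timestamp",
)
```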
Note: `last_modified_timestamp` was parsed from whatever a website returned in its `Last-Modified` header; a small number of values are likely incorrect, so we recommend removing these outliers before computing statistics over `last_modified_timestamp`. | olm/olm-CC-MAIN-2022-40-sampling-ratio-0.15894621295 | [
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"language:en",
"pretraining",
"language modelling",
"common crawl",
"web",
"region:us"
] | 2022-10-16T02:32:35+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": [], "multilinguality": ["monolingual"], "size_categories": ["10M<n<100M"], "source_datasets": [], "task_categories": [], "task_ids": [], "pretty_name": "OLM September/October 2022 Common Crawl", "tags": ["pretraining", "language modelling", "common crawl", "web"]} | 2022-11-04T17:14:25+00:00 |
3cba5a6b651b0ec3ad8ecef4efa9906f5b764a7f | seraldu/sergio_prueba | [
"license:bigscience-openrail-m",
"region:us"
] | 2022-10-16T07:02:00+00:00 | {"license": "bigscience-openrail-m"} | 2022-10-16T07:02:35+00:00 |
|
4f2dc2ad903dd9e297a4169a7fb54c4492af8a22 | ohtaras/Kn | [
"license:unknown",
"region:us"
] | 2022-10-16T09:19:18+00:00 | {"license": "unknown"} | 2022-10-16T09:19:18+00:00 |
|
78f73995b25140373869016fcd809fbd710b4c9c | akashrai/dreambooth_image_training | [
"license:unknown",
"region:us"
] | 2022-10-16T09:42:32+00:00 | {"license": "unknown"} | 2022-10-16T09:43:49+00:00 |
|
02bbbd4aedd6c9809d7c4527bb5d9f3fb6fefbdc | siberspace/carton | [
"region:us"
] | 2022-10-16T09:42:58+00:00 | {} | 2022-10-16T09:43:33+00:00 |
|
6b01b9b18c3b40be4aac81fac9952fd37ca2e4dc | poojaruhal/NLBSE-class-comment-classification | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-10-16T10:24:39+00:00 | {"license": "cc-by-nc-sa-4.0"} | 2022-10-16T10:24:39+00:00 |